An analysis and prioritization process for a future NOAA observational system from space, with an emphasis on operational applications, is presented.
The National Oceanic and Atmospheric Administration (NOAA) mission is “to understand and predict changes in climate, weather, oceans, and coasts, to share that knowledge and information with others, and to conserve and manage coastal and marine ecosystems and resources” (www.noaa.gov/about-our-agency). Global observations of the Earth system (atmosphere, oceans, land and ice surfaces, and the biosphere) are the foundation for meeting this mission, which serves society by protecting life and property and supporting a robust economy. Simmons et al. (2016) present an excellent summary of the Earth system and the observations (emphasis on space observations) and modeling that are needed to understand and predict it. As this paper makes clear, observations from space are a key component of the Earth observing system and are the major observation types that determine the accuracy of weather forecasts in the time range of up to two weeks. NOAA, the National Aeronautics and Space Administration (NASA), and their international partners play a major role in providing NOAA with the observations from space required to support its mission.
The current series of NOAA weather satellites is expected to provide operational satellite observations for terrestrial and space weather applications into the late 2020s and the early 2030s. As planning for satellite acquisition requires long lead times, it is necessary to begin planning for next-generation systems that will follow the current series of satellites. Beginning in 2014, the National Environmental Satellite, Data, and Information Service (NESDIS) began a comprehensive study of the future of the U.S. civil environmental remote sensing satellite system. This study is known as the NOAA Satellite Observing System Architecture (NSOSA) study. As discussed in Volz et al. (2016), St. Germain (2018), and NOAA (2018), the NSOSA study was tasked with finding the most cost-effective constellation architectures for NOAA, over a wide range of possible future budget levels and with very limited constraints on legacy continuation. The NSOSA study took a “clean sheet” look at satellite observational needs as well as the constellation concepts that could be formulated to meet those needs. Given the pace of rapid change in satellite and launch technology, satellite business models, and data use, the intent was to challenge the long-established constellation architecture of a small number of large U.S. government–owned satellites in geostationary [the current Geostationary Operational Environmental Satellite-R (GOES-R) series] and single low-Earth orbits [the current Joint Polar Satellite System (JPSS) series].
The NSOSA study, illustrated in Fig. 1 (St. Germain 2018), consisted of two major elements: 1) a value model for satellite observational and strategic objectives (requirements, upper-left boxes in Fig. 1) that spanned a wide range of capability (from somewhat below the current capability to well above), and 2) a collection of constellation alternatives that included evolutionary legacy continuation, innovative reconfiguration of legacy choices and augmentations, and radical replacement of all elements of the legacy satellite architecture. So, for example, both modest upgrades of current geostationary capabilities with new technology and complete replacement of all geostationary capability with low- or medium-orbit systems needed to be considered and fairly compared.
The ultimate goal of the NSOSA study was not to make firm decisions about all aspects of the next generation of NOAA weather satellites. For example, the study was not expected to recommend specific instruments on those satellites. The goal was to determine the most cost-effective satellite architectures.
To address the first element (development of a value model) of the NSOSA study, NESDIS initiated the Space Platform Requirements Working Group (SPRWG) under the University of Colorado’s Cooperative Institute for Research in Environmental Sciences (CIRES) to provide an analysis of the future needs and priorities for weather, space weather, and environmental (excluding land mapping) space-based observations for the 2030 time frame and beyond (see sidebar “NOAA framing statement”).
NOAA FRAMING STATEMENT
From 2016 to 2018, NOAA undertook an extensive and comprehensive cost–benefit analysis of options for the future NOAA space-based observing system. Because observation needs are a key driver of the future architecture, NOAA solicited the aid of an expert panel of government, cooperative institute, academic, and industry scientists to inform the analysis. NOAA asked this team, which we called the Space Platform Requirements Working Group (SPRWG), to analyze, evaluate, and consolidate a high-level set of satellite measurements and performance parameters that could serve as a basic set of observing system capabilities. NOAA then used the SPRWG’s output to quantify the overall performance of over 150 possible satellite constellations. NOAA appreciates the effort, expertise, and energy the SPRWG brought to this task. The SPRWG’s output has been, and will continue to be, tremendously informative as NOAA analyzes its future needs and continues to be a leader in operational environmental observation, prediction, and warning.
This paper introduces the NSOSA process and summarizes the SPRWG’s contribution to the process, which is an analysis of space-based observations, including a prioritized list of observational objectives (upper-left box in Fig. 1) and the quantitative attributes of each objective at three levels of performance. The key result from this analysis is the Environmental Data Record (EDR) Value Model (EVM), which is the foundation for NOAA’s assessment of many potential architectures for its future observing system. The complete SPRWG report is available online (SPRWG 2018).
The SPRWG was not involved with designing or prioritizing specific satellite missions; that is the role of the NSOSA Architecture Development Team (ADT), which was composed primarily of technical experts from outside of NOAA (The Aerospace Corporation, The Johns Hopkins Applied Physics Laboratory, NASA JPL, MIT Lincoln Laboratory, and NASA GSFC). SPRWG was charged only with developing a set of observational objectives and their attributes (science requirements) and prioritizing them with respect to their improvement over a study threshold level, which is often below the current capability. The ADT developed alternative satellite constellations and orbits and scored them against the SPRWG objectives. This paper is not intended to be a complete summary of the NSOSA process, and it does not provide any “answers” in the sense of specific architectures or constellations for NOAA in 2030 and beyond. The ADT results and potential constellations that score highly against the SPRWG requirements and priorities are, or will be, described elsewhere (e.g., Volz et al. 2016; St. Germain 2018; St. Germain et al. 2018; NOAA 2018; Maier 2018). We realize that these references were only internally reviewed by NOAA prior to public presentation and have not yet appeared in standard journals, but the ADT process is still underway. Additional publications on results are in review or in preparation.
SPRWG membership.
The SPRWG membership included the user and research community from NESDIS, NASA, all NOAA operational line offices [the National Weather Service (NWS), the National Marine Fisheries Service (NMFS), the National Ocean Service (NOS)], and the NOAA Office of Oceanic and Atmospheric Research (OAR), as well as other stakeholder organizations, such as NOAA Cooperative Institutes, academia, and private industry. The SPRWG used its members’ expert knowledge of the types of measurement data needed to develop operational products (e.g., forecasts and warnings) from space-based observations related to weather and water, the oceans, space weather, and the general Earth environment.
SPRWG was formed in October 2015, and over the course of the study held five meetings through June 2017 in Washington, D.C., and Boulder, Colorado. In January 2016 SPRWG conducted a Town Hall at the AMS Annual Meeting in New Orleans. In addition to these meetings, SPRWG conducted its work through many conference calls and e-mail exchanges. Figure 2 shows the SPRWG members and other participants in the July 2016 meeting.
SPRWG tasks.
A key element of the NSOSA study process is the EVM, which provides the most important objectives for NOAA’s observations from space, their performance attributes at different levels of capability, and their priorities for improving the performance of the objectives from a study threshold level (a level below which the objective has little or no value) to a maximum effective level (the level above which further improvements are not possible, useful, or cost effective). The EVM plays a central role in the ADT’s assessment of the value of different space architecture alternatives. The most important part of SPRWG’s analysis was to inform the NSOSA ADT’s development of the EVM.
Iterative nature of NSOSA process.
An important part of the NSOSA process was its iterative nature. The architecture development process proceeded in four cycles. The development of the EVM, and the formation of the SPRWG, started before the formal start of the architecture development and proceeded in sync with it. The cycles were as follows:
Cycle 1: An introductory cycle in which the complete NSOSA process was tested for practicality and effectiveness using a draft set of observational objectives, performance levels, and notional priorities developed by SPRWG.
Cycle 2: The primary design cycle where major alternatives were explored. The cycle was conducted twice, referred to as cycles 2a and 2b (Di Pietro 2015). The EVM was largely complete for cycle 2a and was in its final form at the beginning of cycle 2b.
Cycles 3 and 4: Refinement cycles where the favored approaches were expanded in depth of coverage. The EVM in cycles 3 and 4 was the same as in cycle 2b.
Throughout the process, the ADT developed a number of architecture alternatives that met the EVM objectives at different levels of performance, that is, each architecture was scored against the EVM objectives and their performance attributes. In each cycle it was a goal to have alternatives that spanned a wide cost and performance range. The results were then reviewed and discussed with NOAA management, NOAA line offices, the SPRWG, and various NOAA stakeholders. The analysis at the end of each cycle was used to influence the work of the next cycle.
The ADT looked in particular for overall constellation configurations that consistently performed near the top of the cost–benefit frontier (discussed later) and could be scaled in cost by the addition/deletion of individual platforms or individual instrument upgrades/downgrades. These alternatives were seen as robust choices, providing NOAA with a space architecture that could reliably deliver a baseline level of service while also offering high-return-on-investment options for increased capability.
NSOSA and SPRWG priorities.
For the NSOSA study, and for the SPRWG process, operational NOAA functions, such as weather forecasting and warnings of harmful algal blooms, are considered the highest priority and are defined as those that result in government actions affecting public safety or economic livelihood. Non-operational NOAA functions, such as research on weather, oceans, air quality, and climate change, are considered the next priority. Other functions, such as those conducted by NASA or other agencies and international partners, are out of scope.
Because of the priority for NOAA operational functions, SPRWG paid less explicit attention to the important areas of climate and other long-term Earth observations and their continuity. However, many of the objectives and their performance attributes (such as atmospheric temperature and water vapor, sea surface temperature, and sea surface height) considered by SPRWG are important climate variables, and their accuracy, precision, and stability were implicitly considered for their value for climate in addition to weather forecasting and other operational needs.
The SPRWG considered whether the current operational functions and their priorities might change significantly by 2030 and concluded that the functions of protecting life and property would remain similar to the present functions. However, advances in science and technology could lead to major or even revolutionary advances in making operational Earth observations from space to support these functions. In particular, emerging technologies could revolutionize the most important measurements and their impact. For example, we see opportunities in areas such as continuous observations in the day–night band (Román et al. 2018), improving technology to make wind measurements from time-separated infrared (IR) soundings (Maschhoff et al. 2016) or lidar profiles (Atlas et al. 2015), and constellations of CubeSats (Gasiewski et al. 2013) to support emerging needs for data assimilation globally on a more continuous basis than done today. The U.S. National Research Council's (NRC) second decadal survey for Earth observations from space (National Academies of Sciences, Engineering, and Medicine 2018) includes other examples of exciting potential opportunities for NOAA’s future space observing systems.
BACKGROUND AND REFERENCE MATERIALS.
There have been many studies carried out by the NRC, U.S. agencies (including NASA and NOAA), the U.S. National Science and Technology Council (NSTC), the World Meteorological Organization (WMO), the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the European Space Agency (ESA), and other organizations that have analyzed the importance and value of Earth observations from space and made specific recommendations for future observing systems. SPRWG used these studies, many of which SPRWG members participated in, as a foundation for ascertaining the requirements for the next-generation NOAA satellite system.
The WMO has published several documents creating a vision for the WMO Integrated Global Observing System (WIGOS), the most recent of which, still under development, is the “Vision of the WIGOS Space-based Component Systems in 2040” (WMO 2017). This document is intended to guide the efforts of WMO Member States in the evolution of satellite-based observing systems. It is based on anticipated user requirements and technological capabilities in 2040. WMO also publishes a Rolling Review of Requirements, which attempts to collect observational requirements to meet the needs of all WMO programs (www.wmo.int/pages/prog/www/OSY/Documentation/RRR-process.pdf).
NOAA and the WMO have carried out extensive studies of user requirements of observations from different types of observing systems, including observations from space. NOAA’s Technology, Planning and Integration for Observation (TPIO) has worked closely with NOAA program leaders and Subject Matter Experts (SMEs) to document observing requirements in an extensive database called the Consolidated Observing User Requirement List (COURL), sometimes referred to as the Consolidated Observing Requirement List (CORL). TPIO provided SPRWG with an updated COURL in February 2017.
SPRWG also made extensive use of the WMO Observation Systems Capability Analysis and Review (OSCAR) tool (WMO 2013; see “Relevant websites” sidebar). This tool is an important building block of the WMO Integrated Global Observing System. OSCAR summarizes user requirements for observations in WMO application areas, as well as attributes and capabilities of space- and surface-based observing systems.
Another useful document was the Earth Observation Handbook 2015 (ESA 2014), which provided much information on current and planned missions. SPRWG used this reference extensively in developing its understanding of the current capability of objectives in the EVM.
RELEVANT WEBSITES
Main OSCAR page: www.wmo-sat.info/oscar/
Overview of space-based capabilities: www.wmo-sat.info/oscar/spacecapabilities
Review of satellite observation capabilities: www.wmo-sat.info/oscar/observingmissions [lists the satellite observation capabilities as identified in the “Vision for the GOS in 2025” and the Implementation Plan for the Evolution of Global Observing Systems (EGOS-IP).]
Gap analyses by variable: www.wmo-sat.info/oscar/gapanalyses
OSCAR user’s manual: www.wmo.int/pages/prog/sat/documents/OSCAR_User_Manual-22-08-13.pdf
Space weather glossary: www.swpc.noaa.gov/content/space-weather-glossary
Summary of observations used by NOAA Space Weather Prediction Center: www.swpc.noaa.gov/content/space-weather-glossary
TPIO NOSIA glossary: https://nosc.noaa.gov/tpio/main/nosia_glossary.html
The most important principle governing the U.S. civil Earth-observing systems is that the overall set of observations must yield a balanced portfolio of observations (the National Plan for Civil Earth Observations is a document addressing the national set of requirements for Earth observations, including space-based observations; OSTP 2014). Balances of different types are important in establishing priorities for a number of reasons, including providing support for diverse parts of the NOAA mission and supporting very different communities within a constrained budget. Thus, compromise is a key feature of any planning and prioritization process.
SPRWG used these documents, other studies that have appeared in the scientific peer-reviewed literature, and numerical weather prediction forecast experiment results from observing system simulation experiments (OSSEs) and observing system experiments (OSEs) (e.g., Hoffman and Atlas 2016) to inform its analysis. OSSE systems used in this study included an advanced “state of the art” global modeling system based on NOAA’s Global Forecast System (GFS) and a regional modeling system based on the Hurricane Weather Research and Forecasting Model (HWRF) forecast system. These OSSE systems allow impact assessment of various types of potential new observations and made use of a standard suite of verification metrics. The result is a synthesis of many sources of information.
THE EDR VALUE MODEL.
A key element of the NSOSA study is the EVM, which plays a central role in assessing the value of different satellite and observational architecture alternatives. Appendix C in the full report (SPRWG 2018) describes the terminology and concepts used in the EVM and gives a simple example of an EVM with five objectives.
The EVM approach is based on Multi-Attribute Utility Theory (MAUT) as used in decision analysis. The basis for MAUT, which addresses decision-making under many complex conditions and constraints, may be found in Keeney (1982), Keeney and Raiffa (1993), and Hammond et al. (2002). Specifically, the goal is to develop a utility function, which takes as input all of the performance attributes of an architecture alternative and returns a real number that is referred to as the utility of the alternative. The utility is intended to have the property that if decision-makers (in this case NOAA leadership) are presented with two alternatives, they will prefer the alternative with the larger utility value. The objective is to produce what is called an efficient frontier plot (Fig. 3).
An efficient frontier plot displays a point for the utility–cost pair for each of the architecture alternatives under study. As with computing a single utility value, we must be able to estimate cost as a single value; total life cycle cost is a typical choice for transforming multiyear costs into a single value. The NSOSA study used average annual cost (AAC), defined as the cost per year required to provide a level of capability in steady state from 2028 to 2050 (the time window of the study).
An efficient frontier plot can be used for a variety of decision-making and analysis purposes. In the plot (e.g., Fig. 3), an assumed budget corresponds to a vertical line, with alternatives to both the left and the right of that budget line. If the budget is too low, then no alternatives are affordable and the process has broken down. Similarly, there may be alternatives with higher budgets representing the opportunity for increased value with greater funding. The slope of the “efficient frontier” at the point where it intercepts the budget line represents the cost–benefit tradeoff at that budget. In general, the alternatives that populate the area around the intercept of the budget line and the efficient frontier are of primary interest.
Decision theory tells us that the optimal choice will lie along this frontier, and that interior points should be avoided. Logic dictates that any interior point could be replaced by a point with higher utility at the same cost by moving upward within the cloud of alternatives until the frontier is reached. In an architecture development process, it is important to examine the properties of points close to the frontier in areas of interest (i.e., close to cost constraints) and observe any commonalities. For example, do all alternatives close to the frontier share common features, such as particular orbital distributions? If so, those common features are important to identify even if an exact preferred configuration is not to be selected until later. Or, do all alternatives close to the frontier neglect an important mission support area of NOAA, which would result in an unbalanced program if implemented? Since both cost and utility value have many uncertainties, it would be inappropriate to simply find the highest utility point at an acceptable budget and declare that point the preferred alternative without more closely investigating how it relates to nearby points, and whether or not the judgments can be considered robust. The NSOSA study made extensive use of uncertainty analysis in both value and cost to judge the significance of differences between alternatives near the efficient frontier. These consisted of varying the costs as described by NOAA (2018) and Yeakel and Maier (2018). The sensitivity to value was studied by making small changes in rank order of objectives as well as varying the performance scores across a plausible range of values. The level of uncertainty in value as reflected in SPRWG discussions turned out to correspond to only minor alternative rank reorderings, and these variations for the most part do not affect the architecture choices.
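To make the frontier concept concrete, the sketch below identifies the non-dominated (cost, utility) points from a set of scored alternatives; interior points are exactly those that some cheaper-or-equal alternative beats on utility. This is a minimal illustration, not the ADT's tooling, and the (cost, utility) pairs are hypothetical, with cost standing in for AAC.

```python
from typing import List, Tuple

def efficient_frontier(alternatives: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the (cost, utility) pairs on the efficient frontier.

    A point is on the frontier if no other alternative offers
    higher utility at lower-or-equal cost.
    """
    # Sort by cost ascending; break cost ties by utility descending.
    ordered = sorted(alternatives, key=lambda a: (a[0], -a[1]))
    frontier = []
    best_utility = float("-inf")
    for cost, utility in ordered:
        # Keep only points that improve on every cheaper alternative.
        if utility > best_utility:
            frontier.append((cost, utility))
            best_utility = utility
    return frontier

# Hypothetical (average annual cost, utility) pairs for five alternatives.
alts = [(1.0, 40.0), (1.2, 55.0), (1.2, 50.0), (1.5, 52.0), (1.8, 70.0)]
print(efficient_frontier(alts))  # [(1.0, 40.0), (1.2, 55.0), (1.8, 70.0)]
```

Note that the alternative at (1.5, 52.0) is an interior point: it is dominated by the cheaper alternative at (1.2, 55.0), exactly the situation the paragraph above says should be avoided.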
The EVM is a list of functional objectives and their attributes that are required to support NOAA mission service areas, as well as certain strategic objectives that are not associated with EDRs. For example, a functional objective is “provide real-time imagery over the continental U.S. (CONUS).” An example of a strategic objective is “develop and maintain international partnerships.”
International considerations in developing the EVM.
The EVM provides a list of objectives or requirements to support NOAA’s mission service areas in 2030 and beyond. It is well recognized that international partners will play an important role in meeting these objectives. For example, Europe (EUMETSAT), Japan, India, and South Korea provide images from geostationary satellites and other valuable observations such as atmospheric soundings from infrared, microwave, and radio occultation sensors from low-Earth-orbiting (LEO) satellites. These data are shared freely with NOAA under the guidelines of free and open data exchange provided by WMO Resolution 40 (www.wmo.int/pages/prog/www/ois/Operational_Information/Publications/Congress/Cg_XII/res40_en.html). In return, NOAA provides its satellite data freely to its partners, and indeed all users. It has been estimated that NOAA receives approximately 3 times more meteorological data from its international partners than NOAA provides the international community (www.nesdis.noaa.gov/content/why-does-noaa-collaborate-internationally).
Early in the NSOSA process, SPRWG and the ADT agreed to consider reliable, low-risk foreign sources (e.g., EUMETSAT, Japan, and South Korea) as partners whose space-based Earth-observing systems would be considered part of the baseline. The team assumed that these partners’ projected systems would have availability and reliability commensurate with those of U.S. systems and thus their capabilities would be considered jointly with NOAA capabilities in meeting EVM objectives in all alternative architectures.
The ADT provided SPRWG with the NOAA Program of Record (POR) 2025 (Table 1) as a reference. This POR gives the missions that NOAA expects and is relying on in 2025, and includes several foreign missions. The POR2025 does not represent the actual constellation used or planned by NOAA at any point in time. For example, the number of Constellation Observing System for Meteorology, Ionosphere and Climate-2 (COSMIC-2) Global Navigation Satellite System (GNSS) radio occultation (RO) satellites will be reduced from 12 to 6 as the high-inclination part of COSMIC-2 has been cancelled. In addition, NOAA makes some use of a number of satellites not in the POR2025. Examples may be found in the 2018 decadal survey, which provides an updated program of record for NASA and NOAA for the period 2017–27 in its appendix A. According to the ground rules of the NSOSA study, none of these differences from the POR2025 are relevant, since all architecture alternatives are scored against the EVM.
Summary of POR2025 U.S. and international geostationary weather satellites, polar weather satellites, and weather satellites in other orbits (source: SPRWG 2018).
DEVELOPMENT OF THE EVM.
The development of the EVM began with the establishment of four groups of objectives. The first group (Group A) consisted of functional objectives that support mainly weather nowcasting and short-range forecasting and warnings, and medium-range weather forecasting (numerical weather prediction). The second group (Group B) consisted of functional objectives that support space weather. The third and fourth groups consisted of nonfunctional objectives, communications (Group C) and strategic (Group D) objectives, respectively. As the process of developing the EVM began, we also decided, through discussions with NOAA, that the objectives in the communications group were not well posed for this process, and so this group was addressed in a different process.
For each of the functional objectives in Groups A and B, it was necessary to define the objectives, the attributes of each objective, and the performance values of the attributes at three levels (discussed below). The SPRWG created four subgroups of subject matter experts from its members: 1) Nowcasting (Chris Velden, Chair), 2) Numerical Weather Prediction (James Yoe and Robert Atlas, Co-Chairs), 3) Space Weather (Terry Onsager), and 4) Oceanography (Michael Ford and Pam Emch, Co-Chairs). These subgroups were responsible for developing the EVM objectives, attributes, and performance levels and determining the rank orders of the objectives in their areas. The EVM evolved considerably during the early cycles of the study (1, 2a, and 2b). We found this iterative process to be extremely important, in fact essential, in developing a document that could be used to inform the NSOSA process.
The final objectives for Groups A and B were determined through discussions among SPRWG members and users of NOAA observations, including weather and space weather forecasters and numerical weather prediction (NWP) experts. We used the scientific literature and previous studies as appropriate, as well as the COURL and OSCAR list of requirements. In the end, SPRWG created 19 objectives in Group A, and coincidentally, 19 objectives in Group B. We formulated these 38 objectives fairly early in the process (by March 2016). The Group A and B objectives used in the EVM are summarized in Tables 2 and 3.
Ranking of Group A objectives (terrestrial weather).
Ranking of Group B objectives (space weather).
While there are some similarities, the OSCAR and COURL sets of observational requirements are quite different from the SPRWG objectives. COURL and OSCAR present many more objectives than SPRWG (more than 1,500 for COURL, 588 for OSCAR). COURL presents requirements for products developed from observations that are needed by a variety of users, while SPRWG presents objectives in terms of measurements that are used to produce many different products that support a large number of disparate users. OSCAR has 588 “variables” such as temperature, cloud cover, and specific humidity that support specific applications, for example, climate, agricultural meteorology, aeronautical meteorology, atmospheric chemistry, global and regional NWP, ocean applications, and space weather. COURL provides more than 1,500 “environmental parameters,” such as atmospheric temperature, water vapor, chemical constituents, sea surface temperature and height, solar imagery, and many more, often with multiple entries for the same or similar parameter, but used for different purposes. Both sets of requirements were useful for determining, and checking for reasonableness, the values of the objectives we developed for this study.
The SPRWG chose to build the EVM in terms of measurements rather than products for several reasons:
The products are derived from measurements. In general, many products are derived from a single measurement. In decision analysis terms, it is more appropriate to work with the root element to avoid potential problems in overcounting the value when there are many derived products with similar characteristics.
The subject of the NSOSA study is NOAA satellite systems, whose role is to collect measurements. The cost of the satellite is mostly determined by the instruments (the cost of launch and the satellite bus play a lesser role). The cost of the instruments is driven by the measurements they must produce. Thus, the cost of the NSOSA alternative set is driven by the measurements it must produce and the performance characteristics of those measurements.
The number of measurements necessary to largely encompass the products is modest (38 measurements in the case of the EVM). This is a tractable number to score the performance of over 150 alternative space architectures.
After determining the objectives, SPRWG set attributes for each objective. An attribute of an objective is a characteristic that defines the properties of the objective. For example, attributes of a temperature sounding system include accuracy, vertical and horizontal resolution, and update rate, among others. SPRWG established three levels of performance for each attribute, based on its estimate of the likely needs and capabilities in the 2030s:
Study Threshold (ST): The threshold or lowest level of performance on the specific attribute that would have value. SPRWG assumed that objectives that fall below this level are of little or no use to NOAA and will not be part of any future architecture. The ST level of performance is often below the current capability for that objective.
Expected (EXP): What the community expects for this attribute in the 2030 time frame. This level is often close to the current capability, but this is not a requirement. In some cases, the EXP level considerably exceeds the current level, as it should where there is an expectation of a substantial increase in quality or quantity of the attribute required to support operational functions.
Maximum Effective (ME): The highest level of performance on the specific attribute that can reasonably be considered to be worth pursuing. That is, there would be little or no additional value for outperforming the ME level.
In the temperature sounding example, the ST, EXP, and ME levels for accuracy might be 2.0, 1.0, and 0.5 K. This means that a system with an accuracy worse than 2.0 K (RMS error greater than 2.0 K) would be nearly useless and would not be worth providing. An accuracy of 1.0 K would be what the user community expects for the 2030 time frame, and a value of 0.5 K would mean that any system with an accuracy better than 0.5 K (RMS error less than 0.5 K) would have only a marginal additional impact on users and would not be worth the increased cost.
It is important to understand that the Study Threshold and Maximum Effective levels in the EVM do not correspond to lower and upper bounds for system acquisition. The ST and ME levels in the EVM establish a trade space (MITRE 2012) that is deliberately structured to be larger than would be established in a system acquisition. The ST and ME levels anchor the “ruler” that we use to measure value; they do not define the precise limits of requirements on future programs. Following MAUT established practices, the “tradeable range” should bracket the “sweet spot” of cost versus value trades. Later system acquisitions can home in on the most cost-effective performance range within the broader study limits.
The OSCAR and COURL also specify levels of performance that SPRWG interpreted as corresponding roughly to the SPRWG levels. The OSCAR Threshold is the minimum requirement to be met to ensure that observations are useful; it corresponds to the SPRWG ST level of performance. The OSCAR Breakthrough is an intermediate level which, if achieved, would result in a significant improvement for the targeted application and can be regarded as an optimum from a cost–benefit perspective; it corresponds roughly to the SPRWG EXP level. Finally, the OSCAR Goal is an ideal requirement above which further improvements are not necessary; it corresponds to the SPRWG ME level.
COURL specifies requirements at two levels of performance, Threshold and Objective. SPRWG interprets these to correspond to the ST and ME levels of performance, respectively.
For comparison with these possible future levels of performance, SPRWG also estimated the capability of the objectives based on the POR2025. Capabilities of the current (ca. 2017) satellite systems are included in detailed “two pagers” that describe each objective in Groups A and B and are available in the full report (SPRWG 2018).
One of the ground rules of the study was that an objective not in the POR2025 was assigned an ST level of zero capability (none). Another assumption in the overall architecture planning process was that every architecture will provide all the objectives to at least the ST level.
The ST–ME range of performance establishes the “tradable range” in developing various future architecture alternatives. It is the performance level over which NOAA will trade alternatives. It is important that the lower end of the tradable range be affordable with considerable room to spare. The ST level represents the performance level at which value has effectively disappeared, and so is normally below the current performance level, at least for any measurement that is currently collected, since measurements we collect and use have obvious positive value. What we prioritize is not the absolute importance of an objective, it is the movement of the objective’s performance from the ST to the ME level. If the ST level represents mature and effective performance because the associated measurement is mature and fully exploited, then we expect little return from going much above that level. This is in contrast to areas where there is no capability or low maturity at the ST level and considerable room for enhancement. The concept of basing priorities on improvements of capability over the ST level rather than absolute priority of the objective was new to SPRWG members.
Finally, it was necessary to assign an effectiveness scale E to the EXP level of each objective. The effectiveness scale is a number between 0 and 100 that expresses how much of the total ST-to-ME value is captured when an objective reaches a given performance level. It is used by the ADT in scoring the various architecture alternatives. The value E for every objective is by definition 0 for the ST level and 100 for the ME level. The value associated with meeting the EXP level varies between 0 and 100 and was assigned by SPRWG. A value of 50 means that meeting the EXP level is 50% of the total value of meeting the ME level. A value of 70 means that 70% of the value of attaining the ME level is met by attaining the EXP level and only 30% more value is accrued by a further increase of performance to the ME level. The higher the value assigned to the EXP level, the less additional value there is in achieving the ME level. The EXP value score represents SPRWG’s judgment on how much of the total ST-to-ME value shift has been captured by the time the performance level reaches the level assessed as “community expectation.” In some cases this value may be well below 50% (when community expectations leave a lot of room for improvement), and sometimes it may be well above 50%. In general we find the EXP value scores to be above 50% for more mature observations and below 50% for less mature observations.
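The scoring implied by this scale can be sketched as a utility curve anchored at the ST (0), EXP (E), and ME (100) levels. The piecewise-linear interpolation below is our illustrative assumption, not the ADT's actual scoring rule; the numbers are the temperature-sounding accuracy example above, with a hypothetical E(EXP) of 70.

```python
def effectiveness(perf: float, st: float, exp: float, me: float,
                  e_exp: float, lower_is_better: bool = True) -> float:
    """Piecewise-linear effectiveness score on a 0-100 scale.

    By construction E(ST) = 0 and E(ME) = 100; E(EXP) = e_exp is the
    SPRWG-assigned value of meeting community expectations. Performance
    below ST scores 0; performance above ME is capped at 100. The
    linear interpolation between anchors is an assumption for
    illustration only.
    """
    if lower_is_better:  # e.g., accuracy in K: smaller error is better
        perf, st, exp, me = -perf, -st, -exp, -me
    if perf <= st:
        return 0.0
    if perf >= me:
        return 100.0
    if perf <= exp:  # between ST (score 0) and EXP (score e_exp)
        return e_exp * (perf - st) / (exp - st)
    # between EXP (score e_exp) and ME (score 100)
    return e_exp + (100.0 - e_exp) * (perf - exp) / (me - exp)

# Temperature-sounding accuracy: ST = 2.0 K, EXP = 1.0 K, ME = 0.5 K.
print(effectiveness(1.5, 2.0, 1.0, 0.5, e_exp=70.0))  # 35.0
print(effectiveness(1.0, 2.0, 1.0, 0.5, e_exp=70.0))  # 70.0
```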
Definition of the performance attributes.
The various performance attributes used to describe the objectives in Groups A and B are listed and defined briefly in the EVM (SPRWG 2018). Most are straightforward, but a few require explicit definitions.
- Ground-projected instantaneous field of view (GIFOV): GIFOV, which is applied to images, is a measure of the horizontal scale of the smallest feature on the ground at the subsatellite point that can be measured by the sensor. It is related to the instantaneous field of view (IFOV), the angular field of view of the sensor independent of height, by the small-angle relationship GIFOV = H × IFOV, where H is the height of the sensor above the ground (see the numerical sketch after these definitions). GIFOV is often called “horizontal resolution” (e.g., in COURL), and sometimes ground sampling distance (GSD), horizontal footprint, or pixel size.
- Horizontal resolution: SPRWG uses the common definition of horizontal resolution for numerical models, in which it is the spacing between model grid points, and for observations such as vertical soundings, in which it is the average spacing between observation points. Thus, a system with an average spacing between observations of 100 km is defined as having a horizontal resolution of 100 km.
- Accuracy: Closeness of an observation to the true value, that is, the root-mean-square (RMS) error. This includes both random and bias errors.
- Sampling frequency (equivalently sampling interval or update rate): Average time interval between consecutive measurements at the same point or area of the environment.
- Latency: Because SPRWG is representing user needs, we define latency as the time from the sensor completing the observation to the time the observation or product is available to the primary NOAA users, for example, NWS forecasters or the National Centers for Environmental Prediction (NCEP). Thus, it includes the time from the sensor observation to the time received by the ground receptor site plus the time to process the data. The processing time depends on the observation or product and can be a substantial fraction of the total latency.
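As a quick numerical check of the GIFOV relationship above, the sketch below converts an angular IFOV to a ground footprint at the subsatellite point; the 14-microradian/geostationary-altitude numbers are illustrative only.

```python
def gifov_km(ifov_urad: float, height_km: float) -> float:
    """Ground-projected IFOV at the subsatellite point.

    Small-angle relationship GIFOV = H * IFOV, with the IFOV in
    microradians and the sensor height in km.
    """
    return height_km * ifov_urad * 1e-6

# An IFOV of 14 microradians viewed from geostationary altitude
# (~35,786 km) projects to roughly 0.5 km on the ground.
print(round(gifov_km(14.0, 35786.0), 2))  # 0.5
```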
Priorities of objectives and swing weights.
The ST–ME swing defines the tradeable range for performance within the EVM. Within the overall NSOSA study there was likewise a tradeable range of future costs. The acceptable range of costs was discussed in the SPRWG study Terms of Reference (TOR; appendix A in SPRWG 2018). As a practical matter, future budgets for space system acquisition are unlikely to be vastly larger or smaller than current budgets unless major new factors come into play. A concern in all studies of this type is the possibility that the two tradeable ranges, one in value and one in cost, will have no technically feasible intersection (in terms of alternative system concepts). If the process is to lead to robust decision-making and accommodate strategic priorities, then the intersection space must be rich. Part of the role of the early cycles was to check and ensure that a wide range of system alternatives had simultaneously acceptable value and cost while not making untenable assumptions about future technology.
Assuming there are many alternatives within the tradeable range, prioritization of performance improvements above the zero-value threshold level (the ST level) is essential to establish the efficient frontier. SPRWG prioritized the objectives in Group A (weather and oceans) and Group B (space weather) according to its collective judgment, and in consultation with knowledgeable colleagues, on how improvements in the performance of objectives would lead to improvements in meeting NOAA’s mission. NOAA senior management prioritized the Group D (strategic) objectives and then interleaved the Group A, B, and D objectives according to its integrated perspective on NOAA mission and strategic goals.
Early in the process SPRWG decided to provide rank orders for increasing the performance of each objective from the ST to ME levels in Groups A and B separately. The user communities of Group A (weather and oceans) and Group B (space weather) are so different that SPRWG members felt they could not make decisions on the relative priorities for both groups combined. Furthermore, the SPRWG felt that making the priority ranking across these disparate fields was more appropriate for NOAA executive leadership. Thus, the NOAA/NESDIS leadership determined the integrated priorities among all three groups. One might expect the prioritization process to be difficult and contentious, especially given the broad NOAA mission and the large number of disparate observations required to support it. However, the process went smoothly, and in the end, there was widespread agreement among SPRWG members and the NOAA/NESDIS leadership.
It is important to reemphasize that the EVM approach demands that objectives be prioritized according to their potential value for improvement in capability over the ST level, not the objective itself. For example, the most important objective in absolute terms might have such a high performance level at the ST level that it is ranked relatively low in terms of improvement to the ME level compared to a less important objective with little or no capability at the ST level. As illustrated in Fig. 4, the objectives with a high absolute priority (very important to NOAA’s operational mission) and a low-level of capability (or no capability at all), rank highest in EVM priorities.
After the ST, EXP, and ME levels of performance and the rank order for each objective were determined, SPRWG then developed the swing weights associated with the two groups of objectives. The swing weights quantify the priority of increasing the performance of one objective from the ST to ME level versus the priority of increasing the performance of another objective from the ST to ME levels. The swing weights vary between 0 and 1 and the sum over all the objectives must equal 1.
For example, if objectives X and Y have swing weights of 0.04 and 0.01, respectively, improving objective X from the ST to ME level is judged to be 4 times more valuable than improving objective Y from the ST to ME level.
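In an additive MAUT value model of this kind, each objective's 0–100 effectiveness score is multiplied by its swing weight and summed to give the alternative's overall utility. The sketch below shows this aggregation using the swing weights of the hypothetical objectives X and Y above, plus a third objective carrying the remaining weight; the effectiveness scores are invented for illustration and the additive form is a standard MAUT assumption, not a statement of the ADT's exact implementation.

```python
def total_utility(weights, effectiveness_scores):
    """Additive MAUT aggregation: sum of swing weight times the
    objective's 0-100 effectiveness score. Weights must sum to 1,
    so the result is also on a 0-100 scale.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "swing weights must sum to 1"
    return sum(w * e for w, e in zip(weights, effectiveness_scores))

# Objectives X and Y from the text (swing weights 0.04 and 0.01),
# plus a third hypothetical objective carrying the rest of the weight.
weights = [0.04, 0.01, 0.95]
scores = [70.0, 100.0, 50.0]  # hypothetical effectiveness scores
print(round(total_utility(weights, scores), 2))  # 51.3
```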
Before ranking the list of objectives in order of priority for improvement and assigning swing weights, SPRWG had lengthy discussions and debates on the objectives and the process and how best to accommodate the uncertainties and judgments of its diverse group of subject matter experts. A small group of objectives emerged from these discussions as being of highest priority; another group as significantly lower in priority, but still important; and a third group of objectives in between. As these discussions proceeded, we developed a qualitative set of principles that we found useful in developing the final rankings for improvements from a threshold base level and the assignment of swing weights:
The difference between swing weights of adjacent priorities should be small because of significant uncertainty in priorities between neighboring priorities.
The decrease of weights with decreasing priorities should be smooth.
The lowest priority objectives are still important and their weights should not approach zero.
There is a group of highest priorities near the top and another group of lowest priorities near the bottom. The rate of decrease of swing weights should be relatively flat in these groups with steeper decrease in between, suggesting a hyperbolic tangent type of curve.
Swing weights of prioritized objectives.
The SPRWG considered the “balance beam” model of determining the swing weights of the objectives (see the “EVM Terminology and Concepts” paper in appendix C of SPRWG 2018), but found it cumbersome to apply systematically with 19 objectives. Thus, as an alternative, we adopted an empirical mathematical model to determine the weights and made spot checks with balance beam criteria. After discussion and experimentation with several models, we chose a hyperbolic tangent model to reflect the principle that there should be relatively small differences in weights between closely ranked objectives near the top and bottom of the prioritized list, but a significant difference between the weights of the highest and lowest ranked objectives. In the hyperbolic tangent model, the priorities among objectives in Groups A and B near the top (1–5) and bottom (16–19) of the rank order change more slowly than the priorities of objectives in the middle of the range (6–15).
The use of the balance beam and the hyperbolic tangent models was synergistic. There is no a priori reason to expect that the swing weights would follow a hyperbolic tangent model, or any other curve. The SPRWG used balance beam arguments to reveal the overall shape of the preference curve. This suggested a hyperbolic tangent type of relationship. Then, taking the mathematical curve, it was possible to test the implied balance beam relationships. That, in turn, allowed tuning of the curve parameters. Using these approaches jointly, it was possible to build a set of weights consistently reflecting consensus priority inputs.
The hyperbolic tangent model is admittedly simple and cannot account for large, abrupt shifts in priority (if they existed) between objectives ranked closely to each other. However, the model has the desirable property that the assumptions are clear, in contrast to a subjective approach in which many arbitrary decisions would have to be justified individually. It also has the advantage that the rate of change of priorities and the overall shape of the changes in priorities of the objectives can be easily and consistently varied. The ADT also carried out an extensive sensitivity analysis on the results, using the SPRWG principles for relative certainty and uncertainty in ranking, to test the robustness of the overall results. This process is not described here, as it was not part of the SPRWG process, but will be described in other publications (Wendoloski et al. 2018).
For objectives near the middle of the rank order, the swings of any two objectives from ST to ME are roughly equal in priority to the swing of the highest priority objective from ST to ME. The rank order and swing weights of the objectives in Groups A and B are summarized in Tables 2 and 3, respectively. The ratio of the swing weight of objective i to the swing weight of the highest-priority objective (objective 1) for Groups A and B is depicted in Fig. 5.
After adopting the model, we examined its results to test our assumptions and the “reasonableness” of the model. We concluded that the model produced swing weights that yielded reasonable priorities among the Group A and B objectives. Figure 5 is a graphical illustration of the mathematical model of the swing weights, and the reader can easily see how the curve meets all the qualitative principles agreed upon by the SPRWG.
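For readers who want to experiment with a weighting curve of this kind, the sketch below generates a normalized set of swing weights from a hyperbolic tangent shape. It is a plausible reconstruction satisfying the four qualitative principles above (small adjacent differences, smooth decrease, a nonzero floor, flat ends with the steepest decrease in the middle); the parameter names echo those quoted later for the combined list (N, p, eps, R, mid), but our mapping of those parameters onto a formula is an assumption, and the exact functional form used in the study is given in SPRWG (2018).

```python
import numpy as np

def swing_weights(n=44, p=1.2, eps=0.1, r=4.0, mid=13):
    """Illustrative tanh swing-weight model (weights sum to 1).

    Assumed parameter mapping: p and eps*n set the steepness and
    width of the transition, mid is the rank where weights fall
    fastest, and r is the ratio of highest to lowest weight.
    """
    ranks = np.arange(1, n + 1)
    # tanh shape: ~+1 for the top ranks, ~-1 for the bottom ranks,
    # with the steepest transition at rank `mid`.
    shape = -np.tanh(p * (ranks - mid) / (eps * n))
    # Rescale to [1/r, 1] so the lowest-ranked objective still
    # carries a meaningful (nonzero) weight.
    u = (shape - shape.min()) / (shape.max() - shape.min())
    raw = 1.0 / r + (1.0 - 1.0 / r) * u
    return raw / raw.sum()

w = swing_weights(n=19, mid=10)   # a 19-objective group, steepest mid-list
print(round(float(w[0] / w[-1]), 2))  # top-to-bottom weight ratio, 4.0
print(round(float(w.sum()), 6))       # 1.0
```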
The priorities and swing weights for the objectives in Group D (Strategic objectives) were determined by NOAA senior leadership.
FINAL EVM.
The EVM presents objectives in the following groups:
Group A: Weather and ocean and related objectives
Group B: Space weather objectives
Group C: Not addressed by SPRWG and so not in the EVM; treated separately by the ADT and NOAA leadership
Group D: Strategic objectives
There are 19 objectives each in Groups A and B, and six objectives in Group D, for a total of 44 objectives. The objectives in Groups A and B are associated with certain instruments or types of instruments that measure properties of the atmosphere, oceans, land, and cryosphere using passive or active remote sensing techniques. Some of the objectives (e.g., Non-RT Global Weather Imagery Visible and IR other than ocean color, objective 3 in Group A) support many different products used by NOAA line offices (e.g., cloud-top height, land surface temperature, ocean surface temperature, snow cover, and sea/lake ice concentration). The products listed in the EVM are examples only; we did not attempt to include an exhaustive list.
Because many of the objectives listed in the EVM and their attributes have complexities that are difficult to include in a single spreadsheet, SPRWG developed a short, approximately two-page, summary of each objective. These “two pagers,” presented in the full report, describe the objective; how it is used; current satellite systems that meet the objective; the Program of Record 2025 and current capability; ST, EXP, and ME levels; and sources of information that went into making these estimates. Characteristics of the objectives that are important, but too subtle or complex to capture in a single spreadsheet, are included. Finally, they summarize the rationale for the priorities of the objective.
The combined list of objectives, their priorities for improvement, and their swing weights (as determined by NOAA leadership) are listed in Table 4. The swing weights for the 44 objectives were discussed at great length and the result was agreement that the tanh model be used with the parameters N = 44, p = 1.2, eps = 0.1, R = 4, and mid = 13 (Fig. 6). Note that the priority for improvement from ST to ME level of the top 13 objectives approximately equals the priority for improvement from ST to ME of objectives 14–44.
Overall priorities of objectives (established by NOAA).
Finally, the EVM spreadsheet for cycle 2b (the final EVM) is included in the online supplement (https://doi.org/10.1175/BAMS-D-18-0180.2).
We realize that the objectives, their performance attributes, and the priorities presented in the EVM are to some extent subjective, since they are ultimately based on the judgment of a relatively small number of subject matter experts. However, the process considered the peer-reviewed scientific literature and planning documents as summarized above, as well as the input and review of many scientists, engineers, and policy-makers. Every observational objective and its attributes in the EVM were justified based on peer-reviewed literature as well as user input in the descriptive “two pagers” that are part of the full report. Every effort was made to make the complex process as science-based and transparent as possible. However, because of the subjective component of the process, the final quantitative “results,” such as performance attributes, rank orders, and swing weights, should be considered “soft” in that small differences (approximately 15%) in estimated values are considered acceptable. The priorities within Groups A and B should also be considered somewhat flexible in that the difference between close priorities (e.g., 9 and 10) should not be considered significant.
Ultimately, the question is whether or not uncertainties in priorities are great enough to significantly alter the overall results. This was a question for the ADT rather than the SPRWG. As noted above, the ADT did a sensitivity study, using the SPRWG principles for the swing weights, of how much the overall results of the NSOSA study would be affected by different priority selections within the principles given. The study showed that the overall results had little sensitivity to the modeled uncertainties, and so all of the major conclusions of the study were robust to modeled uncertainties.
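A toy version of such a sensitivity check can be expressed as a Monte Carlo perturbation of the swing weights. The sketch below jitters each weight by up to the roughly 15% “softness” suggested above and counts how often one alternative remains preferred over another; it is purely illustrative and far simpler than the ADT's actual analysis (Wendoloski et al. 2018).

```python
import random

def preference_stability(weights, scores_a, scores_b,
                         trials=1000, jitter=0.15):
    """Fraction of trials in which alternative A still beats B when
    each swing weight is independently perturbed by up to +/- jitter
    and the weights are renormalized to sum to 1.
    """
    wins = 0
    for _ in range(trials):
        w = [wi * random.uniform(1 - jitter, 1 + jitter) for wi in weights]
        total = sum(w)
        w = [wi / total for wi in w]  # renormalize
        ua = sum(wi * s for wi, s in zip(w, scores_a))
        ub = sum(wi * s for wi, s in zip(w, scores_b))
        wins += ua > ub
    return wins / trials

# Hypothetical effectiveness scores for two alternatives on three
# objectives; A's baseline utility (66.0) slightly exceeds B's (64.5).
print(preference_stability([0.5, 0.3, 0.2], [80, 60, 40], [70, 65, 50]))
```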
USE OF THE EVM IN DESIGNING AND EVALUATING THE COST EFFECTIVENESS OF DIFFERENT SPACE ARCHITECTURES.
The NSOSA process is still a work in progress, and a final plan, including prioritized missions, has not yet been developed. Furthermore, describing elements of the NSOSA process other than the SPRWG process (Fig. 1), or the architectures that have been analyzed and are being considered by NOAA leadership, is outside the scope of this paper. However, documents and reports already exist that show the role of the EVM in the design and evaluation process (Fig. 1) as well as provide examples of emerging high-value architectures. For example, NOAA (2018) presents examples of several architecture alternatives that used the SPRWG EVM. Section 3.3 of this document, “Prioritizing the Objectives’ Relative Performance,” describes how a given space architecture is scored using the EVM to measure the architecture’s ability to meet NOAA’s mission requirements. Section 3.4, “Building Options and Estimating the Costs,” describes how the costs of the various constellations are estimated. Chapter 4, “A Hundred Constellations from Which to Choose,” shows examples of the performance score of different constellations plotted against estimated cost on an efficient frontier plot. Finally, sections 4.5 and 4.6 discuss the properties of several types (called series) of architectures. The so-called 80-Series Hybrid Architecture is illustrated and consists of 1) mixed platforms in geostationary orbit, 2) moderate LEO disaggregation, 3) instrument technology insertion, 4) operationalizing space weather, and 5) commercial data and services outsourcing. These five aspects of the 80-Series Hybrid Architecture are then described.
SUMMARY OF THE PROCESS AND ASSESSMENT OF THE EVM.
We have summarized the activities of the Space Platform Requirements Working Group (SPRWG) from 2015 through 2017. The main accomplishment is the production of the EDR Value Model (EVM) to inform the NOAA Satellite Observing System Architecture (NSOSA) study. The EVM is a Multi-Attribute Utility Theory (MAUT)-based value model used as part of the NSOSA study to assess alternative environmental remote sensing satellite constellations and their associated architectures. The success of the model can be judged in two ways. First, it has proven effective in the task for which it was intended, providing value assessments in the study to add to the body of information that decision-makers may find useful to inform future architecture choice. Second, the model generally follows established MAUT principles for informing future decisions. Specifically:
The EVM is (largely) preferentially complete. This means that decision-makers systematically prefer alternatives with higher scores over lower ones, and rarely invoke decision factors other than those in the model. The only factors not included in the model are various unquantified risks (it is generally understood that attempting to quantify all risk types is unproductive) and some types of measurement continuity. Also, mappings between the EVM and other assessment sources should not show glaring gaps.
The EVM is economical in its choices. It contains no objectives with near-zero priorities and all of the objectives are clearly of importance to identified stakeholders. At the same time the total number of objectives is not overwhelming and it has proven possible to score a large number of alternatives (greater than 150) against the model.
The EVM is stakeholder complete (at least mostly). Stakeholders find their needs and requirements among the EVM objectives, and all objectives have identifiable stakeholders.
Preferential independence. Scores on EVM objectives do not depend on each other, and preferences for performance levels are not interdependent. Factors that would break down independence have been effectively dealt with through the setting of ST to ME levels.
Cost correlation. Moving from the ST to ME levels has clear cost implications. The largest cost contributors can be traced to EVM elements so the consequences of cost trades can be identified.
Trade space preservation. There are many alternatives that score above the ST level but have costs below likely budget floors. The space of value and cost feasible alternatives is rich and many trades can be (and were) examined in the NSOSA study.
Legacy independence. The EVM can be readily applied to alternatives that look entirely different than the legacy satellite constellation architecture. Where these “radical alternatives” are found to be non–cost effective, the EVM can be used to identify what drives these judgments, and upon what assumptions the conclusions depend (Maier 2018).
Finally, while other processes have been used to develop lists of observational requirements, which are described in many WMO reports (e.g., OSCAR) as well as NOAA’s COURL, the MAUT model and process is the one chosen by NOAA to inform its development of potential future architectures, and it is important for transparency to document this process. Some may disagree with certain aspects of the requirements or priorities for improvement, but that would be the case for any study. It is inherent in a multi-stakeholder decision situation with limited budgets that not all worthwhile performance desires will be satisfied. However, we are confident that the overall requirements and priorities for improvement are consistent with the many studies (e.g., WMO, ESA) referenced in the paper and appendix F: Bibliography and References.
ACKNOWLEDGMENTS
Monica Coakley (MIT Lincoln Laboratory) provided significant input and assistance to the development of the EVM. Martin Yapur was very cooperative in providing current results from NOAA TPIO. We also thank NESDIS management, including Steve Volz, Tom Burns, Karen St. Germain, and Frank Gallagher, for their leadership and support. This study was sponsored by NOAA/NESDIS through the Cooperative Institute for Research in Environmental Sciences (CIRES) at the University of Colorado. We thank Waleed Abdalati and Ted DeMaria for their support. Johannes Loschnigg and Jeff Reaves provided valuable logistical and editorial support for the SPRWG. An extensive bibliography and additional references, which were used in the SPRWG study, are presented in the full report (SPRWG 2018).
REFERENCES
Atlas, R., and Coauthors, 2015: Observing system simulation experiments (OSSEs) to evaluate the potential impact of an optical autocovariance wind lidar (OAWL) on numerical weather prediction. J. Atmos. Oceanic Technol., 32, 1593–1613, https://doi.org/10.1175/JTECH-D-15-0038.1.
Di Pietro, D., 2015: A systems engineering approach to architecture development. 25th Annual Int. Symp. of the International Council on Systems Engineering, Seattle, WA, International Council on Systems Engineering, 15 pp., https://ntrs.nasa.gov/search.jsp?R=20150004445.
ESA, 2014: The Earth Observation Handbook 2015. CEOS, 47 pp., http://database.eohandbook.com.
Gasiewski, A. J., B. T. Sanders, and D. Gallaher, 2013: CubeSat based sensors for global weather forecasting. 2013 US National Committee of URSI National Radio Science Meeting, Boulder, CO, IEEE, https://doi.org/10.1109/USNC-URSI-NRSM.2013.6525008.
Hammond, J. S., R. L. Keeney, and H. Raiffa, 2002: Smart Choices: A Practical Guide to Making Better Decisions. Crown Business, 256 pp.
Hoffman, R. N., and R. Atlas, 2016: Future observing system simulation experiments. Bull. Amer. Meteor. Soc., 97, 1601–1616, https://doi.org/10.1175/BAMS-D-15-00200.1.
Keeney, R. L., 1982: Decision analysis: An overview. Oper. Res., 30, 803–838, https://doi.org/10.1287/opre.30.5.803.
Keeney, R. L., and H. Raiffa, 1993: Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press, 592 pp.
Maier, M. W., 2018: Is there a case for a radical change to weather satellite constellations? 2018 IEEE Aerospace Conference, Big Sky, MT, IEEE, https://doi.org/10.1109/AERO.2018.8396559.
Maschhoff, K. R., J. J. Polizotti, H. H. Aumann, and J. Susskind, 2016: MISTiC Winds: A micro-satellite constellation approach to high resolution observations of the atmosphere using infrared sounding and 3D winds measurements. Proc. SPIE, 9978, 997804, https://doi.org/10.1117/12.2239272.
MITRE, 2012: Technology, Planning and Integration from Observation (TPIO) Trade-Space Analysis. MITRE Report Project No. 1411 NOA8-AA, 53 pp., www.nesdis.noaa.gov/sites/default/files/asset/document/noaa_trade_space_analysis_guide_30march2012b.pdf.
National Academies of Sciences, Engineering, and Medicine, 2018: Thriving on Our Changing Planet: A Decadal Strategy for Earth Observation from Space. National Academies Press, 716 pp., https://doi.org/10.17226/24938.
NOAA, 2018: Notice of Availability of a NOAA Satellite Observing System Architecture Study Draft Report and Public Meeting. Fed. Regist., 83 (105), 24975, www.federalregister.gov/documents/2018/05/31/2018-11599/notice-of-availability-of-a-noaa-satellite-observing-system-architecture-study-draft-report-and.
OSTP, 2014: National Plan for Civil Earth Observations. National Science and Technology Council, Executive Office of the President, 71 pp., https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/NSTC/2014_national_plan_for_civil_earth_observations.pdf.
Román, M. O., and Coauthors, 2018: NASA’s Black Marble nighttime lights product suite. Remote Sens. Environ., 210, 113–143, https://doi.org/10.1016/j.rse.2018.03.017.
Simmons, A., and Coauthors, 2016: Observation and integrated Earth-system science: A roadmap for 2016–2025. Adv. Space Res., 57, 2037–2103, https://doi.org/10.1016/j.asr.2016.03.008.
SPRWG, 2018: NOAA Space Platform Requirements Working Group (SPRWG) Final (Cycle 2b) Report. NOAA/NESDIS, 177 pp., www.nesdis.noaa.gov/sites/default/files/SPRWG_Final_Report_20180325_Posted.pdf.
St. Germain, K., Ed., 2018: Overview of the NOAA Satellite Observing System Architecture (NSOSA). NOAA, 14 pp., www.space.commerce.gov/wp-content/uploads/2018-06-NSOSA.pdf.
St. Germain, K., F. W. Gallagher III, M. W. Maier, M. Coakley, F. Adams, C. Zuffada, and J. R. Piepmeier, 2018: The NOAA Satellite Observing System Architecture (NSOSA) study results. 14th Annual Symp. on New Generation Operational Environmental Satellite Systems, Austin, TX, Amer. Meteor. Soc., 7.6, https://ams.confex.com/ams/98Annual/webprogram/Paper333325.html.
Volz, S., M. W. Maier, and D. Di Pietro, 2016: The NOAA Satellite Observing System Architecture Study. 2016 IEEE Int. Geoscience and Remote Sensing Symp., Beijing, China, IEEE, https://doi.org/10.1109/IGARSS.2016.7730439.
Wendoloski, E. B., M. W. Maier, M. Coakley, and T. J. Hall, 2018: Value variance in constellation architecture studies. 14th Symp. on New Generation Operational Environmental Satellite Systems, Austin, TX, Amer. Meteor. Soc., 707, https://ams.confex.com/ams/98Annual/webprogram/Paper325091.html.
WMO, 2013: Observing Systems Capability Analysis and Review (OSCAR) tool. Version 2015-12-12, www.wmo-sat.info/oscar/.
WMO, 2017: Vision of the WIGOS Space-based Component Systems in 2040. 16 pp., www.wmo.int/pages/prog/sat/meetings/documents/IPET-SUP-3_INF_06-01_WIGOS-Vision-Space2040-Draft1-1.pdf.
Yeakel, K., and M. W. Maier, 2018: Cost variance analysis in constellation architecture studies. 14th Symp. on New Generation Operational Environmental Satellite Systems, Austin, TX, Amer. Meteor. Soc., 709, https://ams.confex.com/ams/98Annual/webprogram/Paper328676.html.