Multidecadal climate data records require independent assessment and the application of data management and software engineering maturity practices.
The National Oceanic and Atmospheric Administration (NOAA) initially worked with the National Research Council to define climate data records (CDRs) and associated elements needed for their successful generation. The National Research Council (2004) defined a CDR as a time series of measurements of sufficient length, consistency, and continuity to determine climate variability and change. They further segmented satellite-based CDRs into fundamental CDRs (FCDRs), which are calibrated and quality-controlled sensor data that have been improved over time, and thematic CDRs (TCDRs), which are geophysical variables derived from the FCDRs, such as sea surface temperature and cloud fraction. The roles and responsibilities for climate sensors and processing continued to evolve [for details, see National Research Council (2005, 2007, 2008)] and funding for a CDR program (CDRP) began in late 2008 (fiscal year 2009).
ASSESSING THE READINESS OF A CDR FOR TRANSITION FROM RESEARCH TO OPERATIONS.
The evolution of a CDR is an iterative cycle of both science and systems engineering, with each cycle advancing the state of knowledge as well as process maturity. Maturity models are used in a variety of industries to capture best practices and establish benchmarks that identify specific levels associated with those practices. For example, the National Aeronautics and Space Administration (NASA) uses an assessment of technical readiness for its flight missions and the software industry uses a capability maturity model to improve software reliability and reduce cost.
With the evolution of a CDR, each release of a product results in advances to the science that are usually described in a peer-reviewed journal article. Over an extended time, however, this means that no single peer-reviewed article contains all the steps necessary to ensure the reproducibility of the scientific results. As a consequence, it is becoming increasingly difficult to understand how a particular product is generated or should be applied. Recognizing these problems, in 2010 the directors of the Global Climate Observing System and World Climate Research Programme jointly authored a “dear colleague” letter urging agencies to “ensure transparency, traceability, and good scientific judgment in the generation of data records that underpin climate research and climate change monitoring.” They further noted that, “it is very confusing and frustrating for the non-experts as to which one of these (many) products they can use in their research and analysis, and the necessary documents to describe their attributes…[d]o not exist” (C. Richter and G. Asrar 2010, personal communication).
In addition to openness and transparency, another challenge is that climate data records need to be produced consistently and continuously over many decades. This need for continuity is at odds with the typical research agency approach to funding peer-reviewed proposals for only 3 years (Hollmann et al. 2013). The CDRP was designed to identify the more mature products begun under research funding and provide a means for improved documentation and long-term sustainment.
Key factors for determining mature CDRs are their description in multiple peer-reviewed journal articles, widespread use in the community, and their use in assessments. For example, observations of tropospheric and stratospheric temperatures from satellites have been the subject of numerous papers and the evolution of the methodology was the subject of a major study by the U.S. Climate Change Science Program (Karl et al. 2006). Other assessments have been conducted under the auspices of the World Climate Research Programme (www.sparc-climate.org/publications/sparc-reports/ and www.gewex.org/activities/assessments/) and the NASA Earth Observing System (Wielicki et al. 1995) and from these the CDRP gleaned best practices that are summarized in the evolution of a CDR.
Figure 1 (top) illustrates the evolution of a CDR and attempts to capture the iterative cycle of best practices that has emerged over the past 20–30 years. Prior to launch, an initial retrieval algorithm is developed based on simulations of the expected response of the instrument. Careful characterization of the actual instrument prior to launch is essential for creating accurate and stable observations. Postlaunch, the next step is to characterize actual instrument performance in orbit. This usually involves a dedicated field or measurement campaign to ensure proper calibration of the instrument and validation of the algorithm. Based on these new data, the retrieval algorithm is usually updated, followed by a reprocessing of the initial data. At this point the actual performance of the instrument and algorithm relative to the requirements is assessed. If the instrument and algorithm are judged as meeting the requirements, then the algorithm is approved for routine use. A second iterative cycle allows for further validation studies; improved understanding of instrument and algorithm performance across the annual cycle and around the globe; and additional documentation of all aspects of instrument and algorithm performance.
Fig. 1. Iterative cycle of maturity for (top) a CDR and (bottom) the six levels of maturity.
One of the early examples of this cycle is given by McClain et al. (1985). They describe how initial coefficients for a multichannel SST algorithm were first generated using a forward radiative transfer model and a set of global atmospheric profiles and how, once in orbit, the coefficients were continuously updated and recomputed over a wide set of conditions. This basic approach continued to evolve over the next several decades, culminating in the Pathfinder SST program (Casey et al. 2010); Pathfinder version 5.2 has been transitioned into the CDRP.
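To make the coefficient-update step concrete, the following is a minimal sketch of fitting and applying such coefficients by least squares, assuming a common split-window form and hypothetical matchup arrays of collocated brightness temperatures and in situ SSTs; it is illustrative and not the operational McClain et al. (1985) formulation.

```python
import numpy as np

# Minimal sketch of multichannel SST coefficient estimation, assuming the
# split-window form SST = a0 + a1*T11 + a2*(T11 - T12) + a3*(sec(theta) - 1).
# The matchup arrays are hypothetical stand-ins for collocated satellite
# brightness temperatures (K) and in situ (e.g., buoy) SSTs.

def fit_mcsst(t11, t12, zenith_deg, sst_insitu):
    """Least-squares fit of split-window coefficients to matchups."""
    sec_term = 1.0 / np.cos(np.radians(zenith_deg)) - 1.0
    design = np.column_stack([np.ones_like(t11), t11, t11 - t12, sec_term])
    coeffs, *_ = np.linalg.lstsq(design, sst_insitu, rcond=None)
    return coeffs

def apply_mcsst(coeffs, t11, t12, zenith_deg):
    """Retrieve SST from brightness temperatures using fitted coefficients."""
    sec_term = 1.0 / np.cos(np.radians(zenith_deg)) - 1.0
    return (coeffs[0] + coeffs[1] * t11
            + coeffs[2] * (t11 - t12) + coeffs[3] * sec_term)
```

In operations such a fit would be recomputed as new matchups accumulate, which is the continuous-update cycle the text describes.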
The evolution of the High Resolution Infrared Radiation Sounder (HIRS) channel 12 brightness temperature, a measure of upper-tropospheric humidity, follows this iterative pathway, but it began with a question about the water vapor feedback and global warming. Lindzen (1990) noted that the water vapor feedback greatly amplified anthropogenic warming in climate models, that observations of upper-tropospheric water vapor were sparse, and that models had difficulty reproducing observed values. This paper generated considerable controversy and led to major efforts to examine existing satellite observations of upper-tropospheric water vapor (Wu et al. 1993; Bates et al. 1996). Further improvements in water vapor radiative transfer and instrumentation were also undertaken (Kley et al. 2000). This iterative process has continued; the FCDR now extends over three decades (Shi and Bates 2011) and has been transitioned into the CDRP.
Attempts to transfer research results to operations have been fraught with challenges so numerous that a National Research Council report on the issue (National Research Council 2000) was subtitled “Crossing the Valley of Death.” The CDRP was aware of these challenges and sought the input of the National Academies, which identified key elements of successful climate data record generation programs (National Research Council 2004). These key elements help address questions such as “How is the CDR performing?” and “What needs to be done next?” The CDRP identified a set of metrics that would capture the suite of National Research Council key elements defining the maturity of CDRs.
As described in Bates and Privette (2012), these activities include the areas of software readiness, metadata, documentation, product validation, public access, and utility. Six steps of increasing maturity (Fig. 1, bottom) were identified for each of these thematic areas: the first two steps belong to research activities, the next two move toward the transition to initial operations, and the two most mature steps characterize fully operational information products. The CDRP has applied this maturity assessment to 30 CDRs, and a similar assessment has been applied to an additional 25 European CDRs (J. Schulz 2015, personal communication). We found that products with a longer history did in fact rate higher in maturity. Researchers also tended to spend less time on software, metadata, documentation, and public access than on product validation and utility. Thus, the CDRP staff has taken a lead role in working with the research community to improve these aspects of the CDRs.
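As a schematic illustration of how such an assessment can be encoded, the sketch below scores the six thematic areas on the six-level scale and gates overall readiness on the weakest area (the transition assessment described below requires level-3 maturity in all aspects); the example scores and helper names are hypothetical.

```python
# Schematic scoring of the CDR maturity matrix: six thematic areas, each on
# a six-level scale (1-2 research, 3-4 initial operations, 5-6 full
# operations). Example scores and helper names are hypothetical.

THEMATIC_AREAS = ("software readiness", "metadata", "documentation",
                  "product validation", "public access", "utility")

def readiness(scores):
    """Classify a CDR from per-area maturity scores (each 1-6); overall
    readiness is limited by the weakest thematic area."""
    if set(scores) != set(THEMATIC_AREAS):
        raise ValueError("scores must cover all six thematic areas")
    weakest = min(scores.values())
    if weakest <= 2:
        return "research"
    if weakest <= 4:
        return "initial operational capability (IOC)"
    return "full operational capability (FOC)"

example = {"software readiness": 3, "metadata": 4, "documentation": 3,
           "product validation": 5, "public access": 4, "utility": 5}
print(readiness(example))  # -> initial operational capability (IOC)
```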
This maturity model serves as the basis for identification of research CDRs that were sufficiently developed to be candidates for transition to operations. The choice of CDRs to initially transition was further focused on those that drew their heritage from the polar-orbiting NOAA and Defense Meteorological Satellite Program (DMSP) operational observations and could be extended to the future NOAA polar orbiters. Of the 30 CDRs transitioned to date (Table 1), 25 have used sensors on the NOAA and DMSP operational satellites. Products include those identified as critical to monitoring, such as ozone and solar irradiance, and understanding and modeling, such as clouds and heat fluxes. For a few critical variables, such as mean layer temperature and sea surface temperature, there are several different products reflecting slightly different approaches.
Table 1. List of NOAA climate data records and principal investigators (PIs) as of 14 Oct 2015. Program details and fact sheets are available online (www.ncdc.noaa.gov/cdr).
RESEARCH TO OPERATIONS FRAMEWORK FOR CDRS.
Transitioning CDRs to initial operational capability.
The CDR program has adopted a two-phase transition approach to providing full operational capability, an approach commonly used in U.S. Department of Defense programs and elsewhere. The first phase, or initial operational capability (IOC), is achieved when a CDR meets minimal requirements for product generation, description, archiving, and stewardship; this corresponds to maturity levels 3 and 4. The second phase, or full operational capability (FOC), is achieved when a CDR can be independently generated in operations without reliance on the original investigator and meets all maturity requirements for description, archiving, and stewardship (maturity levels 5 and 6).
The IOC is characterized by the application of a quantitative maturity matrix, documentation of algorithm development, archiving and public release of source code and data, and provisions for feedback from the scientific community. The FOC is characterized by the evolution of the CDR (including the complete record and supporting data, documentation, source code, and ongoing stewardship activities) to an easily maintainable state within NOAA operations. Further, NOAA is fully capable of the sustained forward extension of the data record. This sustainment includes ensuring ongoing CDR quality assessment and validation, exercising configuration and version control, and ensuring the timely release of incremental extensions to the time series. Note that the CDRP anticipates that significant algorithm upgrades will occasionally occur, fostered through external research programs. The CDRP provides an avenue to replace existing algorithms and datasets with improved versions once they are successfully demonstrated, validated, and available.
The process for achieving IOC status is outlined in Fig. 2. There are six key steps (assessment, submission, transfer, validation, archival, and access), and they focus on three items: code, documents, and data. The process is a collaborative effort between the principal investigator and multiple branches of NOAA’s National Centers for Environmental Information (NCEI). (The CDR program was established at the National Climatic Data Center, now part of NCEI.) Candidate CDRs first undergo a scientific assessment to determine their suitability for transition to operations. This includes a peer-reviewed assessment against the maturity matrix to ensure the product is at level-3 maturity in all aspects.
Fig. 2. CDR program research-to-operations process diagram.
After the assessment approves a CDR algorithm and product for transition to IOC, marked by the key decision point (KDP) triangle in Fig. 2, an integrated product team (CDR IPT) is established that includes experts in science, software development, and information preservation. The team then obtains a copy of the software code, works with the principal investigator (PI) to create the needed documentation, and creates a copy of the product dataset and metadata in the CDRP format. The package then undergoes a submission process, highlighted by the completion of a submission agreement to place it into the NOAA archive. This process includes assessing the research algorithm’s conformance with CDRP security and coding standards. The code, documentation, and datasets are placed under configuration and version control. The transfer step is the critical point where the PI and the IPT fully collaborate and often have to iterate until the code, documents, and data are all fully compliant with the CDRP IOC standards. Validation and archival steps follow, and then the published CDR code, documentation, and data are made available to the public at the CDRP website (www.ncdc.noaa.gov/cdr/index.html). Before this public release, an operational readiness review is conducted in which the code, documents, and data are given one final review; upon successful completion the product is declared to be at IOC.
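The gating logic of this pipeline can be sketched as a simple ordered checklist; the step and item names follow the process above, while the tracking structure itself is an illustrative assumption.

```python
# Sketch of the six-step IOC transition as an ordered, gated checklist.
# Step and item names follow the CDRP process; the tracking structure
# is an illustrative assumption.

IOC_STEPS = ("assessment", "submission", "transfer",
             "validation", "archival", "access")
ITEMS = ("code", "documents", "data")

def next_step(completed):
    """Return the first IOC step not yet completed, preserving order."""
    for step in IOC_STEPS:
        if step not in completed:
            return step
    return None  # all steps done: ready for the operational readiness review

def transfer_complete(compliant):
    """The transfer step iterates with the PI until code, documents,
    and data all meet CDRP IOC standards."""
    return all(compliant.get(item, False) for item in ITEMS)

# Example: documentation is still noncompliant, so transfer must iterate.
assert not transfer_complete({"code": True, "documents": False, "data": True})
```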
Since the IOC process involves many different parts of NCEI interacting with a principal investigator, coordination of all personnel involved is important. Roles and responsibilities on the integrated product team include a transition project manager, who provides overall process, schedule, and reporting coordination; an operations and maintenance project manager, who maintains the dataset, provides ongoing quality assurance, and manages change requests; the principal investigator, who is responsible for updates to the source code, documentation, and dataset; a subject matter expert, an NCEI scientist familiar with the scientific aspects of the specific CDR being transitioned who can independently ensure its scientific quality; and representatives of the archive, operations, access, and information technology branches at NCEI (Fig. 2). This team approach, in which the principal investigator is supported by experts in data management, software engineering, and science, has proven to be a key to successful transitions.
As of October 2015, the CDRP has transitioned 30 CDRs from research to initial operations and 1 to full operations. The average time from a principal investigator’s initial grant award to IOC was approximately 60 months, and about 84 months to full operations. The transition from initial to full operations has been difficult and variable, as detailed below. The costs involved in transition have been tracked and used to formulate a parametric cost model. This empirical model supports an iterative annual cycle of prioritization and management for the CDR program: candidate CDRs undergo scientific prioritization and maturity assessment, which can then be used to forecast the cost and schedule for the upcoming year(s) of transition and operations.
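The form of such a model might resemble the linear sketch below; the predictors and coefficients are hypothetical placeholders for illustration, not the CDRP’s fitted values.

```python
# Purely illustrative parametric cost model for a CDR transition. The CDRP
# fits its model to tracked actual costs; the predictors and coefficients
# here are hypothetical placeholders, not CDRP values.

def transition_cost(maturity_gap, code_kloc, n_input_datasets,
                    base=1.0, per_level=0.5, per_kloc=0.02, per_dataset=0.1):
    """Estimate transition cost (arbitrary units) from simple size and
    maturity predictors via a linear parametric form."""
    return (base
            + per_level * maturity_gap         # maturity levels still to gain
            + per_kloc * code_kloc             # size of the code base (kLOC)
            + per_dataset * n_input_datasets)  # data-integration complexity

# Example: a product two maturity levels short of its target, with
# 40 kLOC of code and three input datasets.
print(transition_cost(maturity_gap=2, code_kloc=40, n_input_datasets=3))
```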
A primary document required for IOC is the climate algorithm theoretical basis document (CATBD). This is because, as noted above, most CDRs evolve over time and no single peer-reviewed journal article can be referenced to provide a full description of the processing steps used. This document provides the scientific basis of remote sensing retrieval algorithm(s) by detailing the physical theory, mathematical procedures, and assumptions. In particular, it details the following:
observing system overview,
algorithm description,
test datasets and outputs,
practical considerations, and
assumptions.
This document is the link between the observing system, the physics of the retrieval of the CDR, and pragmatic considerations required to produce the CDR.
Routine production during IOC means that additional uses of a CDR can be supported. Figure 3 illustrates the data flow for a variety of climate products. Subsets of both fundamental CDRs and thematic CDRs are identified as essential climate variables, as described in Bojinski et al. (2014). Additional user community needs have been identified, including near-real-time production, an interim CDR (generated within several days of observation using official CDR algorithms and processes projected onto current data sources), and climate information records that distill CDR information down to a specific, easy-to-use index. For example, the CDRP and the Cooperative Institute for Climate and Satellites–North Carolina (CICS-NC) have developed a daily outgoing longwave radiation interim CDR used by the energy industry, specifically natural gas utility companies, to estimate temperatures in the eastern United States in support of 2-week Madden–Julian oscillation forecasts. These efforts leverage NOAA’s prior and current investments in CDRs and demonstrate the value of incorporating long-term historical weather and climate variability in private–public sector decision-making.
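As a schematic of what distilling a CDR down to an index involves, the sketch below reduces a gridded daily OLR field to a single area-weighted regional anomaly series; the file name, variable name, and region bounds are hypothetical, and the operational interim CDR product differs.

```python
import numpy as np
import xarray as xr

# Hedged sketch: reduce a daily gridded OLR CDR to a regional index.
# File, variable, and region bounds are hypothetical placeholders.
ds = xr.open_dataset("olr_daily_interim_cdr.nc")
olr = ds["olr"]  # daily mean OLR (W m-2), dims (time, lat, lon)

# Subset an eastern U.S. box (assumes ascending lat and 0-360 lon).
box = olr.sel(lat=slice(25, 50), lon=slice(265, 295))

# Area-weighted spatial mean (weights proportional to cos(latitude)).
weights = np.cos(np.deg2rad(box["lat"]))
index = box.weighted(weights).mean(dim=("lat", "lon"))

# Distill further to an anomaly against a day-of-year climatology.
climatology = index.groupby("time.dayofyear").mean("time")
anomaly = index.groupby("time.dayofyear") - climatology
```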
Fig. 3. CDR data flow pathways to products and services.
NOAA’s CDR Program also focuses on CDRs that have wide application to industry and the public:
water, drought, and floods (AghaKouchak and Nakhjiri 2012; Adler et al. 2003; Ashouri et al. 2015);
energy and renewable energy (Stackhouse et al. 2011; Zhang et al. 2006; Schreck et al. 2013); and
extreme weather, hurricanes, and coastal hazards (Curtis et al. 2007; Hennon et al. 2015; Rozoff et al. 2015).
Transitioning CDRs to full operational capabilities.
The initial research-to-operations processes for achieving IOC are an interim step toward FOC. At present, the definitions, milestones, and characteristics of FOC have not been fully developed. Although the CDR maturity matrix provides useful metrics, it allows neither an in-depth analysis of the computer software nor an assessment of the complexity of transitioning that software to operations. To determine the level of effort required to transition an existing CDR to full operations, the CDRP performed a case study, following a single CDR through the transition to full operations so that the techniques learned could be applied to other cases.
The optimal interpolation sea surface temperature (OISST) analysis originated by Reynolds et al. (2007) has been improved and revised numerous times over the years. The OISST is widely used and has been cited over 600 times. Reynolds liked to joke that his career in SST products was launched in 1982 when aerosols from the El Chichón volcanic eruption interfered with the new satellite multichannel SST retrievals. He told his supervisor he could develop a fix by using in situ data to correct the aerosol bias in the satellite data and would be done in 6 months. Thirty years later, it was time for Reynolds to retire; the effort to document and optimize his software, developed over those many years, illustrates one of the greatest challenges to sustaining long-term production of CDRs at FOC.
The CDRP identified the OISST CDR as the candidate to undergo software analysis and transition to full operations. As summarized in Table 1 of Banzon et al. (2014), OISST has evolved through several versions, changing from a legacy lower time–space resolution product to a higher time–space resolution product, and blending several different datasets. As a result, the OISST software was a mixture of several generations of code. This is typical of many CDR efforts, since the emphasis is on the science and not the technical implementation. However, it poses a problem for transitioning the software to and maintaining it in full operations. Although the OISST product has been distributed to many users, the transition to full operations ensures reliable and robust processing as well as a cost-effective and sustainable future.
The FOC software technical assessment of OISST included 1) a cost–benefit analysis of the transition, 2) evaluation of the product and code-base description, 3) a technical code-base evaluation, and 4) recommendations for transitioning. The cost–benefit analysis found that the product was widely used and widely cited in the peer-reviewed literature. Evaluation of the code base revealed that the product was well described in several peer-reviewed papers; however, the actual code contained several legacy routines, some of which were no longer used. The most cost-effective approach to improving the code was refactoring: a disciplined approach to improving the internal design of code without changing its functionality. Because of the wide use of the product, the decision was made to proceed with the refactoring.
The lessons learned through the refactoring of the OISST code are likely applicable to many CDRs throughout the community. First, the involvement of the principal investigator is critical: the investigator understands and will likely use the rejuvenated code, and their participation ensures that the modifications and comments are correct. Second, code metrics may indicate better code, but expert judgment is needed when interpreting them. For example, code complexity increased in some modules where error checking was added, but error checking was an important addition, so the increase in complexity was acceptable in those cases. Significant improvements in readability and maintenance were made, including a 58% reduction in cyclomatic complexity (Watson and McCabe 1996) and reductions in the number of lines of code (30%) and scripts (51%). The OISST software has now been transitioned to full operations and a new principal investigator has assumed responsibility for the project.
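To illustrate the kind of behavior-preserving change involved, the sketch below shows a hypothetical quality-control check before and after refactoring with guard clauses; it is not the OISST code, and the thresholds are invented. The two functions return identical results for all inputs, but the flattened version has fewer nested decision paths and therefore lower McCabe cyclomatic complexity.

```python
# Illustrative refactoring sketch (not the OISST code): the same
# quality-control check before and after, with identical behavior but
# lower cyclomatic complexity via guard clauses. Thresholds are invented.

# Before: nested conditionals accumulate decision paths.
def qc_flag_before(sst, ice_frac, n_obs):
    if n_obs > 0:
        if ice_frac < 0.5:
            if -2.0 <= sst <= 40.0:
                return "good"
            else:
                return "out_of_range"
        else:
            return "ice_contaminated"
    else:
        return "no_data"

# After: early returns flatten the structure without changing results.
def qc_flag_after(sst, ice_frac, n_obs):
    if n_obs <= 0:
        return "no_data"
    if ice_frac >= 0.5:
        return "ice_contaminated"
    if not -2.0 <= sst <= 40.0:
        return "out_of_range"
    return "good"
```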
DISCUSSION.
NOAA established the CDRP in FY2009 to ensure operational production of high-quality, multidecadal time series describing the global atmosphere, oceans, and land surface. NOAA’s definition of operational means sustained, systematic, reliable, and robust mission activities with an institutional commitment to deliver appropriate and cost-effective products and services. The CDR program has been implemented by establishing a maturity model for rigorously assessing the level of process maturity in data management, software management, and application of the climate records in research and applications. Like other methods for quantifying software and data management maturity, it serves as a measurement instrument to help evaluate how scientists are doing in these process areas and to identify what should be done next to achieve operational capabilities. The CDR maturity model is one way of achieving best practices for the production, preservation, and use of CDRs.
Since 2008, 30 CDRs have been transitioned from research to initial operational capability. The formation of an integrated product team that partners with the principal investigator and provides expertise and resources for archiving, documentation, and software is important for increasing and maintaining the maturity of a CDR, as well as its openness and transparency. The transition of a CDR from research to operations also facilitates additional use-inspired applications, such as near-real-time monitoring and tailored use in specific sectors. The production of CDRs requires collaboration between experts in the climate community and experts in data management and software engineering. It is also informed by scientific application and associated user feedback on the accessibility and usability of the produced CDRs. Long-term production of CDRs is an essential first step toward providing the observational record needed to inform decision-making. Further work is needed, and is ongoing, to engage with end users to ensure these records meet their needs.
A major challenge for the transition of CDRs to full operational capability has been the design, architecture, and portability of the software. The CDR program has begun to address this challenge rigorously, piloting the refactoring of the OISST code and scripts. This exercise significantly improved the readability and maintainability of the code and ensures that the OISST CDR can continue to be accurately generated beyond the career of a single scientist.
Although all of the software and input–output data for a CDR can be shared openly, independently running such code and reproducing the results remains complicated and challenging. If decisions are to be based upon CDRs, however, then there should be required levels of maturity for software and data management that are independently certified, thus establishing the authoritative and dependable nature of the CDRs.
REFERENCES
Adler, R. F., and Coauthors, 2003: The version-2 Global Precipitation Climatology Project (GPCP) monthly precipitation analysis (1979–present). J. Hydrometeor., 4, 1147–1167, doi:10.1175/1525-7541(2003)004<1147:TVGPCP>2.0.CO;2.
AghaKouchak, A., and N. Nakhjiri, 2012: A near real-time satellite-based global drought climate data record. Environ. Res. Lett., 7, 044037, doi:10.1088/1748-9326/7/4/044037.
Ashouri, H., K.-L. Hsu, S. Sorooshian, D. K. Braithwaite, K. R. Knapp, L. D. Cecil, B. R. Nelson, and O. P. Prat, 2015: PERSIANN-CDR: Daily precipitation climate data record from multisatellite observations for hydrological and climate studies. Bull. Amer. Meteor. Soc., 96, 69–83, doi:10.1175/BAMS-D-13-00068.1.
Banzon, V. F., R. W. Reynolds, D. Stokes, and Y. Xue, 2014: A ¼°-spatial-resolution daily sea surface temperature climatology based on a blended satellite and in situ analysis. J. Climate, 27, 8221–8228, doi:10.1175/JCLI-D-14-00293.1.
Bates, J. J., and J. L. Privette, 2012: A maturity model for assessing the completeness of climate data records. Eos, Trans. Amer. Geophys. Union, 93, 441, doi:10.1029/2012EO440006.
Bates, J. J., X. Wu, and D. L. Jackson, 1996: Interannual variability of upper-tropospheric water vapor band brightness temperature. J. Climate, 9, 427–438, doi:10.1175/1520-0442(1996)009<0427:IVOUTW>2.0.CO;2.
Bojinski, S., M. Verstraete, T. C. Peterson, C. Richter, A. Simmons, and M. Zemp, 2014: The concept of essential climate variables in support of climate research, applications, and policy. Bull. Amer. Meteor. Soc., 95, 1431–1443, doi:10.1175/BAMS-D-13-00047.1.
Casey, K. S., T. B. Brandon, P. Cornillon, and R. Evans, 2010: The past, present and future of the AVHRR Pathfinder SST program. Oceanography from Space: Revisited, V. Barale, J. F. R. Gower, and L. Alberotanza, Eds., Springer, 273–287, doi:10.1007/978-90-481-8681-5_16.
Curtis, S., A. Salahuddin, R. F. Adler, G. J. Huffman, G. Gu, and Y. Hong, 2007: Precipitation extremes estimated by GPCP and TRMM: ENSO relationships. J. Hydrometeor., 8, 678–689, doi:10.1175/JHM601.1.
Hennon, C. C., and Coauthors, 2015: Cyclone center: Can citizen scientists improve tropical cyclone intensity records? Bull. Amer. Meteor. Soc., 96, 591–607, doi:10.1175/BAMS-D-13-00152.1.
Hollmann, R., and Coauthors, 2013: The ESA Climate Change Initiative: Satellite data records for essential climate variables. Bull. Amer. Meteor. Soc., 94, 1541–1552, doi:10.1175/BAMS-D-11-00254.1.
Karl, T. R., S. J. Hassol, C. D. Miller, and W. L. Murray, Eds., 2006: Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences. U.S. Climate Change Science Program, 164 pp.
Kley, D., J. M. Russell III, and C. Phillips, Eds., 2000: SPARC assessment of upper tropospheric and stratospheric water vapour. SPARC Rep. 2, WCRP 113, WMO/TD-1043, 312 pp. [Available online at www.sparc-climate.org/publications/sparc-reports/sparc-report-no2/.]
Lindzen, R. S., 1990: Some coolness concerning global warming. Bull. Amer. Meteor. Soc., 71, 288–299, doi:10.1175/1520-0477(1990)071<0288:SCCGW>2.0.CO;2.
McClain, E. P., W. G. Pichel, and C. C. Walton, 1985: Comparative performance of AVHRR-based multichannel sea surface temperatures. J. Geophys. Res., 90, 11,587–11,601, doi:10.1029/JC090iC06p11587.
National Research Council, 2000: From Research to Operations in Weather Satellites and Numerical Weather Prediction: Crossing the Valley of Death. National Academies Press, 96 pp.
National Research Council, 2004: Climate Data Records from Environmental Satellites: Interim Report. National Academies Press, 150 pp.
National Research Council, 2005: Review of NOAA’s Plan for the Scientific Stewardship Program. National Academies Press, 38 pp.
National Research Council, 2007: Options to Ensure the Climate Record from the NPOESS and GOES-R Spacecraft: A Workshop Report. National Academies Press, 84 pp.
National Research Council, 2008: Ensuring the Climate Record from the NPOESS and GOES-R Spacecraft: Elements of a Strategy to Recover Measurement Capabilities Lost in Program Restructuring. National Academies Press, 190 pp.
Reynolds, R. W., T. M. Smith, C. Liu, D. B. Chelton, K. S. Casey, and M. G. Schlax, 2007: Daily high-resolution-blended analyses for sea surface temperature. J. Climate, 20, 5473–5496, doi:10.1175/2007JCLI1824.1.
Rozoff, C. M., C. S. Velden, J. Kaplan, J. P. Kossin, and A. J. Wimmers, 2015: Improvements in the probabilistic prediction of tropical cyclone rapid intensification resulting from inclusion of passive microwave observations. Wea. Forecasting, 30, 1016–1038, doi:10.1175/WAF-D-14-00109.1.
Schreck, C. J., III, J. M. Cordeira, and D. Margolin, 2013: Which MJO events affect North American temperatures? Mon. Wea. Rev., 141, 3840–3850, doi:10.1175/MWR-D-13-00118.1.
Shi, L., and J. J. Bates, 2011: Three decades of intersatellite-calibrated High-Resolution Infrared Radiation Sounder upper tropospheric water vapor. J. Geophys. Res., 116, D04108, doi:10.1029/2010JD014847.
Stackhouse, P. W., Jr., and Coauthors, 2011: Towards an improved high resolution global long-term solar resource database. Proc. 40th National Solar Conf., Raleigh, NC, American Solar Energy Society, 323–326. [Available online at http://power.larc.nasa.gov/publications/SOLAR2011_0166.pdf.]
Watson, A. H., and T. J. McCabe, 1996: Structured testing: A testing methodology using the cyclomatic complexity metric. NIST Special Publ. 500-235, 114 pp. [Available online at www.mccabe.com/pdf/mccabe-nist235r.pdf.]
Wielicki, B. A., R. D. Cess, M. D. King, D. A. Randall, and E. F. Harrison, 1995: Mission to Planet Earth: Role of clouds and radiation in climate. Bull. Amer. Meteor. Soc., 76, 2125–2153, doi:10.1175/1520-0477(1995)076<2125:MTPERO>2.0.CO;2.
Wu, X., J. J. Bates, and S. J. S. Khalsa, 1993: A climatology of water vapor band brightness temperatures from the NOAA operational satellites. J. Climate, 6, 1282–1300, doi:10.1175/1520-0442(1993)006<1282:ACOTWV>2.0.CO;2.
Zhang, H.-M., J. J. Bates, and R. W. Reynolds, 2006: Assessment of composite global sampling: Sea surface wind speed. Geophys. Res. Lett., 33, L17714, doi:10.1029/2006GL027086.