1. Introduction
Accurate cloud forecasts by numerical weather prediction (NWP) models are of particular importance to the aviation community. As part of the efforts to optimize air traffic control in consideration of weather and its associated hazards, the Federal Aviation Administration (FAA) has outlined the Next Generation (NextGen) (http://www.faa.gov/nextgen/; FAA 2013) Air Transportation System. NextGen aims to provide via automated processes a common picture across the national airspace of the current and future weather situation (e.g., in-flight hazards such as clear-air turbulence, thunderstorms, aircraft icing, and reduced visibility near airports due to low cloud ceilings/fog, haze, and precipitation), and will integrate probabilistic weather forecast information within decisions on air traffic flow (Reynolds et al. 2012). Validation of the forecast cloud fields populating NextGen requires a scale of observation that is only feasible from a space platform, but most satellites carry passive radiometers that are extremely limited in their ability to resolve 3D cloud information.
Knowledge of cloud distribution and content is critical to assurance of aviation safety in terms of predicting ceilings and visibility, turbulence (e.g., Kaplan et al. 2005), and regions of aircraft icing (e.g., Cober and Isaac 2012). Models that are dedicated to the prediction of supercooled cloud liquid water layers (responsible for icing), such as the National Center for Atmospheric Research (NCAR) Current Icing Potential (CIP; Bernstein et al. 2005), attempt to overcome the traditional limitations of satellite observations by blending satellite, surface, and model information within a data-fusion framework. For example, low clouds and fog layers residing below upper-level cirrus go undetected by conventional passive satellite radiometers. Likewise, high clouds present an array of in-flight hazards, including engine stalls tied to high ice water content (e.g., in anvil clouds; Mason et al. 2006). Other issues include ambiguity between meteorological cirrus and tenuous volcanic ash plumes residing at flight level (Casadevall 1994). On occasion, cirrus provide indicators of turbulence (herringbone patterns in orographic cirrus) (e.g., Conover 1964; Uhlenbrock et al. 2007). The impact of clouds on U.S. Department of Defense (DoD) aviation activity has led the U.S. Air Force Weather Agency (AFWA) to embark on development of the AFWA Coupled Analysis and Prediction System (ACAPS) to provide accurate three-dimensional (3D) cloud depictions (e.g., Auligné et al. 2011).
Improving NWP prediction of clouds requires better insight into the space/time statistics of cloud properties, cloud evolution, and analysis of environments conducive to cloud formation. In partnership with the Environmental Modeling Center (EMC), under the auspices of the National Oceanic and Atmospheric Administration/National Centers for Environmental Prediction (NCEP), the Developmental Testbed Center (DTC; http://www.dtcenter.org) at NCAR has established a Model Evaluation Toolkit (MET; Holland et al. 2007; Halley Gotway et al. 2013). The MET package provides advanced model verification and diagnostic capabilities to the NWP community. The required spatial and temporal scales of observation are conducive to the satellite platform and help to identify where deficiencies in model processes and system coupling reside. The inherent limitations of current operational environmental satellite observing systems (e.g., passive solar, thermal infrared, and microwave spectral radiometers) to provide the detailed, vertically resolved information necessary to identify and address modeling deficiencies stand as one of the main roadblocks to progress.
This paper describes improvements to the MET analysis package tailored to exploit recent advances in satellite technology found in the National Aeronautics and Space Administration (NASA) A-Train constellation. In particular, cloud vertical-profile information from the CloudSat (Stephens et al. 2002) and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO; Winker et al. 2010) active sensors is combined with traditional two-dimensional (2D) observations of cloud properties from the Moderate Resolution Imaging Spectroradiometer (MODIS, carried on the Aqua satellite) to provide an ability to evaluate 3D model cloud fields. This evaluation requires innovations to existing MET tools as well as the introduction of new approaches to incorporate the NASA A-Train datasets. For example, new tools are required for assessing cloud vertical structure and for combining the 3D model field with 2D curtain slices from the CloudSat/CALIPSO sensors. The research aims to provide new pathways for model verification adapted to accommodate these new observing systems and similar next-generation profiler observing systems.
The layout of the paper is as follows: section 2 describes the capabilities and limitations of the current MET system in the context of cloud verification; section 3 describes the A-Train observations introduced to MET; section 4 outlines the new MET tools and analysis methods that exploit the vertical information provided by CloudSat and CALIPSO; section 5 presents selected case-study examples of these tools in practice, including analysis of full 3D cloud-field verification; section 6 concludes the paper with a roadmap for ongoing improvements to the next-generation MET package in light of future satellite profiling systems.
2. The current MET toolkit
MET is a modular, adaptable, and portable verification software package that incorporates traditional statistical tools along with newly developed and advanced verification methods, including methods for diagnostic and spatial verification (e.g., Davis et al. 2006; Brown et al. 2007; Casati et al. 2004). It has been designed specifically for applications utilizing the Weather Research and Forecasting (WRF) Model but is extendable to other models, provided there is adherence to certain format conventions. For completeness and to provide a baseline, the capabilities of other NWP verification systems (e.g., NCEP verification capabilities) were also included in the development of MET. These capabilities include input/output, methods, statistics, and handling of various data types commonly encountered in NWP verification. Beyond the conventional tools, MET provides unique statistical measures such as confidence-interval spatial forecast verification methods.
MET is a package for Linux-based operating systems and is designed to be modular, readily adaptable, and easily portable. Individual statistical tools can be used without running the entire analysis-tools package. In addition, the user can easily add new analysis tools that fit the needs of the required analysis. MET tools can readily be incorporated into a larger verification system that may include a database as well as more sophisticated input/output and user interfaces. At the time of writing, a more detailed description along with access to the latest MET software package release could be obtained from the DTC Internet site (http://www.dtcenter.org/met/users/).
MET computes a variety of traditional statistics that include continuous and categorical variables; these are described in detail by Wilks (1995). Statistics such as bias, root-mean-square error, correlation coefficient, and mean absolute error are computable for continuous model variables. For categorical measures, MET provides standard scoring statistics such as probability of detection, probability of false detection, false-alarm ratio, and critical success index. In addition to providing traditional forecast-verification scores for both continuous and categorical variables, confidence intervals are also produced with parametric and nonparametric methods. Confidence intervals take into account the uncertainty associated with verification statistics that results from sampling variability and limitations in sample size.
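The categorical scores named above follow directly from the 2 × 2 contingency table of forecast/observed event pairs. The sketch below is illustrative only; the function name and interface are ours, not MET's.

```python
# Hedged sketch: standard 2x2 categorical scores of the kind MET reports.
# Counts are hypothetical; names are illustrative, not MET's API.

def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Standard scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    far = false_alarms / (hits + false_alarms)                # false-alarm ratio
    csi = hits / (hits + misses + false_alarms)               # critical success index
    return {"POD": pod, "POFD": pofd, "FAR": far, "CSI": csi}

# Example: 40 hits, 10 misses, 20 false alarms, 30 correct negatives
scores = categorical_scores(40, 10, 20, 30)
# POD = 40/50 = 0.8; POFD = 20/50 = 0.4; FAR = 20/60; CSI = 40/70
```

Confidence intervals on such scores (parametric or bootstrap) are then attached to account for sampling variability, as described in the text.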
MET provides tools that can be applied to a variety of diagnostic evaluations that utilize various dataset types. A standard tool within MET is “Point-Stat,” which calculates statistics for verification between a grid and a point. For example, this tool can be used for validation studies between satellite-derived rainfall (gridded data) and rain gauge (point) measurements. A second useful MET validation tool is “Grid-Stat,” used to compute traditional statistical measures for grid-to-grid validation such as the comparison between gridded radar rainfall products.
MET also has the ability to apply more advanced spatial-validation techniques, providing users with information about the spatial features of the gridded fields (e.g., precipitation) that traditional statistical measures do not provide. Example statistics that are computed include displacement in time and/or space, location, size, intensity, and orientation (rotational) errors. To compute these spatial validation statistics, an object-based spatial-validation tool called Method for Object-Based Diagnostic Evaluation (MODE) has been developed. MODE applies an object-based spatial-verification technique that is described in Davis et al. (2006).
MODE plays an important role in the evaluation of discontinuous variables such as cloud cover. A modeled cloud structure that is displaced even a small distance away from the observed structure is effectively penalized twice by standard categorical validation scores: once for missing the event and a second time for producing a false alarm of the event elsewhere. MODE defines objects both in surface observations (e.g., precipitation derived from surface-based radar) and in satellite estimates (e.g., rain-rate retrievals). The objects in both fields are then matched and compared with one another. Applying this technique also provides diagnostic verification information that is difficult or even impossible to obtain using traditional verification measures. Prior to this research, the MODE-based techniques had been applied only to 2D objects in the horizontal plane. The current work extends this capability into 3D and prepares the community for next-generation cloud observing systems such as those described below.
3. Adding a new dimension to cloud verification
The A-Train (so called for its early-afternoon local crossing time) demonstrates the potential of the satellite-train concept (L’Ecuyer and Jiang 2010). Members of the A-Train fly in a 705-km sun-synchronous polar orbit, with approximate local crossing times of 1330/0130 and ground tracks ranging between 82°N and 82°S latitude. At the time of publication, the A-Train was composed of the Orbiting Carbon Observatory 2 (OCO-2) at the lead, followed by the Global Change Observation Mission 1st-Water or “Shizuku” (Japanese for water drop) satellite (GCOM-W1), Aqua, CloudSat, CALIPSO, and Aura. In the past, it has also included the Polarization and Anisotropy of Reflectances for Atmospheric Sciences Coupled with Observations from a Lidar (PARASOL) satellite. The passage time of the full A-Train over a given location on Earth’s surface is less than 0.5 h, with the CloudSat and CALIPSO spacecraft placed originally in tight formation (separation time of only 12.5 s, or 93.8 km) to achieve 90% coincident observations. Since battery problems with CloudSat began in mid-April 2011, the operation of that satellite has been relegated to daytime-only portions of the orbit. Commensurate with this change, formation flight goals with CALIPSO were relaxed from 90% of sensor footprints collocated within 1 km to 90% collocated within 4 km.
The inclusion of CloudSat and CALIPSO in the A-Train in April of 2006 provided the first collocated radar–lidar observing system in space. CloudSat’s Cloud Profiling Radar (CPR) system (94 GHz, or 3-mm wavelength) provides vertically resolved cloud-property information. The data are collected as “curtain slices” at a spatial resolution of 1.1 km horizontally (along track; nonscanning) and 240 m in the vertical direction, oversampled from 480-m range gates. Derived products include geometric properties such as cloud-base heights (ceilings); multilayered cloud profiles; synoptic-scale storm structure; microphysical properties such as extinction, cloud liquid/ice water content, and effective particle size; and radiative properties such as shortwave and longwave heating/cooling rates. The CPR also resolves precipitation (light–moderate levels; Haynes et al. 2009; Mitrescu et al. 2010). The presence of precipitation can obscure true cloud-base height. A precipitation flag that is based on analysis of surface reflectance (Haynes et al. 2009) exists in the CloudSat precipitation products, allowing for avoidance or mitigation (e.g., assignment of a lifting-condensation-level height) of these situations in a model-validation study.
CALIPSO joined the NASA A-Train at the same time as CloudSat (and launched on the same rocket in 2006). Like CloudSat, the CALIPSO mission was designed primarily with climate-oriented science objectives in mind. In particular, the emphasis of CALIPSO is on the roles of aerosol (both direct and indirect, i.e., involving cloud interactions) in the climate system. Its main payload, the Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP), is dual frequency (532 and 1064 nm) with polarization channels in the 532-nm band for improved characterization of microphysics. In terms of spatial resolution, CALIOP provides a 90-m footprint and vertical resolution ranging from 30 m (<8 km) to 60 m (>8 km). Like CloudSat, CALIOP is nonscanning and provides only curtain slices. CALIOP draws heritage from the Lidar In-Space Technology Experiment (LITE; Winker et al. 1996), which flew a similar dual-band lidar on the Space Shuttle Discovery (STS-64) in September of 1994. In the context of cloud detection, the lidar does provide useful information about the presence of optically thin cirrus and boundary layer clouds that the CPR can miss (exceeding 35%; Marchand et al. 2008; Mace et al. 2009) because of sensitivity limitations, as well as inference of cloud-top phase through depolarization of the linearly polarized lidar signal by randomly oriented ice crystals (Hu et al. 2009). Lidar signals are attenuation limited when optical depths exceed values of about 3, however (e.g., Comstock and Sassen 2001). In this way CloudSat and CALIPSO provide highly complementary cloud observations.
Spaceborne active-sensor observations, while limited, have provided important insight to NWP cloud forecasts. Miller et al. (1999) examine European Centre for Medium-Range Weather Forecasts cloud forecasting using observations from LITE. The analysis found surprisingly close agreement in the model’s prediction of cloudy (75%) and combined cloudy + clear (90%) vertical profiles. Facing challenges in reconciling the 2D curtain observations with the 3D model grid field, the authors attempted a weighting of performance statistics according to the size of the transect through each model grid box that was analyzed.
Similar resolution issues are faced when comparing CloudSat and CALIPSO observations (curtain measurements similar to LITE) with gridded model data. Barker et al. (2011) and Miller et al. (2014) propose techniques for expanding these curtain observations to 3D. These approximations entail a blending of CloudSat, CALIPSO, and MODIS observations. Barker et al. (2011) apply a radiation-similarity approach that produces radiatively consistent results out to distances of ~20 km removed from the active-sensor data. The Miller et al. (2014) technique uses a cloud-type-dependent correlation to relate active-sensor-observed cloud geometric boundaries to the surrounding region and attempts to extend the field to the limits of climatological skill (100–300 km from the CloudSat track, depending on cloud type). Performance of the technique is a strong function of cloud type and proximity to the sparse-field active-sensor observations but is shown to outperform both the climatological and nearest-neighbor (of any class) techniques for most cloud types when within a 100-km radius of the active data. In each case, the synthesis is predicated on an observed relationship between clouds of common morphology or radiative properties and is applied to clouds observed in the 2D MODIS swath. These 3D fields are more readily compared with a model gridded dataset, but must be accompanied by estimates of uncertainty in their synthesis. The 3D rendering of cloud objects is done here for purposes of demonstrating MET. In the ideal case, true 3D observations (e.g., via scanning radar/lidar) would be applied to the MET tools, thereby avoiding the uncertainties incurred by a technique such as those of Miller et al. (2014) or Barker et al. (2011).
4. Augmented MET analysis tools
To accommodate the unique active-sensor profile (and augmented 3D cloud field) information of the A-Train observing system, several new tools have been introduced to the MET package. In developing these tools, we have attempted to maintain the input/output standards to be as generic as possible to facilitate portability to a variety of vertical-slice or fully 3D observational datasets.
a. Vertical-cross-section MODE
Among the new tools being developed for the advanced MET analysis package, basic accommodations for vertical profilers such as CloudSat and CALIPSO have been added. The traditional application of MODE within MET is 2D, in the horizontal plane. Application of MODE in the X–Z plane (i.e., vertical cross sections similar to CloudSat and CALIPSO curtains) enables direct comparisons between cloud objects observed by CloudSat/CALIPSO and those simulated within NWP—allowing for multilayer and cloud water content profile evaluations. In this simple adaptation of traditional MODE, the same suite of object and scene attributes (e.g., displacement and orientation) can be calculated and easily interpreted.
b. Observational operator for CloudSat
To conduct head-to-head comparisons between the multidimensional model state vector and CloudSat observations, Colorado State University’s QuickBeam (Haynes et al. 2007) radar simulator was enlisted to convert NWP environmental state vector parameters into CloudSat-equivalent 94-GHz attenuated radar-reflectivity profiles. This conversion allows for application of standard MET techniques such as object identification via thresholding done in observation unit space. QuickBeam is highly configurable, allowing up to 50 hydrometeor populations with five preset distributions: modified gamma, exponential, power law, monodisperse, and lognormal. The examples shown herein enlist five hydrometeor mixing ratios: ice, cloud water, graupel, snow, and rain. Comparisons between model and observed “objects” in the X–Z plane can then be attempted with MODE mentioned above.
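As a hedged illustration of the quantity a simulator like QuickBeam evaluates per range gate, the Rayleigh-regime radar reflectivity factor is the sixth moment of the drop-size distribution. The sketch below omits the Mie-scattering and attenuation treatments that are essential at 94 GHz (and that QuickBeam provides); the discretized drop spectrum is invented for illustration.

```python
import math

# Minimal Rayleigh-regime sketch of the radar reflectivity factor:
# Z = sum(n_i * D_i^6) in mm^6 m^-3, reported as dBZ = 10*log10(Z).
# This deliberately ignores Mie scattering and attenuation, which a
# 94-GHz simulator such as QuickBeam must treat explicitly.

def reflectivity_dbz(diameters_mm, number_m3):
    """Reflectivity factor (dBZ) from binned drop diameters and counts."""
    z = sum(n * d**6 for d, n in zip(diameters_mm, number_m3))
    return 10.0 * math.log10(z)

# Hypothetical drizzle-like spectrum, a few size bins:
d = [0.2, 0.5, 1.0, 2.0]        # drop diameters (mm)
n = [4000.0, 800.0, 60.0, 1.0]  # number concentration per bin (m^-3)
dbz = reflectivity_dbz(d, n)
```

The D^6 weighting in this sum is the "sixth power dependency" invoked later in the text: the single 2-mm bin here contributes nearly half of Z despite holding a tiny fraction of the drops.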
c. Expanding-window statistical analysis
Phase-shift (spatial, temporal, or microphysical) errors in NWP-predicted cloud structures may lead to overly harsh penalties in model-evaluation statistics. Placing the cloudy observations in the context of the regional meteorological conditions strikes a compromise here. A tool allowing for expanding-window searches about the CloudSat ground track was designed to help to mitigate the effects of displaced clouds while maintaining focus on the validation of vertical structure. The expanding-neighborhood search window can be applied in both space and time and can be combined with other statistical-analysis methods. The tool is of particular use in quickly viewing and conducting preliminary verification statistics for model “slices” of varying widths and azimuthal orientations, allowing the user to “slide” through the model and compute statistics on the basis of the currently selected model data.
For verifications that are concerned more with storm structure as opposed to spatial placement, it would be useful to consult other observation sets to help to identify spatial errors in the model prior to extracting a model slice/domain and comparing it with the profile observations. Such analyses have been done in the context of midlatitude-cyclone structures (Klein and Jakob 1999). For example, using MODIS cloud-top pressures to locate the center of a tropical storm and then spatially shifting the modeled storm appropriately would ensure that the model slice is as fair and as representative as possible. We examine such a situation in the case studies to follow.
d. Contoured-frequency-by-altitude-diagram analyses
Cloud-cover evaluation on the basis of regional-domain statistics (as opposed to enforcing exact matchups between modeled and observed clouds) can be useful for assessing a model’s general ability to capture cloud structure for the current meteorological conditions. The contoured-frequency-by-altitude diagram (CFAD; Yuter and Houze 1995) was introduced to MET to analyze the vertical structure of clouds from radar reflectivity over a given region in this context.
For the particular application to CloudSat observations, all of the cloudy range gates are used to construct a 2D density function in this CFAD space. The horizontal axes of these CFADs (shown in several figures of the analyses to follow) contain values of radar reflectivity, and the vertical axes are in height (either above ground level or above mean sea level). Model cloud predictions (forward processed through QuickBeam) can be used to construct a similar CFAD density function for grid cells extracted along the CloudSat ground track, as based on a neighborhood of grid cells in the domain. The neighborhood of model grid cells may be a simple expanding window about the CloudSat ground track as described above or may be drawn from an archive of model output corresponding to similar meteorological events or common airmass/stability properties. This flexibility makes CFAD a powerful analysis tool for examining the fundamental ability of the model to represent realistic clouds.
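The CFAD construction amounts to a joint histogram over reflectivity and altitude, normalized level by level. A sketch, with function and argument names of our choosing:

```python
import numpy as np

# Hedged sketch of CFAD construction (after Yuter and Houze 1995): bin all
# cloudy range gates jointly by reflectivity and altitude, then normalize
# each altitude row so it reads as a frequency distribution at that level.

def cfad(refl_dbz, height_km, dbz_edges, z_edges):
    """Return a (n_z, n_dbz) array of per-level frequencies."""
    counts, _, _ = np.histogram2d(height_km, refl_dbz,
                                  bins=[z_edges, dbz_edges])
    row_sums = counts.sum(axis=1, keepdims=True)
    # avoid dividing empty altitude rows by zero
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)
```

A model-minus-observed CFAD of the kind shown in the figures is then simply the difference of two such arrays computed on common bin edges.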
e. A 3D version of MODE
Further expanding the vertical MODE (X–Z plane) capability, a version of MODE capable of defining fully 3D objects has also been developed. This version operates on the same basic principles as conventional MODE: identifying contiguous regions of a volume meeting or exceeding a user-specified threshold, applying morphological operations to the volume fields, and matching and merging 3D objects on the basis of attribute similarities. The 3D application of MODE necessarily loses some attributes (e.g., convex hull) because of computational complexity. Other attributes unique to the 3D object verification problem can be calculated, including calculation of object volume, the ability to project and view cross-sectional areas on 2D planes, and axis-angle determination in terms of both azimuth and elevation angles. This tool allows for in-depth comparisons between complete model volumes and 3D cloud products derived from observations (e.g., Miller et al. 2014). An illustrative example of 3D MODE is included in the case studies to follow.
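At its core, 3D object definition reduces to connected-component labeling of the thresholded volume. The sketch below uses simple 6-connectivity and computes only the object-volume attribute; MODE's convolution/morphology steps and fuzzy-logic matching are omitted.

```python
from collections import deque

# Hedged sketch of the core of 3D MODE-like object identification:
# 6-connected labeling of a thresholded volume, plus the object-volume
# attribute (gridcell count). Interface is illustrative, not MET's.

def label_objects(volume, threshold):
    """volume: nested [z][y][x] lists; return (labels, volumes_by_label)."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    labels = [[[0] * nx for _ in range(ny)] for _ in range(nz)]
    volumes = {}
    next_label = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if volume[z][y][x] >= threshold and labels[z][y][x] == 0:
                    next_label += 1
                    labels[z][y][x] = next_label
                    queue, count = deque([(z, y, x)]), 0
                    while queue:  # breadth-first flood fill
                        cz, cy, cx = queue.popleft()
                        count += 1
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            pz, py, px = cz + dz, cy + dy, cx + dx
                            if (0 <= pz < nz and 0 <= py < ny and 0 <= px < nx
                                    and volume[pz][py][px] >= threshold
                                    and labels[pz][py][px] == 0):
                                labels[pz][py][px] = next_label
                                queue.append((pz, py, px))
                    volumes[next_label] = count
    return labels, volumes
```

Attributes such as centroid offsets, projected cross-sectional areas, and 3D axis angles follow from simple passes over the labeled cells of each object.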
5. Case studies of MET applications
The new MET tools discussed in section 4 are applied here to selected case studies to illustrate the basic concepts of multidimensional cloud verification. These examples are intended to provide MET users with a template for conducting their own analyses using CloudSat/CALIPSO or other profiling observation datasets.
a. Tropical-cyclone analysis: Hurricane Igor
During the 2010 Atlantic Ocean hurricane season, the strength of Hurricane Igor reached category 4. CloudSat provided overpasses near Igor’s center on 15, 16, and 19 September. To analyze this storm with the augmented MET tools, the Advanced Hurricane WRF (AHW; Davis et al. 2008) Model was initialized at 0000 and 1200 UTC, with 12-, 4-, and 1.33-km resolution. The 12-km-resolution model-output fields were produced every 3 h, whereas the two finer resolutions were output hourly. Thus there is a chance for a small temporal offset between the model output and the observations, particularly for the coarsest model output, but these differences were deemed small for the purposes of this illustrative example. The AHW provides five 3D cloud-species mixing ratios (ice, cloud water, graupel, snow, and rain), which were used as input to QuickBeam (using the default settings for species size distributions, for illustrative purposes here) for simulating CloudSat radar reflectivity.
Results from the ~1745 UTC A-Train pass over Hurricane Igor on 19 September are shown in Figs. 1–3 for the 12.0-, 4.0-, and 1.33-km-spatial-resolution AHW fields, respectively, homing in on the eye of the storm. To concentrate on storm structure, model runs for Hurricane Igor were shifted following the suggestion in section 4c. The center of the storm “cloud object” was located using MODIS-retrieved cloud-top pressures and was compared with the modeled storm center, and then the model was relocated to account for the observed displacements. Doing so resulted in modest relocation offsets of 58.2, 35.4, and 34.8 km for the 12.0-, 4.0-, and 1.33-km AHW resolution fields, respectively. This was done for illustrative purposes only; other more rigorous spatial-shifting methods, such as best-track analysis and surface pressure/wind field matching, could also be employed in practice. Without such adjustments, a CloudSat/CALIPSO curtain observation passing through the eye of the storm may correspond to a slice through the outer rainbands of the displaced modeled system—precluding meaningful analysis of eyewall structure.
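The recentering step can be sketched as a centroid comparison between observed and modeled storm masks followed by an integer grid shift. The masks and names below are illustrative; np.roll (periodic wrap) is used purely for brevity, whereas a real regional grid would be cropped rather than wrapped.

```python
import numpy as np

# Hedged sketch: locate the storm centroid in an observed "cloud object"
# mask (e.g., MODIS cloud-top pressure below a threshold) and in the model
# mask, then shift the model grid by the integer row/column offset.

def centroid(mask):
    """Mean (row, col) of True cells in a 2D boolean mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def recenter(model_field, model_mask, obs_mask):
    """Shift model_field so its storm centroid overlays the observed one."""
    (mr, mc), (obs_r, obs_c) = centroid(model_mask), centroid(obs_mask)
    dr, dc = int(round(obs_r - mr)), int(round(obs_c - mc))
    shifted = np.roll(np.roll(model_field, dr, axis=0), dc, axis=1)
    return shifted, (dr, dc)
```

Converting the (dr, dc) gridpoint offset to kilometers via the grid spacing yields relocation distances comparable to the 58.2-, 35.4-, and 34.8-km values quoted above.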
(a) CloudSat ground track (red) overlaid upon MODIS Aqua infrared imagery, (b) spatially corrected modeled composite reflectivity (dBZ) on 19 Sep 2010 showing the location of the observational track, (c) model minus observed CFAD, (d) model-simulated radar reflectivity along the CloudSat track, and (e) CloudSat-observed reflectivity for 12-km model results. Observation time is 1900 UTC; model time is 1800 UTC. The spatial offset between modeled and observed features is ~58 km.
Citation: Journal of Applied Meteorology and Climatology 53, 9; 10.1175/JAMC-D-13-0322.1
As in Fig. 1, but shown for 4-km model results. Here, the model output has been matched to the satellite observation time. The CFAD difference in (c) shows a similar structure to the 12-km results in Fig. 1. Modeled storm shape and size shown in (d) compare well to the observations in (e), with spatial offset of ~35 km, although the model-simulated reflectivity intensity appears to be too low.
As in Figs. 1 and 2, but homing in further to 1.33-km model results and the eye of the storm. Model and observation times match; the spatial offset is 34 km. The results are similar to those of the coarser-resolution simulations.
CFAD differences (model − CloudSat; as in section 4d) are shown in Figs. 1c, 2c, and 3c. Strong negative differences near 10 dBZ at 6-km altitude are due in part to QuickBeam’s inability to produce a bright band, owing to limitations in the detail of cloud microphysical information available in the AHW model fields (QuickBeam can, in principle, represent brightband structures but requires information on the melting layer). Modeled radar-reflectivity values are, in general, of lower intensity than observations for this case, but the area and shape of the main body of the storm compare well. It is apparent from these simulations that additional constraints from the model in terms of particle size distribution [including a better account for large droplet members of the precipitation hydrometeor population, in light of the strong (sixth power) dependency of radar reflectivity on particle diameter] would be important for detailed comparisons.
b. October 2010 “super” extratropical cyclone
Evaluating NWP forecasts under conditions of strong dynamical forcing is a well-posed problem for cloud verification, particularly in terms of evaluating classical storm structure (e.g., Lau and Crane 1995, 1997; Klein and Jakob 1999). Likewise, CloudSat observations are well suited to providing such synoptic-scale views (e.g., Posselt et al. 2008). We therefore sought out a case study whose strong dynamics should produce a classic cloud structure for direct comparison with CloudSat observations.
In late October of 2010, an intense midlatitude cyclone impacted the United States. The storm, which featured a 25.2-hPa pressure change over the course of only 20.3 h, produced a record low sea-level-equivalent pressure reading for the state of Minnesota [955.2 hPa, measured at an Automated Weather Observing System station at Bigfork (KFOZ) at 2213 UTC 26 October 2010]. The strong baroclinicity associated with the storm produced high surface winds and significant precipitation over a widespread region of the eastern United States, with accompanying thunderstorms and severe weather farther to the south.
Figure 4 depicts the simulated [Rapid Refresh (RAP) model, valid at 0700 UTC] and CloudSat-observed reflectivity fields for a cross section through part of this storm system at 0745 UTC 27 October 2010. Figure 5 shows the resolved merged and matched cloud objects identified via an arbitrarily selected threshold on the reflectivity fields (here, −20 dBZ). In qualitative terms, the forecast was found to produce more objects than were found in the observations. Comparing the matched objects for this case yields a mean intensity error of −1.6 dB and mean spatial offset of 91 km. Object matching offers a more informative comparison between the fields than literal gridpoint matching can accomplish. A CFAD analysis for this case is shown in Fig. 6. Comparison of the two fields indicates that the model produces a high range of cloud reflectivity at low altitudes, similar to the observations, but is not producing enough higher-reflectivity clouds at mid- to high levels. The specific causes of these errors (e.g., updrafts that are too weak, timing errors in convective onset, or spatial shifts in the environmental fields that determine stability) are well beyond the scope of the current work. The MET tools are designed to reveal such issues and to help to pose questions in a 3D context.
(left) CloudSat ground track, (top right) simulated model reflectivity, and (bottom right) observed reflectivity for the extratropical-storm case study over the United States at ~0745 UTC 27 Oct 2010. Latitude/longitude positions are shown on the horizontal axis, and height (km MSL) is shown on the vertical axis, with labels A and B giving reference to the start and end points of the CloudSat pass, respectively.
Resolved objects in the (top) model and (bottom) observation fields, corresponding to Fig. 4. The cloud objects were identified using a reflectivity threshold of −20 dBZ.
CFADs for (a) model-simulated reflectivity and (b) observations, and (c) the model − observed difference between the two CFADs for the 27 Oct storm case.
c. CIP product evaluation
NWP output provides important guidance to the aviation community in terms of safe aircraft routing in the presence of hazardous aircraft icing conditions. The CIP product (Bernstein et al. 2005) is a data-fusion product that combines satellite, model, and in situ data (including pilot reports) to produce a 3D hourly diagnosis of the potential for hazardous icing conditions. In brief, the CIP product uses multispectral (visible + infrared) information from the Geostationary Operational Environmental Satellites to identify cloud locations and cloud-top height, and cloud layering is prescribed via analysis of the model column relative humidity fields. The efficacy of this product, which is used operationally by the National Weather Service and FAA, hinges on the correct placement of cloud features in 4D space + time. In this case study, the combined CloudSat/CALIPSO cloud-layer product was used to examine CIP’s statistical performance at identifying multilayer cloud scenarios as well as the product’s ability to place cloud-base/cloud-top boundaries in the vertical direction, using data collected over a 3-month period from January through March of 2007 over the operational CIP domain (a combination of the WRF and Rapid Update Cycle domains, spanning the continental United States and including northern Mexico and southern Canada).
Figure 7 compares the frequency of cloud-layer counts as predicted by CIP with those observed by CloudSat/CALIPSO. For a perfect forecast, this figure would show a bright red line along the 1:1 diagonal. The comparison instead indicates considerable disagreement. Cloud layering is difficult to forecast because of coarse vertical resolution in models as well as a lack of observations at multiple levels. CIP often misses single-layer cloud situations, and, although the overall frequency of two-layer clouds is fairly accurate (i.e., summing the third row and column of Fig. 7), the observations suggest that both the timing and location of these systems are off. These active sensors provide the first capability to validate CIP clouds in three dimensions, and the MET tools will offer a mechanism for evaluating model improvements.
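The joint-frequency comparison described above amounts to a contingency table over layer counts, with perfect forecasts falling on the diagonal. A hypothetical sketch (the capping value and inputs are illustrative assumptions, not the CIP matching procedure):

```python
import numpy as np

def joint_layer_table(obs_layers, fcst_layers, max_layers=4):
    """Joint frequency of observed vs forecast cloud-layer counts for
    paired scenes; counts above max_layers are lumped into the last bin.
    Rows index observed counts, columns index forecast counts."""
    table = np.zeros((max_layers + 1, max_layers + 1), dtype=int)
    for o, f in zip(obs_layers, fcst_layers):
        table[min(o, max_layers), min(f, max_layers)] += 1
    return table
```

The fraction of counts on the diagonal (np.trace(table) / table.sum()) summarizes agreement, while row and column sums give the marginal frequencies discussed in the text (e.g., the two-layer row and column).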
Joint frequency of observed and CIP-forecast number of cloud layers for a 3-month comparison period from January through March 2007.
The placement of clouds, starting with the distributions of top and base heights, was also evaluated directly against the level-2 CloudSat/CALIPSO cloud geometric profile product. Figure 8 illustrates the distribution of errors in modeled cloud-top and cloud-base heights. The top errors appear approximately normal and centered near zero, whereas the forecast cloud bases are biased low. Considering CloudSat’s sensitivity to light precipitation, which might naturally bias the observed bases low, the fact that the CIP forecast places its cloud bases even lower in the atmosphere than what is observed stands as a significant result.
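The summary statistics quoted with Fig. 8 separate the systematic component (the offset of the error distribution from zero) from the spread (σ). A trivial but explicit sketch of that bookkeeping, with hypothetical inputs:

```python
import numpy as np

def height_error_stats(fcst_km, obs_km):
    """Modeled-minus-observed height errors (km), with mean bias and
    sample standard deviation; a negative bias means the forecast
    boundary sits lower in the atmosphere than observed."""
    err = np.asarray(fcst_km, dtype=float) - np.asarray(obs_km, dtype=float)
    return err, err.mean(), err.std(ddof=1)
```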
Distribution of error for modeled − observed cloud-top (std dev σ = 4.7 km) and cloud-base (σ = 3.1 km) heights for the same analysis as Fig. 7.
Last, the vertical distribution of clouds in CIP and CloudSat/CALIPSO was evaluated. The overall frequencies are shown in Fig. 9a, and the frequencies of “unmatched” cloud are shown in Fig. 9b. Here, a cloud is deemed unmatched when the observations and model contain differing numbers of layers; the closest clouds in each field are paired, and the locations of the remaining clouds are recorded. The highest frequency of unmatched cloud in the model occurs at low altitudes, most likely because CloudSat cannot differentiate between low cloud and the surface below ~1 km as a result of sidelobe contamination. Meanwhile, the model underforecasts clouds above about 5 km. Such analyses reveal strengths and shortfalls of both the model and the observations, which are of prime importance when considering their use for operational guidance, but an understanding of the observation limitations is also required.
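The “unmatched cloud” bookkeeping described above can be sketched as a greedy nearest-height pairing: repeatedly pair the globally closest observed/forecast layers, then record whatever remains in the longer list as unmatched. This is an illustrative reading of the text, not the actual MET algorithm:

```python
def match_layers(obs_heights, fcst_heights):
    """Pair observed and forecast cloud layers by height: repeatedly
    take the closest remaining (obs, fcst) pair, then return the pairs
    plus the leftover (unmatched) layer heights from whichever field
    reported more layers."""
    obs, fcst = list(obs_heights), list(fcst_heights)
    pairs = []
    while obs and fcst:
        # globally closest remaining observed/forecast pair
        o, f = min(((o, f) for o in obs for f in fcst),
                   key=lambda p: abs(p[0] - p[1]))
        pairs.append((o, f))
        obs.remove(o)
        fcst.remove(f)
    return pairs, obs + fcst
```

Accumulating the unmatched heights over many profiles yields the vertical frequency profile of unmatched cloud plotted in Fig. 9b.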
Vertical distribution of (a) all clouds and (b) unmatched clouds (see text for details).
d. Case study of 3D MODE
As a final example, we consider the new 3D MODE utility. The current developmental version of this tool requires the scene to contain distinct cloud objects, preferably away from the edges of the domain. A thunderstorm outbreak over the United States on the afternoon of 26 July 2010 provided a good case study in this regard. In Fig. 10, true-color imagery from Aqua/MODIS paired with quick-look CloudSat reflectivity shows the satellite observations crossing an isolated thunderstorm (denoted by a blue asterisk) with a well-formed anvil at the 1920 UTC time of observation.
Aqua/MODIS true-color imagery with CloudSat ground track (red line) overlaid. The CloudSat reflectivity profile along the track segment A→B is shown toward the bottom of the figure, with a blue asterisk denoting an isolated thunderstorm targeted for 3D MODE analysis.
The tool was applied to a 3D estimate of the 26 July 2010 Aqua/MODIS observations as well as to cloud liquid water output from the RAP model. The MODIS-derived cloud liquid water path was distributed in the vertical direction by following the method of Miller et al. (2014). The 3D images with cross sections through the weighted centroids are shown for the forecast and observed cloud objects in Figs. 11 and 12. The differing centroid values are due to object displacement errors in the model. Key results of this comparison are as follows:
The model captures the observed shape fairly well in the Z and Y planes.
The model places the storm slightly too far to the south and with a similar cloud top but too low of a cloud base.
The forecast cloud volume was too high (4.29 × 10³ vs 1.19 × 10³ km³), with much of the additional volume residing below the observationally derived cloud base and extending farther along the X plane.
In a similar way, the forecast cloud area projected onto the X–Y plane was nearly 2 times that of the observations (6.43 × 10³ vs 3.72 × 10³ km²).
The forecast mean water content was larger than observed by about a factor of 4 (0.30 vs 0.075 g m⁻³). Differences between the hydrometeor populations included in the model and those in the CloudSat combined water content algorithm (Austin et al. 2009) may account for a portion of this disagreement.
Object orientation in the X–Y plane was fairly similar between the forecast and observed storm, with less than 5° difference in azimuth and 9° difference in elevation axis angles, confirming the visual similarity of the two objects along most planes of view.
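The object attributes compared above (volume, projected area, centroid, axis angles) all follow directly from the set of cloudy grid boxes. A minimal sketch on a regular grid, with the principal axis taken from the eigenvectors of the coordinate covariance matrix (an assumption standing in for 3D MODE’s internal definitions):

```python
import numpy as np

def object_attributes(mask, dx=1.0, dy=1.0, dz=1.0):
    """Geometric attributes of one 3D cloud object: `mask` is a boolean
    occupancy array ordered (z, y, x) on a regular grid with spacings
    dz, dy, dx in km. Returns volume (km^3), X-Y projected area (km^2),
    centroid (km), and the unit major-axis vector (x, y, z)."""
    z, y, x = np.nonzero(mask)
    pts = np.column_stack((x * dx, y * dy, z * dz)).astype(float)
    volume = pts.shape[0] * dx * dy * dz
    area_xy = mask.any(axis=0).sum() * dx * dy      # project along z
    centroid = pts.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(pts.T))    # principal axes
    major_axis = evecs[:, np.argmax(evals)]
    return volume, area_xy, centroid, major_axis
```

A weighted centroid (as used in Figs. 11 and 12) would weight each point by its water content rather than treating all cloudy boxes equally; azimuth and elevation angles of the major axis follow from its components.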
3D modeled cloud objects corresponding to the isolated thunderstorm shown in Fig. 10. (a) Outer surfaces of the clouds in the domain; (b) X–Y, (c) X–Z, and (d) Y–Z planar cross sections through selected parts of the domain. (e) Histogram of cloud liquid water content for all cloudy grid boxes in this domain.
As in Fig. 11, but for cloud objects based on a combination of MODIS and CloudSat observations, with application of the 3D rendering technique of Miller et al. (2014).
This example is presented not as a rigorous analysis of the RAP convective scheme but to illustrate how the 3D MODE tool allows new kinds of questions to be posed, which in turn may lead to deeper insight into a model’s cloud-resolving capabilities or point toward improvements. Such information gives modelers a better perspective on potentially multidimensional interactions between errors and model parameters.
6. Conclusions
As numerical models increase their levels of sophistication in cloud representation, commensurate improvements to observational datasets capable of evaluating and further advancing these representations are required. An important recommendation made by the World Climate Research Programme in this regard was for a combined active sensor (lidar and radar) satellite system to improve the four-dimensional (3D + time) distribution of cloud optical properties and the relationships between those properties and physical cloud properties such as liquid water and ice mass (World Climate Research Programme 1994). This has been further supported by the Aerosol/Clouds/Ecosystems requirements definition and subsequent refinements resulting in the Global Precipitation and Cloud Mission and improved cloud measurement system concepts (Sadowy et al. 2003; Rahmat-Samii et al. 2005). The CloudSat/CALIPSO genre of spaceborne active sensing, marking the advent of these next-generation satellite capabilities, will be followed in the near future by such systems as NASA’s Global Precipitation Measurement mission (e.g., Kummerow et al. 2007), the European Space Agency (ESA) Earth, Clouds, Aerosols, and Radiation Explorer (EarthCARE; ESA 2001), and possible future NASA Decadal Survey mission concepts thereafter. These future research-grade missions may include scanning active systems, providing the first true 3D cloud observations.
The current work expands the dimensionality of model-evaluation tools in anticipation of these nascent active observations. In summary, a version of MET capable of accepting “unconventional” profiling and 3D satellite datasets as input has been developed, along with an initial set of accompanying diagnostic tools. The concept of object matching and comparative analysis with MODE has been adapted to handle both curtain-slice and full 3D objects, suitable for CloudSat, the future EarthCARE, and volumetric scanning systems (which may include surface-based radar). Expanding the capability beyond binary comparisons of modeled/CloudSat cloud objects, the QuickBeam model has been incorporated as an observation operator, translating from model state-vector space to radar reflectivity. Although some low biases in simulated radar reflectivity were noted for the examples considered herein, the limitations are tied in part to a lack of detailed information on the hydrometeor populations in the NWP models considered here; in principle, a model offering refined detail would enable better exploitation of QuickBeam in this regard. Other tools introduced here, such as the expanding search window and CFAD analysis, are tailored to providing meaningful diagnostics without enforcing exact object structural matchups in space and time.
The prototype MET tools continue to be evaluated and refined as part of a “β version” package. When they reach maturity and operational code robustness they will be introduced in a future official release of MET. In advance of these releases, and in line with NCEP/EMC recommendations to promote research to operations (R2O) collaboration, the tools are being socialized at operational facilities such as the Aviation Weather Center in Kansas City, Missouri. As the FAA NextGen 4D data cube (e.g., Carmichael and Pace 2008; Curry et al. 2008) becomes populated with observations, the MET 3D toolkit will realize expanded utility, perhaps beyond that of its intended design for profiling satellite observations.
In anticipation of this potentially broader application, an important part of the ongoing development involves making the current MET 3D tools user friendly and generalized to read various data formats and handle multiple parameter types. In this regard, converters between native-observation dataset structures and the generic format accepted by MET will be written in cadence with the launch/availability of new satellite resources. This new framework better positions MET to assist users in exploiting new satellite resources for NWP analysis/improvements, evaluating the probabilistic output fields provided by the NextGen 4D data cube, supporting operational decision making (thereby increasing the safety and efficiency of the National Airspace System), and preparing the community for future operational environmental satellite capabilities.
Acknowledgments
Principal support of this research from NASA Grant NNX09AN79G is gratefully acknowledged. We also acknowledge support from the Oceanographer of the Navy through the PEO C4I & Space/PMW120 and from the DoD Center for Geosciences/Atmospheric Research at Colorado State University under Cooperative Agreement W911NF-06-2-0015 with the Army Research Laboratory. CloudSat data processing for this research was conducted at Colorado State University, operating under the support of the Jet Propulsion Laboratory, California Institute of Technology, NASA Contract NM0710984.
REFERENCES
Auligné, T., A. Lorenc, Y. Michel, T. Montmerle, A. Jones, M. Hu, and J. Dudhia, 2011: Toward a new cloud analysis and prediction system. Bull. Amer. Meteor. Soc., 92, 207–210, doi:10.1175/2010BAMS2978.1.
Austin, R. T., A. J. Heymsfield, and G. L. Stephens, 2009: Retrieval of ice cloud microphysical parameters using the CloudSat millimeter-wave radar and temperature. J. Geophys. Res., 114, D00A23, doi:10.1029/2008JD010049.
Barker, H. W., M. P. Jerg, T. Wehr, S. Kato, D. P. Donovan, and R. J. Hogan, 2011: A 3D cloud-construction algorithm for the EarthCARE satellite mission. Quart. J. Roy. Meteor. Soc., 137, 1042–1058, doi:10.1002/qj.824.
Bernstein, B. C., and Coauthors, 2005: Current icing potential: Algorithm description and comparison with aircraft observations. J. Appl. Meteor., 44, 969–986, doi:10.1175/JAM2246.1.
Brown, B. G., R. Bullock, J. Halley Gotway, D. Ahijevych, C. Davis, E. Gilleland, and L. Holland, 2007: Application of the MODE object-based verification tool for the evaluation of model precipitation fields. 22nd Conf. on Weather Analysis and Forecasting/18th Conf. on Numerical Weather Prediction, Park City, UT, Amer. Meteor. Soc., 10A.2. [Available online at https://ams.confex.com/ams/pdfpapers/124856.pdf.]
Carmichael, B., and D. J. Pace, 2008: The single authoritative source for weather information. 13th Conf. on Aviation, Range and Aerospace Meteorology, New Orleans, LA, Amer. Meteor. Soc., J6.4. [Available online at https://ams.confex.com/ams/88Annual/techprogram/paper_128615.htm.]
Casadevall, T. J., 1994: The 1989–1990 eruption of Redoubt Volcano, Alaska: Impacts on aircraft operations. J. Volcanol. Geotherm. Res., 62, 301–316, doi:10.1016/0377-0273(94)90038-8.
Casati, B., G. Ross, and D. Stephenson, 2004: A new intensity-scale approach for the verification of spatial precipitation forecasts. Meteor. Appl., 11, 141–154, doi:10.1017/S1350482704001239.
Cober, S. G., and G. A. Isaac, 2012: Characterization of aircraft icing environments with supercooled large drops for application to commercial aircraft certification. J. Appl. Meteor. Climatol., 51, 265–284, doi:10.1175/JAMC-D-11-022.1.
Comstock, J. M., and K. Sassen, 2001: Retrieval of cirrus cloud radiative and backscattering properties using combined lidar and infrared radiometer (LIRAD) measurements. J. Atmos. Oceanic Technol., 18, 1658–1673, doi:10.1175/1520-0426(2001)018<1658:ROCCRA>2.0.CO;2.
Conover, J. H., 1964: The identification and significance of orographically induced clouds observed by TIROS satellites. J. Appl. Meteor., 3, 226–234, doi:10.1175/1520-0450(1964)003<0226:TIASOO>2.0.CO;2.
Curry, K., R. Hardwick, D. Pace, and K. Johnston, 2008: U.S. government environmental data cube (4D cube) harmonization. 13th Conf. on Aviation, Range and Aerospace Meteorology, New Orleans, LA, Amer. Meteor. Soc., J5.5. [Available online at https://ams.confex.com/ams/pdfpapers/128890.pdf.]
Davis, C., B. Brown, and R. Bullock, 2006: Object-based verification of precipitation forecasts. Part I: Methodology and application to mesoscale rain areas. Mon. Wea. Rev., 134, 1772–1784, doi:10.1175/MWR3145.1.
Davis, C., and Coauthors, 2008: Prediction of landfalling hurricanes with the Advanced Hurricane WRF Model. Mon. Wea. Rev., 136, 1990–2005, doi:10.1175/2007MWR2085.1.
ESA, 2001: The Five Candidate Earth Explorer Core Missions: EarthCARE—Earth Clouds, Aerosols and Radiation Explorer. European Space Agency Rep. ESA SP-1257 (1), 130 pp. [Available online at http://esamultimedia.esa.int/docs/sp_1257_1_earthcaresc.pdf.]
FAA, 2013: NextGen implementation plan. Federal Aviation Administration Office of NextGen Doc., 94 pp. [Available online at www.faa.gov/nextgen/implementation/media/NextGen_Implementation_Plan_2013.pdf.]
Halley Gotway, J., and Coauthors, 2013: Model Evaluation Tools version 4.1 (METv4.1): User’s guide 4.1. Developmental Testbed Center Rep., 226 pp. [Available online at http://www.dtcenter.org/met/users/docs/users_guide/MET_Users_Guide_v4.1.pdf.]
Haynes, J. M., R. T. Marchand, Z. Luo, A. Bodas-Salcedo, and G. L. Stephens, 2007: A multipurpose simulation package: QuickBeam. Bull. Amer. Meteor. Soc., 88, 1723–1727, doi:10.1175/BAMS-88-11-1723.
Haynes, J. M., T. S. L’Ecuyer, G. L. Stephens, S. D. Miller, C. Mitrescu, N. B. Wood, and S. Tanelli, 2009: Rainfall retrieval over the ocean with spaceborne W-band radar. J. Geophys. Res., 114, D00A22, doi:10.1029/2008JD009973.
Holland, L., J. Halley Gotway, B. Brown, R. Bullock, E. Gilleland, and D. Ahijevych, 2007: The Model Evaluation Tool. 22nd Conf. on Weather Analysis and Forecasting, Park City, UT, Amer. Meteor. Soc., 10A.1. [Available online at https://ams.confex.com/ams/pdfpapers/124840.pdf.]
Hu, Y., and Coauthors, 2009: CALIPSO/CALIOP cloud phase discrimination algorithm. J. Atmos. Oceanic Technol., 26, 2293–2309, doi:10.1175/2009JTECHA1280.1.
Kaplan, M. L., A. W. Huffman, K. M. Lux, J. J. Charney, A. J. Riordan, and Y.-L. Lin, 2005: Characterizing the severe turbulence environments associated with commercial aviation accidents. Part 1: A 44-case study synoptic observational analysis. Meteor. Atmos. Phys., 88, 129–152, doi:10.1007/s00703-004-0080-0.
Klein, S., and C. Jakob, 1999: Validation and sensitivities of frontal clouds simulated by the ECMWF model. Mon. Wea. Rev., 127, 2514–2531, doi:10.1175/1520-0493(1999)127<2514:VASOFC>2.0.CO;2.
Kummerow, C., H. Masunaga, and P. Bauer, 2007: A next-generation microwave rainfall retrieval algorithm for use by TRMM and GPM. Measuring Precipitation from Space: EURAINSAT and the Future, V. Levizzani, P. Bauer, and F. J. Turk, Eds., Advances in Global Change Research, Vol. 28, Springer, 235–252.
Lau, N.-C., and M. W. Crane, 1995: A satellite view of the synoptic-scale organization of cloud properties in midlatitude and tropical circulation systems. Mon. Wea. Rev., 123, 1984–2006, doi:10.1175/1520-0493(1995)123<1984:ASVOTS>2.0.CO;2.
Lau, N.-C., and M. W. Crane, 1997: Comparing satellite and surface observations of cloud patterns in synoptic-scale circulation systems. Mon. Wea. Rev., 125, 3172–3189, doi:10.1175/1520-0493(1997)125<3172:CSASOO>2.0.CO;2.
L’Ecuyer, T. S., and J. H. Jiang, 2010: Touring the atmosphere aboard the A-Train. Phys. Today, 63 (7), 36–41, doi:10.1063/1.3463626.
Mace, G. G., Q. Zhang, M. Vaughan, R. Marchand, G. Stephens, C. Trepte, and D. Winker, 2009: A description of hydrometeor layer occurrence statistics derived from the first year of merged CloudSat and CALIPSO data. J. Geophys. Res., 114, D00A26, doi:10.1029/2007JD009755.
Marchand, R., G. G. Mace, T. Ackerman, and G. Stephens, 2008: Hydrometeor detection using CloudSat—An Earth-orbiting 94-GHz cloud radar. J. Atmos. Oceanic Technol., 25, 519–533, doi:10.1175/2007JTECHA1006.1.
Mason, J., J. W. Strapp, and P. Chow, 2006: Ice particle threat to engines in flight. 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, AIAA, 2006-206. [Available online at http://arc.aiaa.org/doi/pdf/10.2514/6.2006-206.]
Miller, S. D., G. L. Stephens, and A. Beljaars, 1999: A validation survey of the ECMWF prognostic cloud scheme using LITE. Geophys. Res. Lett., 26, 1417–1420, doi:10.1029/1999GL900263.
Miller, S. D., and Coauthors, 2014: Estimating three-dimensional cloud structure via statistically blended satellite observations. J. Appl. Meteor. Climatol., 53, 437–455, doi:10.1175/JAMC-D-13-070.1.
Mitrescu, C., S. D. Miller, J. Haynes, T. S. L’Ecuyer, and J. Turk, 2010: CloudSat precipitation profiling algorithm: Model description. J. Appl. Meteor., 49, 991–1003, doi:10.1175/2009JAMC2181.1.
Posselt, D., G. L. Stephens, and M. Miller, 2008: Cloudsat adding a new dimension to a classical view of extratropical cyclones. Bull. Amer. Meteor. Soc., 89, 599–609, doi:10.1175/BAMS-89-5-599.
Rahmat-Samii, Y., J. Huang, B. Lopez, M. Lou, E. Im, S. L. Durden, and K. Bahadori, 2005: Advanced precipitation radar antenna: Array-fed offset membrane cylindrical reflector antenna. IEEE Trans. Antennas Propag., 53, 2503–2515, doi:10.1109/TAP.2005.852599.
Reynolds, D. W., D. A. Clark, F. W. Wilson, and L. Cook, 2012: Forecast-based decision support for San Francisco International Airport: A NextGen prototype system that improves operations during summer stratus season. Bull. Amer. Meteor. Soc., 93, 1503–1518, doi:10.1175/BAMS-D-11-00038.1.
Sadowy, G. A., A. C. Berkun, W. Chun, E. Im, and S. L. Durden, 2003: Development of an advanced airborne precipitation radar. Microwave J., 46, 84–98.
Stephens, G. L., and Coauthors, 2002: The CloudSat mission and the A-Train: A new dimension of space-based observations of clouds and precipitation. Bull. Amer. Meteor. Soc., 83, 1771–1790, doi:10.1175/BAMS-83-12-1771.
Tiedtke, M., 1993: Representations of clouds in large-scale models. Mon. Wea. Rev., 121, 3040–3061, doi:10.1175/1520-0493(1993)121<3040:ROCILS>2.0.CO;2.
Uhlenbrock, N. L., K. M. Bedka, W. F. Feltz, and S. A. Ackerman, 2007: Mountain wave signatures in MODIS 6.7-μm imagery and their relation to pilot reports of turbulence. Wea. Forecasting, 22, 662–670, doi:10.1175/WAF1007.1.
Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. Academic Press, 467 pp.
Winker, D. M., R. H. Couch, and M. P. McCormick, 1996: An overview of LITE: NASA’s Lidar In-Space Technology Experiment. Proc. IEEE, 84, 164–180, doi:10.1109/5.482227.
Winker, D. M., and Coauthors, 2010: The CALIPSO mission: A global 3D view of aerosols and clouds. Bull. Amer. Meteor. Soc., 91, 1211–1229, doi:10.1175/2010BAMS3009.1.
World Climate Research Programme, 1994: Cloud–radiation interactions and their parameterization in climate models. WCRP Rep. WCRP-86, WMO Tech. Doc. WMO/TD 648, 147 pp.
Yuter, S. E., and R. A. Houze, 1995: Three-dimensional kinematic and microphysical evolution of Florida cumulonimbus. Part II: Frequency distributions of vertical velocity, reflectivity, and differential reflectivity. Mon. Wea. Rev., 123, 1941–1963, doi:10.1175/1520-0493(1995)123<1941:TDKAME>2.0.CO;2.