Perspectives on Cloud Prediction, Postprocessing, and Verification for DoD Applications

Erica K. Dolinar U.S. Naval Research Laboratory, Marine Meteorology Division, Monterey, California

https://orcid.org/0000-0003-1451-4478
and
Jason E. Nachamkin U.S. Naval Research Laboratory, Marine Meteorology Division, Monterey, California

Open access

For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Erica K. Dolinar, erica.k.dolinar.civ@us.navy.mil

Distribution Statement A: Approved for public release. Distribution is unlimited.


DoD Cloud Postprocessing and Verification Workshop

What:

A hybrid workshop convening nearly 100 participants from U.S. government laboratories and operational military weather centers, research institutions, academia, and the private sector. State-of-the-science cloud verification techniques, tools, and datasets for cloud postprocessing and emerging methods for improved cloud prediction were discussed with an underlying perspective on how these topics project onto Department of Defense (DoD) needs.

When:

13–14 September 2023

Where:

NCAR Foothills Laboratory, Boulder, Colorado

1. Introduction

The U.S. Naval Research Laboratory (NRL) Marine Meteorology Division, in collaboration with the Office of Naval Research (ONR), hosted a workshop for members of the science community and Department of Defense (DoD) research laboratories to share current methods and tools for predicting and verifying clouds. Discussions focused on how end-users, such as customers of DoD operational centers or forward deployed sailors, use forecasts or postprocessed datasets for decision support. Participants also identified critical limitations related to cloud prediction and how the community can address relevant science gaps.

2. Background

Accurate characterization of the battlespace environment across multiple time and space scales is needed for the DoD to achieve success in its mission; applications include communications, visibility, ship routing, and aviation, with clouds playing a critical role in these activities. Growing attention toward laser weapons and satellite-based detection and avoidance has further increased interest in cloud prediction and awareness. Cloud prediction has generally lagged behind that of other meteorological variables due to relatively large observational uncertainties, the wide range of scales required to resolve clouds, and the many processes involved in cloud development and evolution.

Volumetric cloud fields are necessary for high-priority products such as cloud-free line of sight (CFLOS). It is generally believed that global update frequencies of 5–10 min at 300-m vertical intervals are required for complete situational awareness. Unfortunately, current observational capabilities fall short of what is required to fully depict the rapidly evolving four-dimensional (4D) nature of clouds. Satellite and surface observations are limited by temporal and spatial sampling and spectral sensitivity. While numerical weather prediction (NWP) models provide comprehensive 4D cloud estimates, forecast accuracy is limited by our understanding of cloud physics, dynamics, initialization [i.e., data assimilation (DA)], and their representation in models. Cloud prediction demands close cooperation across many scientific communities, from observations to modeling, to distill their combined knowledge for use in the latest prediction systems.

Once generated, NWP forecasts are postprocessed, tailored toward warfighter needs, and oftentimes verified using imperfect observations. Collaboration between DoD end-user communities and experts in NWP, DA, ensemble modeling, machine learning/artificial intelligence (ML/AI), and verification is essential for synergistically building the most accurate guidance for decision-making tools.

3. Workshop format

The DoD Cloud Postprocessing and Verification Workshop convened in a hybrid forum and featured briefs from two major U.S. military operational weather centers, the Air Force 16th Weather Squadron and the Navy’s Fleet Numerical Meteorology and Oceanography Command. Other contributions included science talks, a panel discussion, and open discussion. In total, 32 presentations covered the following topics:

  • Probabilistic cloud forecasting

  • All-sky radiance assimilation

  • Cloud analysis techniques

  • Statistical postprocessing

  • Cloud diagnosis and verification

  • Operational requirements

Participants were also encouraged to consider the following “big picture” questions:

  • What aspects need to be prioritized to improve cloud forecasting?

  • What are the best cloud quantities to predict and verify?

  • How can the community better address observational uncertainty?

Herein, we summarize the knowledge shared by workshop participants (Fig. 1), identify research topics worth further investigation, and provide perspectives on how these components project onto DoD activities and needs.

Fig. 1.

Workshop participants attending in-person at the NCAR Foothills Laboratory in Boulder, Colorado.

Citation: Bulletin of the American Meteorological Society 105, 6; 10.1175/BAMS-D-24-0077.1

4. Defining clouds

What is a cloud? This question was posed early during the workshop and became a welcome theme. Clouds can be thought of as collections of water-based particles suspended in the atmosphere and are described in terms of their phase (ice vs liquid), mass, and size. Alternatively, clouds can be defined in terms of how they affect energy propagation across many wavelengths. Inconsistencies between these definitions make interpretation and dataset comparison difficult, though methods and tools to convert between the two do exist.

NWP clouds are generally defined by physical properties such as hydrometeor (e.g., ice/liquid cloud, rain, snow, and graupel) mixing ratios. While observed cloud mixing ratio thresholds vary vertically (from 10⁻⁶ kg kg⁻¹ in low-level clouds to 10⁻⁹ kg kg⁻¹ or less in cirrus) and regionally, a single threshold is often applied globally for NWP. Other NWP cloud definitions rely on empirical relative humidity (RH) or vertical velocity thresholds. In contrast, passive remote sensing instruments measure radiances, which can be converted to a model output parameter (or vice versa) using a forward model. For example, Griffin et al. (2021) evaluated NWP-simulated brightness temperatures TB against those observed by GOES-16, where a 10.3-μm threshold of 235 K was used to isolate convective clouds. The TB thresholds may be adjusted by time of day, surface properties, or cloud type. Participants emphasized the importance of building toward a more universal definition among models and observations, with the goal of adhering to end-user needs.
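The contrast between the two definitions above can be sketched as two threshold tests on the same grid points. This is a hypothetical illustration only; the mixing ratio threshold and sample values are example numbers, while the 235-K brightness temperature threshold follows the Griffin et al. (2021) convective-cloud criterion cited in the text.

```python
import numpy as np

# Example grid-point values (illustrative only, not from any real model run)
q_cloud = np.array([5e-7, 2e-6, 1e-8, 4e-5])      # total condensate mixing ratio (kg/kg)
tb_10p3 = np.array([228.0, 241.0, 233.0, 255.0])  # 10.3-um brightness temperature (K)

# Physical definition: condensate above a single global threshold
mask_physical = q_cloud > 1e-6

# Radiative definition: brightness temperatures colder than 235 K
# flag (convective) cloud, as in Griffin et al. (2021)
mask_radiative = tb_10p3 < 235.0

# The two masks disagree at every point in this toy example,
# illustrating why a common definition matters for verification
print(mask_physical)   # points 2 and 4 are "cloudy"
print(mask_radiative)  # points 1 and 3 are "cloudy"
```

Note that the two masks can classify the same column differently, which is exactly the dataset-comparison difficulty the participants described.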

5. Physics-based cloud prediction

Clouds develop through the interaction between turbulent, radiative, and microphysical processes that operational NWP models cannot fully resolve. While these interactions occur simultaneously in nature, they are parameterized separately in NWP. Accordingly, the community needs to focus on model "unification," that is, understanding how to synergistically parameterize the complex cloud processes that occur across various scales. Coupling between the atmosphere, ocean, land, and ice also influences cloud development, and research exploring these topics should be further pursued.

Bulk microphysics modeling is dominated by single- and double-moment schemes. While double-moment schemes can improve cloud forecasts, systematic biases persist, suggesting that greater microphysical sophistication provides relatively little benefit. Some participants even proposed that NWP microphysical schemes may have reached peak performance at two moments, at least in the midlatitudes. Furthermore, stochastic perturbation ensembles of microphysical parameters produce variability in predicted cloud patterns that is inconclusive when compared with unperturbed ensembles (Thompson et al. 2021). Perturbations to turbulence, boundary layer, radiation, and convective schemes may provide additional insight into the nature of model error. Notably, tropical cyclone simulations show sensitivity to microphysical parameterizations, as these systems are dominated by latent heating (Jin et al. 2014).

Subgrid cloud fraction (CF) is difficult to estimate in NWP models, and its definition is scale dependent. Fractional cloudiness schemes account for unresolved clouds in radiative transfer parameterizations and radiance-based DA but are often based on simple RH thresholds and empirical relationships. Mocko and Cotton (1995) suggest simple schemes perform the best. The Xu and Randall (1996) scheme was reported to underpredict partly cloudy scenarios, likely because it is designed such that fractional clouds are not possible in highly moist environments where the microphysical scheme has no cloud condensate.
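The behavior attributed to the Xu and Randall (1996) scheme can be seen in its commonly quoted functional form. The sketch below uses the published empirical constants (p ≈ 0.25, α₀ ≈ 100, γ ≈ 0.49); q_s denotes the saturation mixing ratio, and the function signature is illustrative rather than taken from any operational code.

```python
import numpy as np

# Empirical constants from the commonly quoted form of Xu and Randall (1996)
P, ALPHA0, GAMMA = 0.25, 100.0, 0.49

def xu_randall_cf(rh, q_l, q_s):
    """Semiempirical cloud fraction from RH and liquid condensate (sketch)."""
    rh = min(max(rh, 0.0), 1.0)
    if rh >= 1.0:
        # Saturated grid box: overcast if condensate exists, else clear
        return 1.0 if q_l > 0.0 else 0.0
    cf = rh**P * (1.0 - np.exp(-ALPHA0 * q_l / ((1.0 - rh) * q_s)**GAMMA))
    return float(np.clip(cf, 0.0, 1.0))

# With no condensate the scheme returns zero cloud even at high RH --
# the behavior the text identifies as underpredicting partly cloudy
# scenes in moist but condensate-free columns
print(xu_randall_cf(0.95, 0.0, 1e-2))   # 0.0
print(xu_randall_cf(0.95, 1e-4, 1e-2))  # fractional cloudiness
```

The key point is that CF depends multiplicatively on the condensate term: when the microphysics scheme carries no cloud water, fractional cloudiness cannot appear regardless of how moist the environment is.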

A new scale-aware CF scheme was developed to adjust RH and mixing ratio thresholds according to the vertical and horizontal grid spacing and land/ocean differentiation. Eddy-diffusivity mass-flux (EDMF)-based CF schemes also show promise. These schemes are attractive because the grid-mean supersaturation is constrained by the temperature and moisture solved from the EDMF parameterization. They are also more consistent than traditional schemes because they are well integrated with the turbulent flux parameterization. Still, parameterizations should be informed by and evaluated with large eddy simulations. Discussions supported the idea that more work is needed to accurately simulate partly to mostly cloudy situations, especially when the physical processes forcing clouds are poorly resolved.

Aerosols are also poorly observed, yet they affect cloud development by acting as cloud condensation nuclei or through indirect effects. As microphysical schemes become increasingly aerosol-aware, systematic aerosol observations become more important. Furthermore, given aerosol impacts on visibility and the increasing pervasiveness of large wildfires, participants recognized the need for more comprehensive aerosol measurements and a growing need for aerosol prediction.

6. Data assimilation

Complementary to the issues surrounding cloud prediction are difficulties related to model initial conditions. While cloud-affected (i.e., all-sky) radiances at microwave and infrared wavelengths are assimilated by some operational centers for global NWP, clouds themselves are not. A mature cloudy DA system hinges on the capability to effectively combine observed and simulated clouds, all while preserving physical consistency. Despite the complex nature of this developing technology, all-sky DA is showing promise for improved cloud prediction. Several talks showed favorable impacts to cloud forecasts through the assimilation of cloud-affected radiances. For example, global root-mean-square errors (RMSEs) of TB were reduced by up to 5%. Furthermore, ensemble Kalman filter (EnKF) radiance assimilation can improve tropical cyclone intensity and track forecasts. Nevertheless, many challenges remain due to unconstrained observation error distributions, nonlinear observation operators, variational bias correction, quality control procedures, and cloud representation in forecast models.

7. Statistical and analytical techniques including machine learning

A number of presentations described the use of ML/AI methods to statistically reduce NWP model error (i.e., postprocessing) and predict clouds. Below are the highlights:

  • High-resolution nowcasting (0–6 h) remains challenging as persistence is difficult to beat. However, optical flow TB forecasts can beat persistence in the first 3 h.

  • ML models have trouble extrapolating forward in time from initial satellite observations alone but perform better when NWP forecasts are added as predictors.

  • Cloud base and TB are difficult to predict due to large error variance and poor predictor/predictand correlations. However, CF, cloud type, fog occurrence, and precipitation type are better suited for ML as they are well constrained.

  • ML models trained with both NWP and observational input were shown to outperform the original NWP guidance for fog, cloud type, and CF.

  • ML models predicting CF and precipitation type have been successfully trained without NWP microphysics. RH is continuous in nature and may better encompass spatial uncertainty and cloud impacts beyond their boundaries. Potential improvements from using microphysical output are unknown, and therefore, testing with microphysics fields is recommended.

  • ML can statistically reproduce the complex, continuous relationships between NWP variables and CF. However, it must be applied carefully, especially in scenarios outside the original training dataset. Further investigation is needed to estimate the impact of changes in NWP parameters, such as microphysics or grid spacing.

  • EnKF methods and time-lag ensembles capably remove CF biases, outscoring deterministic models when compared against GOES.

  • Convolutional neural networks work well because spatial errors are considered during training. Self-attention transformer models perform even better, although they require large computational resources.

  • Explainable AI quantifies the impact of predictor variables on a given forecast. Forecasts are decomposed, using a number of potential methods, to determine which predictors are the most influential. Each method may highlight different features, and their importance may be regime dependent. Interpretation can be challenging when correlations between predictors exist or, for map-based features, there are differing levels of granularity between predictors. Testing groups of related variables using multiple methods is recommended, along with a deep understanding of the related physics.

  • Since mean values tend to minimize cost functions, ML forecasts retain fewer details than NWP. Alternate cost functions (e.g., absolute difference) reduce outlier impacts but still tend to smooth forecasts. Generative adversarial networks produce imagery with statistics similar to that of the training data while simultaneously judging image realism. These techniques show potential, though they are relatively untested in meteorology.

  • ML trustworthiness is a major issue. Since ML models are data-driven, consistency across multiple variables in space and time is not guaranteed. Participants agreed that forecast consistency and AI understanding are important for building trust and confidence.
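The smoothing behavior described above (mean values minimizing cost functions) can be demonstrated with a toy experiment. The scenario below is hypothetical: targets drawn from a bimodal clear/overcast distribution, for which the squared-error-optimal constant prediction is the all-scene mean, a value observed in neither regime.

```python
import numpy as np

# Synthetic cloud fraction targets: half clear scenes, half overcast
rng = np.random.default_rng(0)
targets = np.concatenate([rng.normal(0.05, 0.02, 500),   # clear regime
                          rng.normal(0.95, 0.02, 500)])  # overcast regime

# Search for the constant prediction minimizing mean squared error
candidates = np.linspace(0.0, 1.0, 1001)
mse = np.array([np.mean((targets - c)**2) for c in candidates])
best = candidates[int(np.argmin(mse))]

# The optimum sits near 0.5 -- the mean -- even though almost no
# actual scene has a cloud fraction near 0.5. This is the smoothing
# an MSE-trained ML model inherits.
print(best)
```

An absolute-difference cost moves the optimum toward the median, which in this symmetric example is also near 0.5, consistent with the text's observation that alternate cost functions reduce outlier impacts but still tend to smooth forecasts.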

8. Cloud verification

Forecast verification, especially for clouds, is a complex problem. Forecast skill is often ill-defined or subjective based on user needs, yet forecast trustworthiness must be quantified. DoD operational forecasters are not expected to conduct comprehensive verification studies, nor do they have sufficient time or resources to do so; instead, they often build their own level of trust through experience. Since model verification is conducted by the scientific community, it is incumbent upon that community to understand operational needs, create verification products based on them, and communicate the results to operational forecasters.

Many verification metrics are available through the Model Evaluation Tools (MET) (Brown et al. 2021). MET code is being developed to comply with Security Technical Implementation Guides (STIGs), so it can be used by the DoD. Object-based verification tools such as the Method for Object Based Diagnostic Evaluation (MODE) and the Python FLEXible object TRacKeR (PyFLEXTRKR) algorithm (Feng et al. 2023) offer feature-tracking capabilities based on observations and model output. Since PyFLEXTRKR is an open-source Python-based software package, it may already adhere to STIGs.

Participants stressed the need for verifying clouds using multiple metrics, including grid point and contingency table statistics such as bias, RMSE, equitable threat score, probability of detection, and false alarm rate. Newer object-based scores provide a measure of spatial accuracy, including fractions skill score, MODE, object-based threat score, and spatial dissimilarity measures. MODE requires a number of user-specified parameters to identify, describe, and match objects. Participants encouraged community input to document how these parameters affect verification statistics and what combinations are best for a given variable or application.
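The grid-point contingency-table scores named above can be computed directly from hit, miss, false-alarm, and correct-negative counts. The sketch below uses the standard textbook definitions, not any particular MET configuration; the sample counts are invented for illustration.

```python
def contingency_scores(hits, misses, false_alarms, correct_negatives):
    """Standard 2x2 contingency-table verification scores (sketch)."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                   # probability of detection
    far = false_alarms / (hits + false_alarms)     # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses) # frequency bias
    # Equitable threat score: hits corrected for those expected by chance
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return {"POD": pod, "FAR": far, "bias": bias, "ETS": ets}

# Invented counts for a cloud/no-cloud forecast over 1000 grid points
scores = contingency_scores(hits=50, misses=20, false_alarms=30,
                            correct_negatives=900)
print(scores)
```

Because each score rewards different behavior (POD ignores false alarms, FAR ignores misses, frequency bias ignores placement), reporting several together, as the participants recommend, gives a far more complete picture than any single number.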

Numerous talks discussed cloud verification from a variety of perspectives. In no particular order, the following cloud parameters are commonly verified and/or are ones needed for DoD applications: thickness, ceiling, optical depth, phase, top/base heights, CF, RH, effective particle size, water path/content, CFLOS, TB, type, precipitation type, rain rate, object number/size, and surface radiation budgets.

9. Cloud analyses

Global near-real-time gridded cloud analysis products (Table 1) developed by the Air Force, Cooperative Institute for Research in the Atmosphere (CIRA), and NASA Langley were discussed in reference to DoD needs. These analyses consist of passive satellite cloud property retrievals obtained from multiple satellites, with the addition of NWP data and AI/ML products that are often trained against active satellite measurements. Various ML techniques have proven to be very effective in refining the accuracy of difficult-to-retrieve quantities (e.g., cloud-base height and nighttime properties) from conventional passive radiometers onboard many operational satellites. It was suggested that optical flow techniques can mitigate the challenging effects of boundaries between satellites, such as parallax and time-based discontinuities, which additionally shows their potential for 4D applications. The CIRA and NASA products are publicly available online and could be used for nowcasting, verification, and ML training.

Table 1.

Cloud analysis products highlighted by workshop participants. WWMCA: World Wide Merged Cloud Analysis; OVERCAST: Optical Variability Evaluation of Regional Cloud Asymmetries in Space and Time (Noh et al. 2022); SatCORPS/GCC: Satellite Cloud Observations and Radiative Property retrieval System/Global Cloud Composites.

Table 1.

Participants agreed that intercomparisons between cloud analysis products, as well as reanalyses, would be useful for documenting differences, strengths, and deficiencies in the respective methods.

10. Observational uncertainty and its quantification

Observational uncertainty has long been a problem for cloud prediction and verification. Users must understand observational limits, the causes of uncertainty, and the conditions under which data should be used. The available satellite data record is expanding as more sensors are launched into space, thus filling observational gaps with new channels at higher spatial and spectral resolution from both passive and active sensors. Sophisticated algorithms combine all available information and, as a result, more clouds are detected, retrievals are more accurate, and 4D properties are better characterized. Ground-based observations have been underutilized due to their limited point-scale nature, despite the availability of several ground-based networks. These datasets can be used to verify cloud forecasts and satellite retrievals. One such network in the United States contains highly accurate cloud-base information from ceilometer measurements and is scheduled to come online in 2024.

Uncertainty quantification is perhaps best thought of in terms of the ML prediction process. Observational uncertainties within training datasets can be identified and quantified by applying physics knowledge and cross-checking among multiple variables, data sources, and times. Decomposing forecast uncertainty helps to determine whether it stems from incomplete knowledge and can be reduced with additional data or model improvements (epistemic) or whether it is inherent to the system and irreducible (aleatoric). Data curation is foundationally important for cloud prediction, postprocessing, verification, and ML/AI but is often underappreciated. It requires large, homogeneous, quality-controlled, and well-documented datasets spanning multiple years. Data curation is onerous, as both NWP and observation errors evolve over time. Techniques and lessons learned should be shared among the community when possible.
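One common way to perform the epistemic/aleatoric decomposition is the law of total variance applied to an ensemble of probabilistic predictors, as in deep-ensemble uncertainty quantification. The sketch below is a hypothetical illustration with invented per-member predictions; each member supplies a predictive mean and variance for the same case.

```python
import numpy as np

# Invented predictions from a 4-member ensemble for one forecast case:
# each member outputs a predictive mean and variance (e.g., cloud fraction)
member_means = np.array([0.62, 0.55, 0.70, 0.58])
member_vars = np.array([0.010, 0.012, 0.009, 0.011])

# Law of total variance:
#   total = E[Var(y|member)]  (aleatoric: inherent noise, irreducible)
#         + Var(E[y|member])  (epistemic: member disagreement, reducible
#                              with more data or better models)
aleatoric = member_vars.mean()
epistemic = member_means.var()
total = aleatoric + epistemic

print(f"aleatoric={aleatoric:.4f}, epistemic={epistemic:.4f}, total={total:.4f}")
```

In this framing, a large epistemic term signals that more training data or model refinement could help, while a large aleatoric term indicates variability the forecast system cannot remove, only characterize.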

11. DoD perspective

Given the broad impacts of weather and clouds on many activities carried out by the DoD (e.g., communications, detection/avoidance, visibility, ship routing, vehicle re-entry, aviation/icing, and laser weapons), weather forecasting for short- and long-term planning must be considered a high priority. To avoid costly damage caused by weather, decision-makers require real-time information from multiple perspectives. For instance, information about hangar and aircraft specifications (e.g., wind limits) must be available; otherwise, assets can be damaged if improperly managed when adverse weather is predicted. Long-term planning also requires weather guidance to assist with risk assessment and prevention, such as avoiding maintenance in Florida during hurricane season. The U.S. Navy has intentionally directed resources toward prioritizing weather forecasting and risk assessment at all operational lead times.

Trust in weather forecasts is a widespread issue in both the DoD and the general public. The military is uniquely faced with a ranking imbalance between weather forecasters and the commanding officers who make stressful, high-impact decisions. While lower-ranked personnel may have limited academic meteorological training, they must identify and understand when a weather forecast is reliable and effectively communicate their confidence. AI/ML offers an opportunity for redefining trust in operational weather forecasting. Discussions supported the idea that attitudes toward AI/ML are rapidly changing and that it can help alleviate some of the hurdles that inherently exist for communicating forecast trust and uncertainty within the chain of command. Nevertheless, it is important for the science community to understand that even in scenarios where an accurate forecast is effectively delivered, mission success relies on other aspects that extend beyond weather prediction.

The following action items were identified as a means to potentially enhance DoD preparedness within the context of weather prediction:

  • Incorporate weather hazards in war games.

  • Adopt a “NWS weather-ready” construct.

  • Provide more training for military weather forecasters, including on communicating probabilistic versus deterministic forecasts and on examples of AI/ML technology successes.

  • Continue developing and delivering verification tools for the DoD community.

Acknowledgments.

The workshop was supported by the Office of Naval Research (N0001423WX01786). Workshop support and logistics were arranged in coordination with the Cooperative Programs for the Advancement of Earth System Science (CPAESS). Presentations and recordings are available online: https://cpaess.ucar.edu/meetings/dod-cloud-post-processing-and-verification-workshop. The authors would also like to acknowledge CDR Shelley Caplan for her perspectives on operational weather forecasting within the U.S. Navy.

References

  • Brown, B., and Coauthors, 2021: The Model Evaluation Tools (MET): More than a decade of community-supported forecast verification. Bull. Amer. Meteor. Soc., 102, E782–E807, https://doi.org/10.1175/BAMS-D-19-0093.1.

  • Feng, Z., J. Hardin, H. C. Barnes, J. Li, L. R. Leung, A. Varble, and Z. Zhang, 2023: PyFLEXTRKR: A flexible feature tracking Python software for convective cloud analysis. Geosci. Model Dev., 16, 2753–2776, https://doi.org/10.5194/gmd-16-2753-2023.

  • Griffin, S. M., J. A. Otkin, S. E. Nebuda, T. L. Jensen, P. S. Skinner, E. Gilleland, T. A. Supinie, and M. Xue, 2021: Evaluating the impact of planetary boundary layer, land surface model, and microphysics parameterization schemes on cold cloud objects in simulated GOES-16 brightness temperatures. J. Geophys. Res. Atmos., 126, e2021JD034709, https://doi.org/10.1029/2021JD034709.

  • Jin, Y., and Coauthors, 2014: The impact of ice phase cloud parameterizations on tropical cyclone prediction. Mon. Wea. Rev., 142, 606–625, https://doi.org/10.1175/MWR-D-13-00058.1.

  • Mocko, D. M., and W. R. Cotton, 1995: Evaluation of fractional cloudiness parameterizations for use in a mesoscale model. J. Atmos. Sci., 52, 2884–2901, https://doi.org/10.1175/1520-0469(1995)052<2884:EOFCPF>2.0.CO;2.

  • Noh, Y.-J., and Coauthors, 2022: A framework for satellite-based 3D cloud data: An overview of the VIIRS cloud base height retrieval and user engagement for aviation applications. Remote Sens., 14, 5524, https://doi.org/10.3390/rs14215524.

  • Thompson, G., J. Berner, M. Frediani, J. A. Otkin, and S. M. Griffin, 2021: A stochastic parameter perturbation method to represent uncertainty in a microphysics scheme. Mon. Wea. Rev., 149, 1481–1497, https://doi.org/10.1175/MWR-D-20-0077.1.

  • Xu, K.-M., and D. A. Randall, 1996: A semiempirical cloudiness parameterization for use in climate models. J. Atmos. Sci., 53, 3084–3102, https://doi.org/10.1175/1520-0469(1996)053<3084:ASCPFU>2.0.CO;2.