  • Alexander, S. P., and A. Protat, 2018: Cloud properties observed from the surface and by satellite at the northern edge of the Southern Ocean. J. Geophys. Res. Atmos., 123, 443–456, https://doi.org/10.1002/2017JD026552.

  • Bikos, D., and Coauthors, 2012: Synthetic satellite imagery for real-time high-resolution model evaluation. Wea. Forecasting, 27, 784–795, https://doi.org/10.1175/WAF-D-11-00130.1.

  • Blanchard, Y., J. Pelon, E. W. Eloranta, K. P. Moran, J. Delanoë, and G. Sèze, 2014: A synergistic analysis of cloud cover and vertical distribution from A-Train and ground-based sensors over the high Arctic station Eureka from 2006 to 2010. J. Appl. Meteor. Climatol., 53, 2553–2570, https://doi.org/10.1175/JAMC-D-14-0021.1.

  • Bodas-Salcedo, A., and Coauthors, 2011: COSP: Satellite simulation software for model assessment. Bull. Amer. Meteor. Soc., 92, 1023–1043, https://doi.org/10.1175/2011BAMS2856.1.

  • Bony, S., J. L. Dufresne, H. L. Treut, J. J. Morcrette, and C. Senior, 2004: On dynamic and thermodynamic components of cloud changes. Climate Dyn., 22, 71–86, https://doi.org/10.1007/s00382-003-0369-6.

  • Chen, S., and Coauthors, 2003: COAMPS version 3 model description-general theory and equations. Naval Research Laboratory Tech. Note NRL/PU/7500-03-448, 143 pp.

  • Cui, W., X. Dong, B. Xi, Z. Feng, and J. Fan, 2020: Can the GPM IMERG final product accurately represent MCSs’ precipitation characteristics over the central and eastern United States? J. Hydrometeor., 21, 39–57, https://doi.org/10.1175/JHM-D-19-0123.1.

  • Daley, R., and E. Barker, 2001: NAVDAS Source Book 2001: NRL atmospheric variational data assimilation system. Naval Research Laboratory Tech. Note NRL/PU/7530-01-441, 163 pp.

  • Davies, H. C., 1976: A lateral boundary formulation for multi-level prediction models. Quart. J. Roy. Meteor. Soc., 102, 405–418, https://doi.org/10.1002/qj.49710243210.

  • Dolinar, E. K., X. Dong, B. Xi, J. Jiang, and H. Su, 2015: Evaluation of CMIP5 simulated clouds and TOA radiation budgets using NASA satellite observations. Climate Dyn., 44, 2229–2247, https://doi.org/10.1007/s00382-014-2158-9.

  • Evans, S., R. Marchand, T. Ackerman, L. Donner, J.-C. Golaz, and C. Seman, 2017: Diagnosing cloud biases in the GFDL AM3 model with atmospheric classification. J. Geophys. Res. Atmos., 122, 12 827–12 844, https://doi.org/10.1002/2017JD027163.

  • Field, P. R., and R. Wood, 2007: Precipitation and cloud structure in midlatitude cyclones. J. Climate, 20, 233–254, https://doi.org/10.1175/JCLI3998.1.

  • Frey, R., B. Baum, W. Menzel, S. Ackerman, C. Moeller, and J. Spinhirne, 1999: A comparison of cloud top heights computed from airborne lidar and MAS radiance data using CO2 slicing. J. Geophys. Res., 104, 24 547–24 555, https://doi.org/10.1029/1999JD900796.

  • Govekar, P. D., C. Jakob, M. J. Reeder, and J. Haynes, 2011: The three-dimensional distribution of clouds around Southern Hemisphere extratropical cyclones. Geophys. Res. Lett., 38, L21805, https://doi.org/10.1029/2011GL049091.

  • Grasso, L. D., M. Sengupta, J. F. Dostalek, R. Brummer, and M. DeMaria, 2008: Synthetic satellite imagery for current and future environmental satellites. Int. J. Remote Sens., 29, 4373–4384, https://doi.org/10.1080/01431160801891820.

  • Heidinger, A. K., M. J. Foster, A. Walther, and X. Zhao, 2014: The Pathfinder Atmospheres-Extended AVHRR climate dataset. Bull. Amer. Meteor. Soc., 95, 909–922, https://doi.org/10.1175/BAMS-D-12-00246.1.

  • Hodur, R. M., 1997: The Naval Research Laboratory’s Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Mon. Wea. Rev., 125, 1414–1430, https://doi.org/10.1175/1520-0493(1997)125<1414:TNRLSC>2.0.CO;2.

  • Huffman, G. J., and Coauthors, 2019: NASA Global Precipitation Measurement (GPM) Integrated Multi-satellitE Retrievals for GPM (IMERG). NASA Algorithm Theoretical Basis Doc., version 06, 38 pp., https://gpm.nasa.gov/sites/default/files/document_files/IMERG_ATBD_V06.pdf.

  • Jin, D., L. Oreopoulos, and D. Lee, 2016: Regime-based evaluation of cloudiness in CMIP5 models. Climate Dyn., 48, 89–112, https://doi.org/10.1007/s00382-016-3064-0.

  • Jones, T. A., P. Skinner, K. Knopfmeier, E. Mansell, P. Minnis, R. Palikonda, and W. Smith Jr., 2018: Comparison of cloud microphysics schemes in a Warn-On-Forecast system using synthetic satellite objects. Wea. Forecasting, 33, 1681–1708, https://doi.org/10.1175/WAF-D-18-0112.1.

  • Kain, J. S., and J. M. Fritsch, 1993: Convective parameterization for mesoscale models: The Kain-Fritsch scheme. The Representation of Cumulus Convection in Numerical Models, Meteor. Monogr., No. 46, Amer. Meteor. Soc., 165–170.

  • Klein, S. A., and D. L. Hartmann, 1993: The seasonal cycle of low stratiform clouds. J. Climate, 6, 1587–1606, https://doi.org/10.1175/1520-0442(1993)006<1587:TSCOLS>2.0.CO;2.

  • Klein, S. A., and C. Jakob, 1999: Validation and sensitivities of frontal clouds simulated by the ECMWF model. Mon. Wea. Rev., 127, 2514–2531, https://doi.org/10.1175/1520-0493(1999)127<2514:VASOFC>2.0.CO;2.

  • Koshiro, T., and M. Shiotani, 2013: Relationship between low stratiform cloud amount and estimated inversion strength in the lower troposphere over the global ocean in terms of cloud types. J. Meteor. Soc. Japan, 92, 107–120, https://doi.org/10.2151/jmsj.2014-107.

  • Kuma, P., A. J. McDonald, O. Morgenstern, R. Querel, I. Silber, and C. J. Flynn, 2021: Ground-based lidar processing and simulator framework for comparing models and observations (ALCF 1.0). Geosci. Model Dev., 14, 43–72, https://doi.org/10.5194/gmd-14-43-2021.

  • Kumar, S. V., and Coauthors, 2006: Land information system: An interoperable framework for high resolution land surface modeling. Environ. Modell. Software, 21, 1402–1415, https://doi.org/10.1016/j.envsoft.2005.07.004.

  • Liu, M., J. E. Nachamkin, and D. L. Westphal, 2009: On the improvement of COAMPS weather forecasts using an advanced radiative transfer model. Wea. Forecasting, 24, 286–306, https://doi.org/10.1175/2008WAF2222137.1.

  • Mahajan, S., and B. Fataniya, 2020: Cloud detection methodologies: Variants and development—A review. Complex Intell. Syst., 6, 251–261, https://doi.org/10.1007/s40747-019-00128-0.

  • McDonald, A. J., and S. Parsons, 2018: A comparison of cloud classification methodologies: Differences between cloud and dynamical regimes. J. Geophys. Res. Atmos., 123, 11 173–11 193, https://doi.org/10.1029/2018JD028595.

  • McErlich, C., A. McDonald, A. Schuddeboom, and I. Silber, 2021: Comparing satellite and ground based observations of cloud occurrence over high southern latitudes. J. Geophys. Res. Atmos., 126, e2020JD033607, https://doi.org/10.1029/2020JD033607.

  • Medeiros, B., and B. Stevens, 2011: Revealing differences in GCM representations of low clouds. Climate Dyn., 36, 385–399, https://doi.org/10.1007/s00382-009-0694-5.

  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851–875, https://doi.org/10.1029/RG020i004p00851.

  • Miller, S. D., and Coauthors, 2014: Model-evaluation tools for three-dimensional cloud verification via spaceborne active sensors. J. Appl. Meteor. Climatol., 53, 2181–2195, https://doi.org/10.1175/JAMC-D-13-0322.1.

  • Minnis, P., and Coauthors, 2021: CERES MODIS cloud product retrievals for edition 4. Part I: Algorithm changes. IEEE Trans. Geosci. Remote Sens., 59, 2744–2780, https://doi.org/10.1109/TGRS.2020.3008866.

  • Nachamkin, J. E., Y. Jin, L. D. Grasso, and K. Richardson, 2017: Using synthetic brightness temperatures to address uncertainties in cloud-top-height verification. J. Appl. Meteor. Climatol., 56, 283–296, https://doi.org/10.1175/JAMC-D-16-0240.1.

  • Naud, C. M., J. F. Booth, and A. D. Del Genio, 2016: The relationship between boundary layer stability and cloud cover in the post-cold-frontal region. J. Climate, 29, 8129–8149, https://doi.org/10.1175/JCLI-D-15-0700.1.

  • Niu, G.-Y., and Coauthors, 2011: The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. J. Geophys. Res., 116, D12109, https://doi.org/10.1029/2010JD015139.

  • Noh, Y.-J., and Coauthors, 2017: Cloud-base height estimation from VIIRS. Part II: A statistical algorithm based on A-train satellite data. J. Atmos. Oceanic Technol., 34, 585–598, https://doi.org/10.1175/JTECH-D-16-0110.1.

  • Oreopoulos, L., and W. Rossow, 2011: The cloud radiative effects of International Satellite Cloud Climatology Project weather states. J. Geophys. Res., 116, D12202, https://doi.org/10.1029/2010JD015472.

  • Oreopoulos, L., N. Cho, D. Lee, and S. Kato, 2016: Radiative effects of global MODIS cloud regimes. J. Geophys. Res. Atmos., 121, 2299–2317, https://doi.org/10.1002/2015JD024502.

  • Otkin, J. A., T. J. Greenwald, J. Sieglaff, and H.-L. Huang, 2009: Validation of a large-scale simulated brightness temperature dataset using SEVIRI satellite observations. J. Appl. Meteor. Climatol., 48, 1613–1626, https://doi.org/10.1175/2009JAMC2142.1.

  • Protat, A. S. A., and Coauthors, 2014: Reconciling ground-based and space-based estimates of the frequency of occurrence and radiative effect of clouds around Darwin, Australia. J. Appl. Meteor. Climatol., 53, 456–478, https://doi.org/10.1175/JAMC-D-13-072.1.

  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1.

  • Rossow, W. B., and R. A. Schiffer, 1999: Advances in understanding clouds from ISCCP. Bull. Amer. Meteor. Soc., 80, 2261–2287, https://doi.org/10.1175/1520-0477(1999)080<2261:AIUCFI>2.0.CO;2.

  • Rossow, W. B., G. Tselioudis, A. Polak, and C. Jakob, 2005: Tropical climate described as a distribution of weather states indicated by distinct mesoscale cloud property mixtures. Geophys. Res. Lett., 32, L21812, https://doi.org/10.1029/2005GL024584.

  • Rutledge, S. A., and P. V. Hobbs, 1983: The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. VIII: A model for the “seeder-feeder” process in warm-frontal rainbands. J. Atmos. Sci., 40, 1185–1206, https://doi.org/10.1175/1520-0469(1983)040<1185:TMAMSA>2.0.CO;2.

  • Rutledge, S. A., and P. V. Hobbs, 1984: The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. XII: A diagnostic modeling study of precipitation development in narrow cold-frontal rainbands. J. Atmos. Sci., 41, 2949–2972, https://doi.org/10.1175/1520-0469(1984)041<2949:TMAMSA>2.0.CO;2.

  • Schmit, T. J., P. Griffith, M. M. Gunshor, J. M. Daniels, S. J. Goodman, and W. J. Lebair, 2017: A closer look at the ABI on the GOES-R series. Bull. Amer. Meteor. Soc., 98, 681–698, https://doi.org/10.1175/BAMS-D-15-00230.1.

  • Smith, W. L., Jr., and Coauthors, 1996: Comparisons of cloud heights derived from satellite, aircraft, surface lidar and LITE data. Proc. Int. Radiation Symp., Fairbanks, AK, Int. Radiation Commission, 603–606.

  • Taylor, P. C., S. Kato, K.-M. Xu, and M. Cai, 2015: Covariance between Arctic sea ice and clouds within atmospheric state regimes at the satellite footprint level. J. Geophys. Res. Atmos., 120, 12 656–12 678, https://doi.org/10.1002/2015JD023520.

  • Tselioudis, G., W. Rossow, Y. Zhang, and D. Konsta, 2013: Global weather states and their properties from passive and active satellite cloud retrievals. J. Climate, 26, 7734–7746, https://doi.org/10.1175/JCLI-D-13-00024.1.

  • Wang, S., L. W. O’Neill, Q. Jiang, S. P. de Szoeke, X. Hong, H. Jin, W. T. Thomson, and X. Zheng, 2011: A regional real-time forecast of marine boundary layers during VOCALS-REx. Atmos. Chem. Phys., 11, 421–437, https://doi.org/10.5194/acp-11-421-2011.

  • Webb, M., C. Senior, S. Bony, and J. J. Morcrette, 2001: Combining ERBE and ISCCP data to assess clouds in the Hadley Centre, ECMWF and LMD atmospheric climate models. Climate Dyn., 17, 905–922, https://doi.org/10.1007/s003820100157.

  • Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. International Geophysics Series, Vol. 59, Elsevier, 467 pp.

  • Williams, K. D., and M. J. Webb, 2009: A quantitative performance assessment of cloud regimes in climate models. Climate Dyn., 33, 141–157, https://doi.org/10.1007/s00382-008-0443-1.

  • Williams, K. D., and A. Bodas-Salcedo, 2017: A multi-diagnostic approach to cloud evaluation. Geosci. Model Dev., 10, 2547–2566, https://doi.org/10.5194/gmd-10-2547-2017.

  • Wood, R., and C. S. Bretherton, 2006: On the relationship between stratiform low cloud cover and lower-tropospheric stability. J. Climate, 19, 6425–6432, https://doi.org/10.1175/JCLI3988.1.

  • Yost, C., P. Minnis, S. Sun-Mack, Y. Chen, and W. L. Smith, 2021: CERES MODIS cloud product retrievals for edition 4—Part II: Comparisons to CloudSat and CALIPSO. IEEE Trans. Geosci. Remote Sens., 59, 3695–3724, https://doi.org/10.1109/TGRS.2020.3015155.

  • Zhang, M. H., and Coauthors, 2005: Comparing clouds and their seasonal variations in 10 atmospheric general circulation models with satellite measurements. J. Geophys. Res., 110, D15S02, https://doi.org/10.1029/2004JD005021.

    Fig. 1.

    Computational domain used for the cloud verification study. COAMPS topography is shown over land; redder colors indicate higher terrain. The 2-yr mean surface temperature valid at 1800 UTC (K; color bar) is shown over water.

    Fig. 2.

    Observations valid at 1800 UTC 8 May: (a) GOES normalized 0.65-μm reflectance, (b) GOES-retrieved cloud-top height, (c) IMERG precipitation, and (d) GOES-retrieved CBH. Text labels in (b) refer to cloud regions mentioned in the text.

    Fig. 3.

    Normalized scores depict the degree to which clouds likely result from lower-tropospheric stable or unstable processes as based on the parameters in Tables 1 and 2. Values near 1.0 indicate high likelihood, and white areas denote clear skies. All plots are valid at 1800 UTC 8 May. Shown are (a) the unstable score for the COAMPS 6-h forecast cloud field, (b) the unstable cloud score for the GOES cloud retrievals, and the corresponding stable scores for (c) COAMPS and (d) GOES.

    Fig. 4.

    Stable and unstable masks for (a) GOES-observed clouds and (b) clouds from COAMPS 6-h forecasts valid at 1800 UTC 8 May 2018. Masks represent regions where the scores in Fig. 3 exceed 0.75. Unstable and stable regions are shaded red and blue, respectively, and overlapping regions are shaded in dark red. Light-blue shading in (a) indicates regions where the retrieved stable clouds were patched with the convolutional filter.

    Fig. 5.

    Scatterplots of daily statistics from the cloud/no-cloud mask valid at 1800 UTC as a function of yearday. Each distribution is fitted with a second-order polynomial. Shown are (a) daily bias scores from the 6-h forecasts (blue dots) and GOES-observed fractional coverage (red dots) and (b) the corresponding ETS values.

    Fig. 6.

    Scatterplots of daily 6-h forecast scores and fitted curves, displayed as in Fig. 5. In all panels, solid curves and daily dots correspond to the base experiment B in Table 4. Short dashed and dash–dotted curves correspond to experiments NP and NH8 according to the legend in (d). Daily dots associated with these curves are not shown. For (left) stable and (right) unstable masks, shown are (a),(b) bias (blue) and fractional coverage, (c),(d) ETS scores, and (e),(f) FSS values for the single-point (orange) and 11-point, or 55-km, (green) neighborhoods.

    Fig. 7.

    Cold-season (14 Oct–14 Apr) mean fractional coverage of the (a) unstable cloud mask for COAMPS 6-h forecasts, (b) GOES unstable cloud mask, (c) stable cloud mask for COAMPS 6-h forecasts, and (d) GOES stable cloud mask. Masks were derived from unpatched clouds with tops of ≤8 km (NH8 in Table 4) and are valid at 1800 UTC.

    Fig. 8.

    As in Fig. 7, but for the warm season (15 Apr–13 Oct).

    Fig. 9.

    The 2-yr mean fractional coverage of the (a) COAMPS stable and (b) GOES stable masks from all days where the stable cloud coverage bias for COAMPS 6-h forecasts was ≤ 0.75. All masks and biases were calculated from unpatched clouds with tops of ≤8 km (NH8 in Table 4) valid at 1800 UTC.

    Fig. 10.

    As in Fig. 9, but for all days with bias scores of ≥1.5.

    Fig. 11.

    Marine stratus mask statistics valid at 1800 UTC showing: (a) the daily COAMPS 6-h forecast bias (blue) and GOES-based fractional coverage (red) for each event, (b) the mean fractional coverage of the events for COAMPS 6-h forecasts, (c) the daily ETS for each event, and (d) the mean fractional coverage of the GOES events.


Classification and Evaluation of Stable and Unstable Cloud Forecasts

Jason E. Nachamkin,a Adam Bienkowski,b Rich Bankert,a Krishna Pattipati,b David Sidoti,a Melinda Surratt,a Jacob Gull,c and Chuyen Nguyend

a Naval Research Laboratory, Monterey, California; b University of Connecticut, Storrs, Connecticut; c DeVine Consulting, Fremont, California; d American Society for Engineering Education, Monterey, California

Abstract

A physics-based cloud identification scheme, originally developed for a machine-learning forecast system, was applied to verify cloud location and coverage bias errors from two years of 6-h forecasts. The routine identifies stable and unstable environments by assessing the potential for buoyant versus stable cloud formation. The efficacy of the scheme is documented by investigating its ability to identify cloud patterns and systematic forecast errors. Results showed that stable cloud forecasts contained widespread, persistent negative cloud cover biases most likely associated with turbulent, radiative, and microphysical feedback processes. In contrast, unstable clouds were better predicted despite being poorly resolved. This suggests that scale aliasing, while energetically problematic, results in less-severe short-term cloud cover errors. This study also evaluated Geostationary Operational Environmental Satellite (GOES) cloud-base retrievals for their effectiveness at identifying regions of lower-tropospheric cloud cover. Retrieved cloud-base heights were sometimes too high with respect to their actual values in regions of deep-layered clouds, resulting in underestimates of the extent of low cloud cover in these areas. Sensitivity experiments indicate that the most accurate cloud-base estimates existed in regions with cloud tops at or below 8 km.

Significance Statement

Cloud forecasts are difficult to verify because the height, depth, and type of the clouds are just as important as the spatial location. Satellite imagery and retrievals are good for verifying location, but these measurements are sometimes uncertain because of obscuration from above. Despite these uncertainties, we can learn a lot about specific forecast errors by tracking general areas of clouds based on their physical forcing mechanisms. We chose to sort by atmospheric stability because buoyant and stable processes are physically very distinct. Studies of this nature exist, but they typically assess mean cloud frequencies without considering spatial and temporal displacements. Here, we address displacement error by assessing the direct overlap between the observed and predicted clouds.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Bankert: Retired.

Corresponding author: Jason E. Nachamkin, jason.nachamkin@nrlmry.navy.mil

1. Introduction

Clouds by their nature are difficult to predict, characterize, and verify. Cloud fields have high spatial variance, and their vertical extent adds a degree of freedom beyond that of two-dimensional variables like surface precipitation. Since clouds are three-dimensional, the definition of overlap between the forecast and the observations should include some aspect of all three dimensions. Cloud verification studies often relax the overlap constraints by comparing temporal, spatial, or composite means. While very useful, these comparisons do not assess the instantaneous performance of cloud position forecasts. Verification of that type requires consideration of both displacement and cloud-height errors.

Unfortunately, most cloud observations do not easily lend themselves to three-dimensional analysis. Active sensors such as the cloud profiling radar on CloudSat are advantageous in this respect (Miller et al. 2014) but are limited in horizontal extent to a narrow “curtain” of data. Clouds within 1 km of Earth’s surface are also undersampled by these sensors relative to ground-based observations (Alexander and Protat 2018; Blanchard et al. 2014; Protat et al. 2014; McErlich et al. 2021). For many applications, passive geostationary satellite observations offer the best alternative in terms of spatial and temporal coverage (Bikos et al. 2012; Schmit et al. 2017). Although the observed fields are two-dimensional, conditional samples based on cloud height and thickness can isolate the third dimension. Applying these observations to model verification requires a careful understanding of their limitations. Cloud-top-height underestimation is well documented for semitransparent clouds (Smith et al. 1996; Frey et al. 1999; Yost et al. 2021), and cloud thickness and cloud-base height (CBH) are also difficult to retrieve for layered clouds (Noh et al. 2017; Yost et al. 2021).
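The conditional sampling described above can be sketched as follows. The retrieved fields, the ∼5-km base threshold, and the 8-km top limit mirror constraints discussed later in this study; the array values themselves are hypothetical and not from any specific retrieval product.

```python
import numpy as np

# Hypothetical 2D retrieved fields (km); NaN marks clear sky or a
# failed retrieval. Values and shapes are illustrative only.
cloud_top = np.array([[2.0, 9.5, np.nan],
                      [7.8, 3.2, 12.0]])
cloud_base = np.array([[0.5, 8.0, np.nan],
                       [1.1, 0.9, 6.5]])

# Isolate lower-tropospheric cloud: base below ~5 km, restricted to
# scenes with tops at or below 8 km, where base retrievals are most
# reliable. Comparisons involving NaN evaluate to False, so clear or
# failed pixels drop out of the mask automatically.
low_cloud = (cloud_base < 5.0) & (cloud_top <= 8.0)

print(low_cloud)
```

The resulting Boolean mask can then be verified against a forecast mask built from the model's own base and top heights.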

Inaccuracies in the retrieved cloud properties make radiance-based verification an attractive option. Forward models now simulate properties observed from most satellite-borne passive and active sensors (Grasso et al. 2008; Otkin et al. 2009; Bodas-Salcedo et al. 2011; Bikos et al. 2012) as well as ground-based sensors (Kuma et al. 2021). Direct comparisons between numerical forecasts and observed quantities such as brightness temperature, cloud optical thickness, and total cloud fraction have an advantage over the retrievals. Variables such as total cloud cover or cloud fraction are footprint dependent and are defined differently in the retrievals than in the numerical models (Bodas-Salcedo et al. 2011). Numerical cloud parameterizations vary, depending on scale, from diagnosed cloud fraction to bulk microphysical schemes. Comparing such widely varying schemes in radiance space eliminates the need to convert all output to a common cloud fraction, avoiding the associated uncertainties. However, studies of this nature primarily evaluate forecast errors in terms of spatial and temporal means, not in terms of instantaneous cloud location. This is because, as Bodas-Salcedo et al. (2011) point out, radiance-based variables are not trivially related to specific geophysical quantities. Conditional samples based on brightness temperature can isolate cloud-top pressures or heights in specific layers, but these are subject to the same limitations incurred by the cloud-top-height retrievals: namely, the brightness temperatures must be mapped to preexisting atmospheric temperature profiles. Thick, convective cloud tops are easily identified (Jones et al. 2018), but semitransparent clouds are still mischaracterized. Lower-atmospheric clouds can be difficult to distinguish from the background, especially due to snow and cold surface temperatures in winter (Mahajan and Fataniya 2020). Cloud thickness must also be inferred from optical depth. Thus, while radiance-based verification offers the most consistent comparison between the model and the observations, the retrievals offer the best means of identifying specific regions of cloud-top height, CBH, and cloud thickness over large areas.
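The mapping of brightness temperatures to heights can be illustrated as a simple inversion of a temperature profile. The sounding values and the opaque-cloud assumption below are illustrative only; operational retrievals must additionally handle semitransparent clouds, inversions, and nonmonotonic profiles.

```python
import numpy as np

# Illustrative sounding: heights (km) and temperatures (K) decreasing
# monotonically with height through the troposphere.
z = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
temp = np.array([288.0, 275.0, 262.0, 249.0, 236.0, 223.0, 210.0])

def brightness_temp_to_height(tb):
    """Map an infrared brightness temperature (K) to a cloud-top height
    (km) by locating where the profile temperature matches tb. Assumes
    an opaque cloud; a semitransparent cloud biases tb warm, which
    places the retrieved top too low."""
    # np.interp requires ascending x, so interpolate on the reversed profile.
    return np.interp(tb, temp[::-1], z[::-1])

print(brightness_temp_to_height(249.0))  # -> 6.0 km
```

Conditional samples for a given layer then reduce to thresholding tb at the profile temperatures bounding that layer.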

Both radiance-based and retrieval-based methods are constrained by the top-down nature of the passive satellite observations. Identification of three-dimensional cloud structure requires conditional samples based on properties inherent in the observations. The International Satellite Cloud Climatology Project (ISCCP; Rossow and Schiffer 1999) is a commonly used cloud regime classifier based on cloud fraction, cloud-top pressure, and cloud optical thickness. ISCCP criteria are often used with direct radiances and retrievals to assess mean cloud features, such as coverage and optical depth, in global models (Webb et al. 2001; Zhang et al. 2005; Williams and Webb 2009; Dolinar et al. 2015; Jin et al. 2016; Oreopoulos et al. 2016). Composite means surrounding systematic phenomena such as midlatitude cyclones are also effective conditional samples (Klein and Jakob 1999; Field and Wood 2007; Govekar et al. 2011; Williams and Bodas-Salcedo 2017). In contrast to cloud regime classifiers, whose categories are based on observed cloud properties assigned to corresponding model levels, dynamic classifiers use variables such as vertical motion and lower-tropospheric stability to identify clouds. Dynamic classifiers can effectively differentiate between marine stratus and trade cumulus (Bony et al. 2004; Medeiros and Stevens 2011; Williams and Bodas-Salcedo 2017), but identifying specific cloud types is still quite difficult because a wide range of thermodynamic variables is necessary to describe regional variability in cloud properties (Taylor et al. 2015; McDonald and Parsons 2018). McDonald and Parsons (2018) noted that dynamic classifiers are less effective than cloud regime classifiers alone at separating distinct cloud regimes because of regions of mixed cloud types. However, classifying by weather regime serves the dual purpose of determining where the model is deficient as well as the specific conditions that contribute to the deficiency (Evans et al. 2017). That aspect, combined with the aforementioned limitations of the satellite observations, argues for a dynamic component in the cloud identification process.

Given the diversity of cloud forcing mechanisms, a fixed group of variables is unlikely to be sufficient to characterize all cloud types. Instead, considering variables associated with families of cloud types may be more effective. Atmospheric stability strongly modulates most clouds that are rooted in the boundary layer, so quantities describing stability as well as processes that affect it are useful in identifying these clouds. Mid- and upper-tropospheric clouds are not as well correlated with stability and thus require separate variable sets to isolate them. Importantly, different cloud families often coexist so samples may contain multiple cloud types (Rossow et al. 2005; Oreopoulos and Rossow 2011; Tselioudis et al. 2013). Attempts to isolate pure samples conforming to specific profiles through the depth of the troposphere will result in limited coverage, leaving many clouds unsampled. As such, the verification strategy should accommodate mixed samples. One way to accomplish this is to identify regions where specific cloud families likely exist. Applying masks to isolate each type and verifying the position of the clouds alone without regard to their vertical structure relaxes the overlap constraint. Instead of considering clouds at specific levels and locations, the overlap of cloud families is considered. This strategy accommodates the increased uncertainty in the vertical cloud structure inherent in the passive satellite observations.

This work documents a generalized scheme that differentiates clouds associated with buoyant versus stable lower-tropospheric processes. The two categories are broad in that lower-tropospheric stability is considered independently from cloud-top height, cloud depth, and precipitation. The only constraint is that the cloud base must be in the lower troposphere (below ∼5 km). As a result, shallow, deep, precipitating, and nonprecipitating clouds are included in each category. These classifiers were developed as part of a yet-to-be-published machine-learning project to predict the cloud state based on input from the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS; Hodur 1997) and the Geostationary Operational Environmental Satellite (GOES). Initial attempts to predict brightness temperature did not outscore COAMPS because the radiance-based measures were poorly correlated with specific physical variables that predict cloud cover. In contrast, when clouds were identified and labeled based on the type of physical forcing, the machine-learning model outperformed COAMPS by a statistically significant margin. Predicted fields consisted of masks identifying cloudy regions in each category. In all, five broad, overlapping categories were developed: stable, unstable, deep precipitating, midtropospheric, and upper-tropospheric clouds. Future publications will present the results of the machine-learning predictions as well as the latter three cloud families.

Here we address the transparency of the stable and unstable categorization method by investigating how these broad classifications identify cloud patterns and their associated systematic errors in COAMPS. Since no independent, observed fields with equivalent stable and unstable cloud labels exist, the bulk of the focus is on comparisons between the GOES-observed and COAMPS-predicted masks. Like the ISCCP criteria, these masks are conditional samples of the model and the observations based on observable properties. Diagnostics of COAMPS forecasts help understand the masks’ effectiveness in identifying systematic error. Since the GOES CBH retrievals were used to define the observed clouds, sensitivity experiments were conducted to document the incurred uncertainties.

The outline of this paper is as follows. Section 2 describes the COAMPS and GOES data used for cloud classification. Section 3 explains the features and methods used to identify the stable and unstable clouds. Section 4 provides the classification results, and section 5 offers a discussion and conclusions.

2. Forecast and observational data

Forecast data consisted of COAMPS high-resolution forecasts over the U.S. mid-Atlantic region for the 2-yr period from 1 January 2018 to 31 December 2019. The domains consisted of three one-way nested grids with horizontal spacings of 45, 15, and 5 km centered over Norfolk, Virginia. Only data from the 5-km, 277 × 229 gridpoint domain are used in this study. The vertical domain consisted of 60 sigma-z levels extending from 10 m to a model top at approximately 50 km. Forecasts were initialized daily at 0000, 0600, 1200, and 1800 UTC using the Naval Research Laboratory’s Atmospheric Variational Data Assimilation System (NAVDAS; Daley and Barker 2001) and run to 12 h, with the previous 6-h forecast serving as the first guess. Forecasts from the Navy Global Environmental Model (NAVGEM) provided boundary conditions at 3-h intervals using a Davies (1976) scheme. The explicit microphysics parameterization, used on all grids, consisted of a modified version of the single-moment bulk scheme of Rutledge and Hobbs (1983, 1984) described by Chen et al. (2003). Mixing ratios of cloud droplets, cloud ice, rain, snow, and graupel were predicted. Subgrid-scale convection on the 15- and 45-km grids was parameterized with the Kain–Fritsch scheme (Kain and Fritsch 1993). The Fu–Liou parameterization (Liu et al. 2009) was used for shortwave and longwave radiative transfer. Boundary layer turbulence was parameterized with a 1.5-order closure (Mellor and Yamada 1982) in which turbulent kinetic energy is predicted. Land surface processes were parameterized with the Noah land surface model (Niu et al. 2011), initialized at each data assimilation cycle with the 0.25° NASA Land Information System (LIS) analyses (Kumar et al. 2006) provided by the U.S. Air Force Weather Agency.

Satellite observations consisted of the GOES-16 Advanced Baseline Imager (ABI) 0.65-μm normalized reflectance (visible channel) as well as retrievals of cloud-top height, CBH, liquid water path, and cloud-top phase. Observations and retrievals were provided by the Cooperative Institute for Research in the Atmosphere (CIRA) and by the National Aeronautics and Space Administration (NASA) Langley Research Center (LARC) from their Clouds and the Earth’s Radiant Energy System (CERES) (Minnis et al. 2021). CIRA data were the primary source because of their finer grid spacing (∼3 km) relative to NASA (∼10 km); NASA data served as a backup when CIRA data were unavailable (∼3% of the time).

Cloud-top heights are retrieved by mapping satellite brightness temperature measurements to corresponding heights based on numerical model temperature profiles (Heidinger et al. 2014; Minnis et al. 2021). Most height errors are under 1 km except in areas of optically thin ice and multilayered clouds where cloud-top-height underestimates ranged from 5 to 10 km (Yost et al. 2021) due to the mixed signal retrieved from cloud top and lower layers, or the surface.

Cloud-base heights are retrieved by subtracting the retrieved cloud geometric thickness from the retrieved cloud-top height. Additional geometric thickness corrections are then applied based on CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) (Noh et al. 2017). Statistics and case studies performed by Noh et al. (2017) and Yost et al. (2021) indicate that daytime low CBHs were ∼3–5 km too high relative to CloudSat measurements, due mostly to missed clouds in overlapping situations. Noh et al. (2017) reported mean CBH errors of +0.3 km relative to CloudSat, indicating that the retrieved CBHs have a high bias, while RMS errors ranged from 1 to 2 km for most clouds. Case studies indicated that single-layered and deep convective clouds (aside from overshooting tops) performed the best.

Since retrievals performed best during the day, verification was restricted to the 6-h COAMPS forecasts valid 1800 UTC (1300 eastern standard time). Additionally, all satellite data were restricted to solar zenith angles of 86° or less. To mitigate low cloud overestimates due to erroneously low semitransparent cirrus, all ice-phase clouds with liquid water paths less than or equal to 40 g m−2 were removed. Nachamkin et al. (2017) noted values between 25 and 100 g m−2 were effective at reducing this bias.

Snow-cover artifacts were removed by comparing the retrieved CBH with the COAMPS lifted condensation level (LCL) in regions where LIS snow depth was greater than zero. Most artifacts occurred in cold, dry environments where the LCL was much higher than the spurious CBH. Removing all clouds over snow if (LCL − CBH) ≥ 500 m eliminated many artifacts while preserving existing cloud cover. Limiting the removal to clouds with tops below 4000 m prevented the removal of overlying clouds. Some artifacts were retained in regions of thin overlapping clouds not removed by the cirrus filter. Partly cloudy environments also retained some artifacts, as the LCL was similar to the CBH. Despite these exceptions, the filter worked well overall, removing many large swaths of spurious cloud over clear, snow-covered areas.
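The cirrus and snow-cover filters described above reduce to simple elementwise mask operations on the gridded retrieval fields. The following Python sketch illustrates both rules; the function and variable names, and the use of NaN to mark cleared points, are illustrative assumptions rather than the operational implementation:

```python
import numpy as np

def filter_cbh(cbh, ctop, ice_phase, lwp, snow_depth, lcl):
    """Apply the thin-cirrus and snow-cover filters to a retrieved
    cloud-base-height (CBH) field. All arrays share one 2D grid;
    heights are in meters and liquid water path (lwp) in g m-2.
    np.nan marks points cleared of cloud (an assumed convention)."""
    out = cbh.astype(float).copy()
    # Cirrus filter: remove ice-phase clouds with LWP <= 40 g m-2
    out[ice_phase & (lwp <= 40.0)] = np.nan
    # Snow filter: over snow, remove low-topped clouds whose retrieved
    # base sits >= 500 m below the model LCL (a spurious-cloud signature)
    spurious = (snow_depth > 0.0) & ((lcl - out) >= 500.0) & (ctop < 4000.0)
    out[spurious] = np.nan
    return out
```

Restricting the snow test to tops below 4000 m mirrors the text's safeguard against removing legitimate overlying clouds.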

Precipitation accumulations over 3-h windows from the Integrated Multi-satellitE Retrievals for GPM (IMERG) Level-3, version 6B, data (Huffman et al. 2019) provided a mask to locate regions of deep precipitating clouds. Stage IV gridded precipitation data were available, but not used due to lack of coverage over the ocean. The IMERG spatial grid spacing was 0.1° with a 30-min update frequency. Although the short time accumulations tend to have the greatest error, Cui et al. (2020) noted hourly precipitation accumulations adequately depict the position of mesoscale convective systems with a positive bias in areal coverage.

All forecast and observational data employed a 0.045° (∼5 km) constant latitude-longitude grid (Fig. 1) using closest point interpolation.

Fig. 1.

Computational domain used for the cloud verification study. COAMPS topography is shown over land; redder colors indicate higher terrain. The 2-yr mean surface temperature valid at 1800 UTC (K; color bar) is shown over water.

Citation: Monthly Weather Review 150, 1; 10.1175/MWR-D-21-0056.1

3. Method

Stability was quantified using five common parameters that measure boundary layer heat and moisture as well as lower-tropospheric buoyancy: 1) the LCL, 2) the convective condensation level (CCL), 3) the difference between the near-surface and LCL potential temperature (DΘ), 4) the estimated inversion strength (EIS) (Wood and Bretherton 2006), and 5) convective available potential energy (CAPE). Although not fully independent, each parameter emphasizes a separate aspect of stability. The LCL and CCL were combined for a single measure (CML = CCL − LCL). Environments where the LCL and the CCL are equal, or nearly so, are more likely to support unstable clouds because buoyant parcels that reach the LCL can continue to rise buoyantly. The DΘ characterizes the degree to which the lower boundary layer is well mixed; it approaches zero as the lapse rate between the surface and the LCL approaches dry adiabatic. Including this term helps differentiate convective clouds rooted in the boundary layer from those resulting from elevated instability, such as nocturnal convection north of a warm/stationary front. Surprisingly, CAPE also differentiates these clouds. Although boundary layer-based CAPE is low in these situations, it is often nonzero. Without a measure of CAPE, these clouds registered as highly stable, when in fact they are more hybrid in nature. The EIS quantifies the inversion strength. Although inversions affect both the LCL and CCL, the EIS is a more complete measure (Wood and Bretherton 2006). The EIS incorporates the LCL as well as the difference between potential temperature at 700 hPa and the surface, often referred to as lower-tropospheric stability (LTS). LTS is positively correlated with stable marine stratus in the subtropics (Klein and Hartmann 1993), but less so at higher latitudes. Temperature dependencies in the moist adiabatic lapse rate considered in the EIS help identify stable layers in cooler regions. 
Wood and Bretherton (2006) noted positive correlations between EIS and marine stratocumulus occurrence while Naud et al. (2016) noted a similar relationship in postfrontal stratus clouds.
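Of the five parameters, the EIS has the most explicit formulation: following Wood and Bretherton (2006), EIS = LTS − Γm850(z700 − zLCL), where LTS is the 700-hPa-minus-surface potential temperature difference and Γm850 is the moist adiabatic lapse rate evaluated at 850 hPa using the mean of the surface and 700-hPa temperatures. The Python sketch below illustrates the calculation; the Bolton saturation vapor pressure formula and the simple 125 m per K of dewpoint depression estimate of the LCL height are assumptions made for illustration, not necessarily those used in this study:

```python
import numpy as np

G, CP, L, RD, RV = 9.81, 1004.0, 2.5e6, 287.0, 461.5  # SI constants

def moist_lapse_rate(T, p):
    """Saturated adiabatic lapse rate (K m-1) at temperature T (K) and
    pressure p (Pa), per Wood and Bretherton (2006), using Bolton's
    saturation vapor pressure (an assumed choice)."""
    es = 611.2 * np.exp(17.67 * (T - 273.15) / (T - 273.15 + 243.5))
    qs = 0.622 * es / (p - es)
    num = 1.0 + L * qs / (RD * T)
    den = 1.0 + L**2 * qs / (CP * RV * T**2)
    return (G / CP) * (1.0 - num / den)

def eis(t_sfc, td_sfc, p_sfc, t700, z700):
    """Estimated inversion strength (K): EIS = LTS - Gamma_m850 * (z700 - zLCL).
    Temperatures in K, pressures in Pa, z700 in m."""
    theta700 = t700 * (1.0e5 / 70000.0) ** (RD / CP)
    theta_sfc = t_sfc * (1.0e5 / p_sfc) ** (RD / CP)
    lts = theta700 - theta_sfc                    # lower-tropospheric stability
    z_lcl = 125.0 * (t_sfc - td_sfc)              # assumed LCL-height estimate
    gamma850 = moist_lapse_rate(0.5 * (t_sfc + t700), 85000.0)
    return lts - gamma850 * (z700 - z_lcl)
```

For a typical capped marine boundary layer (cool, moist surface air beneath a much warmer 700-hPa level), the function returns a strongly positive EIS, consistent with the stable-cloud correlations noted above.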

Since sounding observations were limited, the stability parameters were derived from COAMPS output. Cloud fields from GOES-16 ABI observations valid at 1800 UTC were paired with the 1800 UTC COAMPS analyses, while 6-h COAMPS cloud forecasts were paired with the corresponding 6-h forecast thermodynamic fields. These parameters are subject to any errors in the COAMPS forecasts. Analysis errors displayed in Liu et al. (2009) are comparable to those noted here. Average 0–6-h upper-air temperature bias and RMS errors with respect to the observed soundings (not shown) were less than 0.5 and 1.0 K, respectively, below 500 hPa. Dewpoint errors were higher with biases of 1 K at 500 hPa dropping to near zero at 1000 hPa. RMS errors ranged from 5 K at 500 hPa to 2 K near the surface. Visual comparisons between simulated and observed soundings indicated reasonable agreement. The most notable errors occurred near cloud boundaries and fronts where COAMPS boundary layer temperatures differed from observations in localized areas.

The lower-tropospheric cloud masks were determined from COAMPS and GOES CBH fields. COAMPS CBH corresponds to the lowest level where cloud or ice mixing ratio exceeds 10−6 kg kg−1. Uncertainties between using mixing ratio versus volumetric concentration noted by Nachamkin et al. (2017) are relatively minor in the lower atmosphere where air density is close to 1 kg m−3. The retrieved GOES CBHs are considerably more uncertain and relatively untested as a validation tool. Though less accurate than ground-based or active satellite observations, they may be sufficient to identify clouds within a broad vertical layer.

Noh et al. (2017) noted the greatest errors occurred in deep and/or overlapping cloud layers where retrieved CBH was often higher than indicated by CloudSat observations. Such errors result in the underestimation of low cloud cover in these regions. While these issues are not easily corrected, we tested a few measures to mitigate the worst errors. In areas where the 3-h IMERG precipitation exceeded 5 mm, CBH was assumed as the minimum between the retrieved value and the LCL. This assumption very likely overestimates the low cloud coverage, but it may be preferable to the complete lack of retrieved low clouds in some large, precipitating systems. Another measure, applied only to stable clouds, involved patching holes beneath regions of thick overlying cloud by applying a 10-point convolutional filter to the original low cloud mask. Cloudy regions with convolved mask values greater than 0.1 corresponded to low clouds in this analysis. No patching occurred for unstable clouds because of their more-scattered nature. Sensitivity experiments detailed in the results section investigate the net effects of these assumptions.
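The convolutional patch amounts to adding cloudy points to the stable low-cloud mask wherever the locally averaged low-cloud fraction exceeds 0.1. A minimal Python sketch follows; the zero padding at the domain edges and the exact window alignment are assumptions for illustration:

```python
import numpy as np

def patch_stable_low_mask(low_mask, cloudy, size=10):
    """Patch holes in a binary low-cloud mask beneath thick overlying
    cloud, in the spirit of the 10-point convolutional filter described
    in the text: a cloudy point (any level) is added to the low-cloud
    mask when the low-cloud fraction in a size x size window exceeds 0.1."""
    m = low_mask.astype(float)
    pad = size // 2
    padded = np.pad(m, pad, mode="constant")      # zero-pad domain edges
    frac = np.zeros_like(m)
    for di in range(size):                        # sliding-window sum by offsets
        for dj in range(size):
            frac += padded[di:di + m.shape[0], dj:dj + m.shape[1]]
    frac /= size * size
    return low_mask | (cloudy & (frac > 0.1))
```

Because patching is conditioned on the cloudy mask, the filter only fills gaps beneath existing cloud; it never extends low cloud into clear sky, consistent with its use for stable (not unstable) clouds.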

Each parameter’s contribution to the stability estimate was normalized to values between 0 and 1 using linear scoring and thresholds summarized in Tables 1 and 2. The numerical values for the thresholds are drawn from the convective parameterization (Kain and Fritsch 1993) and stable cloud (Wood and Bretherton 2006) literature as well as comparisons between vertical thermodynamic profiles and observed cloud and precipitation fields. Although the parameters were similar in many ways, sufficient differences arose to justify them all. DΘ was good at depicting regions of low-level instability but provided no information about the inversion. CML was sensitive to shallow inversions or moist layers, which sometimes resulted in spatial irregularities in the field. While the EIS was less sensitive to these shallow layers, it also tended to be nearly neutral (values from 2 to 4) in oceanic convective regions. Stratocumulus in well-mixed boundary layers sometimes attained both large EIS and small CML scores, which reflects the hybrid nature of these clouds. In general, regions of strong instability scored highly, while moderate- to mixed-stability regimes scored lower. The ensemble nature of the combined score provides condensed, normalized thermodynamic information to the machine-learning algorithm while allowing the uncertainty in the stability measure to be quantified. To account for positive height biases in the satellite retrievals, CBH thresholds extend well above the boundary layer. Since the exact nature of the retrieval error is unknown, the same height thresholds were applied to both the COAMPS and GOES cloud bases.

Table 1

Unstable atmosphere scoring parameters and thresholds.

Table 2

Stable atmosphere scoring parameters and thresholds.


Although stable and unstable clouds differ physically, they share many predictors. In terms of the CML, DΘ, and EIS, inverting the unstable scoring schemes for these parameters effectively locates high-stability areas. Near-surface fog layers were an exception as COAMPS sometimes underestimated the boundary layer stability in these shallow layers. As a result, the DΘ and CML criteria only identified portions of these clouds as stable, labeling the rest as neutral (neither stable nor unstable). Fog extent was often underestimated. The EIS was less sensitive to the surface layer and more often labeled fog as neutrally stable. Given these issues, it was decided to remove ground fog from the stable category by doubling the EIS weighting and removing all clouds with tops below 500 m (∼1% of all cloudy points) from both the COAMPS and GOES stable masks. Since the EIS is a very robust indicator for stable clouds (Wood and Bretherton 2006; Naud et al. 2016; Koshiro and Shiotani 2013) the results elsewhere were similar except in regions of hybrid stratocumulus, which received relatively higher stable scores. Unstable CBH thresholds of 4–6 km were reduced to 3–4 km for stable clouds (Table 2) as stable clouds tend to be lower in the atmosphere. Sensitivity tests in the results section will investigate the impacts of these reductions on the statistics. As mentioned above, CAPE reduces the stability scores in areas of elevated convection, though these clouds retain moderate stable scores. Once the scoring thresholds were applied to all parameters, the scores are tallied for all areas where the cloud-base score was greater than 0.1. Cloud-base scores equal to 0.1 indicate where cloud base was outside of the threshold range. In such regions, the tallied score was set to 0.1 to differentiate regions of higher clouds from clear regions where the tally was set to zero. 
The tallies were normalized again based on the maximum attainable values for each score to produce fields ranging from 0 to 1.
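The linear scoring and tallying described above can be sketched as follows in Python; the example parameter names and unit weights are illustrative, with the actual thresholds given in Tables 1 and 2:

```python
import numpy as np

def linear_score(x, lo, hi):
    """Map x linearly onto [0, 1] between thresholds lo and hi; values
    beyond the ramp saturate. Reversing lo and hi reverses the ramp
    (smaller values score higher), as when inverting unstable criteria."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def combined_score(param_scores, weights, base_score):
    """Tally weighted per-parameter scores (each already in [0, 1]) and
    renormalize by the maximum attainable value, per the text: tally
    only where the cloud-base score exceeds 0.1; where cloud base lies
    outside the threshold range (base score == 0.1) set the tally to
    0.1 so higher clouds stay distinct from clear sky (tally 0)."""
    total = sum(w * param_scores[k] for k, w in weights.items())
    tally = total / sum(weights.values())
    tally = np.where(base_score > 0.1, tally, 0.0)
    return np.where(np.isclose(base_score, 0.1), 0.1, tally)
```

Doubling a weight (as done for the EIS in the stable scheme) simply increases that parameter's share of the maximum attainable tally before renormalization.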

4. Case study

The stability scores agree well with qualitative aspects of the cloud fields as depicted in an example case from 1800 UTC 8 May 2018 (Figs. 2 and 3). On this day, a region of low-topped convective clouds extended through the central part of the domain from southwest to northeast along the Appalachian Mountains (A in Fig. 2b). Farther east, a region of stratus extended from eastern North Carolina northeastward into the ocean, paralleling the coast (B in Fig. 2b). Scattered clouds that appeared more convective in the satellite imagery due to their variable reflectance and higher cloud tops bounded the southeastern edge of the stratus zone (region C in Fig. 2b). An additional area of high, thick precipitating clouds resided over the ocean at the southeastern corner of the grid (D in Fig. 2b).

Fig. 2.

Observations valid at 1800 UTC 8 May: (a) GOES normalized 0.65-μm reflectance, (b) GOES-retrieved cloud-top height, (c) IMERG precipitation, and (d) GOES-retrieved CBH. Text labels in (b) refer to cloud regions mentioned in the text.


Fig. 3.

Normalized scores depict the degree to which clouds likely result from lower-tropospheric stable or unstable processes as based on the parameters in Tables 1 and 2. Values near 1.0 indicate high likelihood, and white areas denote clear skies. All plots are valid at 1800 UTC 8 May. Shown are (a) the unstable score for the COAMPS 6-h forecast cloud field, (b) the unstable cloud score for the GOES cloud retrievals, and the corresponding stable scores for (c) COAMPS and (d) GOES.


Each cloud region is recognizable from qualitative satellite indicators that can be compared with the stability scores. Over the Appalachians, the highly reflective, scattered cloud pattern is consistent with the high unstable scores in this region. Morning (1200 UTC) soundings (not shown) indicate an unstable layer below 750 hPa supportive of convective clouds. Likewise, the coastal stratus deck appears as a region of solid low clouds on satellite, which is consistent with the high stable scores. Beneath these clouds, northeasterly flow advected cool air southwestward from the Atlantic Ocean. Coastal soundings (not shown) indicated a shallow stable layer near the surface capped by a warm, moist layer extending to ∼700 hPa. Notably, a layer of middle and high clouds was also present above the stratus deck as evidenced by the 6–10-km cloud tops (E in Fig. 2b). Retrieved CBHs (Fig. 2d) in region E were 1–6 km higher than base heights in surrounding areas. The effect of the convolution cloud-base patch is demonstrated by the light blue shading in Fig. 4a. High stable scores were preserved (Fig. 3d) in all but two isolated areas. Although the patch worked well in this case, sensitivity tests described below indicate that it sometimes overestimated cloudiness. Convection in region C corresponded to high cloud tops and regions of moderate to heavy precipitation (Fig. 2c). Land-based radars (not shown) exhibited 50-dBZ returns off the North Carolina coast. The stability scores indicate the presence of a front in this region, with a sharp gradient from northwest to southeast across it. This region is also close to the northern edge of the Gulf Stream. Region D appears more hybrid in nature. The deep, cirrus-topped canopy indicates the potential for convection, but the lack of overshooting tops or strong gradients in the reflectance indicates that any convection is weak. 
Patches of moderate stable and unstable scores were scattered through the region in both COAMPS and GOES. While COAMPS cloud bases (not shown) were relatively low, GOES CBHs were much higher, resulting in near-zero GOES stability scores in the northern portions of region D due to CBH being above the maximum value in Table 2. These higher cloud bases likely reflect biases in the GOES retrievals. As noted in the previous section, such biases are common in regions of layered precipitating clouds.

Fig. 4.

Stable and unstable masks for (a) GOES-observed clouds and (b) clouds from COAMPS 6-h forecasts valid at 1800 UTC 8 May 2018. Masks represent regions where the scores in Fig. 3 exceed 0.75. Unstable and stable regions are shaded red and blue, respectively, and overlapping regions are shaded in dark red. Light-blue shading in (a) indicates regions where the retrieved stable clouds were patched with the convolutional filter.


Binary masks were created by thresholding the stability scores at 0.75 for minimal overlap between the stable and unstable regions (Fig. 4). A second set of binary cloud/no-cloud masks defined from all positive cloud-top heights tracked cloud coverage, regardless of stability or CBH. Evaluation involved compilation of common verification statistics from all masks to gauge the effects of GOES CBH errors and the attempts to mitigate them.

5. Verification metrics

Table 3 summarizes several widely used pointwise verification metrics (Wilks 1995) employed for this study. These measures are based on a contingency table in which the sums of all correct forecasts A, false alarms B, missed events C, and correct negatives D based on the binary masks are collected for the 2-yr period. The bias is the ratio of the number of predicted points to observed points. Values greater than or less than 1 respectively indicate too much or too little predicted cloud cover, with 1 being a perfect score. The equitable threat score (ETS, or Gilbert skill score) quantifies the degree of forecast/observation overlap relative to a random forecast. Since the random forecast probabilities depend on the grid coverage of the observed field, the ETS may be lower during cloudy periods. The ETS ranges from −⅓ for poor forecasts that are worse than random chance to 1.0 for a perfect forecast. The probability of detection (POD) measures the likelihood of correctly predicting clouds where they are observed, while the false alarm ratio (FAR) measures the likelihood of predicting clouds in regions observed to be clear. Both scores range from 0 to 1, with perfect scores of 1 for the POD and 0 for the FAR.

Table 3

Deterministic pointwise verification scores.

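The four pointwise scores can be computed directly from the contingency-table counts. The sketch below uses the standard formulations consistent with the descriptions above; the chance-corrected hits term in the ETS follows the usual Gilbert skill score definition:

```python
def contingency_scores(a, b, c, d):
    """Pointwise scores from 2x2 contingency counts: hits a, false
    alarms b, misses c, and correct negatives d (standard definitions,
    as in Wilks 1995)."""
    n = a + b + c + d
    bias = (a + b) / (a + c)                 # predicted/observed coverage
    pod = a / (a + c)                        # probability of detection
    far = b / (a + b)                        # false alarm ratio
    a_ref = (a + b) * (a + c) / n            # hits expected by chance
    ets = (a - a_ref) / (a + b + c - a_ref)  # equitable threat score
    return {"bias": bias, "ETS": ets, "POD": pod, "FAR": far}
```

Note the coverage dependence discussed in the text: the chance-hits term a_ref grows with observed coverage (a + c), which is why the ETS penalty for a random forecast is larger during cloudy periods.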

None of the pointwise scores above account for near misses. Although cloud cover is often sufficiently widespread to allow substantial overlap between the predicted and observed masks, near misses are important when clouds are scattered in nature. The fractions skill score (FSS; Roberts and Lean 2008) accounts for near misses by sampling square neighborhoods of length b centered at each point in the verification domain of size Nx × Ny. At a given scale b, the fractions of observed Ob(i, j) and forecast Fb(i, j) points are computed for all neighborhoods and are combined to calculate the FSS, which is defined as
$$\mathrm{FSS}_b = 1 - \frac{\mathrm{MSE}_b}{\mathrm{MSE}_b^{\mathrm{ref}}},$$
where
$$\mathrm{MSE}_b = \frac{1}{N_x N_y}\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}\bigl[O_b(i,j) - F_b(i,j)\bigr]^2 \quad\text{and}$$
$$\mathrm{MSE}_b^{\mathrm{ref}} = \frac{1}{N_x N_y}\Biggl[\sum_{i=1}^{N_x}\sum_{j=1}^{N_y}O_b(i,j)^2 + \sum_{i=1}^{N_x}\sum_{j=1}^{N_y}F_b(i,j)^2\Biggr].$$
Unlike the ETS, the reference forecast represents the largest possible error within a neighborhood, which only depends on the neighborhood size. The FSS ranges from 0 to 1, with 1 being the best-quality forecast. Imperfect, unbiased forecasts can still receive a score of 1 if the errors are small enough to be contained within the neighborhoods. The scale of the errors can be estimated from the rate of FSS improvement as b increases. Forecasts with small spatial errors improve more rapidly than those with larger spatial errors.
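A direct Python implementation of the FSS follows; the zero padding of edge neighborhoods is one common convention and an assumption here, not necessarily the paper's choice:

```python
import numpy as np

def fss(obs, fcst, b):
    """Fractions skill score (Roberts and Lean 2008) for binary fields
    obs and fcst on an Nx x Ny grid, using b x b neighborhoods."""
    def fractions(field):
        pad = b // 2
        f = np.pad(field.astype(float), pad, mode="constant")
        out = np.zeros(field.shape)
        for di in range(b):            # sliding-window sum by offsets
            for dj in range(b):
                out += f[di:di + field.shape[0], dj:dj + field.shape[1]]
        return out / (b * b)           # neighborhood event fraction
    o, fc = fractions(obs), fractions(fcst)
    mse = np.mean((o - fc) ** 2)
    mse_ref = np.mean(o ** 2 + fc ** 2)
    return 1.0 if mse_ref == 0.0 else 1.0 - mse / mse_ref
```

As the text notes, a small displacement that scores zero at the grid scale (b = 1) recovers a high FSS once the neighborhood is wide enough to contain the error.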

6. Statistical comparisons

a. General characteristics

Despite the uncertainty in the GOES cloud-base retrievals, basic statistics do reveal several aspects of the observed and predicted cloud distributions. Scatterplots of the cloud/no-cloud daily bias and ETS fitted with second-order polynomials are shown in Fig. 5. The 2-yr mean GOES total cloud fraction was 0.63, meaning that on average 63% of the domain was cloudy. Winter was the cloudiest time of year (red curve in Fig. 5a), although cloud fractions on most days were greater than 50%. COAMPS total cloud forecasts were relatively unbiased (blue curve in Fig. 5a), with a 2-yr mean bias of 0.96. The positive tendencies in the mean curve during the summer were due to scattered outlying off-chart biases as high as 5. Statistics shown later indicate that these were marine stratus false alarms. The ETS was weakly seasonal (Fig. 5b), with lower scores during the summer.

Fig. 5.

Scatterplots of daily statistics from the cloud/no-cloud mask valid at 1800 UTC as a function of yearday. Each distribution is fitted with a second-order polynomial. Shown are (a) daily bias scores from the 6-h forecasts (blue dots) and GOES-observed fractional coverage (red dots) and (b) the corresponding ETS values.


The stability-classification base experiment (B in Table 4; all experiment labels are defined there) includes both the GOES cloud-base patch and the assumption that GOES cloud base was at or below the LCL in precipitating areas. Results of this experiment are represented by solid curves and daily scatter dots in Fig. 6. Like the overall cloud masks, stable clouds (Figs. 6a,c) were most common during the cool season. Bias scores peaked in summer, but the scatter about the mean was greater than for the total cloud mask. Summer high-bias outliers remained, reflecting the stable marine stratus cloud overforecasts. Notably, the stable cloud bias was less than 1.0, with an overall mean of 0.89 (Table 4). These negative biases are significant given that GOES CBH undercounts tend to promote positive biases. Despite the bias, ETS scores were generally greater than for total cloud cover, with a few outliers above 0.6. The number of near-zero scores was also lower. Stable ETS was lower during the summer, though, as with the total cloud cover, the seasonality was not well defined.

Table 4

Summary statistics based on sums of the contingency-table constituents over the full 2-yr period for each experiment; “ob frac” indicates the observed fraction.

Fig. 6.

Scatterplots of daily 6-h forecast scores and fitted curves, displayed as in Fig. 5. In all panels, solid curves and daily dots correspond to the base experiment B in Table 4. Short dashed and dash–dotted curves correspond to experiments NP and NH8 according to the legend in (d). Daily dots associated with these curves are not shown. For (left) stable and (right) unstable masks, shown are (a),(b) bias (blue) and fractional coverage, (c),(d) ETS scores, and (e),(f) FSS values for the single-point (orange) and 11-point, or 55-km, (green) neighborhoods.


Unstable cloud statistics were different from the stable and total clouds (Figs. 6b,d). Observed cloud coverage exhibited little seasonality, with only a slight increase during the summer. Although it is initially counterintuitive, lower-tropospheric unstable clouds are common in this region during winter because of cold-air outbreaks and cyclone warm sectors. Biases were close to 1.0 through the year, although with a greater number of positive outliers than for stable clouds. A cluster of very-low-bias cold-season forecasts is also apparent for yeardays 0–50. The unstable ETS was strongly seasonal, with high cool-season scores varying continuously toward a summer minimum.

FSS calculations at 1 and 11 grid points (5 and 55 km) show greater improvement with neighborhood size for unstable clouds during the warm season than for stable clouds (Figs. 6e,f). As mentioned in section 5, the rate of FSS improvement with increasing neighborhood size correlates with the scale of the forecast errors. Unstable clouds are typically smaller and more scattered than stable clouds, making small displacement errors more likely. Increasing the neighborhood size compensates for these errors, resulting in higher scores. Conversely, stable cloud errors are larger and more systematic, as evidenced by the more modest FSS improvements and poorer bias scores.

Wintertime ETS scores were generally lower for stable clouds than for unstable clouds, due primarily to the negative stable cloud coverage bias. Seasonal cloud coverage fluctuations also played a secondary, more subtle, role in reducing the seasonality of the stable ETS. The ETS forecast skill is measured relative to a random forecast that performs better as cloud coverage increases. Seasonal variations in the stable cloud coverage were in phase with the variations in absolute forecast accuracy (independent of the random forecast). However, resulting out-of-phase random forecast deductions reduced the seasonality of the ETS score. The effect is most noticeable in the base experiment (solid line in Fig. 6c) because of the increased cloud coverage and increased seasonal variation in cloud coverage relative to the NP and NH8 experiments. The ETS seasonality of the latter two experiments was much closer to the corresponding FSS scores (Fig. 6e). Both the ETS and FSS are relative scores, but the FSS is not coverage dependent because the reference score (MSEbref) only depends on neighborhood size.

b. Sensitivity experiments

Effects of the GOES cloud cover uncertainty on the verification statistics are detailed by removing specific assumptions and recalculating the statistics, as summarized in Table 4. Removing the assumption that set the maximum CBH to the LCL in clouds with 3-h rain rates exceeding 5 mm (NL in Table 4) slightly decreased the observed cloud coverage in each mask. The stable observed fraction decreased from 0.263 to 0.257, and the unstable observed fraction decreased from 0.195 to 0.191. A slight increase in the bias and FAR indicates that the observed cloud base was raised in areas with low cloud bases in COAMPS. Otherwise, the effect of this assumption was relatively small.
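The rain-rate CBH override removed in the NL experiment can be sketched as a conditional replacement. This is only an illustration: the paper takes the LCL from COAMPS fields, whereas the sketch uses the common ~125 m per kelvin of dewpoint depression approximation, and all array names are hypothetical.

```python
import numpy as np

def substitute_lcl_for_cbh(cbh, t2m, td2m, rain_3h, rain_thresh=5.0):
    """Where 3-h rain exceeds 5 mm, replace the retrieved cloud-base
    height with an LCL estimate (the assumption removed in experiment NL).
    The 125 m/K dewpoint-depression formula is a standard approximation,
    standing in for the COAMPS-derived LCL used in the paper."""
    lcl = 125.0 * (t2m - td2m)  # approximate LCL height AGL (m)
    return np.where(rain_3h > rain_thresh, lcl, cbh)
```

Removing this substitution leaves the (typically too-high) retrieved CBH in place within heavily precipitating clouds, which is consistent with the slight coverage decrease and bias increase reported for NL.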

Removing the convolution patch on the stable cloud cover (NH in Table 4) reversed the effective sign of the stable bias and was the only experiment to result in a stable bias greater than 1. The shape of the daily distributions (not shown) remained largely the same, though with increased scatter in the bias scores. The patch lowered cloud base in regions with adjacent low clouds. While the patch likely resulted in overestimates in some areas, it also removed numerous gaps in stratus decks caused by overlapping clouds, as shown in Figs. 2–4.
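The patch's qualitative behavior, extending confident low-cloud detections into adjacent deep-layered cloud where retrieved bases are biased high, can be mimicked with a morphological operation. This is a loud stand-in, not the paper's method: the actual convolution operator and kernel size are not specified in this section, so binary dilation with a placeholder radius is used here.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def patch_stable_mask(stable_mask, deep_cloud_mask, radius=2):
    """Illustrative stand-in for the convolution patch: grow the stable
    low-cloud mask into neighboring points that are cloudy but deep
    (where GOES cloud bases are likely too high). Never fills clear sky.
    The square kernel and radius are placeholder assumptions."""
    kernel = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    grown = binary_dilation(stable_mask, structure=kernel)
    return stable_mask | (grown & deep_cloud_mask)
```

Restricting the fill to points already flagged as deep cloud captures the trade-off described in the text: gaps under overlapping decks are closed, at the cost of possible overestimates where the adjacency assumption fails.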

Another set of experiments involved removing all precipitating clouds that were likely deep and more prone to cloud-base errors (Noh et al. 2017). A combined mask merging all areas of nonzero 3-h IMERG precipitation with all areas of COAMPS 3-h precipitation greater than 0.1 mm effectively removed these clouds. The small positive COAMPS threshold was necessary to filter large regions of very light precipitation in deep layered clouds. This precipitation may have been correct but too light for detection by IMERG. Combining the observed and predicted precipitation fields prevented any precipitation biases from affecting the mask.
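The combined precipitation screen reduces to a pair of boolean operations. The thresholds are the ones quoted above; the array names and the element-wise form are assumptions for illustration.

```python
import numpy as np

def nonprecip_mask(imerg_3h, coamps_3h, coamps_thresh=0.1):
    """Screen used for the NP experiment: a point is kept only if IMERG
    observes no 3-h precipitation AND COAMPS predicts no more than
    0.1 mm. Merging both fields keeps precipitation biases in either
    dataset from shaping the mask."""
    precipitating = (imerg_3h > 0.0) | (coamps_3h > coamps_thresh)
    return ~precipitating

# The cloud masks are then subsampled, e.g.:
#   stable_np = stable_mask & nonprecip_mask(imerg_3h, coamps_3h)
```

The small positive COAMPS threshold plays the filtering role described in the text: very light model precipitation in deep layered clouds, plausibly real but below IMERG's detection limit, does not by itself remove a point.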

Removing precipitating clouds reduced cloud coverage by 50% (NP in Table 4) and decreased the seasonality of the stable cloud cover (red dashed line in Fig. 6). Seasonality in the stable ETS correspondingly increased and now resembled the shape of the FSS curves. The consistent seasonality in the FSS curves indicates that the increased ETS seasonality resulted from the reduced influence of the random forecast accuracy associated with the decreased stable cloud coverage. Seasonality also increased in the stable cloud bias, with mean values approaching 1.0 in summer and 0.6 in winter. During winter, large precipitating areas and associated thick clouds likely contain CBH errors that contributed positively to the bias score in the full sample. Summer marine stratus decks are shallow and precipitation-free, so the associated positive-bias false alarms remained in the statistics. Overall, the negative bias effects of removing the deep precipitating clouds contributed more to the weighted mean, resulting in a value of 0.79 as compared with 0.89 in experiment B. Unstable NP cloud statistics largely resembled their full-sample (B) counterparts, especially with respect to the FSS. Scatter in the unstable ETS was greater, with more near-zero values in winter as very low coverage reduced the probability of overlap. The greatest cloud cover reductions occurred in summer, which is consistent with the dominance of convective precipitation. Mean bias scores were only slightly below their full-sample values (1.01 vs 1.03), which agrees with Noh et al. (2017), who noted that the retrievals performed better for convective clouds.

Removing the patch from the nonprecipitating stable clouds (NPH in Table 4) increased the bias and the FAR, and slightly increased the POD. These shifts indicate that a slight majority of the patched nonprecipitating clouds resided outside areas covered by COAMPS clouds. The efficacy of the patch remains largely unknown. It appears to work well in warm-season stratus decks but may be adding spurious clouds beneath large cool-season cirrus shields.

Another potential source of uncertainty is the maximum CBH used to define the stable and unstable clouds. Here, stable clouds were assumed to be lower in the atmosphere (3–4 km) as they tend to form above shallow, stable boundary layers while unstable clouds sometimes form in very deep, well-mixed layers (4–6 km). While reasonable, these values may be sensitive to gradients in mean CBH. Exchanging the stable and unstable CBH criteria in Tables 1 and 2 is a simple way to examine sensitivity to the thresholds. Experiment BC in Table 4 is the same as experiment B except with the exchanged CBH criteria. The shapes of the resulting stable and unstable ETS, bias, and coverage distributions (not shown) were similar to the original, though the mean values differed. Stable cloud fraction increased from 0.26 to 0.31 because of the larger vertical sampling interval. Conversely, unstable cloud fraction decreased from 0.20 to 0.17 as a result of the smaller vertical sampling interval. Notably, the mean unstable cloud bias increased from 1.03 to 1.15 due to an increased number of high-bias outliers while mean stable bias decreased from 0.89 to 0.84 because of a decreased number of positive outliers. This behavior is consistent with the positive cloud-base bias in the GOES retrievals as the larger vertical intervals sample more observed clouds. However, a negative midlevel cloud bias in COAMPS could also result in these differences. Nachamkin et al. (2017) noted that COAMPS tended to have a low bias in midlevel clouds based on cloud-top observations. These results indicate that a large height interval may alleviate some of the GOES retrieval biases, but the issue requires greater study of the vertical cloud-base distribution in COAMPS in comparison with GOES.

Two final tests involve removing all clouds with cloud-top heights greater than 8 km. The extent of any cloud-base errors within this sample was estimated by computing the masks both with the stable convolution patch (B8) and without it (NH8). Means in Table 4 reveal only minor differences in the stable cloud statistics between these two experiments, indicating relatively few such errors. The degree to which the NH8 sample represents the overall total cloud distribution is gauged by comparing the daily curves in Fig. 6. NH8 FSS scores were similar to those of the base experiment B at both the 1- and 11-gridpoint scales. Although the stable ETS scores differed significantly, most of these differences derive from grid coverage variabilities in the reference score and do not represent absolute differences. Stable biases followed the NP case for the same reasons discussed above. Unstable biases were closer to unity, especially in the latter half of the year. Positive biases in experiment B relative to NH8 suggest either a positive bias in COAMPS unstable deep precipitating clouds or increased GOES cloud-base errors, or possibly both.

Despite uncertainties in the GOES cloud coverage, the sensitivity studies reveal several robust trends. Stable cloud forecasts retained a negative bias for all but the NH case. Since the retrieval errors favor positive forecast biases (Noh et al. 2017), the negative stable biases likely result from forecast deficits. A number of high-positive-bias stable forecasts during the summer remained prevalent in the nonprecipitating and low-cloud samples, indicating potential issues with oceanic stratus cloud forecasts. ETS and FSS scores indicate larger and more systematic errors in the stable cloud forecasts relative to the unstable clouds during summer. Unstable cloud systems exhibit very high predictability during the winter, likely because of strong synoptic forcing.

7. Geographic cloud coverage

Geographical distributions of the cloud occurrence frequencies provide useful insight into which clouds were selected by the classification scheme and how COAMPS predicted them. Unless otherwise stated, the means originate from experiment NH8 as it likely contained the fewest retrieval errors. Since cloud coverage varied broadly through the year, separate warm- and cold-season samples were generated based on when the cloud/no-cloud coverage curve (Fig. 5a) crossed its median value. From this definition, the warm season spanned from 15 April to 13 October, and the cold season comprised the remaining days. Mean occurrence frequencies for each sample were created by summing the binary masks over each period and dividing by the number of days in the sample. Values in Figs. 7 and 8 reflect the fraction of each period that clouds of each type were present at 1800 UTC. Although the NH8 clouds significantly underestimate the total cloud coverage, the occurrence patterns retained the same basic features as those derived from the full, patched cloud sample B.
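The occurrence-frequency calculation above is a simple reduction over a stack of daily binary masks; a sketch follows. The array layout is assumed, and the median split below thresholds each day individually, which is a simplification of the paper's rule (the dates where the coverage curve crosses its median).

```python
import numpy as np

def occurrence_frequency(daily_masks, day_index):
    """Mean occurrence frequency over the selected days: sum the daily
    binary masks and divide by the sample size, giving the fraction of
    days each grid point was covered at 1800 UTC.
    daily_masks: (ndays, ny, nx) boolean array (assumed layout)."""
    sel = daily_masks[day_index]
    return sel.sum(axis=0) / sel.shape[0]

def season_split(coverage_series):
    """Split days by whether daily coverage sits below or above its
    median. The paper instead uses the crossing dates of the coverage
    curve (15 Apr-13 Oct); per-day thresholding is a simpler stand-in."""
    below = coverage_series < np.median(coverage_series)
    return below, ~below
```

With the two index arrays from `season_split`, a call such as `occurrence_frequency(daily_masks, warm_days)` yields the per-season maps plotted in Figs. 7 and 8.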

Fig. 7.

Cold-season (14 Oct–14 Apr) mean fractional coverage of the (a) unstable cloud mask for COAMPS 6-h forecasts, (b) GOES unstable cloud mask, (c) stable cloud mask for COAMPS 6-h forecasts, and (d) GOES stable cloud mask. Masks were derived from unpatched clouds with tops of ≤8 km (NH8 in Table 4) and are valid at 1800 UTC.

Citation: Monthly Weather Review 150, 1; 10.1175/MWR-D-21-0056.1

Fig. 8.

As in Fig. 7, but for the warm season (15 Apr–13 Oct).


Unstable clouds during the cold season (Figs. 7a,b) were most common over the Great Lakes snow belts and the warm waters over and south of the Gulf Stream. Lake-effect snow squalls often remained unstable far downstream of the lakes as strong fluxes induced long-lived shallow convection. Some unstable clouds also occurred north of the Gulf Stream, though colder waters and weaker heat fluxes resulted in shallower unstable boundary layers during cold outbreaks. Unstable clouds were mostly absent southeast of the Appalachians. Far from the Great Lakes, this region's only opportunities for lower-tropospheric instability arose from occasional cold-frontal passages.

Cold-season stable clouds (Figs. 7c,d) were most common over the Great Lakes, regions of upslope west-northwest flow along the Appalachians, and over the ocean. Much of this cloudiness was stratus associated with weak cold-air advection, although passing cyclones also contributed. Most cyclone-associated low clouds were either northeastern-quadrant stable warm-frontal clouds or mixed-stability northwest wrap-around clouds. Cold-frontal precipitation was mostly categorized as stable or neutral except for narrow regions near the front.

Warm-season unstable cloudiness was dominated by land/water contrasts, with most of the clouds occurring over land (Figs. 8a,b). Only the warm waters over and south of the Gulf Stream were able to support unstable lower-tropospheric clouds at this time of day. The emphasis on boundary layer stability may have resulted in underestimation of the frequencies of elevated convection over cool water bodies, especially the Great Lakes. Stable clouds (Figs. 8c,d) were more evenly distributed, with a general south-to-north gradient in cloud frequency. Local maxima over the Great Lakes may have been at least in part associated with advected or elevated unstable clouds that were classified as stable.

The largest differences between COAMPS and GOES were primarily in the stable cloud frequencies, as indicated by the statistics in the previous section. During the cool season (Figs. 7c,d), stable cloud cover over most of the grid was negatively biased. These biases were particularly strong over the Great Lakes and adjoining leeward regions to the south and east. Warm-season (Figs. 8c,d) stable clouds were mostly negatively biased over land but positively biased over the ocean. Positive biases over the cold waters off the northeast coast were particularly strong. Unstable clouds were generally well forecast, with the exception of negative biases over the Great Lakes in winter and positive biases over the Carolinas and the warm Atlantic waters in summer.

A composite of the 264 stable forecasts with daily bias scores of 0.75 or less (Fig. 9) strongly resembles the winter cloud frequency distributions. Visual inspections indicated that the LCL filter effectively removed most of the GOES artifacts associated with snow and ice, so spurious cloud cover was not responsible for these biases. In most cases, COAMPS cloud cover was less extensive than GOES during weak to moderate cold outbreaks. In areas where COAMPS was cloud free, the COAMPS LCL was still within 500 m of the observed cloud bases. Had it not been, the LCL filter would have removed those clouds from the observations, rendering the comparison ineffective. Well-predicted LCLs suggest the problem is not the product of a moisture bias. Notably, unstable clouds, which were often present during the same cold outbreaks, did not exhibit a low bias. Poorly characterized diffusion/turbulent processes may be responsible, as stable clouds are sensitive to balances between turbulence and radiation. Fleet Numerical Meteorology and Oceanography Center (FNMOC) operational statistics indicate negative COAMPS wind biases of about 1 m s−1 in the Great Lakes region. These may reduce the strength of low-level upslope lift, though they are likely too localized to be responsible for the widespread negative cloud biases.
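Conditioning the composite on the daily bias score is a one-line selection over the stacked daily masks; a sketch follows, with assumed array shapes.

```python
import numpy as np

def composite_by_bias(daily_fcst_masks, daily_obs_masks, daily_bias,
                      bias_max=0.75):
    """Mean fractional coverage composited from only the days whose
    daily bias score is at or below a threshold (0.75 for the Fig. 9
    low-bias composite). Masks are (ndays, ny, nx); daily_bias is
    (ndays,). Array shapes are assumptions."""
    keep = daily_bias <= bias_max
    fcst_freq = daily_fcst_masks[keep].mean(axis=0)
    obs_freq = daily_obs_masks[keep].mean(axis=0)
    return fcst_freq, obs_freq
```

The same function with `daily_bias >= 1.5` inverted into the selection would produce the high-bias composite discussed for Fig. 10.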

Fig. 9.

The 2-yr mean fractional coverage of the (a) COAMPS stable and (b) GOES stable masks from all days where the stable cloud coverage bias for COAMPS 6-h forecasts was ≤ 0.75. All masks and biases were calculated from unpatched clouds with tops of ≤8 km (NH8 in Table 4) valid at 1800 UTC.


Another robust feature was the occasional strongly positive stable cloud biases during the summer. Those forecasts were isolated in a similar way by compositing the 51 cases where the daily stable cloud bias score was 1.5 or greater (Fig. 10). As was hinted by the sensitivity studies, these biases primarily resulted from false alarm forecasts of low marine stratus. The issue was most prevalent over the cooler waters north of the Gulf Stream, although too many clouds were also predicted over the warmer waters. The warm water overpredictions extended to the unstable clouds (not shown), indicating a general issue of too many low clouds during these forecasts.

Fig. 10.

As in Fig. 9, but for all days with bias scores of ≥1.5.


The common occurrence of maritime stratus off the U.S. East Coast warrants a closer investigation of this specific cloud regime. In this region, stratus often coexists with multilayered clouds, and the stable cloud mask alone does not sufficiently isolate it. However, weak oceanic latent heat fluxes associated with cold lower-tropospheric air over the cool water, much like the case in Figs. 2–4, are an effective indicator. Subsampling the stable cloud masks for all areas with latent heat flux ≤ 200 W m−2, EIS ≥ 4.0, and CML ≥ 1200 m successfully isolated multiple marine stratus cases. The patched nonprecipitating (NP) dataset formed the basis for this analysis because marine stratus often occurred in deep-layered cloud environments.
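The three-threshold subsample is a straightforward conjunction of boolean fields; the thresholds below are the ones quoted in the text, while the array names are illustrative.

```python
import numpy as np

def marine_stratus_points(stable_mask, latent_heat_flux, eis, cml):
    """Marine stratus subsample of the stable cloud mask: weak oceanic
    latent heat flux, strong estimated inversion strength (EIS), and a
    cloud mixing level (CML) criterion, using the quoted thresholds."""
    return (stable_mask
            & (latent_heat_flux <= 200.0)   # W m^-2
            & (eis >= 4.0)                  # K
            & (cml >= 1200.0))              # m
```

Applied to the patched nonprecipitating (NP) masks, this selection isolates the stratus-dominated points even where they coexist with deeper layered cloud.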

Figure 11 displays the statistics and distribution frequencies from the 314 forecasts with greater than 1000 points meeting the above specifications. The latent heat flux criterion constrained most of the samples to regions north of the Gulf Stream. Very few GOES points occurred over the warmer waters, though COAMPS frequencies approached 10% there. Otherwise, the geographical cloud distributions were relatively similar. Bias and ETS scores were surprisingly poor given the agreement in the frequencies. Day-to-day scatter was large, with no well-defined trends. A number of bias scores greater than 5.0 occurred (values were capped at 5.0 for scaling purposes), along with many near-zero or slightly negative ETS values. As previously indicated, false alarms during the summer influenced the mean, though low-bias forecasts were common as well. The resulting cancellation of errors accounts for the similarities in the geographical means.

Fig. 11.

Marine stratus mask statistics valid at 1800 UTC showing: (a) the daily COAMPS 6-h forecast bias (blue) and GOES-based fractional coverage (red) for each event, (b) the mean fractional coverage of the events for COAMPS 6-h forecasts, (c) the daily ETS for each event, and (d) the mean fractional coverage of the GOES events.


Sensitive interactions between turbulent fluxes, radiation, and microphysics govern the formation and maintenance of marine stratus. These processes occur on small spatial scales, requiring heavy parameterization in most models. Marine stratus off the northeast U.S. coast often develops beneath weak inversions in shallow stable layers with thicknesses of a few hundred meters. Forcing evolves through the year, with weak to moderate cold outbreaks dominating the cold season and more quiescent conditions during the warm season. Slight variations in the vertical temperature gradient or boundary layer moisture content can mean the difference between widespread cloudiness and clear skies. Since COAMPS characterizes sea surface temperatures well, poorly simulated interactions between turbulent, radiative, and microphysical processes are most likely behind the errors. Wang et al. (2011) similarly note that although COAMPS simulated the structure of the marine boundary layer well, clouds were underpredicted because of the lack of a turbulence-shallow convection parameterization.

At the opposite end of the spectrum, a number of cold-season events were very well predicted. Visual inspections of the 30 top-scoring (ETS ≥ 0.8 in Fig. 5) unstable events revealed a variety of scenarios, all of which occurred in winter: 12 (40%) were cold outbreaks, 5 (17%) were cyclones, 4 (13%) were warm-advection events, and 9 (30%) were mixed or ill-defined events. Most of these events featured very high overall cloud fractions.

8. Summary and conclusions

In this work, we evaluated COAMPS cloud forecasts using stable and unstable cloud masks derived from combined COAMPS thermodynamic fields and GOES-16 ABI cloud property retrievals. Most of the retrieval errors consisted of spurious cloud cover that was due to snow-cover artifacts, vertically misplaced clouds that were due to semitransparent cirrus, and CBHs that were too high because of cloud-thickness underestimates. Masking ice phase clouds with low LWP values mitigates the issue of semitransparent cirrus artifacts. Removing these clouds sufficed for the current study since lower-tropospheric clouds were the primary focus. For future work, combining numerical model relative humidity and tropopause height estimates with height measurements of nearby optically thick cirrus may improve the retrieval accuracy. Similarly, comparing the COAMPS LCL with the retrieved CBH effectively removed snow-cover artifacts. The success of the snow-cover removal suggests that high-resolution (∼5 km) numerical model moisture and thermodynamic analyses are sufficiently accurate to support satellite retrievals beyond the derivation of cloud-top height. CBH retrieval errors were more difficult to mitigate because the magnitude of the error depends on unknown cloud distributions beneath deep-layered cloud cover. Sensitivity experiments conducted in this work indicate a trade-off between uncertainty in the GOES retrievals and the benefit of highly refined samples. Regions of deep-layered cloud contain systematic positive cloud-base biases that were due to cloud-thickness underestimates. Convolutional patching may help, especially in areas of widespread stratus, but it also may lead to cloud coverage overestimates in other areas. Some of the patched regions were cloudy in COAMPS, which suggests that numerical model analyses may be of some benefit in mitigating this problem. Restricting the analysis to areas where cloud-top heights did not exceed 8 km produced the most consistent results.

Separately verifying cloud forecasts based on physical processes revealed that systematic forecast errors vary significantly with atmospheric stability. Stable clouds were less extensive than observed, particularly during winter cold outbreaks. Although the negative biases were widespread, reasonable LCL values along with the lack of an unstable cloud bias suggest that deficits in the turbulent, radiative, and microphysical parameterizations are the most likely issue as opposed to a moisture deficit. Along with the overall negative bias, a number of high-positive bias cases associated with marine stratus false alarms occurred off the northeastern coast during the summer. Conditional samples revealed that stable marine stratus forecast performance varies substantially, with large fluctuations in coverage bias and relatively low ETS scores. The poor performance showed little seasonality despite the range of synoptic forcing. Complex interactions between turbulent, radiative, and microphysical processes govern the formation of stable marine stratus. The lack of a turbulence-shallow convection parameterization in COAMPS likely impacts the development of these clouds.

Unstable cloud forecasts were more accurate both in terms of the statistics and the mean geographical distributions. Bias scores were closer to 1.0 in most samples, with a slight positive tendency. Lower ETS scores in the warm season were balanced by higher FSS scores at larger scales, indicating that unstable displacement errors were smaller than those for stable clouds. Certain unstable cloud systems exhibited exceptionally high ETS scores during the cold season. Strong synoptic forcing associated with cyclones and fronts induced widespread cloudiness that generally scored well. However, stable cloudiness associated with these same features generally scored lower than unstable, especially during cold outbreaks.

Improved unstable cloud performance may not necessarily mean that these clouds are inherently more predictable. Unstable clouds rely on rising buoyant processes governed by surface fluxes and turbulence. Unresolved processes manifest as aliasing errors that can result in apparently reasonable cloud fields that are energetically erroneous. At 5 km, the COAMPS grid does not resolve most cumulus-scale processes. Although condensed clouds developed in unstable environments, the patterns contained less detail than observed (Figs. 2–4). Herein lies an important trade-off between accurately representing the boundary layer energetics and the lower-atmospheric cloud field. For simulations lying in the convective "grey zone" (horizontal grid deltas of ∼1–10 km), the benefits of improved cloud characterization may outweigh improvements in turbulent transport, especially for short-term cloud forecasts. A shallow convection parameterization scheme would help address the problem.

The results here also underscore the importance of thoroughly understanding the behavior of the statistical scores. The ETS measures forecast skill relative to a random forecast. Random forecasts perform better as coverage of the feature being predicted increases. However, increased random forecast skill does not equate to a less skillful forecast. It simply makes skill harder to measure. In this study, the ETS indicated little seasonality in the stable cloud forecasts, whereas the FSS indicated that seasonality was comparable to the unstable clouds. Reducing the random forecast probabilities by removing precipitating and deep clouds increased seasonality in the ETS, thereby indicating greater skill in the cool-season forecasts than in those for the warm season. These subtleties in the behavior of even relatively simple metrics serve as a cautionary example. As modern verification methods become more complex, their behavior becomes harder to investigate and understand. Comparing results from multiple methods for consistency decreases the risk of misinterpretation.

Acknowledgments

This research is supported by Grant N0001421WX00031 from the Naval Research Laboratory. Thanks are given to Steve Miller, Jeremy Solbrig, and Matt Rogers (CIRA), along with Rabi Palikonda and William Smith of the NASA Langley Research Center, for help in obtaining the satellite retrieval data. Computer resources for the COAMPS simulations and data archival were supported in part by a grant of high-performance computing (HPC) time from the Department of Defense Major Shared Resource Center at Stennis Space Center in Mississippi. The work was performed on Cray XC40 and SGI 8600 computing systems.

Data availability statement

The IMERG data are available from NASA, and information about the dataset can be found online (https://gpm.nasa.gov/data/directory). The satellite retrieval data were collected daily from NASA and CIRA. NASA LARC daily imagery can be found at https://satcorps.larc.nasa.gov/, and CIRA daily imagery can be found at https://rammb.cira.colostate.edu/ramsdis/online/goes-16.asp. The COAMPS forecasts as well as the satellite data interpolated to the analysis grid are stored at the DoD HPC centers and are controlled unclassified data that require users to register with the U.S. government and acquire permission prior to use. More details can be found online (https://www.nrlmry.navy.mil/coamps-web/web/reg).

REFERENCES

  • Alexander, S. P., and A. Protat, 2018: Cloud properties observed from the surface and by satellite at the northern edge of the Southern Ocean. J. Geophys. Res. Atmos., 123, 443456, https://doi.org/10.1002/2017JD026552.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bikos, D., and Coauthors, 2012: Synthetic satellite imagery for realtime high-resolution model evaluation. Wea. Forecasting, 27, 784795, https://doi.org/10.1175/WAF-D-11-00130.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Blanchard, Y., J. Pelon, E. W. Eloranta, K. P. Moran, J. Delanoë, and G. Sèze, 2014: A synergistic analysis of cloud cover and vertical distribution from a-train and ground-based sensors over the high Arctic station Eureka from 2006 to 2010. J. Appl. Meteor. Climatol., 53, 25532570, https://doi.org/10.1175/JAMC-D-14-0021.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bodas-Salcedo, A., and Coauthors, 2011: COSP: Satellite simulation software for model assessment. Bull. Amer. Meteor. Soc., 92, 10231043, https://doi.org/10.1175/2011BAMS2856.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bony, S., J. L. Dufresne, H. L. Treut, J. J. Morcrette, and C. Senior, 2004: On dynamic and thermodynamic components of cloud changes. Climate Dyn., 22, 7186, https://doi.org/10.1007/s00382-003-0369-6.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Chen, S., and Coauthors, 2003: COAMPS version 3 model description-general theory and equations. Naval Research Laboratory Tech. Note NRL/PU/7500-03448, 143 pp.

  • Cui, W., X. Dong, B. Xi, Z. Feng, and J. Fan, 2020: Can the GPM IMERG final product accurately represent MCSs’ precipitation characteristics over the central and eastern United States? J. Hydrometeor., 21, 3957, https://doi.org/10.1175/JHM-D-19-0123.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Daley, R., and E. Barker, 2001: NAVDAS Source Book 2001: NRL atmospheric variational data assimilation system. Naval Research Laboratory Tech. Note NRL/PU/7530-01-441, 163 pp.

  • Davies, H. C., 1976: A lateral boundary formulation for multi-level prediction models. Quart. J. Roy. Meteor. Soc., 102, 405418, https://doi.org/10.1002/qj.49710243210.

    • Search Google Scholar
    • Export Citation
  • Dolinar, E. K., X. Dong, B. Xi, J. Jiang, and H. Su, 2015: Evaluation of CMIP5 simulated clouds and TOA radiation budgets using NASA satellite observations. Climate Dyn., 44, 22292247, https://doi.org/10.1007/s00382-014-2158-9.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Evans, S., R. Marchand, T. Ackerman, L. Donner, J.-C. Golaz, and C. Seman, 2017: Diagnosing cloud biases in the GFDL AM3 model with atmospheric classification. J. Geophys. Res. Atmos., 122, 12 82712 844, https://doi.org/10.1002/2017JD027163.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Field, P. R., and R. Wood, 2007: Precipitation and cloud structure in midlatitude cyclones. J. Climate, 20, 233254, https://doi.org/10.1175/JCLI3998.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Frey, R., B. Baum, W. Menzel, S. Ackerman, C. Moeller, and J. Spinhirne, 1999: A comparison of cloud top heights computed from airborne lidar and MAS radiance data using CO2 slicing. J. Geophys. Res., 104, 24 54724 555, https://doi.org/10.1029/1999JD900796.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Govekar, P. D., C. Jakob, M. J. Reeder, and J. Haynes, 2011: The three-dimensional distribution of clouds around Southern Hemisphere extratropical cyclones. Geophys. Res. Lett., 38, L21805, https://doi.org/10.1029/2011GL049091.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Grasso, L. D., M. Sengupta, J. F. Dostalek, R. Brummer, and M. DeMaria, 2008: Synthetic satellite imagery for current and future environmental satellites. Int. J. Remote Sens., 29, 43734384, https://doi.org/10.1080/01431160801891820.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Heidinger, A. K., M. J. Foster, A. Walther, and X. Zhao, 2014: The Pathfinder atmospheres-extended AVHRR climate dataset. Bull. Amer. Meteor. Soc., 95, 909922, https://doi.org/10.1175/BAMS-D-12-00246.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hodur, R. M., 1997: The Naval Research Laboratory’s Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Mon. Wea. Rev., 125, 14141430, https://doi.org/10.1175/1520-0493(1997)125<1414:TNRLSC>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Huffman, G. J., and Coauthors, 2019: NASA Global Precipitation Measurement (GPM) Integrated Multi-satellitE Retrievals for GPM (IMERG). NASA Algorithm Theoretical Basis Doc., version 06, 38 pp., https://gpm.nasa.gov/sites/default/files/document_files/IMERG_ATBD_V06.pdf.

  • Jin, D., L. Oreopoulos, and D. Lee, 2016: Regime-based evaluation of cloudiness in CMIP5 models. Climate Dyn., 48, 89112, https://doi.org/10.1007/s00382-016-3064-0.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Jones, T. A., P. Skinner, K. Knopfmeier, E. Mansell, P. Minnis, R. Palikonda, and W. Smith Jr., 2018: Comparison of cloud microphysics schemes in a Warn-On-Forecast system using synthetic satellite objects. Wea. Forecasting, 33, 16811708, https://doi.org/10.1175/WAF-D-18-0112.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Kain, J. S., and J. M. Fritsch, 1993: Convective parameterization for mesoscale models: The Kain-Fritsch scheme. The Representation of Cumulus Convection in Numerical Models, Meteor. Monogr., No. 46, Amer. Meteor. Soc., 165–170.

    • Crossref
    • Export Citation
  • Klein, S. A., and D. L. Hartmann, 1993: The seasonal cycle of low stratiform clouds. J. Climate, 6, 15881606, https://doi.org/10.1175/1520-0442(1993)006<1587:TSCOLS>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Klein, S. A., and C. Jakob, 1999: Validation and sensitivities of frontal clouds simulated by the ECMWF model. Mon. Wea. Rev., 127, 25142531, https://doi.org/10.1175/1520-0493(1999)127<2514:VASOFC>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Koshiro, T., and M. Shiotani, 2013: Relationship between low stratiform cloud amount and estimated inversion strength in the lower troposphere over the global ocean in terms of cloud types. J. Meteor. Soc. Japan, 92, 107120, https://doi.org/10.2151/jmsj.2014-107.

  • Kuma, P., A. J. McDonald, O. Morgenstern, R. Querel, I. Silber, and C. J. Flynn, 2021: Ground-based lidar processing and simulator framework for comparing models and observations (ALCF 1.0). Geosci. Model Dev., 14, 43–72, https://doi.org/10.5194/gmd-14-43-2021.

  • Kumar, S. V., and Coauthors, 2006: Land information system: An interoperable framework for high resolution land surface modeling. Environ. Modell. Software, 21, 1402–1415, https://doi.org/10.1016/j.envsoft.2005.07.004.

  • Liu, M., J. E. Nachamkin, and D. L. Westphal, 2009: On the improvement of COAMPS weather forecasts using an advanced radiative transfer model. Wea. Forecasting, 24, 286–306, https://doi.org/10.1175/2008WAF2222137.1.

  • Mahajan, S., and B. Fataniya, 2020: Cloud detection methodologies: Variants and development—A review. Complex Intell. Syst., 6, 251–261, https://doi.org/10.1007/s40747-019-00128-0.

  • McDonald, A. J., and S. Parsons, 2018: A comparison of cloud classification methodologies: Differences between cloud and dynamical regimes. J. Geophys. Res. Atmos., 123, 11 173–11 193, https://doi.org/10.1029/2018JD028595.

  • McErlich, C., A. McDonald, A. Schuddeboom, and I. Silber, 2021: Comparing satellite and ground-based observations of cloud occurrence over high southern latitudes. J. Geophys. Res. Atmos., 126, e2020JD033607, https://doi.org/10.1029/2020JD033607.

  • Medeiros, B., and B. Stevens, 2011: Revealing differences in GCM representations of low clouds. Climate Dyn., 36, 385–399, https://doi.org/10.1007/s00382-009-0694-5.

  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure for geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851–875, https://doi.org/10.1029/RG020i004p00851.

  • Miller, S. D., and Coauthors, 2014: Model-evaluation tools for three-dimensional cloud verification via spaceborne active sensors. J. Appl. Meteor. Climatol., 53, 2181–2195, https://doi.org/10.1175/JAMC-D-13-0322.1.

  • Minnis, P., and Coauthors, 2021: CERES MODIS cloud product retrievals for edition 4. Part I: Algorithm changes. IEEE Trans. Geosci. Remote Sens., 59, 2744–2780, https://doi.org/10.1109/TGRS.2020.3008866.

  • Nachamkin, J. E., Y. Jin, L. D. Grasso, and K. Richardson, 2017: Using synthetic brightness temperatures to address uncertainties in cloud-top-height verification. J. Appl. Meteor. Climatol., 56, 283–296, https://doi.org/10.1175/JAMC-D-16-0240.1.

  • Naud, C. M., J. F. Booth, and A. D. Del Genio, 2016: The relationship between boundary layer stability and cloud cover in the post-cold-frontal region. J. Climate, 29, 8129–8149, https://doi.org/10.1175/JCLI-D-15-0700.1.

  • Niu, G.-Y., and Coauthors, 2011: The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. J. Geophys. Res., 116, D12109, https://doi.org/10.1029/2010JD015139.

  • Noh, Y.-J., and Coauthors, 2017: Cloud-base height estimation from VIIRS. Part II: A statistical algorithm based on A-train satellite data. J. Atmos. Oceanic Technol., 34, 585–598, https://doi.org/10.1175/JTECH-D-16-0110.1.

  • Oreopoulos, L., and W. Rossow, 2011: The cloud radiative effects of International Satellite Cloud Climatology Project weather states. J. Geophys. Res., 116, D12202, https://doi.org/10.1029/2010JD015472.

  • Oreopoulos, L., N. Cho, D. Lee, and S. Kato, 2016: Radiative effects of global MODIS cloud regimes. J. Geophys. Res. Atmos., 121, 2299–2317, https://doi.org/10.1002/2015JD024502.

  • Otkin, J. A., T. J. Greenwald, J. Sieglaff, and H.-L. Huang, 2009: Validation of a large-scale simulated brightness temperature dataset using SEVIRI satellite observations. J. Appl. Meteor. Climatol., 48, 1613–1626, https://doi.org/10.1175/2009JAMC2142.1.

  • Protat, A. S. A., and Coauthors, 2014: Reconciling ground-based and space-based estimates of the frequency of occurrence and radiative effect of clouds around Darwin, Australia. J. Appl. Meteor. Climatol., 53, 456–478, https://doi.org/10.1175/JAMC-D-13-072.1.

  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1.

  • Rossow, W. B., and R. A. Schiffer, 1999: Advances in understanding clouds from ISCCP. Bull. Amer. Meteor. Soc., 80, 2261–2287, https://doi.org/10.1175/1520-0477(1999)080<2261:AIUCFI>2.0.CO;2.

  • Rossow, W. B., G. Tselioudis, A. Polak, and C. Jakob, 2005: Tropical climate described as a distribution of weather states indicated by distinct mesoscale cloud property mixtures. Geophys. Res. Lett., 32, L21812, https://doi.org/10.1029/2005GL024584.

  • Rutledge, S. A., and P. V. Hobbs, 1983: The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. VIII: A model for the “seeder-feeder” process in warm-frontal rainbands. J. Atmos. Sci., 40, 1185–1206, https://doi.org/10.1175/1520-0469(1983)040<1185:TMAMSA>2.0.CO;2.

  • Rutledge, S. A., and P. V. Hobbs, 1984: The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. XII: A diagnostic modeling study of precipitation development in narrow cold-frontal rainbands. J. Atmos. Sci., 41, 2949–2972, https://doi.org/10.1175/1520-0469(1984)041<2949:TMAMSA>2.0.CO;2.

  • Schmit, T. J., P. Griffith, M. M. Gunshor, J. M. Daniels, S. J. Goodman, and W. J. Lebair, 2017: A closer look at the ABI on the GOES-R series. Bull. Amer. Meteor. Soc., 98, 681–698, https://doi.org/10.1175/BAMS-D-15-00230.1.

  • Smith, W. L., Jr., and Coauthors, 1996: Comparisons of cloud heights derived from satellite, aircraft, surface lidar and LITE data. Proc. Int. Radiation Symp., Fairbanks, AK, Int. Radiation Commission, 603–606.

  • Taylor, P. C., S. Kato, K.-M. Xu, and M. Cai, 2015: Covariance between Arctic sea ice and clouds within atmospheric state regimes at the satellite footprint level. J. Geophys. Res. Atmos., 120, 12 656–12 678, https://doi.org/10.1002/2015JD023520.

  • Tselioudis, G., W. Rossow, Y. Zhang, and D. Konsta, 2013: Global weather states and their properties from passive and active satellite cloud retrievals. J. Climate, 26, 7734–7746, https://doi.org/10.1175/JCLI-D-13-00024.1.

  • Wang, S., L. W. O’Neill, Q. Jiang, S. P. de Szoeke, X. Hong, H. Jin, W. T. Thompson, and X. Zheng, 2011: A regional real-time forecast of marine boundary layers during VOCALS-REx. Atmos. Chem. Phys., 11, 421–437, https://doi.org/10.5194/acp-11-421-2011.

  • Webb, M., C. Senior, S. Bony, and J. J. Morcrette, 2001: Combining ERBE and ISCCP data to assess clouds in the Hadley Centre, ECMWF and LMD atmospheric climate models. Climate Dyn., 17, 905–922, https://doi.org/10.1007/s003820100157.

  • Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. International Geophysics Series, Vol. 59, Elsevier, 467 pp.

  • Williams, K. D., and M. J. Webb, 2009: A quantitative performance assessment of cloud regimes in climate models. Climate Dyn., 33, 141–157, https://doi.org/10.1007/s00382-008-0443-1.

  • Williams, K. D., and A. Bodas-Salcedo, 2017: A multi-diagnostic approach to cloud evaluation. Geosci. Model Dev., 10, 2547–2566, https://doi.org/10.5194/gmd-10-2547-2017.

  • Wood, R., and C. S. Bretherton, 2006: On the relationship between stratiform low cloud cover and lower-tropospheric stability. J. Climate, 19, 6425–6432, https://doi.org/10.1175/JCLI3988.1.

  • Yost, C., P. Minnis, S. Sun-Mack, Y. Chen, and W. L. Smith, 2021: CERES MODIS cloud product retrievals for edition 4—Part II: Comparisons to CloudSat and CALIPSO. IEEE Trans. Geosci. Remote Sens., 59, 3695–3724, https://doi.org/10.1109/TGRS.2020.3015155.

  • Zhang, M. H., and Coauthors, 2005: Comparing clouds and their seasonal variations in 10 atmospheric general circulation models with satellite measurements. J. Geophys. Res., 110, D15S02, https://doi.org/10.1029/2004JD005021.

1 COAMPS is a registered trademark of the Naval Research Laboratory.
