  • Accadia, C., S. Mariani, M. Casaioli, A. Lavagnini, and A. Speranza, 2003: Sensitivity of precipitation forecast skill scores to bilinear interpolation and a simple nearest-neighbor average method on high-resolution verification grids. Wea. Forecasting, 18, 918–932, doi:10.1175/1520-0434(2003)018<0918:SOPFSS>2.0.CO;2.

  • Ancell, B. C., 2013: Nonlinear characteristics of ensemble perturbation evolution and their application to forecasting high-impact events. Wea. Forecasting, 28, 1353–1365, doi:10.1175/WAF-D-12-00090.1.

  • Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.

  • Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634–642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.

  • Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61A, 72–83, doi:10.1111/j.1600-0870.2008.00361.x.

  • Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Arellano, 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 1283–1296, doi:10.1175/2009BAMS2618.1.

  • Barker, D., and Coauthors, 2012: The Weather Research and Forecasting Model’s Community Variational/Ensemble Data Assimilation System: WRFDA. Bull. Amer. Meteor. Soc., 93, 831–843, doi:10.1175/BAMS-D-11-00167.1.

  • Berner, J., G. J. Shutts, M. Leutbecher, and T. N. Palmer, 2009: A spectral stochastic kinetic energy backscatter scheme and its impact on flow-dependent predictability in the ECMWF Ensemble Prediction System. J. Atmos. Sci., 66, 603–626, doi:10.1175/2008JAS2677.1.

  • Berner, J., S.-Y. Ha, J. P. Hacker, A. Fournier, and C. Snyder, 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 1972–1995, doi:10.1175/2010MWR3595.1.

  • Bouttier, F., B. Vié, O. Nuissier, and L. Raynaud, 2012: Impact of stochastic physics in a convection-permitting ensemble. Mon. Wea. Rev., 140, 3706–3721, doi:10.1175/MWR-D-12-00031.1.

  • Brier, G. W., 1950: Verification of forecasts expressed in terms of probability. Mon. Wea. Rev., 78, 1–3, doi:10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2.

  • Bryan, G. H., J. C. Wyngaard, and J. M. Fritsch, 2003: Resolution requirements for the simulation of deep moist convection. Mon. Wea. Rev., 131, 2394–2416, doi:10.1175/1520-0493(2003)131<2394:RRFTSO>2.0.CO;2.

  • Cavallo, S. M., R. D. Torn, C. Snyder, C. Davis, W. Wang, and J. Done, 2013: Evaluation of the Advanced Hurricane WRF data assimilation system for the 2009 Atlantic hurricane season. Mon. Wea. Rev., 141, 523–541, doi:10.1175/MWR-D-12-00139.1.

  • Chen, F., and J. Dudhia, 2001: Coupling an advanced land-surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model description and implementation. Mon. Wea. Rev., 129, 569–585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.

  • Clark, A. J., W. A. Gallus Jr., M. Xue, and F. Kong, 2009: A comparison of precipitation forecast skill between small convection-allowing and large convection-parameterizing ensembles. Wea. Forecasting, 24, 1121–1140, doi:10.1175/2009WAF2222222.1.

  • Clark, A. J., W. A. Gallus Jr., and M. L. Weisman, 2010a: Neighborhood-based verification of precipitation forecasts from convection-allowing NCAR WRF Model simulations and the operational NAM. Wea. Forecasting, 25, 1495–1509, doi:10.1175/2010WAF2222404.1.

  • Clark, A. J., W. A. Gallus Jr., M. Xue, and F. Kong, 2010b: Growth of spread in convection-allowing and convection-parameterizing ensembles. Wea. Forecasting, 25, 594–612, doi:10.1175/2009WAF2222318.1.

  • Clark, A. J., and Coauthors, 2011: Probabilistic precipitation forecast skill as a function of ensemble size and spatial scale in a convection-allowing ensemble. Mon. Wea. Rev., 139, 1410–1418, doi:10.1175/2010MWR3624.1.

  • Clark, A. J., and Coauthors, 2012: An overview of the 2010 Hazardous Weather Testbed Experimental Forecast Program Spring Experiment. Bull. Amer. Meteor. Soc., 93, 55–74, doi:10.1175/BAMS-D-11-00040.1.

  • Clark, A. J., J. Gao, P. Marsh, T. Smith, J. Kain, J. Correia, M. Xue, and F. Kong, 2013: Tornado pathlength forecasts from 2010 to 2011 using ensemble updraft helicity. Wea. Forecasting, 28, 387–407, doi:10.1175/WAF-D-12-00038.1.

  • Done, J., C. A. Davis, and M. L. Weisman, 2004: The next generation of NWP: Explicit forecasts of convection using the Weather Research and Forecasting (WRF) Model. Atmos. Sci. Lett., 5, 110–117, doi:10.1002/asl.72.

  • Dowell, D. C., F. Zhang, L. J. Wicker, C. Snyder, and N. A. Crook, 2004: Wind and temperature retrievals in the 17 May 1981 Arcadia, Oklahoma, supercell: Ensemble Kalman filter experiments. Mon. Wea. Rev., 132, 1982–2005, doi:10.1175/1520-0493(2004)132<1982:WATRIT>2.0.CO;2.

  • Du, J., and B. Zhou, 2011: A dynamical performance-ranking method for predicting individual ensemble member performance and its application to ensemble averaging. Mon. Wea. Rev., 139, 3284–3303, doi:10.1175/MWR-D-10-05007.1.

  • Du, J., G. Dimego, Z. Toth, D. Jovic, B. Zhou, J. Zhu, J. Wang, and H. Juang, 2009: Recent upgrade of NCEP Short-Range Ensemble Forecast (SREF) system. Preprints, 19th Conf. on Numerical Weather Prediction/23rd Conf. on Weather Analysis and Forecasting, Omaha, NE, Amer. Meteor. Soc., 4A.4. [Available online at http://ams.confex.com/ams/pdfpapers/153264.pdf.]

  • Duc, L., K. Saito, and H. Seko, 2013: Spatial–temporal fractions verification for high-resolution ensemble forecasts. Tellus, 65A, 18171, doi:10.3402/tellusa.v65i0.18171.

  • Ebert, E. E., 2001: Ability of a poor man’s ensemble to predict the probability and distribution of precipitation. Mon. Wea. Rev., 129, 2461–2480, doi:10.1175/1520-0493(2001)129<2461:AOAPMS>2.0.CO;2.

  • Ebert, E. E., 2009: Neighborhood verification: A strategy for rewarding close forecasts. Wea. Forecasting, 24, 1498–1510, doi:10.1175/2009WAF2222251.1.

  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.

  • Gallus, W. A., Jr., 2002: Impact of verification grid-box size on warm-season QPF skill measures. Wea. Forecasting, 17, 1296–1302, doi:10.1175/1520-0434(2002)017<1296:IOVGBS>2.0.CO;2.

  • Gebhardt, C., S. E. Theis, M. Paulat, and Z. Ben Bouallègue, 2011: Uncertainties in COSMO-DE precipitation forecasts introduced by model perturbations and variation of lateral boundaries. Atmos. Res., 100, 168–177, doi:10.1016/j.atmosres.2010.12.008.

  • Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, doi:10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.

  • Hamill, T. M., 2001: Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev., 129, 550–560, doi:10.1175/1520-0493(2001)129<0550:IORHFV>2.0.CO;2.

  • Hamill, T. M., and S. J. Colucci, 1997: Verification of Eta–RSM short-range ensemble forecasts. Mon. Wea. Rev., 125, 1312–1327, doi:10.1175/1520-0493(1997)125<1312:VOERSR>2.0.CO;2.

  • Hamill, T. M., and S. J. Colucci, 1998: Evaluation of Eta–RSM ensemble probabilistic precipitation forecasts. Mon. Wea. Rev., 126, 711–724, doi:10.1175/1520-0493(1998)126<0711:EOEREP>2.0.CO;2.

  • Hamill, T. M., J. S. Whitaker, M. Fiorino, and S. G. Benjamin, 2011a: Global ensemble predictions of 2009’s tropical cyclones initialized with an ensemble Kalman filter. Mon. Wea. Rev., 139, 668–688, doi:10.1175/2010MWR3456.1.

  • Hamill, T. M., J. S. Whitaker, D. T. Kleist, M. Fiorino, and S. G. Benjamin, 2011b: Predictions of 2010’s tropical cyclones using the GFS and ensemble-based data assimilation methods. Mon. Wea. Rev., 139, 3243–3247, doi:10.1175/MWR-D-11-00079.1.

  • Hanley, K. E., D. J. Kirshbaum, S. E. Belcher, N. M. Roberts, and G. Leoncini, 2011: Ensemble predictability of an isolated mountain thunderstorm in a high-resolution model. Quart. J. Roy. Meteor. Soc., 137, 2124–2137, doi:10.1002/qj.877.

  • Hanley, K. E., D. J. Kirshbaum, N. M. Roberts, and G. Leoncini, 2013: Sensitivities of a squall line over central Europe in a convective-scale ensemble. Mon. Wea. Rev., 141, 112–133, doi:10.1175/MWR-D-12-00013.1.

  • Hohenegger, C., and C. Schär, 2007: Predictability and error growth dynamics in cloud-resolving models. J. Atmos. Sci., 64, 4467–4478, doi:10.1175/2007JAS2143.1.

  • Hohenegger, C., A. Walser, W. Langhans, and C. Schär, 2008: Cloud-resolving ensemble simulations of the August 2005 Alpine flood. Quart. J. Roy. Meteor. Soc., 134, 889–904, doi:10.1002/qj.252.

  • Houtekamer, P. L., H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek, and B. Hansen, 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. Mon. Wea. Rev., 133, 604–620, doi:10.1175/MWR-2864.1.

  • Houtekamer, P. L., X. Deng, H. L. Mitchell, S.-J. Baek, and N. Gagnon, 2014: Higher resolution in an operational ensemble Kalman filter. Mon. Wea. Rev., 142, 1143–1162, doi:10.1175/MWR-D-13-00138.1.

  • Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, doi:10.1029/2008JD009944.

  • Janjić, Z. I., 1994: The step-mountain eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927–945, doi:10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.

  • Janjić, Z. I., 2002: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, 61 pp. [Available online at http://www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf.]

  • Johnson, A., and X. Wang, 2012: Verification and calibration of neighborhood and object-based probabilistic precipitation forecasts from a multimodel convection-allowing ensemble. Mon. Wea. Rev., 140, 3054–3077, doi:10.1175/MWR-D-11-00356.1.

  • Johnson, A., and X. Wang, 2013: Object-based evaluation of a storm-scale ensemble during the 2009 NOAA Hazardous Weather Testbed Spring Experiment. Mon. Wea. Rev., 141, 1079–1098, doi:10.1175/MWR-D-12-00140.1.

  • Johnson, A., X. Wang, M. Xue, and F. Kong, 2011: Hierarchical cluster analysis of a convection-allowing ensemble during the Hazardous Weather Testbed 2009 Spring Experiment. Part II: Season-long ensemble clustering and implication for optimal ensemble design. Mon. Wea. Rev., 139, 3694–3710, doi:10.1175/MWR-D-11-00016.1.

  • Jones, T. A., and D. J. Stensrud, 2012: Assimilating AIRS temperature and mixing ratio profiles using an ensemble Kalman filter approach for convective-scale forecasts. Wea. Forecasting, 27, 541–564, doi:10.1175/WAF-D-11-00090.1.

  • Jones, T. A., D. J. Stensrud, P. Minnis, and R. Palikonda, 2013: Evaluation of a forward operator to assimilate cloud water path into WRF-DART. Mon. Wea. Rev., 141, 2272–2289, doi:10.1175/MWR-D-12-00238.1.

  • Kain, J. S., S. J. Weiss, J. J. Levit, M. E. Baldwin, and D. R. Bright, 2006: Examination of convection-allowing configurations of the WRF Model for the prediction of severe convective weather: The SPC/NSSL Spring Program 2004. Wea. Forecasting, 21, 167–181, doi:10.1175/WAF906.1.

  • Kong, F., and Coauthors, 2008: Real-time storm-scale ensemble forecasting during the 2008 Spring Experiment. Preprints, 24th Conf. on Severe Local Storms, Savannah, GA, Amer. Meteor. Soc., 12.3. [Available online at https://ams.confex.com/ams/pdfpapers/141827.pdf.]

  • Kong, F., and Coauthors, 2009: A real-time storm-scale ensemble forecast system: 2009 Spring Experiment. Preprints, 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 16A.3. [Available online at https://ams.confex.com/ams/pdfpapers/154118.pdf.]

  • Lean, H. W., P. A. Clark, M. Dixon, N. M. Roberts, A. Fitch, R. Forbes, and C. Halliwell, 2008: Characteristics of high-resolution versions of the Met Office Unified Model for forecasting convection over the United Kingdom. Mon. Wea. Rev., 136, 3408–3424, doi:10.1175/2008MWR2332.1.

  • Lin, Y., and K. E. Mitchell, 2005: The NCEP stage II/IV hourly precipitation analyses: Development and applications. Preprints, 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2. [Available online at http://ams.confex.com/ams/pdfpapers/83847.pdf.]

  • Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289–307, doi:10.1111/j.2153-3490.1969.tb00444.x.

  • Melhauser, C., and F. Zhang, 2012: Practical and intrinsic predictability of severe and convective weather at the mesoscales. J. Atmos. Sci., 69, 3350–3371, doi:10.1175/JAS-D-11-0315.1.

  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851–875, doi:10.1029/RG020i004p00851.

  • Migliorini, S., M. Dixon, R. Bannister, and S. Ballard, 2011: Ensemble prediction for nowcasting with a convection-permitting model. I: Description of the system and the impact of radar-derived surface precipitation rates. Tellus, 63A, 468–496, doi:10.1111/j.1600-0870.2010.00503.x.

  • Mittermaier, M., and N. Roberts, 2010: Intercomparison of spatial forecast verification methods: Identifying skillful spatial scales using the fractions skill score. Wea. Forecasting, 25, 343–354, doi:10.1175/2009WAF2222260.1.

  • Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the long-wave. J. Geophys. Res., 102, 16 663–16 682, doi:10.1029/97JD00237.

  • Morrison, H., G. Thompson, and V. Tatarskii, 2009: Impact of cloud microphysics on the development of trailing stratiform precipitation in a simulated squall line: Comparison of one- and two-moment schemes. Mon. Wea. Rev., 137, 991–1007, doi:10.1175/2008MWR2556.1.

  • Murphy, A. H., 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281–293, doi:10.1175/1520-0434(1993)008<0281:WIAGFA>2.0.CO;2.

  • Nutter, P., D. Stensrud, and M. Xue, 2004a: Effects of coarsely resolved and temporally interpolated lateral boundary conditions on the dispersion of limited-area ensemble forecasts. Mon. Wea. Rev., 132, 2358–2377, doi:10.1175/1520-0493(2004)132<2358:EOCRAT>2.0.CO;2.

  • Nutter, P., M. Xue, and D. Stensrud, 2004b: Application of lateral boundary condition perturbations to help restore dispersion in limited-area ensemble forecasts. Mon. Wea. Rev., 132, 2378–2390, doi:10.1175/1520-0493(2004)132<2378:AOLBCP>2.0.CO;2.

  • Peralta, C., Z. B. Bouallegue, S. E. Theis, C. Gebhardt, and M. Buchhold, 2012: Accounting for initial condition uncertainties in COSMO-DE-EPS. J. Geophys. Res., 117, D07108, doi:10.1029/2011JD016581.

  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/2007MWR2123.1.

  • Rogers, E., and Coauthors, 2009: The NCEP North American Mesoscale modeling system: Recent changes and future plans. Preprints, 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 2A.4. [Available online at https://ams.confex.com/ams/pdfpapers/154114.pdf.]

  • Romine, G., C. S. Schwartz, C. Snyder, J. Anderson, and M. Weisman, 2013: Model bias in a continuously cycled assimilation system and its influence on convection-permitting forecasts. Mon. Wea. Rev., 141, 1263–1284, doi:10.1175/MWR-D-12-00112.1.

  • Schwartz, C. S., and Z. Liu, 2014: Convection-permitting forecasts initialized with continuously cycling limited-area 3DVAR, ensemble Kalman filter, and “hybrid” variational-ensemble data assimilation systems. Mon. Wea. Rev., 142, 716–738, doi:10.1175/MWR-D-13-00100.1.

  • Schwartz, C. S., and Coauthors, 2009: Next-day convection-allowing WRF Model guidance: A second look at 2-km versus 4-km grid spacing. Mon. Wea. Rev., 137, 3351–3372, doi:10.1175/2009MWR2924.1.

  • Schwartz, C. S., and Coauthors, 2010: Toward improved convection-allowing ensembles: Model physics sensitivities and optimizing probabilistic guidance with small ensemble membership. Wea. Forecasting, 25, 263–280, doi:10.1175/2009WAF2222267.1.

  • Skamarock, W. C., and M. L. Weisman, 2009: The impact of positive-definite moisture transport on NWP precipitation forecasts. Mon. Wea. Rev., 137, 488–494, doi:10.1175/2008MWR2583.1.

  • Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp. [Available from UCAR Communications, P.O. Box 3000, Boulder, CO 80307.]

  • Snyder, C., and F. Zhang, 2003: Assimilation of simulated Doppler radar observations with an ensemble Kalman filter. Mon. Wea. Rev., 131, 1663–1677, doi:10.1175/2555.1.

  • Stensrud, D. J., and N. Yussouf, 2007: Reliable probabilistic quantitative precipitation forecasts from a short-range ensemble forecasting system. Wea. Forecasting, 22, 3–17, doi:10.1175/WAF968.1.

  • Tanamachi, R. L., L. J. Wicker, D. C. Dowell, H. B. Bluestein, D. T. Dawson, and M. Xue, 2013: EnKF assimilation of high-resolution, mobile Doppler radar data of the 4 May 2007 Greensburg, Kansas, supercell into a numerical cloud model. Mon. Wea. Rev., 141, 625–648, doi:10.1175/MWR-D-12-00099.1.

  • Tegen, I., P. Hollrig, M. Chin, I. Fung, D. Jacob, and J. Penner, 1997: Contribution of different aerosol species to the global aerosol extinction optical thickness: Estimates from model results. J. Geophys. Res., 102, 23 895–23 915, doi:10.1029/97JD01864.

  • Theis, S. E., A. Hense, and U. Damrath, 2005: Probabilistic precipitation forecasts from a deterministic model: A pragmatic approach. Meteor. Appl., 12, 257–268, doi:10.1017/S1350482705001763.

  • Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, doi:10.1175/2008MWR2387.1.

  • Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.

  • Torn, R. D., 2010: Performance of a mesoscale ensemble Kalman filter (EnKF) during the NOAA High-Resolution Hurricane Test. Mon. Wea. Rev., 138, 4375–4392, doi:10.1175/2010MWR3361.1.

  • Torn, R. D., G. J. Hakim, and C. Snyder, 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, doi:10.1175/MWR3187.1.

  • Vié, B., O. Nuissier, and V. Ducrocq, 2011: Cloud-resolving ensemble simulations of Mediterranean heavy precipitating events: Uncertainty on initial conditions and lateral boundary conditions. Mon. Wea. Rev., 139, 403–423, doi:10.1175/2010MWR3487.1.

  • Wang, X., D. F. Parrish, D. T. Kleist, and J. S. Whitaker, 2013: GSI 3DVAR-based ensemble-variational hybrid data assimilation for NCEP Global Forecast System: Single-resolution experiments. Mon. Wea. Rev., 141, 4098–4117, doi:10.1175/MWR-D-12-00141.1.

  • Wei, M., Z. Toth, R. Wobus, and Y. Zhu, 2008: Initial perturbations based on the ensemble transform (ET) technique in the NCEP global operational forecast system. Tellus, 60A, 62–79, doi:10.1111/j.1600-0870.2007.00273.x.

  • Weisman, M. L., C. A. Davis, W. Wang, K. W. Manning, and J. B. Klemp, 2008: Experiences with 0–36-h explicit convective forecasts with the WRF-ARW Model. Wea. Forecasting, 23, 407–437, doi:10.1175/2007WAF2007005.1.

  • Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP Global Forecast System. Mon. Wea. Rev., 136, 463–482, doi:10.1175/2007MWR2018.1.

  • Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences: An Introduction. 2nd ed. Academic Press, 467 pp.

  • Xue, M., and Coauthors, 2010: CAPS real-time storm-scale ensemble and high-resolution forecasts for the NOAA Hazardous Weather Testbed 2010 Spring Experiment. Preprints, 25th Conf. on Severe Local Storms, Denver, CO, Amer. Meteor. Soc., 7B.3. [Available online at https://ams.confex.com/ams/pdfpapers/176056.pdf.]

  • Zhang, C., Y. Wang, and K. Hamilton, 2011: Improved representation of boundary layer clouds over the southeast Pacific in ARW-WRF using a modified Tiedtke cumulus parameterization scheme. Mon. Wea. Rev., 139, 3489–3513, doi:10.1175/MWR-D-10-05091.1.

  • Zhang, F., C. Snyder, and R. Rotunno, 2003: Effects of moist convection on mesoscale predictability. J. Atmos. Sci., 60, 1173–1185, doi:10.1175/1520-0469(2003)060<1173:EOMCOM>2.0.CO;2.

  • Zhang, F., C. Snyder, and J. Sun, 2004: Impacts of initial estimate and observation availability on convective-scale data assimilation with an ensemble Kalman filter. Mon. Wea. Rev., 132, 1238–1253, doi:10.1175/1520-0493(2004)132<1238:IOIEAO>2.0.CO;2.

  • Zhang, M., F. Zhang, X.-Y. Huang, and X. Zhang, 2011: Intercomparison of an ensemble Kalman filter with three- and four-dimensional variational data assimilation methods in a limited-area model over the month of June 2003. Mon. Wea. Rev., 139, 566–572, doi:10.1175/2010MWR3610.1.

Figures

  • Fig. 1. Computational domain. Objective precipitation verification only occurred in the speckled region of the 3-km domain.

  • Fig. 2. Ratio of ensemble mean RMS error to total spread of radiosonde (a) horizontal wind (m s−1), (b) temperature (K), and (c) dewpoint (K) observations aggregated over all 0000 UTC priors between 25 May and 25 June for selected pressure levels. The sample size at each pressure level is shown at the right.

  • Fig. 3. Total accumulated precipitation over the (a) verification and (b) full 3-km domains aggregated hourly over all forecasts and normalized by the total number of forecasts. The range of the individual ensemble members is shaded in gray.

  • Fig. 4. Fractional grid coverage (%) of hourly accumulated precipitation exceeding (a) 0.25, (b) 0.5, (c) 1.0, (d) 5.0, (e) 10.0, and (f) 20.0 mm h−1 over the verification domain, aggregated hourly over all forecasts. The range of the individual ensemble members is shaded in gray.

  • Fig. 5. Attributes diagrams computed over the verification domain for precipitation thresholds of (a) 0.25, (b) 1.0, (c) 5.0, and (d) 10.0 mm h−1, using a 50-km radius of influence, and aggregated hourly over the first 12 h for various ensemble sizes. Each boxplot depicts the extrema, interquartile range, and median of 100 realizations of observed relative frequency for each ensemble size. The horizontal line near the x axis represents the observed frequency of the event, the dashed line indicates the no-skill line, and the diagonal line is the line of perfect reliability. Filled circles above the diagonal line indicate those instances where the inner 90% of the n-member boxplot distribution contained the 50-member value. The forecast frequency (%) for the 50-member ensemble for each bin is shown on the top x axis, where a value of “−999999” indicates the 50-member ensemble had fewer than 1000 grid points with forecast probabilities in that bin over the verification domain.

  • Fig. 6. As in Fig. 5, but for hourly aggregated 18–36-h forecasts.

  • Fig. 7. Area under the ROC curve as a function of precipitation threshold for various ensemble sizes aggregated over the (a) first 12 h and (b) 18–36-h forecasts using a 50-km radius of influence. Each boxplot depicts the extrema, interquartile range, and median of 100 realizations of the ROC area for each ensemble size. Filled circles indicate those instances where the inner 90% of the n-member boxplot distribution contained the 50-member value.

  • Fig. 8. As in Fig. 7, but for aggregate FSS values computed with a 50-km radius of influence.

  • Fig. 9. Rank histogram based on total accumulated precipitation over the verification domain.

  • Fig. 10. NEP (%) of hourly precipitation meeting or exceeding 0.5 mm h−1 computed with a 50-km radius of influence for randomly drawn (a) 5-, (b) 10-, (c) 20-, (d) 30-, and (e) 40-member ensembles, as well as (f) the full 50-member ensemble for the 24-h forecast valid at 0000 UTC 31 May. (g) ST4 precipitation exceeding 0.5 mm h−1 (shaded). The plotted area is the verification domain.

  • Fig. 11. FSS values with a 50-km radius of influence aggregated over the (a),(b) first 12 h and (c),(d) 18–36-h forecasts for various accumulation thresholds for deterministic forecasts that required an ensemble of free forecasts (see text). “Closeness” in (a),(b) was determined by total precipitation (method I) and in (c),(d) by the FSS using the PMM as truth (method II; see text). The gray shading indicates the FSS range of the individual ensemble members. Bounds of the 90% bootstrap CIs are also shown.

  • Fig. 12. (a)–(d) ST4 observations and the corresponding (e)–(h) method I patched-together forecasts, (i)–(l) method II patched-together forecasts, (m)–(p) ensemble member 38 forecasts, and (q)–(t) PMM forecasts of 1–4-h hourly accumulated precipitation (mm) initialized at 0000 UTC 21 June.

  • Fig. 13. As in Fig. 12, but for the forecast initialized at 0000 UTC 26 May and the use of ensemble member 44 in (m)–(p). The asterisk is provided as a reference.

  • Fig. 14. As in Fig. 12, but for the forecast initialized at 0000 UTC 15 June and the use of ensemble member 49 in (m)–(p).

  • Fig. 15. FSS values with a 50-km radius of influence aggregated over the (a) first 12 h and (b) 18–36-h forecasts for various accumulation thresholds for deterministic forecasts that did not require an ensemble of free forecasts (see text). Bounds of the 90% bootstrap CIs are shown for selected curves.

  • Fig. 16. FSS as a function of the radius of influence based on hourly precipitation aggregated over the first 12 forecast hours and all forecasts for (a) 0.25, (b) 0.5, (c) 1.0, (d) 5.0, (e) 10.0, and (f) 20.0 mm h−1 accumulation thresholds. Bounds of the 90% CIs are shown and the ranges of the individual ensemble members’ FSS are shaded in gray.

  • Fig. 17. As in Fig. 16, but for FSS aggregated hourly for 18–36-h forecasts.


Characterizing and Optimizing Precipitation Forecasts from a Convection-Permitting Ensemble Initialized by a Mesoscale Ensemble Kalman Filter

  • 1 National Center for Atmospheric Research,* Boulder, Colorado

Abstract

Convection-permitting Weather Research and Forecasting (WRF) Model forecasts with 3-km horizontal grid spacing were produced for a 50-member ensemble over a domain spanning three-quarters of the contiguous United States between 25 May and 25 June 2012. Initial conditions for the 3-km forecasts were provided by a continuously cycling ensemble Kalman filter (EnKF) analysis–forecast system with 15-km horizontal grid length. The 3-km forecasts were evaluated using both probabilistic and deterministic techniques with a focus on hourly precipitation. All 3-km ensemble members overpredicted rainfall and there was insufficient forecast precipitation spread. However, the ensemble demonstrated skill at discriminating between both light and heavy rainfall events, as measured by the area under the relative operating characteristic curve. Subensembles composed of 20–30 members usually demonstrated resolution, reliability, and skill comparable to those of the full 50-member ensemble. On average, deterministic forecasts initialized from mean EnKF analyses were at least as skillful as forecasts initialized from individual ensemble members “closest” to the mean EnKF analyses, and “patched together” forecasts composed of members closest to the ensemble mean during each forecast interval were skillful but came with caveats. The collective results underscore the need to improve convection-permitting ensemble spread and have important implications for optimizing EnKF-initialized forecasts.

NCAR is sponsored by the National Science Foundation.

Corresponding author address: Craig Schwartz, NCAR, 3090 Center Green Dr., Boulder, CO 80301. E-mail: schwartz@ucar.edu


1. Introduction

Numerical weather prediction (NWP) models with grid spacing fine enough to remove convective parameterization1 have been shown to produce better precipitation forecasts than NWP models with parameterized convection (e.g., Done et al. 2004; Kain et al. 2006; Lean et al. 2008; Roberts and Lean 2008; Weisman et al. 2008; Schwartz et al. 2009; Clark et al. 2010a). However, there is much uncertainty at convective scales due to highly nonlinear flows and rapid error growth (e.g., Lorenz 1969). Motivated by successes with deterministic convection-permitting NWP models and in recognition of storm-scale uncertainties, starting in 2007, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma has produced annual real-time, convection-allowing ensemble forecasts over a synoptic-scale domain with a focus on springtime convective forecasting (e.g., Kong et al. 2008, 2009; Xue et al. 2010; Clark et al. 2012). To achieve diversity, the CAPS ensembles employed physics, model, and initial condition (IC) perturbations to ensembles composed of 10–50 members with 4-km horizontal grid spacing and produced forecasts with durations between 30 and 48 h. These ensemble forecasts have been leveraged to explore many important aspects of convection-permitting ensembles, including sensitivity to physics and postprocessing techniques (e.g., Clark et al. 2009, 2010b, 2011, 2013; Kong et al. 2008, 2009; Schwartz et al. 2010; Xue et al. 2010; Johnson et al. 2011; Johnson and Wang 2012, 2013).

CAPS typically extracted perturbations from the operational Short Range Ensemble Forecast (SREF; Du et al. 2009) system and added them to a high-resolution control [such as the North American Mesoscale Forecast System (NAM; Rogers et al. 2009) analysis interpolated onto the 4-km domain] to produce their initial high-resolution ensembles (Kong et al. 2008, 2009). Elsewhere, approaches for producing high-resolution ensemble ICs that also relied on downscaling external models yielded successful results (e.g., Hohenegger et al. 2008; Gebhardt et al. 2011; Hanley et al. 2011, 2013; Vié et al. 2011; Peralta et al. 2012; Duc et al. 2013), and convection-allowing ensemble ICs have also been produced by an ensemble of storm-scale perturbed-observation three-dimensional variational data assimilation (3DVAR) analyses (Vié et al. 2011).

Convection-permitting ensemble forecasts can also be initialized by an ensemble Kalman filter (EnKF; Evensen 1994) analysis–forecast system. EnKFs use an ensemble to calculate temporally and spatially evolving forecast errors for data assimilation (DA) purposes. Modern EnKFs typically employ “continuous cycling” that unifies ensemble forecasting and analysis steps. For example, the analysis step combines a “background” (or “prior”) ensemble at time T with real observations and their error statistics to produce an ensemble of “analyses” at T. Then, the forecast step uses the analysis ensemble at T as ICs for a P-h NWP model forecast (typically, P ≤ 6 h). The ensemble forecast valid at T + P is used as the background for another EnKF analysis, and this cyclic analysis–forecast pattern with a P-h period can continue indefinitely. This seamless integration of DA with forecasting yields dynamically consistent initial ensembles, which differ from initial ensembles generated by downscaling external models.
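The cycling just described can be sketched in a few lines. Here, `assimilate` and `run_forecast` are hypothetical stand-ins for the EnKF update and the P-h WRF integration, not actual DART or WRF interfaces:

```python
def cycle(prior_ensemble, obs_by_time, assimilate, run_forecast, period_h=6):
    """Continuous EnKF cycling with period period_h (the P of the text).

    At each time t the background (prior) ensemble is combined with
    observations to form an analysis ensemble, which then initializes a
    period_h-hour forecast that becomes the background at t + period_h.
    assimilate(ensemble, obs) and run_forecast(ensemble, hours) are
    caller-supplied stand-ins for the EnKF update and the NWP model.
    """
    analyses = {}
    t = 0
    for obs in obs_by_time:
        analysis = assimilate(prior_ensemble, obs)         # analysis step at t
        analyses[t] = analysis
        prior_ensemble = run_forecast(analysis, period_h)  # forecast step
        t += period_h
    return analyses
```

Any cycle's analysis ensemble can also initialize a longer "free forecast" without interrupting the cycling itself.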

Ideally, EnKF analyses would always initialize an ensemble of “free forecasts” (defined here as a forecast longer than P2) to generate probabilistic guidance (e.g., Murphy 1993). However, producing an ensemble of free forecasts is computationally expensive, especially at convection-permitting resolutions over domains large enough to resolve synoptic-scale circulations. Thus, most work examining free forecasts of convection-permitting ensembles initialized by EnKFs has focused on case studies over storm-scale [e.g., ~100 km × 100 km; Snyder and Zhang (2003); Zhang et al. (2004); Dowell et al. (2004); Tanamachi et al. (2013), and references therein] or regional domains usually smaller than ~10° × 10° (e.g., Migliorini et al. 2011; Melhauser and Zhang 2012; Jones and Stensrud 2012; Jones et al. 2013).

In fact, sometimes computational constraints may altogether prohibit EnKFs from initializing an ensemble of free forecasts. Nonetheless, even in these cases, it may remain desirable to use EnKFs as analysis systems, as free forecasts initialized from mean EnKF analyses often outperform forecasts initialized by 3DVAR systems for a variety of applications (e.g., Whitaker et al. 2008; M. Zhang et al. 2011; Hamill et al. 2011a,b; Wang et al. 2013; Schwartz and Liu 2014), illustrating that EnKFs can produce valuable deterministic guidance.

However, while on average the EnKF mean analysis has a smaller root-mean-square (RMS) error compared to observations than any individual analysis member, it can be argued that the mean analysis does not accurately represent the model, as it does not lie on the forecast model “attractor”; is overly smooth; and in some cases may be physically unrealistic (Ancell 2013, hereafter A13). Moreover, anecdotally, it has been suggested that smooth mean analyses may degrade initializations and subsequent forecasts of small-scale features with uncertain positions, such as tropical cyclone (TC) centers or convection. Given these concerns, Torn (2010) and Cavallo et al. (2013) initialized free forecasts solely from the single EnKF analysis member (of 96) closest to the mean EnKF analysis, rather than the mean itself, for TC applications. Similarly, Romine et al. (2013, hereafter R13) chose the single EnKF analysis member (of 50) whose state vector was nearest that of the mean analysis to initialize 3-km forecasts of convection. Yet, Schwartz and Liu (2014) initialized skillful 4-km convection-permitting forecasts from 20-km mean EnKF analyses, suggesting that mean analyses may provide acceptable ICs for convective forecasts. More generally, a rigorous comparison of free forecasts initialized from mean analyses versus those initialized from individual EnKF analysis members has not been performed for either TC or convective applications.

Another approach to optimizing deterministic EnKF-based forecast guidance was examined by A13, who studied landfalling midlatitude cyclones in the Pacific Northwest with a cycling 80-member EnKF at 36-km horizontal grid spacing. Instead of focusing on the closeness of each EnKF analysis member to the mean analysis, A13 examined the closeness of the members’ forecasts to the 80-member forecast average. A13 noted that constructing a “patched together” forecast composed of the single ensemble member nearest the ensemble mean at each 6-h forecast interval produced similar-quality forecasts of sea level pressure (SLP) as the 80-member forecast mean while retaining better sharpness than the smooth forecast average. However, it is unclear whether this approach can be successfully applied to high-resolution precipitation forecasts due to potential temporal and spatial discontinuities between ensemble members.

Given the few studies examining convection-permitting EnKF-initialized ensemble predictions, this paper documents characteristics of precipitation forecasts produced by a 50-member ensemble with 3-km horizontal grid spacing over three-quarters of the conterminous United States (CONUS) that were initialized from downscaled 15-km EnKF analyses during spring 2012. Further, approaches of generating deterministic guidance from EnKFs are considered, probabilistic forecast sensitivity to ensemble size is examined, and probabilistic and deterministic EnKF-initialized forecasts are directly compared.

Section 2 describes the ensemble configuration, initialization approach, and experimental design. The quality of the 15-km EnKF analysis system is briefly assessed in section 3 before rigorously verifying precipitation forecasts in section 4. A discussion is provided in section 5 before concluding in section 6.

2. Methodology

a. Model configurations

All weather forecasts were produced by version 3.3.1 of the nonhydrostatic Advanced Research core of the Weather Research and Forecasting (hereafter WRF; Skamarock et al. 2008) Model over a nested computational domain spanning the CONUS and adjacent areas (Fig. 1). The horizontal grid spacing was 15 km (415 × 325 grid points) in the outer domain and 3 km (1046 × 871 grid boxes) in the inner nest. Both domains were configured with 40 vertical levels and a 50-hPa top. The time step was 75 s in the 15-km domain and 18.75 s in the 3-km nest, and two-way feedback linked the domains.

Fig. 1.

Computational domain. Objective precipitation verification only occurred in the speckled region of the 3-km domain.

Citation: Weather and Forecasting 29, 6; 10.1175/WAF-D-13-00145.1

Positive definite moisture advection (Skamarock and Weisman 2009) and the following physical parameterizations were used in both domains: the Rapid Radiative Transfer Model (RRTM) for Global Climate Models (RRTMG; Mlawer et al. 1997; Iacono et al. 2008) long- and shortwave radiation schemes with ozone and aerosol climatologies (Tegen et al. 1997); the Mellor–Yamada–Janjić (MYJ; Mellor and Yamada 1982; Janjić 1994, 2002) planetary boundary layer scheme; and the Noah land surface model (Chen and Dudhia 2001). Furthermore, the Tiedtke cumulus parameterization (Tiedtke 1989; C. Zhang et al. 2011) and Morrison double-moment microphysics scheme (Morrison et al. 2009) were employed in the 15-km domain. However, in the 3-km domain, no cumulus parameterization was used and the Thompson microphysics (Thompson et al. 2008) scheme was employed (extensive testing revealed using Morrison microphysics on the 3-km grid engendered very high precipitation biases during the first few hours of model integration). Each ensemble member shared identical physical parameterizations and WRF settings.

b. Generation of ensembles and experimental design

A 50-member ensemble adjustment Kalman filter (EAKF; Anderson 2001, 2003) from the Data Assimilation Research Testbed (DART; Anderson et al. 2009) software was used to initialize ensemble forecasts. The EAKF configuration was similar to R13. In summary, the EAKF system was run at 15-km horizontal grid spacing, assimilated many surface and upper-air observations, employed vertical and horizontal localization to reduce spurious correlations due to sampling errors, and used prior adaptive inflation (Anderson 2009) to preserve ensemble spread. Table 1 notes some EAKF settings.

Table 1.

Localization values and analysis variables in the EAKF system.


The initial 15-km ensemble was produced by taking Gaussian random draws with zero mean and covariances from global background error covariances provided by the WRF DA system (Barker et al. 2012) and adding them to the 1800 UTC 29 April 2012 Global Forecast System (GFS) analysis interpolated onto the 15-km WRF domain (Torn et al. 2006). This randomly produced ensemble served as the ICs for 6-h WRF Model forecasts, and the ensemble valid at 0000 UTC 30 April was the prior ensemble for the first analysis. The 0000 UTC 30 April analysis ensemble served as the ICs for 6-h WRF forecasts, and the second EAKF analysis occurred at 0600 UTC 30 April. This cyclic analysis–forecast pattern with a 6-h period continued until 0000 UTC 2 July.

The EnKF DA system was run solely over the 15-km domain. From 25 May to 25 June, each 0000 UTC analysis initialized a 50-member ensemble of 36-h WRF Model forecasts containing the nested 3-km domain (Fig. 1). The 3-km ensembles were initialized by interpolating the 15-km EnKF analysis ensembles onto the 3-km grid. Nested 15-/3-km WRF Model forecasts were also initialized from mean EnKF analyses over the same period. Although EnKF analyses were produced through all of May, there were few convective outbreaks over the Great Plains before 25 May, so high-resolution ensemble forecasts were not initialized prior to 25 May.

GFS forecasts provided lateral boundary conditions (LBCs) for the 15-km domain and were identical for all members (e.g., no LBC perturbations). While this approach may limit ensemble spread (e.g., Nutter et al. 2004a,b), this method ensured forecast differences between ensemble members could be attributed solely to differing ICs, which was critical for the “pick a member” investigations (section 4c). Forecasts initialized from mean EnKF analyses also used GFS LBCs.

Each forecast hour, the probability matched mean (PMM; Ebert 2001) and ensemble mean (EM) precipitation were computed. The EM at point i was simply the sum of all members’ rainfall at i divided by the ensemble size (50 members). To produce the PMM field for a particular time, we formed the ensemble rainfall distribution by pooling the rainfall amounts from all 50 ensemble members at each grid point, ordering them from highest to lowest, and keeping every 50th value. Similarly, we ranked the EM rainfall amounts from highest to lowest and determined the grid point corresponding to each EM rainfall amount. Then, we assigned the grid point containing the highest EM rainfall amount the highest rainfall value in the ensemble distribution, and so on. If the EM field had zero precipitation at point i, we forced the PMM at i to zero, preserving the structure of the EM field (Ebert 2001).
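The PMM construction above can be written compactly. The following is a minimal sketch (hypothetical helper name, grid flattened to one dimension) of the Ebert (2001) procedure for one forecast hour:

```python
import numpy as np

def probability_matched_mean(ens):
    """Probability matched mean (Ebert 2001) for one forecast hour.

    ens: array of shape (n_members, n_points) of precipitation amounts.
    Returns an array of shape (n_points,).  The paper applies this to 2D
    hourly rainfall grids from 50 members; a 2D grid can be raveled first.
    """
    n_members, n_points = ens.shape
    # Pool all members' values, order highest to lowest, and keep every
    # n_members-th value to build a distribution with n_points entries.
    pooled = np.sort(ens.ravel())[::-1][::n_members]
    # Rank the ensemble-mean (EM) field from highest to lowest.
    em = ens.mean(axis=0)
    order = np.argsort(em)[::-1]   # EM indices, highest rainfall first
    pmm = np.empty(n_points)
    pmm[order] = pooled            # highest pooled value -> highest EM point
    pmm[em == 0.0] = 0.0           # preserve the EM field's zero-rain structure
    return pmm
```

The final line enforces the text's rule that points with zero EM precipitation receive zero PMM precipitation.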

3. Quality of the EnKF analysis–forecast system

Before evaluating the precipitation forecasts, we briefly assess the caliber of the 15-km EnKF used to produce the 3-km ICs. The most commonly used metric for evaluating cyclic EnKF quality is the consistency of the prior ensemble spread and mean RMS error. Specifically, in a well-calibrated system, when compared to observations, the ratio of the prior ensemble mean RMS error and “total spread,” defined as the square root of the sum of the observation error variance and ensemble variance of the simulated observations, will be near 1.0 (Houtekamer et al. 2005).

The ratio of the prior ensemble mean RMS error to total spread aggregated over all 0000 UTC priors between 25 May and 25 June is shown in Fig. 2 for radiosonde observations. For wind and temperature observations (Figs. 2a,b), these ratios varied between 0.85 and 1.1, and in fact, ratios were <1 in the mid- and upper troposphere, indicating too much spread. However, at all levels, there was not enough spread for dewpoint (Fig. 2c) observations, but below 300 hPa, where tropospheric moisture is most important for convective processes, the ratios were typically <1.1. Overall, these metrics indicate that the 15-km EnKF system was reasonably well tuned.

Fig. 2.

Ratio of ensemble mean RMS error to total spread of radiosonde (a) horizontal wind (m s−1), (b) temperature (K), and (c) dewpoint (K) observations aggregated over all 0000 UTC priors between 25 May and 25 June for selected pressure levels. The sample size at each pressure level is shown at the right.


4. Precipitation verification

The 3-km precipitation forecasts were verified using both deterministic and probabilistic techniques. Gridded stage IV (ST4) data (Lin and Mitchell 2005) from the National Centers for Environmental Prediction (NCEP) were used as “truth.” The ST4 analyses were bilinearly interpolated onto the 3-km model grid for comparison with the WRF forecasts.3

Objective verification was performed over a fixed domain encompassing most of the central CONUS (Fig. 1), distant from lateral boundaries and where ST4 data were robust. First, we describe general precipitation characteristics before presenting probabilistic verification. Then, we assess a variety of methods for extracting deterministic and probabilistic guidance from the EnKF-initialized forecasts.

a. General precipitation characteristics

The total accumulated precipitation over the verification domain, aggregated hourly over all forecasts and normalized by the total number of forecasts, is shown in Fig. 3a. All ensemble members produced too much rainfall. While the forecasts reasonably depicted the diurnal cycle, the maxima and minima were slightly early compared to the observations. Forecasts initialized from mean EnKF analyses spun up precipitation more slowly than forecasts initialized from the ensemble members, but once rainfall developed, the amounts were near the top of or outside the ensemble envelope between ~12 and 28 h. R13 also noted that forecasts initialized from mean analyses produced more precipitation than individual members’ forecasts. The PMM rainfall usually was within the top half of or higher than the ensemble range. We note that when total precipitation was calculated over the entire 3-km domain (Fig. 3b), the PMM and EM precipitation totals were nearly identical. That the PMM rainfall exceeded the EM rainfall over the verification domain indicates that locations of heaviest EM rainfall were primarily over the verification domain.

Fig. 3.

Total accumulated precipitation over the (a) verification and (b) full 3-km domains aggregated hourly over all forecasts and normalized by the total number of forecasts. The range of the individual ensemble members is shaded in gray.


While the EM total precipitation was similar to that of individual members, the distributions of the members’ and EM precipitation amounts differed markedly. Figure 4 depicts the fractional occurrence of various events, defined as precipitation exceedance of certain accumulation thresholds (q; e.g., q = 1.0 mm h−1) over the verification domain and aggregated hourly over all forecasts. At all thresholds, the ensemble members overpredicted rainfall, with worse overprediction as q increased. Compared to individual members, the EM had greater coverages for q ≤ 1.0 mm h−1 (Figs. 4a–c) and considerably smaller coverages for larger q (Figs. 4d–f), illustrating how ensemble averaging increases areas of light precipitation and diminishes regions of heavy rainfall. The PMM coverages compared to the ensemble members varied substantially with time and threshold. Forecasts initialized from mean EnKF analyses produced areal coverages toward the top of the ensemble envelope from ~12 to 28 h, consistent with the total precipitation.

Fig. 4.

Fractional grid coverage (%) of hourly accumulated precipitation exceeding (a) 0.25, (b) 0.5, (c) 1.0, (d) 5.0, (e) 10.0, and (f) 20.0 mm h−1 over the verification domain, aggregated hourly over all forecasts. The range of the individual ensemble members is shaded in gray.


b. Probabilistic verification

Areas under the relative operating characteristic (ROC) curve, fractions skill scores (FSS; Roberts and Lean 2008), and attributes statistics (Wilks 2006) were calculated for a range of precipitation thresholds. As high-resolution NWP models are not skillful at the grid scale, a neighborhood approach was employed to postprocess the probabilistic fields before performing verification (e.g., Schwartz et al. 2010; Johnson and Wang 2012; Duc et al. 2013). Specifically, the neighborhood ensemble probability NEP (Schwartz et al. 2010) was computed by averaging the point-based ensemble probability EP over a radius of influence r (e.g., r = 25 km). The EP at the ith grid point EPi is simply the number of ensemble members with precipitation ≥q at point i divided by the ensemble size n, and the NEP at the ith point NEPi is found by averaging EPi over the Nb grid boxes within a radius r (km) of point i.
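A minimal sketch of the NEP calculation follows, with the neighborhood radius expressed in grid points rather than kilometers for simplicity (an assumption for illustration; the brute-force loop is written for clarity, not speed):

```python
import numpy as np

def neighborhood_ensemble_probability(ens, q, r_pts):
    """NEP (Schwartz et al. 2010): average the point ensemble probability
    EP over all grid boxes within r_pts grid points of each box.

    ens: (n_members, ny, nx) hourly precipitation amounts
    q: accumulation threshold (e.g., 1.0 mm/h)
    r_pts: neighborhood radius in grid points (circular neighborhood)
    """
    ep = (ens >= q).mean(axis=0)   # point-based ensemble probability EP
    ny, nx = ep.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    nep = np.empty_like(ep)
    for j in range(ny):
        for i in range(nx):
            # Nb grid boxes within the circular neighborhood of point (j, i)
            mask = (yy - j) ** 2 + (xx - i) ** 2 <= r_pts ** 2
            nep[j, i] = ep[mask].mean()
    return nep
```

Because NEP averages EP over many points, its values are effectively continuous even for small ensembles, a property used later when comparing ROC areas across ensemble sizes.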

To assess forecast sensitivity to ensemble size, probabilistic metrics were computed for subensembles composed of n = 5, 10, 20, 30, and 40 members (in addition to the full 50-member ensemble) using NEP fields constructed with r = 50 km. As there were many possible combinations of 5-, 10-, 20-, 30-, and 40-member ensembles, statistics were computed for 100 unique combinations of randomly drawn members for n = 5, 10, 20, 30, and 40, similar to Clark et al. (2011). The distribution of scores is presented as boxplots that depict the median, interquartile range, and extrema of the 100 computations for each n ≠ 50. As a simple, pragmatic way of comparing the 50-member scores to the n = 5, 10, 20, 30, and 40 boxplot distributions, if the inner 90% of a particular boxplot for n ≠ 50 contained the 50-member value, there was considered to be no practical difference between the 50- and n-member scores.
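The random subensemble draws might be generated as follows (a sketch; the function name and seed are illustrative):

```python
import numpy as np

def subensemble_draws(n_total=50, sizes=(5, 10, 20, 30, 40), n_combos=100, seed=0):
    """Draw n_combos unique random member combinations for each
    subensemble size, similar to Clark et al. (2011).

    Returns {size: list of member-index arrays}.
    """
    rng = np.random.default_rng(seed)
    draws = {}
    for n in sizes:
        seen = set()
        while len(seen) < n_combos:
            # Sample n distinct members; sort so duplicates are detected.
            combo = tuple(sorted(rng.choice(n_total, size=n, replace=False)))
            seen.add(combo)
        draws[n] = [np.array(c) for c in seen]
    return draws
```

Each of the 100 index arrays per size would then select members from the full 50-member NEP computation before scoring.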

1) Attributes statistics

Attributes statistics with forecast probability bins of 0%–5%, 5%–15%, 15%–25%, … , 85%–95%, and 95%–100% were aggregated hourly over the first 12 forecast hours (Fig. 5) and between hours 18 and 36 (Fig. 6). Perfect reliability is achieved for curves on the diagonal, and points above the no-skill line contribute positively to the Brier skill score (Brier 1950), computed with a reference forecast of climatology (i.e., the observed frequency). Values are not plotted for a particular bin and n if any combination of the 100 n-member ensembles had fewer than 1000 grid points with forecast probabilities in that bin over the verification domain.

Fig. 5.

Attributes diagrams computed over the verification domain for precipitation thresholds of (a) 0.25, (b) 1.0, (c) 5.0, and (d) 10.0 mm h−1, using a 50-km radius of influence, and aggregated hourly over the first 12 h for various ensemble sizes. Each boxplot depicts the extrema, interquartile range, and median of 100 realizations of observed relative frequency for each ensemble size. The horizontal line near the x axis represents the observed frequency of the event, the dashed line indicates the no-skill line, and the diagonal line is the line of perfect reliability. Filled circles above the diagonal line indicate those instances where the inner 90% of the n-member boxplot distribution contained the 50-member value. The forecast frequency (%) for the 50-member ensemble for each bin is shown on the top x axis, where a value of “−999999” indicates the 50-member ensemble had fewer than 1000 grid points with forecast probabilities in that bin over the verification domain.


Fig. 6.

As in Fig. 5, but for hourly aggregated 18–36-h forecasts.


Boxplot ranges were largest for the 5- and 10-member ensembles and increased both with forecast hour and probability bin. Ensemble reliability typically improved as n increased, but for n ≥ 20–30, the middle 90% usually included the 50-member observed relative frequencies. All n-member ensembles were skillful compared to a climatological forecast at the 0.25 and 1.0 mm h−1 thresholds. However, the ensembles usually produced overconfident probabilities, suggesting insufficient ensemble spread. For example, for q = 1.0 mm h−1, when the 50-member ensemble produced forecast probabilities between 85% and 95% during the first 12 h (Fig. 5b), the event only occurred ~68% of the time. When q = 5.0 and 10.0 mm h−1, there was little to no skill compared to a climatological forecast, particularly for n = 5 and 10. Despite overall insufficient spread, calibration techniques [e.g., Hamill and Colucci (1997, 1998); Stensrud and Yussouf (2007); Johnson and Wang (2012) and references therein] can improve ensemble reliability.

2) ROC area

Unlike reliability, the ROC, which measures the ensemble’s resolution—its capability to discriminate between events—is not easily calibrated. To produce the ROC, probabilistic forecast thresholds p of 0%, 5%, 15%, 25%, … , 85%, 95%, and 100% were chosen. Then, a series of 2 × 2 contingency tables (Table 2) was populated for each p for different precipitation thresholds. Denoting ST4 precipitation at point i as Oi and the NEP of precipitation ≥q at point i as NEPi,q, the ith grid point fell into category a if NEPi,q ≥ p and Oi ≥ q, b if NEPi,q ≥ p and Oi < q, c if NEPi,q < p and Oi ≥ q, and d if NEPi,q < p and Oi < q. Using the elements of Table 2, the probability of detection [POD = a/(a + c)] and probability of false detection [POFD = b/(b + d)] were computed for each probability threshold, and the ROC was formed by plotting the POFD against the POD over the range of p. The area under this curve (called the ROC area) was used to summarize the ROC and was computed using a trapezoidal approximation. Since NEP fields were verified, the range of forecast probabilities for each n was continuous, rather than discrete, thus alleviating potential problems comparing ROC areas between ensembles with different sizes.
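Under the definitions above, the ROC-area computation can be sketched as follows (anchoring the curve at the origin is an implementation detail not specified in the text):

```python
import numpy as np

def roc_area(nep, obs, q, p_thresholds=(0.0, 0.05, 0.15, 0.25, 0.35, 0.45,
                                        0.55, 0.65, 0.75, 0.85, 0.95, 1.0)):
    """Area under the ROC curve via the trapezoidal rule.

    nep: forecast probabilities in [0, 1]; obs: observed precipitation;
    q: accumulation threshold.  For each probability threshold p, a
    2 x 2 contingency table gives POD = a/(a+c) and POFD = b/(b+d).
    """
    nep, obs = np.ravel(nep), np.ravel(obs)
    event = obs >= q
    pods, pofds = [], []
    for p in p_thresholds:
        fcst = nep >= p
        a = np.sum(fcst & event)     # hits
        b = np.sum(fcst & ~event)    # false alarms
        c = np.sum(~fcst & event)    # misses
        d = np.sum(~fcst & ~event)   # correct negatives
        pods.append(a / max(a + c, 1))
        pofds.append(b / max(b + d, 1))
    # Anchor the curve at (0, 0), sort by (POFD, POD), and integrate.
    x = np.array(pofds + [0.0])
    y = np.array(pods + [0.0])
    order = np.lexsort((y, x))
    x, y = x[order], y[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))
```

A perfect forecast yields an area of 1.0, and an event-blind forecast yields 0.5.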

Table 2.

Standard 2 × 2 contingency table for dichotomous events.


Boxplots of ROC areas for each n-member ensemble aggregated hourly over the first 12 h and between 18- and 36-h forecasts are shown in Fig. 7. For both periods, ROC areas were >0.5 at all thresholds, indicating discriminating ability. Similar to reliability, the boxplot ranges broadened as q (n) increased (decreased). ROC areas increased steadily as n increased from 5 to 20 or 30 for q ≤ 5.0 mm h−1, but for larger q, the middle 90% for n ≠ 50 often included the 50-member ROC area.

Fig. 7.

Area under the ROC curve as a function of precipitation threshold for various ensemble sizes aggregated over the (a) first 12 h and (b) 18–36-h forecasts using a 50-km radius of influence. Each boxplot depicts the extrema, interquartile range, and median of 100 realizations of the ROC area for each ensemble size. Filled circles indicate those instances where the inner 90% of the n-member boxplot distribution contained the 50-member value.


3) Fractions skill scores

The FSS is a neighborhood approach that measures spatial skill and can be used to verify any probabilistic field, including NEPs. The FSS requires transforming the observations (i.e., ST4 grids) into “observed fractions,” which is achieved for the ith point by counting the number of observed points within r of i containing accumulated precipitation ≥q and dividing by the number of points in the neighborhood (i.e., Nb). The forecast probabilities and observed fractions are then directly compared to produce the FSS (Roberts and Lean 2008). The FSS ranges from 0 to 1, with a perfect forecast attaining a score of 1 and a score of 0 indicating no skill.
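Given forecast and observed fraction fields on a common grid, the FSS reduces to a normalized mean-square error. A minimal sketch:

```python
import numpy as np

def fss(forecast_fractions, observed_fractions):
    """Fractions skill score (Roberts and Lean 2008).

    Both inputs are fields of fractions/probabilities in [0, 1] on the
    same grid, e.g. an NEP field and the observed fractions built from
    the ST4 grid with the same radius of influence and threshold q.
    """
    f = np.ravel(forecast_fractions)
    o = np.ravel(observed_fractions)
    mse = np.mean((f - o) ** 2)
    # Reference MSE: the largest MSE obtainable with no spatial overlap.
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else 0.0
```

Identical fields score 1; completely non-overlapping fields score 0.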

Consistent with the ROC area and reliability, aggregate FSS values for r = 50 km decreased with forecast length and as q increased (Fig. 8). The boxplots spanned larger ranges for smaller n. FSS values increased as n increased from 5 to 20, but differences between the 20-, 30-, 40-, and 50-member ensembles were usually small.

Fig. 8.

As in Fig. 7, but for aggregate FSS values computed with a 50-km radius of influence.


4) Rank histograms

Rank histograms (Hamill 2001) were computed for the full 50-member ensemble based on total hourly accumulated rainfall over the verification domain. The rank histogram containing all rainfall totals over all forecast hours (Fig. 9) featured a U shape, revealing that many observations fell outside the range of the ensemble. In particular, the observed total precipitation was outside the low range of the ensemble ~29% of the time, commensurate with a high precipitation bias. This finding suggests that the ensemble was not sampling a sufficiently large probability distribution function (PDF) and was underdispersive (e.g., Hamill 2001; Clark et al. 2011), which is consistent with reliability statistics (Figs. 5 and 6).
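A rank histogram over domain-total rainfall can be tallied as follows (a sketch; ties are broken by counting members strictly below the observation, one of several possible conventions):

```python
import numpy as np

def rank_histogram_counts(ens_values, obs_values):
    """Rank histogram (Hamill 2001).

    For each case, the observation's rank within the sorted n-member
    ensemble is an integer in 0..n; a U shape (crowded end bins) signals
    underdispersion.

    ens_values: (n_cases, n_members); obs_values: (n_cases,).
    """
    n_cases, n_members = ens_values.shape
    counts = np.zeros(n_members + 1, dtype=int)
    for ens, obs in zip(ens_values, obs_values):
        rank = int(np.sum(np.sort(ens) < obs))  # members strictly below obs
        counts[rank] += 1
    return counts
```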

Fig. 9.

Rank histogram based on total accumulated precipitation over the verification domain.


5) Example NEP fields

Example NEP forecasts for the full 50-member ensemble and randomly selected 5-, 10-, 20-, 30-, and 40-member ensembles, as well as the corresponding ST4 observations, are shown in Fig. 10. These NEPs were for the 24-h forecast initialized at 0000 UTC 30 May (valid at 0000 UTC 31 May) and produced using r = 50 km and q = 0.5 mm h−1. All n-member NEPs were closer to each other than any individual forecast was to the observations. The ensembles correctly highlighted areas of rainfall in South Dakota, Minnesota, and Mississippi while hinting at convection in parts of Texas. However, the primary feature at this time—a convective line in Kansas—was primarily forecast with probabilities ≤40%. Despite objective metrics showing benefits of increasing ensemble size past 5–10 members, given the stark similarity of the n-member NEP fields for r = 50 km, it may be difficult to justify n > 10 for operational purposes with limited computational resources.

Fig. 10.

NEP (%) of hourly precipitation meeting or exceeding 0.5 mm h−1 computed with a 50-km radius of influence for randomly drawn (a) 5-, (b) 10-, (c) 20-, (d) 30-, and (e) 40-member ensembles, as well as (f) the full 50-member ensemble for the 24-h forecast valid at 0000 UTC 31 May. (g) ST4 precipitation exceeding 0.5 mm h−1 (shaded). The plotted area is the verification domain.


6) Summary

Reliability statistics and the rank histogram indicated the ensemble was underdispersive, but ROC areas revealed skill at discriminating between events at all precipitation thresholds. Subensembles composed of 20–30 members typically had resolution, reliability, and skill comparable to those of the full 50-member ensemble. Further discussion regarding the probabilistic forecasts is provided in section 5.

c. Optimizing deterministic guidance from the ensemble

Methods of producing deterministic precipitation forecasts based on individual ensemble members’ proximity to the ensemble mean were explored. The skill of the PMM, EM, and EnKF mean-initialized forecasts was also examined via the FSS. To compute the FSS for a deterministic forecast, the deterministic field is first transformed into a probabilistic field (e.g., Theis et al. 2005) using the same procedure for creating observed fractions. These “forecast fractions” generated from a deterministic grid can be interpreted exactly as the EPi discussed in section 4b.

FSS statistical significance was determined by a bootstrap technique (e.g., Hamill 1999). Resamples were randomly drawn from all forecast cases and the FSS was computed for each resample. This procedure was repeated 1000 times to estimate bounds of the 90% confidence interval (CI). When the CIs associated with two different curves did not overlap, the differences were statistically significant at the 95% level or higher.
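The case-resampling procedure above can be sketched as follows; the score function and case data are placeholders, and the fixed seed is only for reproducibility of the sketch:

```python
# Hedged sketch of a bootstrap confidence interval on a verification score:
# forecast cases are drawn with replacement, the score is recomputed per
# resample, and the 5th/95th percentiles bound a 90% CI.
import random

def bootstrap_ci(cases, score, n_resamples=1000, alpha=0.10, seed=0):
    rng = random.Random(seed)
    stats = []
    for _ in range(n_resamples):
        # resample the forecast cases with replacement, same sample size
        sample = [rng.choice(cases) for _ in cases]
        stats.append(score(sample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Non-overlapping CIs from two such distributions correspond to the significance statement used in the text.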

Deterministic guidance was produced from the EnKF by two methods:

  1. A single free forecast was produced from the EnKF analysis suite, which did not require an ensemble of free forecasts.
  2. The EnKF initialized an ensemble of free forecasts and the ensemble members were combined into a single deterministic field.

We discuss these two approaches separately in the following subsections.

1) Deterministic guidance requiring an ensemble of forecasts

Two deterministic fields derived from ensemble output are the EM and PMM. Furthermore, A13 noted that a deterministic patched-together forecast, composed of the single ensemble member “closest” to the EM at each forecast interval, produced high-quality SLP forecasts of landfalling synoptic midlatitude cyclones while providing more detail than the EM. We now investigate whether the method of A13 can be used for convective applications.

A13 measured “closeness” by comparing the EM SLP in a 216 km × 216 km box centered on an extratropical cyclone to each member’s corresponding SLP field for each hour of each forecast. This method was broadly similar to that of Du and Zhou (2011), who ranked ensemble members based on their domain-average proximity to the EM, but was more “feature based.” Here, we focused on precipitation and defined “closeness to the mean” using two separate metrics to assess sensitivity to the definition of closeness. Both metrics were computed for each hour of each forecast, so the “closest member” changed hourly. In the first method (method I), the closest member was simply the member whose total precipitation over the verification domain (Figs. 1 and 3a) was closest to that of the corresponding EM field. For the second definition of closeness (method II), the FSS with r = 50 km was calculated over the verification domain for q = 0.5, 1.0, 5.0, and 10.0 mm h−1 for each ensemble member using the corresponding PMM field as truth.4 Each member’s FSS was averaged over the four values of q, and the closest member was the one with the highest FSS.
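Hedged sketches of the two closeness definitions are given below, assuming each member is a flattened precipitation grid; the trivial FSS stand-in passed to method II is an illustrative placeholder for a real neighborhood FSS, not the metric used in the paper:

```python
# Method I: member whose domain-total precipitation is nearest the EM total.
# Method II: member with the highest FSS averaged over thresholds, using the
# PMM field as truth.  The fss callable is injected so any skill score
# (higher = better) can stand in.

def closest_member_method1(members, ens_mean):
    target = sum(ens_mean)                     # EM domain-total precipitation
    diffs = [abs(sum(m) - target) for m in members]
    return diffs.index(min(diffs))

def closest_member_method2(members, pmm, fss, thresholds=(0.5, 1.0, 5.0, 10.0)):
    scores = [sum(fss(m, pmm, q) for q in thresholds) / len(thresholds)
              for m in members]
    return scores.index(max(scores))
```

Applying either selector independently at every forecast hour, and concatenating the chosen members' fields, yields the stitched-together forecasts discussed below.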

Each day, a continuous forecast was produced by stitching together forecasts from the individual ensemble members closest to the mean each hour (as measured by the two definitions). A similar procedure was performed for the ensemble members farthest from the mean, and forecasts from randomly chosen members each hour were also patched together daily. The same member was rarely closest to or farthest from the mean for consecutive hourly intervals. Aggregate FSS values (using ST4 analyses as truth) were computed over the verification domain for all stitched-together forecasts composed of the closest, farthest, and random members. Similar aggregate FSS values were also calculated for the PMM and EM forecasts.

Using method I to define closeness, FSS values for r = 50 km aggregated over the first 12 h (Fig. 11a) revealed that, at all thresholds, forecasts composed of the members farthest from the mean each forecast hour produced FSS values at the bottom of the ensemble envelope, considerably lower than those of the forecasts composed of the members closest to the mean each hour. However, picking a random member each forecast hour yielded FSS values similar to the closest-member forecasts. The PMM had comparable or higher FSS values that were not significantly different from those of the closest- and random-member forecasts, and the EM FSS values were significantly lower than those of any ensemble member for q = 0.25 and 20.0 mm h−1 and toward the bottom of the ensemble envelope for the 0.5 and 10.0 mm h−1 thresholds. The low EM FSS values were due to the poor EM bias (e.g., Fig. 4). In fact, when percentile thresholds were used to compute the FSS (e.g., Roberts and Lean 2008; Mittermaier and Roberts 2010; Schwartz and Liu 2014), effectively removing bias, the EM performed quite well (not shown), revealing how the EM can accurately predict precipitation placement despite poorly forecasting precipitation magnitudes; this behavior reflects the rationale behind probability matching (Ebert 2001). When method II was used to define closeness (Fig. 11b), the forecasts produced by stitching together the farthest members had FSS values significantly below the ensemble range, while the patched-together closest-member forecasts were near the top of the ensemble envelope but not significantly better than the stitched-together random-member or PMM forecasts. The closest-member FSS values produced from method II were higher than those from method I.

Fig. 11.

FSS values with a 50-km radius of influence aggregated over the (a),(b) first 12 h and (c),(d) 18–36-h forecasts for various accumulation thresholds for deterministic forecasts that required an ensemble of free forecasts (see text). “Closeness” in (a),(b) was determined by total precipitation (method I) and in (c),(d) by the FSS using the PMM as truth (method II; see text). The gray shading indicates the FSS range of the individual ensemble members. Bounds of the 90% bootstrap CIs are also shown.


FSS values for r = 50 km aggregated hourly between 18 and 36 h differed little between the closest-, farthest-, and random-member forecasts produced from method I (Fig. 11c). However, more pronounced differences between the closest- and farthest-member forecasts were noted with method II (Fig. 11d), as the farthest members had significantly worse FSS values than the ensemble range. As in the earlier forecast period (Figs. 11a,b), the closest-member FSS values were not significantly different from those of the random members or the PMM. Again, closest-member FSS values from method II were higher than those from method I, and EM FSS values were substantially worse than the ensemble envelope for several q.

A13 noted that concatenating forecasts from different ensemble members produced SLP forecasts of synoptic midlatitude cyclones with logical, visually reasonable evolutions. To determine whether stitched-together precipitation forecasts show similar continuity, three sets of patched-together, single-member, and PMM 1–4-h precipitation forecasts are examined. Although the 1–4-h period fell within the model spinup, it was chosen because FSS differences between forecasts consisting of random, closest, and farthest members were maximized during this period (Fig. 11).

The 1–4-h precipitation forecasts initialized at 0000 UTC 21 June are shown in Fig. 12. During this period, a band of rainfall moved steadily eastward through portions of Iowa, Nebraska, Kansas, and Minnesota (Figs. 12a–d). The patched-together forecasts generated using methods I (Figs. 12e–h) and II (Figs. 12i–l) to define closeness clearly showed this evolution and there was reasonable coherence regarding the evolution of individual features within the precipitation envelope. In fact, the evolutions of the patched-together forecasts were as believable as the continuous forecast from member 38 (Figs. 12m–p). The patched-together forecast produced from method II was quite similar to the PMM forecast (Figs. 12q–t) but had greater structural detail.

Fig. 12.

(a)–(d) ST4 observations and the corresponding (e)–(h) method I patched-together forecasts, (i)–(l) method II patched-together forecasts, (m)–(p) ensemble member 38 forecasts, and (q)–(t) PMM forecasts of 1–4-h hourly accumulated precipitation (mm) initialized at 0000 UTC 21 June.


Figure 13 shows 1–4-h forecasts initialized at 0000 UTC 26 May. In this case, slow-moving supercells near Russell, Kansas, produced several tornadoes (Figs. 13a–d). The patched-together forecast produced using method II (Figs. 13i–l) generated an accurate forecast with good continuity (note the same member was closest for hours 2 and 3). The 1–4-h forecast from member 44 (Figs. 13m–p) also showed good continuity. Conversely, while the stitched-together forecast based on method I (Figs. 13e–h) developed convection in approximately the correct area, the movement was somewhat unusual. For example, between hours 2 and 3, the northeastern-most convection weakened rapidly and was refocused to the southwest. Then, due northward motion was forecast between hours 3 and 4. While isolated convective cells can move erratically and new storms can rapidly develop, this overall motion seemed unusual, and this erraticism was emphasized when the forecast was viewed as a movie, as would probably occur in an operational environment. However, for forecast applications, true convective motions would be unknown a priori, and this patched-together forecast may not be discounted as unrealistic. The patched-together forecasts exhibited more structural detail than the PMM (Figs. 13q–t), which only had one updraft core while both patched-together forecasts had two distinct cores.

Fig. 13.

As in Fig. 12, but for the forecast initialized at 0000 UTC 26 May and the use of ensemble member 44 in (m)–(p). The asterisk is provided as a reference.


Finally, 1–4-h forecasts initialized at 0000 UTC 15 June are shown in Fig. 14. During this period, convection in Nebraska and Kansas organized into a southeastward-moving squall line while rainfall dissipated in Iowa and Minnesota (Figs. 14a–d). The forecast from member 49 (Figs. 14m–p) captured this evolution, although the convection was displaced to the northwest compared to the observations. However, some apparent oddities were noted in the patched-together forecast produced from method I (Figs. 14e–h). For instance, in the first 3 h, the cells in Kansas seemingly weakened before intensifying again, whereas the continuous forecasts from member 49 and the PMM (Figs. 14q–t) evolved these features more organically. Additionally, in the concatenated forecast generated from method I, the southeastward progression of the precipitation shield was stunted in the first 3 h before “jumping” in hour 4, contrasting the persistent forward motion in the single-member and PMM forecasts. The patched-together forecast produced from method II (Figs. 14i–l) had greater continuity between hours 2 and 4, but there was little forward motion in northeast Nebraska and Iowa between hours 1 and 2. Again, the individual members had more detail than did the PMM.

Fig. 14.

As in Fig. 12, but for the forecast initialized at 0000 UTC 15 June and the use of ensemble member 49 in (m)–(p).


Overall, application of the approach of A13 to high-resolution precipitation forecasts yielded mixed results and depended somewhat on the definition of closeness. At most locations, patched-together precipitation forecasts composed of members closest to the mean displayed a surprising amount of continuity—especially using method II—but sometimes convective evolution appeared unphysical. However, in a better-calibrated (i.e., more reliable) ensemble, the potential for discontinuities in a concatenated forecast would be exacerbated due to greater spread among the ensemble members. Therefore, patching together individual members’ forecasts may not consistently produce realistically evolving convective forecasts. Furthermore, using both definitions of closeness, forecasts produced by stitching together random members yielded forecasts with aggregate FSS values comparable to those generated by patching together closest members and the PMM, indicating that picking the member nearest the mean each hour does not necessarily yield higher quality precipitation forecasts.

Compared to method I, method II yielded greater differences between the closest and farthest member forecasts and closest-member forecasts with higher FSS values. Furthermore, method II considered spatial differences between ensemble members whereas method I did not. Given these considerations, method II may be more appropriate than method I for measuring closeness for precipitation applications. Employing more sophisticated object-based metrics (e.g., Johnson and Wang 2013) to define closeness may be worth exploring in future work. However, while patched-together forecasts provided more structural detail than the PMM, these details occurred at smaller scales where high-resolution forecasts are less trustworthy. This fact, along with the possibility that patched-together precipitation forecasts may unrealistically evolve, means individual users should carefully consider whether it is desirable to construct stitched-together forecasts or simply use the PMM.

2) Deterministic guidance where an ensemble is not required

The simplest method of generating deterministic guidance from an EnKF is to initialize a forecast from the mean EnKF analysis. However, given concerns about the smoothness of the mean analysis and the implications for subsequent forecasts, pick-a-member approaches have been developed to select a single EnKF analysis member from which to initialize a free forecast. While in an unweighted ensemble (such as the one discussed herein) all members are equally likely to produce a forecast closest to the true atmospheric state, individual member performance is not equal for a given forecast (e.g., Du and Zhou 2011), so it is interesting to assess the utility of pick-a-member methods.

One pick-a-member algorithm, described by R13, measures the normalized RMS difference between each ensemble member’s analysis state vector and the mean EnKF analysis state (Table 1 lists the state variables) and initializes a continuous free forecast from the member with the minimum normalized RMS difference. This procedure, which does not require concatenating forecasts from different members, was adopted here and performed for each 0000 UTC EnKF analysis. Aggregate FSS values were obtained for forecasts initialized from the analysis members closest to and farthest from the mean analysis. Additionally, aggregate FSS values were computed for forecasts initialized from a randomly drawn member for each analysis and for forecasts initialized from the mean analysis itself. We note that, given the high dimensionality of the analysis–forecast system, it is unlikely that any analysis member truly resembles the mean analysis.
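A sketch of this pick-a-member selection follows; normalizing each state element by its ensemble standard deviation is one plausible choice of normalization and is an assumption here, not necessarily R13's exact formulation:

```python
# Hedged sketch of pick-a-member selection: return the index of the ensemble
# member whose analysis state vector has the smallest normalized RMS distance
# to the ensemble-mean analysis state.
import math

def pick_closest_member(states):
    """states: list of n member state vectors (equal-length lists of floats)."""
    n, d = len(states), len(states[0])
    mean = [sum(s[k] for s in states) / n for k in range(d)]
    # per-element ensemble spread used to normalize each state variable;
    # fall back to 1.0 where the spread is exactly zero
    sd = [math.sqrt(sum((s[k] - mean[k]) ** 2 for s in states) / (n - 1)) or 1.0
          for k in range(d)]

    def nrms(s):
        return math.sqrt(sum(((s[k] - mean[k]) / sd[k]) ** 2 for k in range(d)) / d)

    dists = [nrms(s) for s in states]
    return dists.index(min(dists))
```

The normalization matters because the state mixes variables with very different units (e.g., winds, temperature, moisture); without it, one variable would dominate the distance.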

FSS values aggregated hourly over the first 12 h for r = 50 km (Fig. 15a) revealed only small differences between forecasts initialized from the closest, farthest, and random members for q ≤ 1.0 mm h−1. At higher thresholds, forecasts initialized from random members performed slightly better than those initialized from the closest members, which in turn were better than the forecasts initialized from the farthest members. Forecasts initialized from the mean analysis typically performed near the top of the ensemble range and better than or comparably to the other forecasts. Between 18 and 36 h (Fig. 15b), differences between the forecasts initialized from the closest, farthest, and random members were even smaller. The forecasts initialized from the mean analysis had the highest FSS values for q ≤ 1.0 mm h−1 but fell within the ensemble range at heavier thresholds. Similar results were obtained when these FSS values were computed for different r (not shown).

Fig. 15.

FSS values with a 50-km radius of influence aggregated over the (a) first 12 h and (b) 18–36-h forecasts for various accumulation thresholds for deterministic forecasts that did not require an ensemble of free forecasts (see text). Bounds of the 90% bootstrap CIs are shown for selected curves.


Clearly, attempting to improve precipitation forecasts by initializing free forecasts from the member closest to the mean analysis was unfruitful, as there were usually only small differences between the forecasts initialized from random, closest, and farthest members. These findings conform to the expectation that ensemble members are equally likely and suggest that deliberately initializing free forecasts from the single member whose state is closest to the mean analysis, in hopes of producing better forecasts, may be unnecessary.

3) Summary

Comparing all deterministic guidance (Figs. 11 and 15), it appears that forecasts initialized from the EnKF mean analysis, patched-together closest-member forecasts using method II, and PMM forecasts performed best. As EnKF mean-initialized forecasts do not require an ensemble and are less expensive than PMM and stitched-together forecasts, these results suggest that if solely deterministic guidance is required from an EnKF, the mean analysis should be used as the ICs. The only caveat is that the mean analysis has an effectively coarser resolution than any individual ensemble member and is slower to spin up precipitation and develop storm-scale structures. The next subsection addresses this issue.

d. Comparing probabilistic and deterministic guidance

We now directly compare the probabilistic and deterministic forecasts using the FSS aggregated over the first 12 h (Fig. 16) and between hours 18 and 36 (Fig. 17) for several r. During both time periods, NEP FSS values for a randomly selected 10-member ensemble were consistently higher than those of any ensemble member, the PMM, the EM, or the mean-initialized forecasts, and between 18 and 36 h, the NEP FSS values were significantly higher than any other FSS. Thus, ultimately, the best ensemble guidance was realized by combining a neighborhood approach with probabilistic forecasts, even for small n. Clark et al. (2009) also noted the utility of small convection-allowing ensembles.

Fig. 16.

FSS as a function of the radius of influence based on hourly precipitation aggregated over the first 12 forecast hours and all forecasts for (a) 0.25, (b) 0.5, (c) 1.0, (d) 5.0, (e) 10.0, and (f) 20.0 mm h−1 accumulation thresholds. Bounds of the 90% CIs are shown and the ranges of the individual ensemble members’ FSS are shaded in gray.


Fig. 17.

As in Fig. 16, but for FSS aggregated hourly for 18–36-h forecasts.


These results also address concerns regarding the appropriateness of EnKF mean analyses as ICs for high-resolution precipitation forecasts. For q ≤ 1.0 mm h−1, the EnKF mean-initialized forecast FSS values were higher than the individual members’ FSS at all scales (Figs. 16a–c and 17a–c), and for q = 5.0 and 10.0 mm h−1 between 18 and 36 h (Figs. 17d,e), the FSS values for the mean-initialized forecasts were toward the top of the ensemble envelope. Furthermore, examination of FSS time series during the first 6 h for r ranging from 5 to 150 km indicated the forecasts initialized from the mean analysis had skill comparable to that of the individual members for all q (not shown). However, the mean analysis had effectively coarser resolution than the individual members’ analyses, and, consequently, forecasts initialized from it featured different spinup characteristics (e.g., Fig. 3) and fewer small-scale details at early times compared to individual members’ forecasts. Yet, the added details from the individual members at early times were on scales where the model had less predictive skill, so the mean analysis and its associated short-term forecasts were not heavily penalized by the absence of these features. These collective findings provide no evidence that initializing free forecasts from the mean analysis is less optimal than initializing forecasts from individual members, but users interested in short-term forecasts and small scales may find the slower spinup of forecasts initialized from the mean analysis undesirable.

5. Discussion

Attributes diagrams and the rank histogram indicated insufficient ensemble spread of the precipitation forecasts. This finding seemingly contrasts with the spread–skill diagnostics showing reasonable EnKF performance (Fig. 2). However, the standards measuring the forecast precipitation spread and EnKF calibration differed substantially, so the spread–skill of the precipitation forecasts and the 15-km EnKF cannot be directly compared. Nonetheless, the metric presented in Fig. 2 is perhaps the most commonly used method of assessing EnKF quality, and developers usually tune EnKF parameters, such as inflation coefficients and observation errors, until prior consistency ratios approach 1.0. These results suggest that a well-calibrated mesoscale EnKF, as measured by conventional standards, does not necessarily translate into reliable high-resolution precipitation forecasts, perhaps because the EnKF spread is reduced during the assimilation of observations. Thus, it may be desirable to perform posterior (after assimilation) inflation on a new, separate ensemble composed solely of the members initializing free forecasts, but not on the full ensemble used for cycling EnKF data assimilation [Wei et al. (2008) describe an example of this method], whose parameters are tuned for optimal EnKF performance (e.g., Fig. 2).
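The posterior inflation suggested above, applied only to the subset of members that launch free forecasts, amounts to scaling each member's deviation about the ensemble mean; the sketch below uses a placeholder inflation factor, not a tuned value from the paper:

```python
# Hedged sketch of multiplicative posterior inflation: each member's
# deviation from the ensemble mean is scaled by lam (> 1 widens the spread),
# leaving the ensemble mean unchanged.

def inflate(members, lam):
    """members: list of n state vectors (equal-length lists of floats)."""
    n, d = len(members), len(members[0])
    mean = [sum(m[k] for m in members) / n for k in range(d)]
    return [[mean[k] + lam * (m[k] - mean[k]) for k in range(d)] for m in members]
```

Because only deviations are scaled, this widens the forecast ensemble's PDF without altering the cycling EnKF ensemble or its tuned parameters.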

Furthermore, the insufficient 3-km precipitation spread is likely partially attributed to the relatively coarse 15-km fields from which the initial perturbations were extracted. These perturbations did not represent small-scale errors on the high-resolution grid (e.g., Nutter et al. 2004a; Clark et al. 2011). As small-scale errors grow faster than larger-scale errors (e.g., Lorenz 1969; Zhang et al. 2003; Hohenegger and Schär 2007), the absence of initial small-scale errors would imply insufficient error growth and, hence, spread. A dearth of ensemble spread has been noted for other convection-permitting ensemble forecasts of precipitation initialized with coarser-resolution ICs (e.g., Kong et al. 2008, 2009; Clark et al. 2011; Duc et al. 2013), indicating that underdispersion is not limited to EnKF-initialized ensembles.

These quantitative results also suggest little need to initialize free forecasts from more than 20–30 EnKF analysis members when precipitation is the field of interest. While larger ensembles sample more of the PDF than smaller ones, additional members are not helpful unless they broaden the tails of the sampled PDF. Moreover, as convection-permitting forecasts have little grid-scale skill, neighborhood or other postprocessing approaches that effectively smooth model output onto scales at which there is greater skill (e.g., Ebert 2009) should be applied regularly. However, these smoothing procedures “fill in” the PDF, particularly when n is small, mitigating potential benefits of a larger ensemble (Clark et al. 2011). In fact, subjective assessment of NEP fields revealed only minor differences between ensembles of different sizes (Fig. 10), suggesting ensembles of fewer than 20 members may suffice for operational purposes.

But, if free forecasts are produced from fewer than 20 members, these findings suggest that different combinations of ensemble members can produce very different results, especially at longer forecast times and at heavier rainfall rates. This result has implications for integrated EnKF–ensemble prediction systems where free forecasts are initialized from a subset of analysis members, such as at the Canadian Meteorological Centre, where a 192-member global EnKF initializes free forecasts from just 20 analysis members (Houtekamer et al. 2014). It appears that ensembles with n > ~30 can be constructed with greater confidence of less forecast sensitivity to which members are chosen (Figs. 5–8).

However, if ensemble spread can be improved, these conclusions may change. For example, if adding new ensemble members broadens the ensemble PDF, then increasing n should improve reliability and lessen underdispersion compared to smaller ensembles (Clark et al. 2011). Of course, the improvements from larger, more reliable ensembles must be weighed against their costs: greater computational resources and the potentially deleterious effect of introducing forecast sensitivity to which members compose the free forecast ensemble for n > 30.

6. Summary

Ensemble forecasts with 50 members and 3-km horizontal grid spacing were produced over a large domain spanning three-quarters of the CONUS in May and June 2012. The forecasts were initialized by downscaling 15-km EnKF analyses onto the 3-km grid and verified with a focus on precipitation using both probabilistic and deterministic methods.

Employing an EnKF to initialize these forecasts meant dynamically consistent initial ensembles, contrasting much previous work where relatively ad hoc approaches, such as extracting perturbations from external models, were used to produce initial convection-allowing ensembles. While ROC areas indicated discriminating ability for all thresholds, all ensemble members overpredicted precipitation and the ensemble was underdispersive. Ensemble reliability, resolution, and skill improved as the number of ensemble members increased from 5 to 20 or 30, but the results indicate that subensembles of 20–30 members usually provided as much reliability, resolution, and skill as the full 50-member ensemble.

There did not appear to be a method of optimizing deterministic guidance by focusing on precipitation forecasts initialized from a single member based on its closeness to the mean EnKF analysis. Thus, users employing EnKFs to initialize a single deterministic free forecast can safely use a random member for ICs. However, these results suggest initializing forecasts from EnKF mean analyses may be best, though care must be taken to understand bias and spinup properties. If an ensemble forecast is computationally affordable, then the most skillful and valuable (e.g., Murphy 1993) EnKF forecast guidance is achieved by probabilistic forecasts, even if the ensemble is small.

Collectively, these results have many implications for future EnKF-initialized convection-allowing ensemble forecasts. Perhaps the biggest challenge toward better high-resolution probabilistic guidance is improving the convection-permitting ensemble spread. While convection-allowing EnKF analysis systems may better represent small-scale errors and produce free forecasts with better precipitation reliability, computational constraints in the near future will likely prohibit extensive testing of convective-scale EnKFs over domains large enough to resolve synoptic-scale features. Therefore, other methods that introduce smaller-scale errors into either the analyses or forecasts may be necessary to achieve proper convection-permitting ensemble spread. These methods include perturbed LBCs (Nutter et al. 2004a,b; Hohenegger et al. 2008; Gebhardt et al. 2011; Vié et al. 2011), use of stochastic physics (e.g., Berner et al. 2009, 2011; Bouttier et al. 2012), or multiphysics ensembles (e.g., Clark et al. 2010b; Berner et al. 2011; Gebhardt et al. 2011). These topics demand further attention at convection-permitting scales and will be the focus of future work.

Acknowledgments

We would like to acknowledge high-performance computing support from Yellowstone (ark:/85065/d7wd3xhc) provided by NCAR’s Computational and Information Systems Laboratory. Two anonymous reviewers provided constructive comments that improved this paper. NCAR is sponsored by the National Science Foundation.

REFERENCES

  • Accadia, C., , Mariani S. , , Casaioli M. , , Lavagnini A. , , and Speranza A. , 2003: Sensitivity of precipitation forecast skill scores to bilinear interpolation and a simple nearest-neighbor average method on high-resolution verification grids. Wea. Forecasting, 18, 918932, doi:10.1175/1520-0434(2003)018<0918:SOPFSS>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Ancell, B. C., 2013: Nonlinear characteristics of ensemble perturbation evolution and their application to forecasting high-impact events. Wea. Forecasting, 28, 13531365, doi:10.1175/WAF-D-12-00090.1.

    • Search Google Scholar
    • Export Citation
  • Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 28842903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61A, 7283, doi:10.1111/j.1600-0870.2008.00361.x.

    • Search Google Scholar
    • Export Citation
  • Anderson, J. L., , Hoar T. , , Raeder K. , , Liu H. , , Collins N. , , Torn R. , , and Arellano A. , 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 12831296, doi:10.1175/2009BAMS2618.1.

    • Search Google Scholar
    • Export Citation
  • Barker, D., and et al. , 2012: The Weather Research and Forecasting Model’s Community Variational/Ensemble Data Assimilation System: WRFDA. Bull. Amer. Meteor. Soc., 93, 831843, doi:10.1175/BAMS-D-11-00167.1.

    • Search Google Scholar
    • Export Citation
  • Berner, J., , Shutts G. J. , , Leutbecher M. , , and Palmer T. N. , 2009: A spectral stochastic kinetic energy backscatter scheme and its impact on flow-dependent predictability in the ECMWF Ensemble Prediction System. J. Atmos. Sci., 66, 603626, doi:10.1175/2008JAS2677.1.

    • Search Google Scholar
    • Export Citation
  • Berner, J., , Ha S.-Y. , , Hacker J. P. , , Fournier A. , , and Snyder C. , 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 19721995, doi:10.1175/2010MWR3595.1.

    • Search Google Scholar
    • Export Citation
  • Bouttier, F., , Vié B. , , Nuissier O. , , and Raynaud L. , 2012: Impact of stochastic physics in a convection-permitting ensemble. Mon. Wea. Rev., 140, 37063721, doi:10.1175/MWR-D-12-00031.1.

    • Search Google Scholar
    • Export Citation
  • Brier, G. W., 1950: Verification of forecasts expressed in terms of probability. Mon. Wea. Rev., 78, 13, doi:10.1175/1520-0493(1950)078<0001:VOFEIT>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Bryan, G. H., , Wyngaard J. C. , , and Fritsch J. M. , 2003: Resolution requirements for the simulation of deep moist convection. Mon. Wea. Rev., 131, 23942416, doi:10.1175/1520-0493(2003)131<2394:RRFTSO>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Cavallo, S. M., , Torn R. D. , , Snyder C. , , Davis C. , , Wang W. , , and Done J. , 2013: Evaluation of the Advanced Hurricane WRF data assimilation system for the 2009 Atlantic hurricane season. Mon. Wea. Rev., 141, 523541, doi:10.1175/MWR-D-12-00139.1.

    • Search Google Scholar
    • Export Citation
  • Chen, F., , and Dudhia J. , 2001: Coupling an advanced land-surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model description and implementation. Mon. Wea. Rev., 129, 569585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Clark, A. J., , Gallus W. A. Jr., , Xue M. , , and Kong F. , 2009: A comparison of precipitation forecast skill between small convection-allowing and large convection-parameterizing ensembles. Wea. Forecasting, 24, 11211140, doi:10.1175/2009WAF2222222.1.

    • Search Google Scholar
    • Export Citation
  • Clark, A. J., W. A. Gallus Jr., and M. L. Weisman, 2010a: Neighborhood-based verification of precipitation forecasts from convection-allowing NCAR WRF Model simulations and the operational NAM. Wea. Forecasting, 25, 1495–1509, doi:10.1175/2010WAF2222404.1.

  • Clark, A. J., W. A. Gallus Jr., M. Xue, and F. Kong, 2010b: Growth of spread in convection-allowing and convection-parameterizing ensembles. Wea. Forecasting, 25, 594–612, doi:10.1175/2009WAF2222318.1.

  • Clark, A. J., and Coauthors, 2011: Probabilistic precipitation forecast skill as a function of ensemble size and spatial scale in a convection-allowing ensemble. Mon. Wea. Rev., 139, 1410–1418, doi:10.1175/2010MWR3624.1.

  • Clark, A. J., and Coauthors, 2012: An overview of the 2010 Hazardous Weather Testbed Experimental Forecast Program Spring Experiment. Bull. Amer. Meteor. Soc., 93, 55–74, doi:10.1175/BAMS-D-11-00040.1.

  • Clark, A. J., J. Gao, P. Marsh, T. Smith, J. Kain, J. Correia, M. Xue, and F. Kong, 2013: Tornado pathlength forecasts from 2010 to 2011 using ensemble updraft helicity. Wea. Forecasting, 28, 387–407, doi:10.1175/WAF-D-12-00038.1.

  • Done, J., C. A. Davis, and M. L. Weisman, 2004: The next generation of NWP: Explicit forecasts of convection using the Weather Research and Forecasting (WRF) Model. Atmos. Sci. Lett., 5, 110–117, doi:10.1002/asl.72.

  • Dowell, D. C., F. Zhang, L. J. Wicker, C. Snyder, and N. A. Crook, 2004: Wind and temperature retrievals in the 17 May 1981 Arcadia, Oklahoma, supercell: Ensemble Kalman filter experiments. Mon. Wea. Rev., 132, 1982–2005, doi:10.1175/1520-0493(2004)132<1982:WATRIT>2.0.CO;2.

  • Du, J., and B. Zhou, 2011: A dynamical performance-ranking method for predicting individual ensemble member performance and its application to ensemble averaging. Mon. Wea. Rev., 139, 3284–3303, doi:10.1175/MWR-D-10-05007.1.

  • Du, J., G. Dimego, Z. Toth, D. Jovic, B. Zhou, J. Zhu, J. Wang, and H. Juang, 2009: Recent upgrade of NCEP Short-Range Ensemble Forecast (SREF) system. Preprints, 19th Conf. on Numerical Weather Prediction/23rd Conf. on Weather Analysis and Forecasting, Omaha, NE, Amer. Meteor. Soc., 4A.4. [Available online at http://ams.confex.com/ams/pdfpapers/153264.pdf.]

  • Duc, L., K. Saito, and H. Seko, 2013: Spatial–temporal fractions verification for high-resolution ensemble forecasts. Tellus, 65A, 18171, doi:10.3402/tellusa.v65i0.18171.

  • Ebert, E. E., 2001: Ability of a poor man’s ensemble to predict the probability and distribution of precipitation. Mon. Wea. Rev., 129, 2461–2480, doi:10.1175/1520-0493(2001)129<2461:AOAPMS>2.0.CO;2.

  • Ebert, E. E., 2009: Neighborhood verification: A strategy for rewarding close forecasts. Wea. Forecasting, 24, 1498–1510, doi:10.1175/2009WAF2222251.1.

  • Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.

  • Gallus, W. A., Jr., 2002: Impact of verification grid-box size on warm-season QPF skill measures. Wea. Forecasting, 17, 1296–1302, doi:10.1175/1520-0434(2002)017<1296:IOVGBS>2.0.CO;2.

  • Gebhardt, C., S. E. Theis, M. Paulat, and Z. Ben Bouallègue, 2011: Uncertainties in COSMO-DE precipitation forecasts introduced by model perturbations and variation of lateral boundaries. Atmos. Res., 100, 168–177, doi:10.1016/j.atmosres.2010.12.008.

  • Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, doi:10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.

  • Hamill, T. M., 2001: Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev., 129, 550–560, doi:10.1175/1520-0493(2001)129<0550:IORHFV>2.0.CO;2.

  • Hamill, T. M., and S. J. Colucci, 1997: Verification of Eta–RSM short-range ensemble forecasts. Mon. Wea. Rev., 125, 1312–1327, doi:10.1175/1520-0493(1997)125<1312:VOERSR>2.0.CO;2.

  • Hamill, T. M., and S. J. Colucci, 1998: Evaluation of Eta–RSM ensemble probabilistic precipitation forecasts. Mon. Wea. Rev., 126, 711–724, doi:10.1175/1520-0493(1998)126<0711:EOEREP>2.0.CO;2.

  • Hamill, T. M., J. S. Whitaker, M. Fiorino, and S. G. Benjamin, 2011a: Global ensemble predictions of 2009’s tropical cyclones initialized with an ensemble Kalman filter. Mon. Wea. Rev., 139, 668–688, doi:10.1175/2010MWR3456.1.

  • Hamill, T. M., J. S. Whitaker, D. T. Kleist, M. Fiorino, and S. G. Benjamin, 2011b: Predictions of 2010’s tropical cyclones using the GFS and ensemble-based data assimilation methods. Mon. Wea. Rev., 139, 3243–3247, doi:10.1175/MWR-D-11-00079.1.

  • Hanley, K. E., D. J. Kirshbaum, S. E. Belcher, N. M. Roberts, and G. Leoncini, 2011: Ensemble predictability of an isolated mountain thunderstorm in a high-resolution model. Quart. J. Roy. Meteor. Soc., 137, 2124–2137, doi:10.1002/qj.877.

  • Hanley, K. E., D. J. Kirshbaum, N. M. Roberts, and G. Leoncini, 2013: Sensitivities of a squall line over central Europe in a convective-scale ensemble. Mon. Wea. Rev., 141, 112–133, doi:10.1175/MWR-D-12-00013.1.

  • Hohenegger, C., and C. Schär, 2007: Predictability and error growth dynamics in cloud-resolving models. J. Atmos. Sci., 64, 4467–4478, doi:10.1175/2007JAS2143.1.

  • Hohenegger, C., A. Walser, W. Langhans, and C. Schär, 2008: Cloud-resolving ensemble simulations of the August 2005 Alpine flood. Quart. J. Roy. Meteor. Soc., 134, 889–904, doi:10.1002/qj.252.

  • Houtekamer, P. L., H. L. Mitchell, G. Pellerin, M. Buehner, M. Charron, L. Spacek, and B. Hansen, 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. Mon. Wea. Rev., 133, 604–620, doi:10.1175/MWR-2864.1.

  • Houtekamer, P. L., X. Deng, H. L. Mitchell, S.-J. Baek, and N. Gagnon, 2014: Higher resolution in an operational ensemble Kalman filter. Mon. Wea. Rev., 142, 1143–1162, doi:10.1175/MWR-D-13-00138.1.

  • Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, doi:10.1029/2008JD009944.

  • Janjić, Z. I., 1994: The step-mountain eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927–945, doi:10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.

  • Janjić, Z. I., 2002: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, 61 pp. [Available online at http://www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf.]

  • Johnson, A., and X. Wang, 2012: Verification and calibration of neighborhood and object-based probabilistic precipitation forecasts from a multimodel convection-allowing ensemble. Mon. Wea. Rev., 140, 3054–3077, doi:10.1175/MWR-D-11-00356.1.

  • Johnson, A., and X. Wang, 2013: Object-based evaluation of a storm-scale ensemble during the 2009 NOAA Hazardous Weather Testbed Spring Experiment. Mon. Wea. Rev., 141, 1079–1098, doi:10.1175/MWR-D-12-00140.1.

  • Johnson, A., X. Wang, M. Xue, and F. Kong, 2011: Hierarchical cluster analysis of a convection-allowing ensemble during the Hazardous Weather Testbed 2009 Spring Experiment. Part II: Season-long ensemble clustering and implication for optimal ensemble design. Mon. Wea. Rev., 139, 3694–3710, doi:10.1175/MWR-D-11-00016.1.

  • Jones, T. A., and D. J. Stensrud, 2012: Assimilating AIRS temperature and mixing ratio profiles using an ensemble Kalman filter approach for convective-scale forecasts. Wea. Forecasting, 27, 541–564, doi:10.1175/WAF-D-11-00090.1.

  • Jones, T. A., D. J. Stensrud, P. Minnis, and R. Palikonda, 2013: Evaluation of a forward operator to assimilate cloud water path into WRF-DART. Mon. Wea. Rev., 141, 2272–2289, doi:10.1175/MWR-D-12-00238.1.

  • Kain, J. S., S. J. Weiss, J. J. Levit, M. E. Baldwin, and D. R. Bright, 2006: Examination of convection-allowing configurations of the WRF Model for the prediction of severe convective weather: The SPC/NSSL Spring Program 2004. Wea. Forecasting, 21, 167–181, doi:10.1175/WAF906.1.

  • Kong, F., and Coauthors, 2008: Real-time storm-scale ensemble forecasting during the 2008 Spring Experiment. Preprints, 24th Conf. on Severe Local Storms, Savannah, GA, Amer. Meteor. Soc., 12.3. [Available online at https://ams.confex.com/ams/pdfpapers/141827.pdf.]

  • Kong, F., and Coauthors, 2009: A real-time storm-scale ensemble forecast system: 2009 Spring Experiment. Preprints, 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 16A.3. [Available online at https://ams.confex.com/ams/pdfpapers/154118.pdf.]

  • Lean, H. W., P. A. Clark, M. Dixon, N. M. Roberts, A. Fitch, R. Forbes, and C. Halliwell, 2008: Characteristics of high-resolution versions of the Met Office Unified Model for forecasting convection over the United Kingdom. Mon. Wea. Rev., 136, 3408–3424, doi:10.1175/2008MWR2332.1.

  • Lin, Y., and K. E. Mitchell, 2005: The NCEP stage II/IV hourly precipitation analyses: Development and applications. Preprints, 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2. [Available online at http://ams.confex.com/ams/pdfpapers/83847.pdf.]

  • Lorenz, E. N., 1969: The predictability of a flow which possesses many scales of motion. Tellus, 21, 289–307, doi:10.1111/j.2153-3490.1969.tb00444.x.

  • Melhauser, C., and F. Zhang, 2012: Practical and intrinsic predictability of severe and convective weather at the mesoscales. J. Atmos. Sci., 69, 3350–3371, doi:10.1175/JAS-D-11-0315.1.

  • Mellor, G. L., and T. Yamada, 1982: Development of a turbulence closure model for geophysical fluid problems. Rev. Geophys. Space Phys., 20, 851–875, doi:10.1029/RG020i004p00851.

  • Migliorini, S., M. Dixon, R. Bannister, and S. Ballard, 2011: Ensemble prediction for nowcasting with a convection-permitting model. I: Description of the system and the impact of radar-derived surface precipitation rates. Tellus, 63A, 468–496, doi:10.1111/j.1600-0870.2010.00503.x.

  • Mittermaier, M., and N. Roberts, 2010: Intercomparison of spatial forecast verification methods: Identifying skillful spatial scales using the fractions skill score. Wea. Forecasting, 25, 343–354, doi:10.1175/2009WAF2222260.1.

  • Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the long-wave. J. Geophys. Res., 102, 16 663–16 682, doi:10.1029/97JD00237.

  • Morrison, H., G. Thompson, and V. Tatarskii, 2009: Impact of cloud microphysics on the development of trailing stratiform precipitation in a simulated squall line: Comparison of one- and two-moment schemes. Mon. Wea. Rev., 137, 991–1007, doi:10.1175/2008MWR2556.1.

  • Murphy, A. H., 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281–293, doi:10.1175/1520-0434(1993)008<0281:WIAGFA>2.0.CO;2.

  • Nutter, P., D. Stensrud, and M. Xue, 2004a: Effects of coarsely resolved and temporally interpolated lateral boundary conditions on the dispersion of limited-area ensemble forecasts. Mon. Wea. Rev., 132, 2358–2377, doi:10.1175/1520-0493(2004)132<2358:EOCRAT>2.0.CO;2.

  • Nutter, P., M. Xue, and D. Stensrud, 2004b: Application of lateral boundary condition perturbations to help restore dispersion in limited-area ensemble forecasts. Mon. Wea. Rev., 132, 2378–2390, doi:10.1175/1520-0493(2004)132<2378:AOLBCP>2.0.CO;2.

  • Peralta, C., Z. B. Bouallegue, S. E. Theis, C. Gebhardt, and M. Buchhold, 2012: Accounting for initial condition uncertainties in COSMO-DE-EPS. J. Geophys. Res., 117, D07108, doi:10.1029/2011JD016581.

  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, doi:10.1175/2007MWR2123.1.

  • Rogers, E., and Coauthors, 2009: The NCEP North American Mesoscale modeling system: Recent changes and future plans. Preprints, 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 2A.4. [Available online at https://ams.confex.com/ams/pdfpapers/154114.pdf.]

  • Romine, G., C. S. Schwartz, C. Snyder, J. Anderson, and M. Weisman, 2013: Model bias in a continuously cycled assimilation system and its influence on convection-permitting forecasts. Mon. Wea. Rev., 141, 1263–1284, doi:10.1175/MWR-D-12-00112.1.

  • Schwartz, C. S., and Z. Liu, 2014: Convection-permitting forecasts initialized with continuously cycling limited-area 3DVAR, ensemble Kalman filter, and “hybrid” variational-ensemble data assimilation systems. Mon. Wea. Rev., 142, 716–738, doi:10.1175/MWR-D-13-00100.1.

  • Schwartz, C. S., and Coauthors, 2009: Next-day convection-allowing WRF Model guidance: A second look at 2-km versus 4-km grid spacing. Mon. Wea. Rev., 137, 3351–3372, doi:10.1175/2009MWR2924.1.

  • Schwartz, C. S., and Coauthors, 2010: Toward improved convection-allowing ensembles: Model physics sensitivities and optimizing probabilistic guidance with small ensemble membership. Wea. Forecasting, 25, 263–280, doi:10.1175/2009WAF2222267.1.

  • Skamarock, W. C., and M. L. Weisman, 2009: The impact of positive-definite moisture transport on NWP precipitation forecasts. Mon. Wea. Rev., 137, 488–494, doi:10.1175/2008MWR2583.1.

  • Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp. [Available from UCAR Communications, P.O. Box 3000, Boulder, CO 80307.]

  • Snyder, C., and F. Zhang, 2003: Assimilation of simulated Doppler radar observations with an ensemble Kalman filter. Mon. Wea. Rev., 131, 1663–1677, doi:10.1175/2555.1.

  • Stensrud, D. J., and N. Yussouf, 2007: Reliable probabilistic quantitative precipitation forecasts from a short-range ensemble forecasting system. Wea. Forecasting, 22, 3–17, doi:10.1175/WAF968.1.

  • Tanamachi, R. L., L. J. Wicker, D. C. Dowell, H. B. Bluestein, D. T. Dawson, and M. Xue, 2013: EnKF assimilation of high-resolution, mobile Doppler radar data of the 4 May 2007 Greensburg, Kansas, supercell into a numerical cloud model. Mon. Wea. Rev., 141, 625–648, doi:10.1175/MWR-D-12-00099.1.

  • Tegen, I., P. Hollrig, M. Chin, I. Fung, D. Jacob, and J. Penner, 1997: Contribution of different aerosol species to the global aerosol extinction optical thickness: Estimates from model results. J. Geophys. Res., 102, 23 895–23 915, doi:10.1029/97JD01864.

  • Theis, S. E., A. Hense, and U. Damrath, 2005: Probabilistic precipitation forecasts from a deterministic model: A pragmatic approach. Meteor. Appl., 12, 257–268, doi:10.1017/S1350482705001763.

  • Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, doi:10.1175/2008MWR2387.1.

  • Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.

  • Torn, R. D., 2010: Performance of a mesoscale ensemble Kalman filter (EnKF) during the NOAA High-Resolution Hurricane Test. Mon. Wea. Rev., 138, 4375–4392, doi:10.1175/2010MWR3361.1.

  • Torn, R. D., G. J. Hakim, and C. Snyder, 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, doi:10.1175/MWR3187.1.

  • Vié, B., O. Nuissier, and V. Ducrocq, 2011: Cloud-resolving ensemble simulations of Mediterranean heavy precipitating events: Uncertainty on initial conditions and lateral boundary conditions. Mon. Wea. Rev., 139, 403–423, doi:10.1175/2010MWR3487.1.

  • Wang, X., D. F. Parrish, D. T. Kleist, and J. S. Whitaker, 2013: GSI 3DVAR-based ensemble-variational hybrid data assimilation for NCEP Global Forecast System: Single-resolution experiments. Mon. Wea. Rev., 141, 4098–4117, doi:10.1175/MWR-D-12-00141.1.

  • Wei, M., Z. Toth, R. Wobus, and Y. Zhu, 2008: Initial perturbations based on the ensemble transform (ET) technique in the NCEP global operational forecast system. Tellus, 60A, 62–79, doi:10.1111/j.1600-0870.2007.00273.x.

  • Weisman, M. L., C. A. Davis, W. Wang, K. W. Manning, and J. B. Klemp, 2008: Experiences with 0–36-h explicit convective forecasts with the WRF-ARW Model. Wea. Forecasting, 23, 407–437, doi:10.1175/2007WAF2007005.1.

  • Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP Global Forecast System. Mon. Wea. Rev., 136, 463–482, doi:10.1175/2007MWR2018.1.

  • Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences: An Introduction. 2nd ed. Academic Press, 467 pp.

  • Xue, M., and Coauthors, 2010: CAPS real-time storm-scale ensemble and high-resolution forecasts for the NOAA Hazardous Weather Testbed 2010 Spring Experiment. Preprints, 25th Conf. on Severe Local Storms, Denver, CO, Amer. Meteor. Soc., 7B.3. [Available online at https://ams.confex.com/ams/pdfpapers/176056.pdf.]

  • Zhang, C., Y. Wang, and K. Hamilton, 2011: Improved representation of boundary layer clouds over the southeast Pacific in ARW-WRF using a modified Tiedtke cumulus parameterization scheme. Mon. Wea. Rev., 139, 3489–3513, doi:10.1175/MWR-D-10-05091.1.

  • Zhang, F., C. Snyder, and R. Rotunno, 2003: Effects of moist convection on mesoscale predictability. J. Atmos. Sci., 60, 1173–1185, doi:10.1175/1520-0469(2003)060<1173:EOMCOM>2.0.CO;2.

  • Zhang, F., C. Snyder, and J. Sun, 2004: Impacts of initial estimate and observation availability on convective-scale data assimilation with an ensemble Kalman filter. Mon. Wea. Rev., 132, 1238–1253, doi:10.1175/1520-0493(2004)132<1238:IOIEAO>2.0.CO;2.

  • Zhang, M., F. Zhang, X.-Y. Huang, and X. Zhang, 2011: Intercomparison of an ensemble Kalman filter with three- and four-dimensional variational data assimilation methods in a limited-area model over the month of June 2003. Mon. Wea. Rev., 139, 566–572, doi:10.1175/2010MWR3610.1.

1. The horizontal grid length at which convective parameterization can be safely removed varies among NWP models and geographic regions. Over the United States, horizontal grid spacing of ~4 km is often sufficient to obviate the need for convective parameterization (e.g., Schwartz et al. 2009). However, models with 4-km grid spacing still cannot fully “resolve” convection (e.g., Bryan et al. 2003).

2. Some EnKF systems actually produce forecasts longer than P each analysis–forecast cycle (usually not more than 1.5P) to introduce four-dimensional characteristics into the EnKF (e.g., Wang et al. 2013).

3. Because precipitation verification can be sensitive to the interpolation method and verifying grid-box size (e.g., Gallus 2002; Accadia et al. 2003), we also performed verification after interpolating the model output onto the ST4 grid with a budget interpolation algorithm. This approach yielded the same conclusions as verification on the model grid.
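The budget interpolation mentioned in footnote 3 can be illustrated with a minimal sketch: each destination grid box is assigned the mean of the source field sampled at several sub-points within the box, which approximately conserves the box-average precipitation. This is an illustrative reconstruction under simplifying assumptions (regular 1-D coordinate axes, bilinear sub-sampling), not the code used in the study; the function names are ours.

```python
import numpy as np

def bilinear(field, src_x, src_y, x, y):
    """Bilinear interpolation of field (ny, nx) on regular axes, clamped at edges."""
    ix = int(np.clip(np.searchsorted(src_x, x) - 1, 0, src_x.size - 2))
    iy = int(np.clip(np.searchsorted(src_y, y) - 1, 0, src_y.size - 2))
    tx = np.clip((x - src_x[ix]) / (src_x[ix + 1] - src_x[ix]), 0.0, 1.0)
    ty = np.clip((y - src_y[iy]) / (src_y[iy + 1] - src_y[iy]), 0.0, 1.0)
    return ((1 - tx) * (1 - ty) * field[iy, ix]
            + tx * (1 - ty) * field[iy, ix + 1]
            + (1 - tx) * ty * field[iy + 1, ix]
            + tx * ty * field[iy + 1, ix + 1])

def budget_interp(field, src_x, src_y, dst_x, dst_y, nsub=5):
    """Budget (approximately conserving) remapping: each destination grid box
    receives the mean of bilinear samples at nsub x nsub sub-points in the box."""
    dx = dst_x[1] - dst_x[0]
    dy = dst_y[1] - dst_y[0]
    # sub-point offsets, symmetric about the box center, in box-width units
    offs = (np.arange(nsub) + 0.5) / nsub - 0.5
    out = np.empty((dst_y.size, dst_x.size))
    for j, yc in enumerate(dst_y):
        for i, xc in enumerate(dst_x):
            samples = [bilinear(field, src_x, src_y, xc + ox * dx, yc + oy * dy)
                       for oy in offs for ox in offs]
            out[j, i] = np.mean(samples)
    return out
```

Because the sub-point offsets are symmetric about each box center, constant and linear fields are remapped exactly; for precipitation, averaging over sub-points is what distinguishes this scheme from simple nearest-neighbor or bilinear interpolation at box centers.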

4. We also used r = 100 km for these calculations and obtained similar results.
