• Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.
• Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634–642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.
• Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61, 72–83, doi:10.1111/j.1600-0870.2008.00361.x.
• Anderson, J. L., 2012: Localization and sampling error correction in ensemble Kalman filter data assimilation. Mon. Wea. Rev., 140, 2359–2371, doi:10.1175/MWR-D-11-00013.1.
• Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Arellano, 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 1283–1296, doi:10.1175/2009BAMS2618.1.
• Barker, D. M., and Coauthors, 2012: The Weather Research and Forecasting Model’s community variational/ensemble data assimilation system: WRFDA. Bull. Amer. Meteor. Soc., 93, 831–843, doi:10.1175/BAMS-D-11-00167.1.
• Benjamin, S. G., and Coauthors, 2004: An hourly assimilation–forecast cycle: The RUC. Mon. Wea. Rev., 132, 495–518, doi:10.1175/1520-0493(2004)132<0495:AHACTR>2.0.CO;2.
• Burghardt, B., C. Evans, and P. Roebber, 2014: Assessing the predictability of convection initiation across the high plains using an object-based approach. Wea. Forecasting, 29, 403–418, doi:10.1175/WAF-D-13-00089.1.
• Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model description and implementation. Mon. Wea. Rev., 129, 569–585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.
• CISL, 2012: Yellowstone. Computational and Information Systems Laboratory, National Center for Atmospheric Research. [Available online at http://n2t.net/ark:/85065/d7wd3xhc.]
• Clark, A. J., M. C. Coniglio, B. E. Coffer, G. Thompson, M. Xue, and F. Kong, 2015a: Sensitivity of 24-h forecast dryline position and structure to boundary layer parameterizations in convection-allowing WRF model simulations. Wea. Forecasting, 30, 613–638, doi:10.1175/WAF-D-14-00078.1.
• Clark, A. J., and Coauthors, 2015b: Spring Forecasting Experiment 2015: Program overview and operations plan. NOAA/NSSL/Storm Prediction Center, 24 pp. [Available online at http://hwt.nssl.noaa.gov/Spring_2015/HWT_SFE_2015_OPS_plan_final.pdf.]
• Cohen, A. E., S. M. Cavallo, M. C. Coniglio, and H. E. Brooks, 2015: A review of planetary boundary layer parameterization schemes and their sensitivity in simulating southeastern U.S. cold season severe weather environments. Wea. Forecasting, 30, 591–612, doi:10.1175/WAF-D-14-00105.1.
• Coniglio, M. C., J. Correia Jr., P. T. Marsh, and F. Kong, 2013: Verification of convection-allowing WRF model forecasts of the planetary boundary layer using sounding observations. Wea. Forecasting, 28, 842–862, doi:10.1175/WAF-D-12-00103.1.
• Coniglio, M. C., S. M. Hitchcock, and K. H. Knopfmeier, 2016: Impact of assimilating preconvective upsonde observations on short-term forecasts of convection observed during MPEX. Mon. Wea. Rev., 144, 4301–4325, doi:10.1175/MWR-D-16-0091.1.
• Crook, A. N., 1996: Sensitivity of moist convection forced by boundary layer processes to low-level thermodynamic fields. Mon. Wea. Rev., 124, 1767–1785, doi:10.1175/1520-0493(1996)124<1767:SOMCFB>2.0.CO;2.
• Done, J., C. Davis, and M. Weisman, 2004: The next generation of NWP: Explicit forecasts of convection using the Weather Research and Forecasting (WRF) model. Atmos. Sci. Lett., 5, 110–117, doi:10.1002/asl.72.
• Doswell, C. A., III, 1987: The distinction between large-scale and mesoscale contribution to severe convection: A case study example. Wea. Forecasting, 2, 3–16, doi:10.1175/1520-0434(1987)002<0003:TDBLSA>2.0.CO;2.
• Doswell, C. A., III, 2004: Weather forecasting by humans—Heuristics and decision making. Wea. Forecasting, 19, 1115–1126, doi:10.1175/WAF-821.1.
• Duda, J. D., and W. A. Gallus Jr., 2013: The impact of large-scale forcing on skill of simulated convective initiation and upscale evolution with convection-allowing grid spacings in the WRF. Wea. Forecasting, 28, 994–1018, doi:10.1175/WAF-D-13-00005.1.
• Fowle, M. A., and P. J. Roebber, 2003: Short-range (0–48 h) numerical prediction of convective occurrence, mode, and location. Wea. Forecasting, 18, 782–794, doi:10.1175/1520-0434(2003)018<0782:SHNPOC>2.0.CO;2.
• Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, doi:10.1002/qj.49712555417.
• Gremillion, M. S., and R. E. Orville, 1999: Thunderstorm characteristics of cloud-to-ground lightning at the Kennedy Space Center, Florida: A study of lightning initiation signatures as indicated by the WSR-88D. Wea. Forecasting, 14, 640–649, doi:10.1175/1520-0434(1999)014<0640:TCOCTG>2.0.CO;2.
• Hong, S.-Y., and H.-L. Pan, 1996: Nonlocal boundary layer vertical diffusion in a medium-range forecast model. Mon. Wea. Rev., 124, 2322–2339, doi:10.1175/1520-0493(1996)124<2322:NBLVDI>2.0.CO;2.
• Houston, A. L., and D. Niyogi, 2007: The sensitivity of convective initiation to the lapse rate of the active cloud-bearing layer. Mon. Wea. Rev., 135, 3013–3032, doi:10.1175/MWR3449.1.
• Hu, X.-M., J. W. Nielsen-Gammon, and F. Zhang, 2010: Evaluation of three planetary boundary layer schemes in the WRF model. J. Appl. Meteor. Climatol., 49, 1831–1843, doi:10.1175/2010JAMC2432.1.
• Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, doi:10.1029/2008JD009944.
• Janjić, Z. I., 1994: The step-mountain eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927–945, doi:10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.
• Jirak, I., A. Clark, J. Correia, K. Knopfmeier, C. Melick, B. Twiest, M. Coniglio, and S. Weiss, 2015: Spring Forecasting Experiment 2015: Preliminary findings and results. NOAA/NSSL/Storm Prediction Center, 32 pp. [Available online at http://hwt.nssl.noaa.gov/Spring_2015/HWT_SFE_2015_Prelim_Findings_Final.pdf.]
• Jorgensen, D. P., and T. M. Weckwerth, 2003: Forcing and organization of convective systems. Radar and Atmospheric Science: A Collection of Essays in Honor of David Atlas, Meteor. Monogr., No. 52, Amer. Meteor. Soc., 75–103.
• Kain, J. S., S. J. Weiss, J. J. Levit, M. E. Baldwin, and D. R. Bright, 2006: Examination of convection-allowing configurations of the WRF model for the prediction of severe convective weather: The SPC/NSSL Spring Program 2004. Wea. Forecasting, 21, 167–181, doi:10.1175/WAF906.1.
• Kain, J. S., and Coauthors, 2008: Some practical considerations regarding horizontal resolution in the first generation of operational convection-allowing NWP. Wea. Forecasting, 23, 931–952, doi:10.1175/WAF2007106.1.
• Kain, J. S., and Coauthors, 2013: A feasibility study for probabilistic convection initiation forecasts based on explicit numerical guidance. Bull. Amer. Meteor. Soc., 94, 1213–1225, doi:10.1175/BAMS-D-11-00264.1.
• Kain, J. S., and Coauthors, 2017: Collaborative efforts between the U.S. and U.K. to advance prediction of high-impact weather. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-15-00199.1, in press.
• Lakshmanan, V., 2012: Automating the Analysis of Spatial Grids: A Practical Guide to Data Mining Geospatial Images for Human and Environmental Applications. Springer, 320 pp.
• Lakshmanan, V., and T. Smith, 2010: An objective method of evaluating and devising storm-tracking algorithms. Wea. Forecasting, 25, 701–709, doi:10.1175/2009WAF2222330.1.
• Lakshmanan, V., and T. W. Humphrey, 2014: A MapReduce technique to mosaic continental-scale weather radar data in real-time. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 7, 721–732, doi:10.1109/JSTARS.2013.2282040.
• Lakshmanan, V., T. Smith, K. Hondl, G. J. Stumpf, and A. Witt, 2006: A real-time, three-dimensional, rapidly updating, heterogeneous radar merger technique for reflectivity, velocity, and derived products. Wea. Forecasting, 21, 802–823, doi:10.1175/WAF942.1.
• Lakshmanan, V., T. Smith, G. J. Stumpf, and K. Hondl, 2007: The Warning Decision Support System–Integrated Information. Wea. Forecasting, 22, 596–612, doi:10.1175/WAF1009.1.
• Lakshmanan, V., K. Hondl, and R. Rabin, 2009: An efficient, general-purpose technique for identifying storm cells in geospatial images. J. Atmos. Oceanic Technol., 26, 523–537, doi:10.1175/2008JTECHA1153.1.
• Lakshmanan, V., C. Karstens, J. Krause, and L. Tang, 2014: Quality control of weather radar data using polarimetric variables. J. Atmos. Oceanic Technol., 31, 1234–1249, doi:10.1175/JTECH-D-13-00073.1.
• Lee, B. D., R. D. Farley, and M. R. Hjelmfelt, 1991: A numerical case study of convection initiation along colliding convergence boundaries in northeast Colorado. J. Atmos. Sci., 48, 2350–2366, doi:10.1175/1520-0469(1991)048<2350:ANCSOC>2.0.CO;2.
• Lilly, D. K., 1990: Numerical prediction of thunderstorms—Has its time come? Quart. J. Roy. Meteor. Soc., 116, 779–798, doi:10.1002/qj.49711649402.
• Lock, N. A., and A. L. Houston, 2015: Spatiotemporal distribution of thunderstorm initiation in the US Great Plains from 2005 to 2007. Int. J. Climatol., 35, 4047–4056, doi:10.1002/joc.4261.
• Markowski, P., and C. Hannon, 2006: Multiple-Doppler radar observations of the evolution of vorticity extrema in a convective boundary layer. Mon. Wea. Rev., 134, 355–374, doi:10.1175/MWR3060.1.
• Markowski, P., and Y. Richardson, 2010: Mesoscale Meteorology in Midlatitudes. Wiley-Blackwell, 397 pp.
• Murphey, H. V., R. M. Wakimoto, C. Flamant, and D. E. Kingsmill, 2006: Dryline on 19 June 2002 during IHOP. Part I: Airborne Doppler and LEANDRE II analyses of the thin line structure and convection initiation. Mon. Wea. Rev., 134, 406–430, doi:10.1175/MWR3063.1.
• Nakanishi, M., and H. Niino, 2009: Development of an improved turbulence closure model for the atmospheric boundary layer. J. Meteor. Soc. Japan, 87, 895–912, doi:10.2151/jmsj.87.895.
• Pleim, J. E., 2007: A combined local and nonlocal closure model for the atmospheric boundary layer. Part I: Model description and testing. J. Appl. Meteor. Climatol., 46, 1383–1395, doi:10.1175/JAM2539.1.
• Roebber, P. J., 2009: Visualizing multiple measures of forecast quality. Wea. Forecasting, 24, 601–608, doi:10.1175/2008WAF2222159.1.
• Roebber, P. J., D. M. Schultz, B. A. Colle, and D. J. Stensrud, 2004: Toward improved prediction: High-resolution and ensemble modeling systems in operations. Wea. Forecasting, 19, 936–949, doi:10.1175/1520-0434(2004)019<0936:TIPHAE>2.0.CO;2.
• Romine, G. S., C. S. Schwartz, C. Snyder, J. L. Anderson, and M. L. Weisman, 2013: Model bias in a continuously cycled assimilation system and its influence on convection-permitting forecasts. Mon. Wea. Rev., 141, 1263–1284, doi:10.1175/MWR-D-12-00112.1.
• Romine, G. S., C. S. Schwartz, J. Berner, K. R. Fossell, C. Snyder, J. L. Anderson, and M. L. Weisman, 2014: Representing forecast error in a convection-permitting ensemble forecast system. Mon. Wea. Rev., 142, 4519–4541, doi:10.1175/MWR-D-14-00100.1.
• Romine, G. S., C. S. Schwartz, R. D. Torn, and M. L. Weisman, 2016: Impact of assimilating dropsonde observations from MPEX on ensemble forecasts of severe weather events. Mon. Wea. Rev., 144, 3799–3823, doi:10.1175/MWR-D-15-0407.1.
• Schumacher, R. S., 2015: Resolution dependence of initiation and upscale growth of deep convection in convection-allowing forecasts of the 31 May–1 June 2013 supercell and MCS. Mon. Wea. Rev., 143, 4331–4354, doi:10.1175/MWR-D-15-0179.1.
• Schwartz, C. S., G. S. Romine, M. L. Weisman, R. A. Sobash, K. R. Fossell, K. W. Manning, and S. B. Trier, 2015: A real-time convection-allowing ensemble prediction system initialized by mesoscale ensemble Kalman filter analyses. Wea. Forecasting, 30, 1158–1181, doi:10.1175/WAF-D-15-0013.1.
• Skamarock, W. C., and M. L. Weisman, 2009: The impact of positive-definite moisture transport on NWP precipitation forecasts. Mon. Wea. Rev., 137, 488–494, doi:10.1175/2008MWR2583.1.
• Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
• Skinner, P. S., L. J. Wicker, D. M. Wheatley, and K. H. Knopfmeier, 2016: Application of two spatial verification methods to ensemble forecasts of low-level rotation. Wea. Forecasting, 31, 713–735, doi:10.1175/WAF-D-15-0129.1.
• Stensrud, D. J., 2007: Parameterization Schemes: Keys to Understanding Numerical Weather Prediction Models. Cambridge University Press, 459 pp.
• Stratman, D. R., M. C. Coniglio, S. E. Koch, and M. Xue, 2013: Use of multiple verification methods to evaluate forecasts of convection from hot- and cold-start convection-allowing models. Wea. Forecasting, 28, 119–138, doi:10.1175/WAF-D-12-00022.1.
• Sukoriansky, S., B. Galperin, and V. Perov, 2005: Application of a new spectral theory of stably stratified turbulence to the atmospheric boundary layer over sea ice. Bound.-Layer Meteor., 117, 231–257, doi:10.1007/s10546-004-6848-4.
• Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, doi:10.1175/2008MWR2387.1.
• Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.
• Torn, R. D., 2010: Performance of a mesoscale ensemble Kalman filter (EnKF) during the NOAA High-Resolution Hurricane test. Mon. Wea. Rev., 138, 4375–4392, doi:10.1175/2010MWR3361.1.
• Torn, R. D., and C. A. Davis, 2012: The influence of shallow convection on tropical cyclone track forecasts. Mon. Wea. Rev., 140, 2188–2197, doi:10.1175/MWR-D-11-00246.1.
• Torn, R. D., and G. S. Romine, 2015: Sensitivity of central Oklahoma convection forecasts to upstream potential vorticity anomalies during two strongly forced cases during MPEX. Mon. Wea. Rev., 143, 4064–4087, doi:10.1175/MWR-D-15-0085.1.
• Torn, R. D., G. J. Hakim, and C. Snyder, 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, doi:10.1175/MWR3187.1.
• Trapp, R. J., D. J. Stensrud, M. C. Coniglio, R. S. Schumacher, M. E. Baldwin, S. Waugh, and D. T. Conlee, 2016: Mobile radiosonde deployments during the Mesoscale Predictability Experiment (MPEX): Rapid and adaptive sampling of upscale convective feedbacks. Bull. Amer. Meteor. Soc., 97, 329–336, doi:10.1175/BAMS-D-14-00258.1.
• Trier, S. B., G. S. Romine, D. A. Ahijevych, R. J. Trapp, R. S. Schumacher, M. C. Coniglio, and D. J. Stensrud, 2015: Mesoscale thermodynamic influences on convection initiation near a surface dryline in a convection-permitting ensemble. Mon. Wea. Rev., 143, 3726–3753, doi:10.1175/MWR-D-15-0133.1.
• Van Klooster, S. L., and P. J. Roebber, 2009: Surface-based convective potential in the contiguous United States in a business-as-usual future climate. J. Climate, 22, 3317–3330, doi:10.1175/2009JCLI2697.1.
• Walters, D. N., and Coauthors, 2011: The Met Office Unified Model Global Atmosphere 3.0/3.1 and JULES Global Land 3.0/3.1 configurations. Geosci. Model Dev., 4, 919–941, doi:10.5194/gmd-4-919-2011.
• Weckwerth, T. M., and D. B. Parsons, 2006: A review of convection initiation and motivation for IHOP_2002. Mon. Wea. Rev., 134, 5–22, doi:10.1175/MWR3067.1.
• Weckwerth, T. M., J. W. Wilson, and R. M. Wakimoto, 1996: Thermodynamic variability within the convective boundary layer due to horizontal convective rolls. Mon. Wea. Rev., 124, 769–784, doi:10.1175/1520-0493(1996)124<0769:TVWTCB>2.0.CO;2.
• Weckwerth, T. M., H. V. Murphey, C. Flamant, J. Goldstein, and C. R. Pettet, 2008: An observational study of convection initiation on 12 June 2002 during IHOP_2002. Mon. Wea. Rev., 136, 2283–2304, doi:10.1175/2007MWR2128.1.
• Weisman, M. L., C. Davis, W. Wang, K. W. Manning, and J. B. Klemp, 2008: Experiences with 0–36-h explicit convective forecasts with the WRF-ARW model. Wea. Forecasting, 23, 407–437, doi:10.1175/2007WAF2007005.1.
• Weisman, M. L., and Coauthors, 2015: The Mesoscale Predictability Experiment (MPEX). Bull. Amer. Meteor. Soc., 96, 2127–2149, doi:10.1175/BAMS-D-13-00281.1.
• Weiss, S., and Coauthors, 2011: Experimental Forecast Program Spring Experiment 2011: Program overview and operations plan. NOAA/NSSL/Storm Prediction Center, 62 pp. [Available online at http://hwt.nssl.noaa.gov/Spring_2011/Spring_Experiment_2011_ops_plan_13May_v5.pdf.]
• Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. Academic Press, 500 pp.
• Wilks, D. S., 2010: Sampling distributions of the Brier score and Brier skill score under serial dependence. Quart. J. Roy. Meteor. Soc., 136, 2109–2118, doi:10.1002/qj.709.
• Wilson, J. W., and R. D. Roberts, 2006: Summary of convective storm initiation and evolution during IHOP: Observational and modeling perspective. Mon. Wea. Rev., 134, 23–47, doi:10.1175/MWR3069.1.
• Xue, M., and W. J. Martin, 2006a: A high-resolution modeling study of the 24 May 2002 dryline case during IHOP. Part I: Numerical simulation and general evolution of the dryline and convection. Mon. Wea. Rev., 134, 149–171, doi:10.1175/MWR3071.1.
• Xue, M., and W. J. Martin, 2006b: A high-resolution modeling study of the 24 May 2002 dryline case during IHOP. Part II: Horizontal convective rolls and convective initiation. Mon. Wea. Rev., 134, 172–191, doi:10.1175/MWR3072.1.
• Zhang, C., Y. Wang, and K. Hamilton, 2011: Improved representation of boundary layer clouds over the southeast Pacific in ARW-WRF using a modified Tiedtke cumulus parameterization scheme. Mon. Wea. Rev., 139, 3489–3513, doi:10.1175/MWR-D-10-05091.1.
• Ziegler, C. L., T. J. Lee, and R. A. Pielke, 1997: Convective initiation at the dryline: A modeling study. Mon. Wea. Rev., 125, 1001–1026, doi:10.1175/1520-0493(1997)125<1001:CIATDA>2.0.CO;2.
Figure captions:

Fig. 1. The 1500 UTC 0-h 20-km Rapid Refresh–analyzed (left) 500-hPa geopotential height (contour; m), wind (barbs; half-flag, 5 kt, where 1 kt = 0.51 m s−1; full flag, 10 kt; pennant, 50 kt), and wind speed (shaded per the top color bar; kt), and (right) mean sea level pressure (contour; hPa), 10-m wind (barbs; half-flag, 5 kt; full flag, 10 kt; pennant, 50 kt), and surface-based CAPE (shaded per the bottom color bar; J kg−1) for (a),(d) RF4, (b),(e) RF10, and (c),(f) RF12. In all panels, the NCAR G-V aircraft track along which targeted dropsonde observations are collected is depicted by the dark green line except for the final hour, which is depicted by the red line.

Fig. 2. Areal extent of the outer (D01) and inner (D02) simulation domains. The outer domain is used for both cycled analysis and ensemble forecasts, while the inner domain is used only for ensemble forecasts.

Fig. 3. WSR-88D sites (blue circles, selected stations labeled) from which level II reflectivity data are used to identify observed CI events using WDSS-II. The black box represents the areal bounds of the 0.03° × 0.03° grid to which observed and simulated radar reflectivities at the −10°C level are interpolated for verification purposes.

Fig. 4. Representative example of WDSS-II object identification and tracking. (left) Observed reflectivity at the −10°C level on the merged analysis grid (shaded; dBZ, 35 dBZ contoured). (right) WDSS-II-identified convective objects (shaded; each color represents a different object).

Fig. 5. CI event cumulative distributions for (a) RF4, (b) RF10, and (c) RF12. Note the different vertical axes in each panel. The ACM2, MYJ, MYNN, QNSE, and YSU ensemble mean CI event counts are given by the orange, blue, green, purple, and red lines, respectively, while the observed CI event count is given by the solid black line. Local closure parameterizations in this and subsequent figures are listed with (L) in the legend and solid lines; nonlocal closure parameterizations are listed with (NL) in the legend and dashed lines.

Fig. 6. Paintball plot for RF4 from a randomly selected representative ensemble member for the (top left) ACM2, (top center) MYJ, (top right) MYNN, (bottom left) QNSE, and (bottom center) YSU ensembles. Each paintball, or dot, reflects a simulated CI event, drawn at the location of the simulated event and color coded by the time of the simulated event per the legend at right. (bottom right) The corresponding observed CI events are depicted.

Fig. 7. As in Fig. 6, but for RF12.

Fig. 8. As in Fig. 6, but for RF10.

Fig. 9. Temporal bias distributions for matched CI events, aggregated over all members for each ensemble (shaded horizontal bars), for (a) RF4, (b) RF10, and (c) RF12. Binning increment is 5 min. Positive (negative) values indicate early (late) forecast biases. The shaded box with each distribution indicates the mean (notches), median (open dots), and ±1 standard deviation (box extents) temporal bias. The shaded curve at the left of each distribution is a polynomial fit to the data, obtained using kernel density estimation, while the shaded curve at the right of each distribution is a normal distribution with mean and standard deviation equal to that of the data. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Fig. 10. As in Fig. 9, but for spatial error. Binning increment is 5 km.

Fig. 11. Histograms, scaled by the maximum value within the distribution, of misses (above the zero line) and false alarms (below the zero line) aggregated over all ensemble members in 5-min bins for (a) RF4, (b) RF10, and (c) RF12. The solid curve and shading underneath represent a polynomial fit to the scaled histogram data. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Fig. 12. Performance diagrams (Roebber 2009) for (a) RF4, (b) RF10, and (c) RF12 at the 100 km/1 h spatiotemporal verification threshold. Bias curves are given by the light-blue lines originating in the bottom-left corner of each panel and CSI curves are given by the black hyperbolic curves. Large circles represent ensemble mean statistics and small boxes represent individual ensemble member statistics. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Fig. 13. As in Fig. 12, but for ensemble mean statistics as a function of varying spatiotemporal verification threshold per the legend at right.

Fig. 14. Mean error of (left) ensemble mean 2-m T (°C) and (right) 2-m Td (°C) relative to MADIS METAR observations between 1500 and 0600 UTC for (top) RF4 (circles), (middle) RF10 (squares), and (bottom) RF12 (triangles). ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Fig. 15. Locations of MPEX mobile (red squares) and NWS (green circles) rawinsonde observations used to evaluate vertical profiles of mean error for selected variables for (a) RF4, (b) RF10, and (c) RF12. Shaded in each panel per the color bar at bottom is the 1500 UTC Rapid Refresh 3–9-h forecast accumulated precipitation (mm).

Fig. 16. Vertical profiles between 1000 and 600 hPa of mean error of (left) ensemble mean temperature (°C) and (right) dewpoint temperature (°C) for (a),(d) RF4 (circles), (b),(e) RF10 (squares), and (c),(f) RF12 (triangles) for the rawinsonde profiles indicated in Fig. 15. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively. Note the nonuniform horizontal axes between cases for the panels in (d)–(f).

Fig. 17. Skew T–logp diagram valid 1800 UTC 31 May 2013 (during RF10) from Springfield, MO (SGF; 37.23°N, 93.4°W). Temperature (dewpoint temperature) (°C) is depicted in solid (dashed) lines. The observed profile is in black, while 3-h forecast profiles from a randomly selected representative ensemble member are depicted in orange, blue, purple, green, and red for the ACM2, MYJ, MYNN, QNSE, and YSU PBL parameterizations, respectively. At left, the vertical profile of relative humidity (%, per the horizontal axis labeled at top left) is included.


The Influence of PBL Parameterization on the Practical Predictability of Convection Initiation during the Mesoscale Predictability Experiment (MPEX)

Atmospheric Science Program, Department of Mathematical Sciences, University of Wisconsin–Milwaukee, Milwaukee, Wisconsin

Abstract

This study evaluates the influence of planetary boundary layer parameterization on short-range (0–15 h) convection initiation (CI) forecasts within convection-allowing ensembles that utilize subsynoptic-scale observations collected during the Mesoscale Predictability Experiment. Three cases, 19–20 May, 31 May–1 June, and 8–9 June 2013, are considered, each characterized by a different large-scale flow pattern. An object-based method is used to verify and analyze CI forecasts. Local mixing parameterizations have, relative to nonlocal mixing parameterizations, higher probabilities of detection but also higher false alarm ratios, such that ensemble mean forecast skill varies only subtly between the parameterizations considered. Temporal error distributions associated with matched events are approximately normal around a zero mean, suggesting little systematic timing bias. Spatial error distributions are skewed, with average mean (median) distance errors of approximately 44 km (28 km). Matched event cumulative distribution functions suggest limited forecast skill increases beyond temporal and spatial thresholds of 1 h and 100 km, respectively. Forecast skill varies most between cases, with smaller variation between PBL parameterizations or between individual ensemble members for a given case, implying that larger-scale features exert greater control on CI forecast skill than does PBL parameterization. In agreement with previous studies, local mixing parameterizations tend to produce simulated boundary layers that are too shallow, cool, and moist, while nonlocal mixing parameterizations produce boundary layers that are deeper, warmer, and drier. Forecasts poorly resolve strong capping inversions across all parameterizations, which is hypothesized to result primarily from implicit numerical diffusion associated with the default finite-differencing formulation for vertical advection used herein.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Prof. Clark Evans, evans36@uwm.edu


1. Introduction

In his seminal work on the numerical prediction of thunderstorms, Lilly (1990) argues that “the most difficult prediction task is to foresee development of the first convective storm in an area,” or convection initiation (CI). CI represents the culmination of a series of physical processes by which air parcels are brought to their level of free convection by means of a triggering mechanism and subsequently remain positively buoyant over a great vertical depth (Doswell 1987; Markowski and Richardson 2010). Thunderstorms that result from CI have the potential to impact life and property, particularly when accompanied by severe wind, hail, tornadoes, and/or flash flooding; Weiss et al. (2011) note that thunderstorm-related hazards result in an estimated $4 billion in annual damage across the United States. CI can result from many triggering mechanisms, including but not limited to frontal boundaries, drylines, gust fronts, undular bores, orographic circulations, and elevated convergence (Jorgensen and Weckwerth 2003; Weckwerth and Parsons 2006). CI is also sensitive to local variability in near-surface temperature, moisture, convergence, and lapse rate of a magnitude comparable to observational uncertainty (e.g., Lee et al. 1991; Crook 1996; Weckwerth et al. 1996; Houston and Niyogi 2007). Accordingly, Markowski and Richardson (2010) argue that CI forecasting skill has advanced more slowly than our ability to forecast convective storm type, organization, and accompanying severe weather threats.

The accurate prediction of CI is a problem that spans across the synoptic, meso-α, meso-β, meso-γ, and microscales and is highly sensitive to the interactions between the surface and free atmosphere that occur within the planetary boundary layer (PBL; Stensrud 2007). Spatial variability in PBL structure reflects the variability in low-level moisture, temperature, and winds, which in turn affects forcing for vertical motion, thermodynamic instability, convective inhibition, and, by extension, CI (e.g., Roebber et al. 2004; Hu et al. 2010; Coniglio et al. 2013; Clark et al. 2015a; Cohen et al. 2015). PBL parameterizations influence the subgrid horizontal variability within the meso-β environment, the subgrid vertical transport of turbulent kinetic energy (TKE), and the moistening of the meso-γ environment (Ziegler et al. 1997; Murphey et al. 2006; Markowski and Hannon 2006; Xue and Martin 2006b; Weckwerth et al. 2008).

Short-range CI predictability has been examined utilizing both convection-parameterizing (generally, Δx > 10 km) and convection-allowing (generally, Δx ≤ 4 km) numerical simulations (e.g., Fowle and Roebber 2003; Done et al. 2004; Wilson and Roberts 2006; Xue and Martin 2006a,b; Kain et al. 2006, 2008; Weisman et al. 2008; Duda and Gallus 2013; Kain et al. 2013; Burghardt et al. 2014). In general, studies conducted prior to 2010 indicate that skillful CI forecasts to lead times of 24–48 h are possible, at least when verified on the meso-α to synoptic scales, and that convection-allowing forecasts are more skillful than convection-parameterizing forecasts. More recent studies that exclusively utilize convection-allowing forecast frameworks have refined our understanding of the scales at which CI forecasts have meaningful skill.

Duda and Gallus (2013) examine the predictability of warm-season CI preceding mesoscale convective system formation in convection-allowing numerical forecasts, finding a mean 105-km initiation displacement error with no timing bias. Burghardt et al. (2014) examine warm-season CI predictability in the high plains using an object-based approach within subkilometer horizontal grid-spacing deterministic forecasts. For correctly forecast CI events within 100-km and 1-h matching thresholds, they identify an average distance error of 38.4 km and average timing error of 2.78 min. However, model forecasts generally overforecast CI events, particularly near elevated terrain and, thus, have relatively low skill at this and finer matching thresholds. Neither Duda and Gallus (2013) nor Burghardt et al. (2014) identify a statistically significant link between large-scale forcing magnitude and forecast skill. Within ensemble numerical simulations, Kain et al. (2013) find that many key aspects of CI are explicitly represented with a relatively high probability of detection and minimal systematic bias in the timing of the first CI event, albeit within rather coarse spatial and temporal matching thresholds. Kain et al. (2013) also find that automated CI identification algorithms do not discriminate between CI events preceding weak versus strong convection, leading them to suggest that, while important, automated CI identification algorithms are “often inadequate indicators of impending hazardous or disruptive weather.”

Previous studies have examined PBL parameterization influences upon specific forecast elements within convection-allowing numerical forecasts. In short-range warm-season numerical simulations, Hu et al. (2010) find that the Asymmetric Convective Model version 2 (ACM2; Pleim 2007) and Yonsei University (YSU; Hong and Pan 1996) parameterizations are characterized by overly vigorous PBL vertical mixing relative to observations and to the Mellor–Yamada–Janjić (MYJ; Janjić 1994) parameterization, itself characterized by insufficient mixing. Similarly, Coniglio et al. (2013) examine the ability of five PBL parameterizations—ACM2, MYJ, Mellor–Yamada–Nakanishi–Niino level 2.5 closure (MYNN; Nakanishi and Niino 2009), quasi-normal scale elimination (QNSE; Sukoriansky et al. 2005), and YSU—to accurately forecast warm-season preconvective environments to 36-h lead times. Consistent with Hu et al. (2010), they find that local closure parameterizations such as MYJ and QNSE typically simulate PBLs that are too shallow and moist, while nonlocal closure parameterizations such as YSU and ACM2 typically simulate PBLs that are too deep and dry. They caution, however, that improved PBL structure may not necessarily translate to better performance for specific forecast elements. To this point, Clark et al. (2015a) identify differences in warm-season dryline position biases that result from PBL parameterization variation, with the best-performing parameterization (MYNN) in Coniglio et al. (2013) having the largest average eastward dryline position bias. Additionally, Cohen et al. (2015) identify a tendency for local closure parameterizations to stunt the growth of the daytime PBL relative to nonlocal closure parameterizations for cool-season southeast U.S. severe weather environments.

The Mesoscale Predictability Experiment (MPEX; Weisman et al. 2015) is motivated in part by the hypothesis that targeted subsynoptic-scale observations collected in the upstream, preconvective environment across the Intermountain West and their subsequent assimilation into convection-allowing numerical forecasts will result in significant improvements in forecasts of convection timing, location, mode, and downstream evolution. Several published studies utilize data and numerical simulations collected during MPEX to study, at least in part, CI and its predictability. Schumacher (2015) finds that finer horizontal grid spacing results in degraded forecast quality for the initiation of the convective feature that produced a strong tornado near El Reno, Oklahoma, and flash flooding near Oklahoma City, Oklahoma, on 31 May–1 June 2013. Using output from a 10-member convection-allowing ensemble, Trier et al. (2015) demonstrate that the minimum buoyancy of PBL air parcels must be small in order for CI to occur. A. M. Keclik et al. (2017, unpublished manuscript, hereafter KERR) find that the assimilation of MPEX targeted observations into model ensemble initial conditions does not result in a statistically significant improvement in CI forecast skill over the set of MPEX cases considered, contrasting with Romine et al. (2016), who identified a small but statistically significant increase in precipitation forecast skill. This apparent discrepancy may result from observation targeting that focused on ensemble variation in accumulated precipitation rather than CI and/or the different verification methods used by these studies.

Although previous investigators have quantified PBL parameterization control on boundary layer thermodynamic fields known to be important for CI, the specific control of PBL parameterization on CI has yet to be explicitly quantified. To that end, this study analyzes the control exerted by PBL parameterization on the forecast skill of warm-season CI at 0–15-h lead times within convection-allowing ensemble forecasts initialized using initial conditions that incorporate targeted observations gathered during the MPEX field campaign. The remainder of this study is structured as follows. The methodology, including the cases studied, the model and ensemble analysis configuration, and the CI verification techniques, is presented in section 2. Results are discussed in section 3, and a summary and conclusions are provided in section 4.

2. Data and methodology

a. Event descriptions

Three cases from the set of all MPEX research flights (RFs), each characterized by different large-scale flow directions, are considered: 19 May (RF4), 31 May (RF10), and 8 June (RF12) 2013. RF4 is characterized by a middle-tropospheric trough over the Intermountain West with southwesterly flow aloft across the central Great Plains (Fig. 1a) and a surface cold front extending south-southwestward from a surface cyclone along the North Dakota–South Dakota border into western Texas (Fig. 1d). In this case, CI occurs primarily during the local afternoon hours across Oklahoma, Kansas, and Nebraska (section 3a) along the eastward-advancing cold front and a prefrontal trough in a region of moderate surface-based CAPE (Fig. 1d).

Fig. 1. The 1500 UTC 0-h 20-km Rapid Refresh–analyzed (left) 500-hPa geopotential height (contour; m), wind (barbs; half-flag, 5 kt, where 1 kt = 0.51 m s−1; full flag, 10 kt; pennant, 50 kt), and wind speed (shaded per the top color bar; kt), and (right) mean sea level pressure (contour; hPa), 10-m wind (barbs; half-flag, 5 kt; full flag, 10 kt; pennant, 50 kt), and surface-based CAPE (shaded per the bottom color bar; J kg−1) for (a),(d) RF4, (b),(e) RF10, and (c),(f) RF12. In all panels, the NCAR G-V aircraft track along which targeted dropsonde observations are collected is depicted by the dark green line except for the final hour, which is depicted by the red line.

RF10 is characterized by westerly middle-tropospheric flow across the central and southern Great Plains south of a closed upper-tropospheric low centered over South Dakota (Fig. 1b), a surface cold front that extends southwestward from eastern South Dakota to western Kansas, and a dryline along the Oklahoma and Texas Panhandles (Fig. 1e). In this case, CI occurs primarily during the local late afternoon and evening hours from southwestern Missouri to central Oklahoma (section 3a) along the aforementioned surface cold front and dryline in a region of large surface-based CAPE (Fig. 1e). The subset of CI events in this case that occurs along the intersection of the surface cold front and dryline in west-central Oklahoma is the focus of Schumacher (2015).

RF12 is characterized by northwesterly middle-tropospheric flow from the Intermountain West to the central Great Plains in conjunction with a weak short wave over the northern high plains (Fig. 1c). At the surface, weak lee cyclogenesis is noted along the U.S.–Canada border, in South Dakota, and in Colorado, with a north–south-oriented surface cold front located across the Great Plains (Fig. 1f). In this case, CI occurs throughout the forecast period from southeast Nebraska and southwest Iowa to the Texas and Oklahoma Panhandles (section 3a) along the southeastward-advancing cold front in a region of small-to-moderate CAPE (Fig. 1f). In all three cases, most CI events are surface based and occur along well-defined frontal boundaries.

b. Model configuration

Version 3.4.1 of the Advanced Research version of the WRF (WRF-ARW; Skamarock et al. 2008) Model is used to conduct ensemble forecasts of the three cases described in the previous section. The forecast model configuration closely resembles those described in Schwartz et al. (2015), Torn and Romine (2015), and Romine et al. (2016). A two-way nested domain configuration is used with horizontal grid spacing Δx = 15 km (3 km) on the outer (inner) domain, 40 vertical levels (including 9 in the PBL, with finer refinement nearer the surface), and a model top of 50 hPa. The horizontal grid covers 415 × 325 (1046 × 871) grid points on the outer (inner) domain, with the outer domain centered at 39.0°N, 101.0°W (Fig. 2). Only output from the inner domain is considered for the results. Selected physical parameterizations on both domains include the Thompson et al. (2008) bulk microphysical, Rapid Radiative Transfer Model for global climate models (RRTMG; Iacono et al. 2008) longwave and shortwave radiation, and Noah land surface model (Chen and Dudhia 2001) parameterizations. Convection is parameterized using the Tiedtke cumulus parameterization (Tiedtke 1989; Zhang et al. 2011) on the outer domain only, chosen given its superior treatment of shallow convection [albeit demonstrated primarily for tropical cyclone–focused applications; Torn and Davis (2012)]. Positive-definite moisture advection is used on both domains (Skamarock and Weisman 2009). Simulations begin at 1500 UTC for each case and extend forward 15 h. The generation of initial and lateral boundary conditions for ensemble forecasts is described in section 2c. The PBL parameterizations selected for ensemble forecasts are described in section 2d; note, however, that the cycled analysis system used to generate the ensemble initial and lateral boundary conditions uses the MYJ PBL parameterization.
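For orientation, the configuration described above maps onto WRF namelist options approximately as sketched below, written here as Python dictionaries. This is an illustrative summary rather than the authors' actual namelist; the option codes follow standard WRF v3.4-era conventions, and the PBL codes anticipate the ensembles described in section 2d.

```python
# Schematic of the forecast configuration (section 2b) in WRF namelist terms.
# Illustrative only; not the authors' actual namelist.
physics = {
    "mp_physics": 8,           # Thompson et al. (2008) bulk microphysics
    "ra_lw_physics": 4,        # RRTMG longwave (Iacono et al. 2008)
    "ra_sw_physics": 4,        # RRTMG shortwave
    "sf_surface_physics": 2,   # Noah land surface model (Chen and Dudhia 2001)
    "cu_physics": (6, 0),      # Tiedtke cumulus on D01; explicit convection on D02
    "moist_adv_opt": 1,        # positive-definite moisture advection, both domains
}

domains = {
    "dx": (15000.0, 3000.0),   # horizontal grid spacing (m): D01, D02
    "e_we": (415, 1046),       # west-east grid points: D01, D02
    "e_sn": (325, 871),        # south-north grid points: D01, D02
    "e_vert": 40,              # vertical levels (9 within the PBL)
    "p_top_requested": 5000.0, # model top (Pa), i.e., 50 hPa
    "feedback": 1,             # two-way nesting
}

# bl_pbl_physics codes for the five PBL schemes varied in section 2d; each is
# paired with its default surface layer (sf_sfclay_physics) option.
pbl_schemes = {"YSU": 1, "MYJ": 2, "QNSE": 4, "MYNN": 5, "ACM2": 7}
```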

Fig. 2. Areal extent of the outer (D01) and inner (D02) simulation domains. The outer domain is used for both cycled analysis and ensemble forecasts, while the inner domain is used only for ensemble forecasts.

c. Cycled analysis system

A 50-member cycled mesoscale (Δx = 15 km) analysis system is used to generate ensemble initial conditions for the convection-allowing numerical forecasts described in section 2b. Only the first 30 ensemble members are selected for the ensemble initial conditions. The ensemble initial conditions are identical to those of Romine et al. (2016); the procedures by which ensemble initial conditions are generated are briefly described below. The analysis utilizes the ensemble adjustment Kalman filter (EAKF; Anderson 2001, 2003) implemented within the Data Assimilation Research Testbed (DART; Anderson et al. 2009). Adaptive (in time and space) prior inflation (Anderson 2009), with an initial mean of 1.0 and spread of 0.8, and sampling error correction (Anderson 2012) are utilized to help maintain the ensemble spread. A Gaspari–Cohn filter (Gaspari and Cohn 1999), with horizontal and vertical localization half-widths of 635 and 8 km, respectively, is used to further minimize the sampling error. These localization length scales are reduced where the local observation density exceeds 2000 observations (e.g., Torn 2010). Observations three or more standard deviations outside the prior probability density function are not assimilated.
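The Gaspari–Cohn taper used for localization is the compactly supported, fifth-order piecewise rational correlation function of Gaspari and Cohn (1999, their Eq. 4.10), which decreases smoothly from one at zero separation to zero at twice the half-width. A minimal NumPy sketch:

```python
import numpy as np

def gaspari_cohn(dist, half_width):
    """Gaspari and Cohn (1999, Eq. 4.10) localization weight for separation
    distance(s) `dist`, tapering to zero at 2 * half_width. Units of `dist`
    and `half_width` must match (e.g., km)."""
    r = np.atleast_1d(np.abs(dist) / float(half_width))
    w = np.zeros_like(r)

    inner = r <= 1.0                      # 0 <= dist <= half_width
    z = r[inner]
    w[inner] = (-0.25 * z**5 + 0.5 * z**4 + 0.625 * z**3
                - (5.0 / 3.0) * z**2 + 1.0)

    outer = (r > 1.0) & (r < 2.0)         # half_width < dist < 2 * half_width
    z = r[outer]
    w[outer] = (z**5 / 12.0 - 0.5 * z**4 + 0.625 * z**3
                + (5.0 / 3.0) * z**2 - 5.0 * z + 4.0 - 2.0 / (3.0 * z))
    return w                              # identically zero beyond 2 * half_width

# With the 635-km horizontal half-width used in the cycled analysis described
# above, observation influence is therefore tapered to zero beyond 1270 km:
weights = gaspari_cohn([0.0, 300.0, 635.0, 1000.0, 1300.0], 635.0)
```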

Routine observations assimilated by the cycled analysis system follow the work of Romine et al. (2013, 2014, 2016) and Schwartz et al. (2015), including rawinsonde, aircraft, METAR, surface synoptic, buoy, ship, atmospheric motion vector, and global positioning system refractivity observations. Observation error specification, observation processing, and quality control procedures for each platform follow the work of Romine et al. (2013, 2014). Observation windows are 30 min for surface and marine observations and 60 min for all other observations, and no time–space conversion is used. Variables updated by the cycled analysis system follow Romine et al. (2016) and include the horizontal wind (u, υ); perturbation potential temperature and geopotential height; water vapor, cloud water, rain, graupel, cloud ice, and snow mixing ratios; rain and cloud ice number concentrations; and diabatic heating rate. Cycled analysis performance is described in section 2c of Romine et al. (2016).

The initial ensemble is initialized at 1800 UTC 30 April 2013 by perturbing the 0-h 0.5° Global Forecast System (GFS) analysis with 50 random samples from the WRF Data Assimilation System (WRFDA; Barker et al. 2012) global background error covariance matrix. The fixed covariance perturbation technique of Torn et al. (2006) applied to the corresponding 0.5° GFS forecast is used to obtain ensemble lateral boundary conditions for this and subsequent cycles and free forecasts. Using the outer domain model configuration described in section 2b, ensemble analyses are then integrated forward 6 h, at which time observations are assimilated. A cycling interval of 6 h is used until the analysis system is terminated at 1200 UTC 16 June 2013, shortly after the completion of MPEX. At 0000 UTC on the day of each MPEX intensive observation period, two new forks of the cycled analysis are obtained: one that assimilates both routine and MPEX dropsonde observations and one that only assimilates routine observations. For more details about MPEX dropsonde targeting, assimilation, and biases, please see Romine et al. (2016).

To reduce time-dependent background errors, each cycled analysis in the two new forks of the analysis system uses a cycling interval of 1 h through 1500 UTC, or the end of each MPEX flight mission; over this period, shorter observation windows of 15 min (surface and marine observations) and 30 min (all other observations) are used (Romine et al. 2016). Only the cycled analysis that assimilates MPEX dropsonde observations is used in this study. On average, assimilating MPEX dropsonde observations results in a negligible (KERR, for CI) to small positive [Romine et al. (2016), for precipitation] change in short-range ensemble forecast skill. Of note, both KERR and Romine et al. (2016) indicate that the three cases considered herein are among those with the highest ensemble forecast skill during MPEX. Thus, it is possible that the results obtained in this study to some degree understate the PBL parameterization control on short-range ensemble CI forecast skill over a larger set of CI episodes with varying degrees of intrinsic predictability.

d. PBL parameterizations

Five PBL parameterizations are utilized to quantify their influence upon CI predictability, following Coniglio et al. (2013), Clark et al. (2015a), and Cohen et al. (2015): MYJ, YSU, QNSE, ACM2, and MYNN. Each is paired with its default surface layer parameterization. These parameterizations can be categorized into two main groups, local and nonlocal, based on the closure used to obtain turbulent fluxes from resolved-scale atmospheric quantities. Local closures, such as used by the MYJ, MYNN, and QNSE parameterizations, estimate turbulent fluxes and unknown atmospheric variables at every grid point using only adjacent vertical levels (Stensrud 2007). The three local parameterizations used here are variations of the Mellor–Yamada level-2.5 scheme (Janjić 1994), compute TKE by diagnosing the variance and covariance of potential temperature and water vapor mixing ratio (Coniglio et al. 2013), and use the predicted TKE to estimate PBL height (Stensrud 2007). Nonlocal closures, such as used by the YSU and, for upward mixing, ACM2 parameterizations, estimate turbulent fluxes and unknown atmospheric variables at every grid point using multiple vertical levels in a column in an attempt to simulate the effects of larger eddies in the convective PBL (Stensrud 2007). Nonlocal schemes do not typically diagnose TKE; rather, the PBL height is estimated using empirical formulas based on wind speed, vertical gradients of virtual potential temperature, and the critical bulk Richardson number (Stensrud 2007). More details regarding specific attributes of the chosen PBL parameterizations are provided by Coniglio et al. (2013).
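As a concrete example of such an empirical estimate (a common textbook form, and not necessarily the exact expression used by any one scheme), the bulk Richardson number between the surface and height z may be written

$$\mathrm{Ri}_b(z) = \frac{g\,z\,[\theta_v(z) - \theta_{v,\mathrm{sfc}}]}{\overline{\theta_v}\,[u(z)^2 + v(z)^2]},$$

where θv is the virtual potential temperature, the overbar denotes a layer mean, and u and v are the horizontal wind components at height z. The diagnosed PBL height is then the lowest z at which Ri_b first exceeds a critical value, commonly in the 0.25–0.5 range depending on the scheme and stability regime.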

Note that the cycled analysis system used to generate ensemble initial conditions utilizes only the MYJ parameterization; separate cycled analyses using the other parameterizations are not conducted. This ensures a consistent set of initial conditions for ensemble model forecasts. Further, note that boundary layer parameterization tendency variables (which differ between parameterizations) are not updated by cycled assimilation. Thus, there is no discrepancy in the extent to which the parameterized boundary layer is spun up at forecast initialization time between ensembles. While the ensemble initial conditions will, in part, be influenced by MYJ parameterization biases that are not fully corrected by the 1500 UTC data assimilation cycle, we believe that the benefits of using consistent initial conditions outweigh this drawback and justify the chosen methodology.

e. CI identification

As in Kain et al. (2013) and Burghardt et al. (2014), CI is defined using radar reflectivity ≥ 35 dBZ at the −10°C level. The reflectivity criterion follows from the empirical analysis of Gremillion and Orville (1999), while the −10°C level is used to mitigate the potentially deleterious effects of brightbanding due to hydrometeor melting. These criteria are required to be met for at least 30 min such that only sustained CI events are considered. Sensitivity to the chosen CI definition is tested, with variations in both the minimum reflectivity and longevity thresholds considered. While doing so changes the numbers of identified modeled and observed events, it does not significantly impact the forecast skill metrics considered in this work (not shown). Thus, it is believed that the results are relatively consistent across CI definitions.
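A minimal sketch of this sustained-event test, assuming a hypothetical object track stored as (valid time, maximum reflectivity at the −10°C level) pairs at the 5-min interval of the data used here:

```python
from datetime import timedelta

DBZ_THRESHOLD = 35.0                  # reflectivity criterion at the -10C level
MIN_DURATION = timedelta(minutes=30)  # sustained-event requirement

def first_ci_time(track):
    """Return the start time of the first period of >= 30 min during which a
    tracked object's -10C reflectivity stays at or above 35 dBZ, or None if
    the object never qualifies. `track` is a hypothetical list of
    (valid_time, max_dbz) tuples in chronological order."""
    run_start = None
    for valid_time, max_dbz in track:
        if max_dbz >= DBZ_THRESHOLD:
            if run_start is None:
                run_start = valid_time        # criterion first met
            if valid_time - run_start >= MIN_DURATION:
                return run_start              # event qualifies as sustained CI
        else:
            run_start = None                  # criterion broken; reset
    return None
```

The function returns the start of the qualifying period, matching the event-timing convention described in the object-identification discussion below.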

To identify the observed CI events, the Warning Decision Support System–Integrated Information (WDSS-II; Lakshmanan et al. 2007) spatial analysis tools (Lakshmanan 2012) are used. First, Next Generation Weather Radar (NEXRAD) level II reflectivity data from the National Centers for Environmental Information are obtained every 5 min between 1500 and 0600 UTC for each case considered from 42 National Weather Service (NWS) Weather Surveillance Radars-1988 Doppler (WSR-88Ds) across the central United States (Fig. 3). Velocity data are dealiased and reflectivity data are quality controlled using a neural network approach involving polarimetric variables (Lakshmanan et al. 2014). Reflectivity data are then concurrently merged onto a uniform 0.03° × 0.03° gridded domain (Lakshmanan et al. 2006; Lakshmanan and Humphrey 2014) and interpolated to the −10°C isotherm obtained from hourly 0-h Rapid Update Cycle (RUC; Benjamin et al. 2004) analyses. Individual objects in the merged reflectivity field are identified using a watershed transform technique (Lakshmanan et al. 2009), here requiring at least four contiguous pixels with reflectivity ≥ 35 dBZ on a single scale with no data smoothing, and tracked forward in time (Lakshmanan and Smith 2010). CI events are those identified objects meeting the identification criteria, with CI time and location set to those at the start of the 30-min evaluation period. Figure 4 shows an example of the CI event identification process.
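As a simplified, assumption-laden stand-in for the WDSS-II object-identification step, the sketch below labels groups of at least four contiguous ≥35-dBZ pixels on the merged grid; a plain connected-component pass replaces the enhanced watershed transform of Lakshmanan et al. (2009), so it illustrates only the pixel-count and threshold logic:

```python
# Simplified stand-in for object identification on the merged 0.03-deg grid:
# groups of >= 4 contiguous pixels with reflectivity >= 35 dBZ are kept.
# scipy.ndimage.label (4-connectivity by default in 2D) replaces the
# watershed transform used in the paper.
import numpy as np
from scipy import ndimage

def identify_objects(refl_grid, dbz_thresh=35.0, min_pixels=4):
    """Label contiguous >= dbz_thresh regions; drop those with < min_pixels."""
    mask = refl_grid >= dbz_thresh
    labels, nlab = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, nlab + 1))
    keep = {lab for lab, sz in enumerate(sizes, start=1) if sz >= min_pixels}
    labels[~np.isin(labels, list(keep))] = 0
    return labels

# Tiny example grid: a 4-pixel object survives, a 2-pixel object is dropped
grid = np.zeros((6, 6))
grid[1:3, 1:3] = 40.0   # 4 contiguous pixels -> kept
grid[4, 4:6] = 38.0     # 2 contiguous pixels -> removed
print(np.unique(identify_objects(grid)))  # [0 1]
```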

Fig. 3. WSR-88D sites (blue circles, selected stations labeled) from which level II reflectivity data are used to identify observed CI events using WDSS-II. The black box represents the areal bounds of the 0.03° × 0.03° grid to which observed and simulated radar reflectivities at the −10°C level are interpolated for verification purposes.

Fig. 4. Representative example of WDSS-II object identification and tracking. (left) Observed reflectivity at the −10°C level on the merged analysis grid (shaded; dBZ, 35 dBZ contoured). (right) WDSS-II-identified convective objects (shaded; each color represents a different object).

Modeled CI events are identified similarly. Simulated reflectivity interpolated to the −10°C isotherm is computed inline during the model simulation from model-level temperature, height, and reflectivity fields at each model time step and written to file at 5-min intervals. These data are bilinearly interpolated to the same uniform 0.03° × 0.03° gridded domain as the observed data. They are subsequently ingested by WDSS-II, with which CI events are identified using the same methodology as above. This allows for the comparison of modeled to observed CI events using a uniform temporal and spatial discretization and a consistent identification algorithm. No attempt is made, however, to sample the model data in a similar fashion to that of the observations; that is, radar gaps, terrain blocking, and nonuniform vertical data distribution are not accounted for within the analysis. This is believed to have minimal effect on the results presented herein given that most CI events occur far enough east to avoid elevated terrain, where this is expected to be more of a concern.
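The regridding step can be sketched as follows. This is a hypothetical, idealized example: the real workflow operates on WRF's curvilinear latitude and longitude fields, whereas a regular model grid and random reflectivity values are assumed here purely for illustration:

```python
# Sketch of bilinear interpolation of simulated -10C-level reflectivity from
# an (idealized, regular) model grid to the uniform 0.03-deg verification grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Fake model grid and reflectivity field (illustration only)
model_lat = np.linspace(30.0, 50.0, 201)
model_lon = np.linspace(-110.0, -85.0, 251)
refl_model = np.random.default_rng(0).uniform(0, 50, (201, 251))

interp = RegularGridInterpolator((model_lat, model_lon), refl_model,
                                 method="linear", bounds_error=False,
                                 fill_value=np.nan)

# Target 0.03-deg x 0.03-deg verification grid
ver_lat = np.arange(31.0, 49.0, 0.03)
ver_lon = np.arange(-108.0, -87.0, 0.03)
lat2d, lon2d = np.meshgrid(ver_lat, ver_lon, indexing="ij")
refl_ver = interp(np.column_stack([lat2d.ravel(), lon2d.ravel()]))
refl_ver = refl_ver.reshape(lat2d.shape)
print(refl_ver.shape)  # (600, 700)
```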

f. Event matching and forecast verification

Event matching is facilitated using the spatiotemporal flow-dependent error metric of Burghardt et al. (2014):
$$C = \Delta d + V\,\Delta t. \tag{1}$$

The error metric C has units of kilometers, with Δd representing the absolute distance error (km) and Δt representing the absolute time error (s) between observed and modeled CI events. Here, V corresponds to the estimated average translation speed (km s−1), as determined by the WDSS-II w2segmotionll tool, for the observed CI event. This definition of V differs slightly from that of Burghardt et al. (2014), which used an hourly, area- and layer-averaged middle-tropospheric horizontal wind to approximate the deep-layer steering flow. As CI has a local occurrence frequency approaching zero in the central United States (Lock and Houston 2015), each ensemble member is verified against the observations deterministically rather than using a probabilistic method such as the Brier score, the sampling uncertainty of which is underestimated for small independent samples of rare events (Wilks 2010) such as those considered here.
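As an illustration, (1) reduces to a one-line function; in practice the distance and time errors and the translation speed V would come from the WDSS-II tracking output:

```python
# Minimal sketch of the flow-dependent matching metric in (1).
def error_metric(dist_err_km, time_err_s, v_km_per_s):
    """C = |delta_d| + V * |delta_t|, in km."""
    return abs(dist_err_km) + v_km_per_s * abs(time_err_s)

# Example: 40-km displacement, 20-min timing error, 15 m/s object motion
print(error_metric(40.0, 20 * 60, 0.015))  # 40 + 0.015 * 1200 = 58.0 km
```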

To limit the spatiotemporal bounds of verification, Δd is constrained to be ≤100 km and Δt is constrained to be ≤3600 s (1 h). These constraints are consistent with Burghardt et al. (2014) and with Duda and Gallus (2013), who found a mean absolute displacement error of 105 km and no systematic temporal error for CI preceding mesoscale convective system formation. These thresholds subjectively reflect constraints that provide useful information to an operational forecaster [e.g., as manifest within the conceptually similar interest metric of Skinner et al. (2016)], wherein a CI forecast with errors of 1 h and 100 km has utility whereas a forecast with errors of 3 h and 300 km does not (e.g., Kain et al. 2013). Sensitivity to other spatiotemporal thresholds, introduced later in the manuscript, is evaluated to quantify how CI forecast skill varies with the threshold choice. Verification is computed over the entire 0.03° × 0.03° gridded domain used to identify CI events (Fig. 3, black box).

To verify CI forecasts, the chosen spatiotemporal thresholds are first applied to each observed CI event to discard those modeled CI events in each ensemble member falling outside of the thresholds. Next, (1) is used to identify the modeled CI event from each ensemble member that best matches each observed CI event. If a modeled CI event is the best match to multiple observed events, it is matched to the observed event for which it has the lowest value of (1), and the next-best modeled event is then matched to the other observed event. A 2 × 2 contingency table (Table 1; Wilks 1995; Fowle and Roebber 2003; Kain et al. 2013) is used to verify events. Matched events are classified as true positives or hits (A in Table 1), modeled events not matched with observed events are classified as false positives (B), and observed events not matched with modeled events are classified as false negatives or misses (C). Correct negatives (D), taken herein to represent the correct forecast of no CI event, are not considered. No attempt is made to classify failed CI attempts (e.g., convection objects not meeting the 30-min duration threshold to be classified as CI) as correct negatives.
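A simplified sketch of this matching step follows. A greedy assignment in order of increasing C approximates the tie-breaking rule just described; the all-pairs error list is assumed to be precomputed from the tracking output, and all names here are illustrative:

```python
# Gate candidate pairs with the 100-km / 1-h thresholds, score with (1),
# then assign matches greedily in order of increasing C.
def verify(pairs, d_max=100.0, t_max=3600.0):
    """pairs: iterable of (obs_id, mod_id, dist_km, time_s, v_km_s) covering
    every observed/modeled event combination for one ensemble member.
    Returns (hits, false_alarms, misses) = (A, B, C) in Table 1."""
    cands = [(abs(d) + v * abs(t), o, m)
             for o, m, d, t, v in pairs
             if abs(d) <= d_max and abs(t) <= t_max]
    cands.sort()  # ascending C: best matches assigned first
    matched_obs, matched_mod = set(), set()
    for _, o, m in cands:
        if o not in matched_obs and m not in matched_mod:
            matched_obs.add(o)
            matched_mod.add(m)
    obs_ids = {o for o, *_ in pairs}
    mod_ids = {m for _, m, *_ in pairs}
    return (len(matched_obs),              # A: hits
            len(mod_ids - matched_mod),    # B: false alarms
            len(obs_ids - matched_obs))    # C: misses

# Two observed and two modeled events; m1 is the best match for both
pairs = [("o1", "m1", 30, 600, 0.02), ("o2", "m1", 50, 900, 0.02),
         ("o2", "m2", 80, 1800, 0.02), ("o1", "m2", 120, 600, 0.02)]
print(verify(pairs))  # (2, 0, 0): m1 -> o1 (C = 42 km), m2 -> o2 (C = 116 km)
```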

Table 1. Contingency table used to compute forecast skill metrics and verify ensemble forecasts.

Utilizing the contingency table classifications, several verification metrics are computed deterministically for each ensemble member and then averaged for the ensemble mean. Metrics computed include the POD [(2)], representing the ratio of correctly forecast CI events (A) to the total number of observed CI events (A + C); false alarm ratio (FAR), representing the ratio of false positives (B) to the total number of forecast CI events (A + B); Bias, representing the ratio of the total number of forecast CI events (A + B) to the total number of observed CI events (A + C); and the critical success index (CSI), or threat score, representing the ratio of correctly forecast CI events (A) to the total number of observed and forecast CI events (A + B + C):
$$\mathrm{POD} = \frac{A}{A + C}, \tag{2}$$

$$\mathrm{FAR} = \frac{B}{A + B}, \tag{3}$$

$$\mathrm{Bias} = \frac{A + B}{A + C}, \tag{4}$$

$$\mathrm{CSI} = \frac{A}{A + B + C}. \tag{5}$$
POD, FAR, and CSI range between 0 and 1, with 1 as the best-possible POD and CSI and 0 as the best-possible FAR. Bias values greater (less) than 1 imply overforecast (underforecast) CI.
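For concreteness, (2)–(5) reduce to a few lines given the Table 1 counts; this is a sketch, and in the paper these are computed per ensemble member and then averaged for the ensemble mean:

```python
# Direct transcription of (2)-(5); a, b, c are the Table 1 counts
# (hits, false alarms, misses). Correct negatives (D) are not used.
def skill_metrics(a, b, c):
    pod = a / (a + c)          # (2) probability of detection
    far = b / (a + b)          # (3) false alarm ratio
    bias = (a + b) / (a + c)   # (4) frequency bias
    csi = a / (a + b + c)      # (5) critical success index
    return pod, far, bias, csi

print(skill_metrics(60, 40, 20))  # (0.75, 0.4, 1.25, 0.5)
```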

Following Burghardt et al. (2014), all CI events during an MPEX case, and not just those that may be classified as occurring within convectively undisturbed, or pristine, environments [e.g., as in Kain et al. (2013) and, to a lesser extent, Lock and Houston (2015)], are considered. This is due to the limited sample of CI events within pristine environments among the cases studied and the considerable computational and storage expense associated with conducting the requisite ensemble numerical simulations over a larger sample of cases. For example, defining a pristine environment as one in which the observed reflectivity at the −10°C isotherm within 40 km of an observed CI event did not exceed 20 dBZ over the 1 h prior to the observed event, only 24, 6, and 11 CI events are identified for RF4, RF10, and RF12, respectively (not shown). Thus, the results presented below should be interpreted as applying to all CI events, with their specific application to CI events within pristine environments left for future investigation.

3. Results

a. Model verification

There are 238, 75, and 180 observed CI events for RF4, RF10, and RF12, respectively (Fig. 5). Averaging the modeled CI event counts over each ensemble member and each ensemble, there are 400, 118, and 326 modeled CI events for RF4, RF10, and RF12, respectively. This is similar to the results of Burghardt et al. (2014), who identified 641 modeled CI events versus 249 observed events in 25 high-resolution deterministic simulations of warm-season CI events in the central high plains; Coniglio et al. (2016), who identified too many forecast storms relative to observations in short-range ensemble forecasts of convection during MPEX; and Kain et al. (2017), who identified too many forecast storms relative to observations in deterministic forecasts of convection during the 2014 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. In the ensemble means, the ACM2-based ensemble forecasts are associated with the smallest high forecast biases, whereas the MYJ-based ensemble forecasts are associated with the largest high forecast biases (Fig. 5). For RF4 (Fig. 6) and RF12 (Fig. 7), CI is primarily overforecast in eastern Colorado, northwestern Kansas, and Nebraska between 1600 and 2100 UTC. For RF10 (Fig. 8), with fewer observed CI events, CI is primarily overforecast in the Texas Panhandle, northwest Missouri, and northern Iowa. It is hypothesized that this overproduction may partially result from PBL parameterizations' tendencies to overpredict mixed-layer convective available potential energy (MLCAPE) and underpredict mixed-layer convective inhibition (MLCIN) during local daytime hours in preconvective environments (e.g., Coniglio et al. 2013). In section 3b, ensemble-forecast boundary layer thermodynamic fields are verified against observations to better understand the factors influencing this overproduction.

Fig. 5. CI event cumulative distributions for (a) RF4, (b) RF10, and (c) RF12. Note the different vertical axes in each panel. The ACM2, MYJ, MYNN, QNSE, and YSU ensemble mean CI event counts are given by the orange, blue, green, purple, and red lines, respectively, while the observed CI event count is given by the solid black line. Local closure parameterizations in this and subsequent figures are listed with (L) in the legend and solid lines; nonlocal closure parameterizations are listed with (NL) in the legend and dashed lines.

Fig. 6. Paintball plot for RF4 from a randomly selected representative ensemble member for the (top left) ACM2, (top center) MYJ, (top right) MYNN, (bottom left) QNSE, and (bottom center) YSU ensembles. Each paintball, or dot, reflects a simulated CI event, drawn at the location of the simulated event and color coded by the time of the simulated event per the legend at right. (bottom right) The corresponding observed CI events are depicted.

Fig. 7. As in Fig. 6, but for RF12.

Fig. 8. As in Fig. 6, but for RF10.

For all cases, ensembles utilizing the MYJ and QNSE local closure PBL parameterizations, which on average forecast the most CI events (Fig. 5), had the most hits, fewest misses, and most false alarms (Table 2). Conversely, ensembles utilizing the ACM2 nonlocal (for upward mixing) closure PBL parameterization, which on average forecast the fewest CI events, had the fewest hits, most misses, and fewest false alarms for all cases (Table 2). These results speak to the “duality of error” (Doswell 2004), wherein for a fixed forecast accuracy increasing POD results in an increase in FAR as misses are exchanged for false alarms.

Table 2. Ensemble mean hits (A), false alarms (B), and misses (C) for each case at the 100 km/1 h spatiotemporal verification thresholds. Boldface (italicized) numbers for each case correspond to the maximum (minimum) ensemble mean value for each metric.

For each case, temporal bias distributions are approximately normal, consistent with Duda and Gallus (2013), Kain et al. (2013), and Burghardt et al. (2014), with RF4 and RF10 exhibiting near-zero temporal bias and RF12 exhibiting a small late bias (Fig. 9). The lack of temporal bias variation between PBL ensembles indicates that, on average, modeled CI event timing is largely insensitive to variability associated with vertical mixing differences between PBL parameterizations. Spatial error distributions for matched events are slightly skewed, with mean errors of 44, 43, and 47 km and modal values of 27, 30, and 27 km for RF4, RF10, and RF12, respectively (Fig. 10). On average, modeled CI events are found slightly north, slightly southwest, and northwest of their observed locations for RF4, RF10, and RF12, respectively (not shown), to a large extent consistent with Figs. 6–8. The northwest position and late time biases for RF12 imply that the ensemble-simulated propagation of the southeastward-advancing surface cold front, along which most observed CI events in this case occur, is too slow. Spatial error distributions for the PBL ensembles are also nearly identical for each case (Fig. 10), further suggesting that errors in the positions of the features along which CI occurs, rather than PBL parameterization, exert primary control over where matched modeled CI events occur relative to observed events. Both ensemble mean temporal and spatial error distributions asymptote toward zero as they approach the 1-h and 100-km matching thresholds for RF4 and RF10, suggestive of diminishing returns at more lenient matching thresholds for these cases.

Fig. 9. Temporal bias distributions for matched CI events, aggregated over all members for each ensemble (shaded horizontal bars), for (a) RF4, (b) RF10, and (c) RF12. Binning increment is 5 min. Positive (negative) values indicate early (late) forecast biases. The shaded box with each distribution indicates the mean (notches), median (open dots), and ±1 standard deviation (box extents) temporal bias. The shaded curve at the left of each distribution is a polynomial fit to the data, obtained using kernel density estimation, while the shaded curve at the right of each distribution is a normal distribution with mean and standard deviation equal to that of the data. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Fig. 10. As in Fig. 9, but for spatial error. Binning increment is 5 km.

Examining the temporal distributions of misses and false alarms for each case aggregated for each PBL ensemble (Fig. 11) reveals qualitatively similar behavior across PBL ensembles for a given case but, as expected, significant case-to-case variability. To some extent, false alarms most commonly occur prior to approximately 0200 UTC and missed events are more common at and after approximately 2100 UTC; both have high frequency (relative to earlier and later times) during the diurnal CI peak between 2100 and 0200 UTC. Many of the false alarms at early forecast times occur in marginally unstable environments (not shown), and the lack of false alarms relative to misses at late forecast hours suggests that modeled CI event overproduction may be more common for surface-based rather than elevated convection. However, the limited sample size of events precludes the generalization of these findings and hypotheses.

Fig. 11. Histograms, scaled by the maximum value within the distribution, of misses (above the zero line) and false alarms (below the zero line) aggregated over all ensemble members in 5-min bins for (a) RF4, (b) RF10, and (c) RF12. The solid curve and shading underneath represent a polynomial fit to the scaled histogram data. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

On average, RF4 has the highest forecast skill, as measured by CSI, but also the highest CI event forecast bias (Fig. 12a). RF10 has similar forecast skill to RF4 but is characterized by greater spread between ensemble means and, for each ensemble, individual ensemble members (Fig. 12b). This is believed primarily to be a statistical artifact of the much-reduced sample size of observed and modeled CI events in RF10 relative to RF4 and RF12 (Fig. 5; Table 2). RF12 has similar bias and ensemble forecast skill spread characteristics to RF4 but also the lowest forecast skill of the three cases considered (Fig. 12c). Van Klooster and Roebber (2009, their Fig. 1) indicate that, independent of vertical wind shear, climatological CI likelihood increases with MLCAPE, particularly for MLCAPE values at or below 2000 J kg−1; in other words, CI is more conditional (e.g., requiring the absence of a strong capping inversion, the presence of sufficient local and large-scale ascent to allow parcels to reach their level of free convection, etc.) in lower-MLCAPE environments. It is hypothesized that the reduced forecast skill for RF12 relative to RF4 and RF10 results from low MLCAPE in RF12 (1000–1500 J kg−1; not shown) compared with the other two cases (>2000 J kg−1; not shown), e.g., such that the necessary conditions for CI are more frequently met in the model simulations but not the observations (or vice versa) for RF12 than for RF4 and RF10. For each case, only minor variations in ensemble mean CSI across the five PBL ensembles are seen. Nonlocal closure PBL parameterizations have marginally higher CSI and lower bias than their local closure counterparts.

Fig. 12. Performance diagrams (Roebber 2009) for (a) RF4, (b) RF10, and (c) RF12 at the 100 km/1 h spatiotemporal verification threshold. Bias curves are given by the light-blue lines originating in the bottom-left corner of each panel and CSI curves are given by the black hyperbolic curves. Large circles represent ensemble mean statistics and small boxes represent individual ensemble member statistics. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Next, sensitivity in forecast skill to the chosen spatiotemporal threshold is evaluated. Here, six additional thresholds (where the first number is the spatial threshold and the second number is the temporal threshold) are considered: 25 km/0.25 h, 50 km/0.5 h, 100 km/1.5 h, 150 km/1.5 h, 200 km/2 h, and 300 km/3 h. For all three cases, there is no skill (in terms of ensemble mean CSI) at the 25 km/0.25 h and 50 km/0.5 h thresholds; only at the 100 km/1 h and more lenient thresholds can the forecasts be classified as skillful (Fig. 13), consistent with Burghardt et al. (2014). The greatest forecast skill increase occurs between the 50 km/0.5 h and 100 km/1 h thresholds with smaller increases between larger thresholds, implying that the amount of useful information is approaching its limit at the 100 km/1 h threshold. Although ensemble mean forecast skill increases as the matching criteria are relaxed, forecast utility decreases, consistent with Kain et al. (2013).
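This sensitivity test can be sketched as a loop over thresholds, reusing the hypothetical `verify` and `skill_metrics` helpers sketched in section 2f (and assuming every threshold retains at least one observed and one modeled event, so the metric denominators are nonzero):

```python
# Re-run the matching/verification at each spatiotemporal threshold and
# record the CSI; `pairs`, `verify`, and `skill_metrics` follow the earlier
# sketches (all names here are illustrative, not from the paper's code).
THRESHOLDS = [(25, 0.25), (50, 0.5), (100, 1.0), (100, 1.5),
              (150, 1.5), (200, 2.0), (300, 3.0)]  # (km, h)

def csi_by_threshold(pairs, verify, skill_metrics):
    out = {}
    for d_km, t_h in THRESHOLDS:
        a, b, c = verify(pairs, d_max=d_km, t_max=t_h * 3600.0)
        out[(d_km, t_h)] = skill_metrics(a, b, c)[3]  # CSI is the 4th metric
    return out
```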

Fig. 13. As in Fig. 12, but for ensemble mean statistics as a function of varying spatiotemporal verification threshold per the legend at right.

b. Sensitivity of CI environment to PBL scheme

Hourly ensemble forecasts are evaluated against available surface observations to quantify environmental differences between PBL parameterizations. Surface variables evaluated include 2-m temperature T (°C) and 2-m dewpoint temperature Td (°C). Observations used in this evaluation are obtained hourly from the National Centers for Environmental Prediction (NCEP) Meteorological Assimilation Data Ingest System (MADIS). Hourly MADIS METAR observations are extracted for locations within the verification domain depicted in Fig. 3, from which the mean error (ME) for each of the ensemble means is computed. Mean error is computed hourly with an average of approximately 2000 observations evaluated per hour per case, albeit with high autocorrelation between individual observations in close spatial proximity. The mean error for each variable is computed as
$$\mathrm{ME} = \frac{1}{N}\sum_{i=1}^{N} (x_i - o_i), \tag{6}$$
where N is the total number of observations available at a given analysis time, xi is the ensemble mean forecast value at the location of the ith observation, and oi is the ith observation's value. This domain-averaged mean error quantifies spatially coherent biases, and their temporal evolution, for each ensemble.
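A minimal sketch of (6), assuming the ensemble mean forecast values have already been interpolated to the MADIS METAR station locations (names here are illustrative):

```python
# Domain-averaged mean error, (6): positive values indicate the ensemble
# mean is higher than observed (e.g., a warm bias for 2-m temperature).
import numpy as np

def mean_error(forecast_at_obs, observed):
    x = np.asarray(forecast_at_obs, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.mean(x - o))

# Example: 2-m temperature at three stations; ensemble mean is cool biased
print(mean_error([24.1, 25.0, 23.4], [25.0, 25.6, 24.2]))  # approx -0.77
```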

For thermodynamic variables, ensemble initial conditions are cool and moist biased, to varying extents, for all three cases (Fig. 14). This, at least in part, likely results from the cool and moist bias documented for warm-season environments with the MYJ PBL parameterization used in the cycled analysis system (e.g., Hu et al. 2010; Coniglio et al. 2013). These mean biases are to some extent maintained throughout each ensemble forecast. The local mixing MYJ, QNSE, and, to a lesser extent, MYNN parameterizations exhibit the largest cool and moist biases. The nonlocal mixing ACM2 and YSU parameterizations exhibit the smallest cool and moist biases or, at later times for RF12, the largest warm and dry biases. In the aggregate, the ACM2 and YSU parameterizations are the least biased relative to the MADIS METAR observations. These results are consistent with Hu et al. (2010), Coniglio et al. (2013), and Clark et al. (2015a) and are reflective of biases in vertical mixing associated with each class of parameterization.

Fig. 14. Mean error of (left) ensemble mean 2-m T (°C) and (right) 2-m Td (°C) relative to MADIS METAR observations between 1500 and 0600 UTC for (top) RF4 (circles), (middle) RF10 (squares), and (bottom) RF12 (triangles). ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively.

Simulated PBL structures in nonconvectively overturned environments are next evaluated against available rawinsonde observations. Rawinsonde observations considered in this evaluation (Fig. 15) include special and routine National Weather Service observations from 1800 and 0000 UTC, respectively, and selected soundings obtained by MPEX field teams (Trapp et al. 2016). The earliest rawinsonde observations are at approximately 1600 UTC and the latest at approximately 0300 UTC, with most occurring close to 0000 UTC. Approximately 20 observations, again with high autocorrelation between observations in close spatial and temporal proximity, are considered per case. Given this limited sample size, statistical-significance testing on the differences between the respective ensemble mean profiles of mean error, computed using (6), would be unlikely to confirm significance. Data every 25 hPa, matching the model postprocessing vertical discretization, between 1000 and 600 hPa (without interpolation below ground level, ~950 hPa in most instances) are considered in this evaluation.

Fig. 15. Locations of MPEX mobile (red squares) and NWS (green circles) rawinsonde observations used to evaluate vertical profiles of mean error for selected variables for (a) RF4, (b) RF10, and (c) RF12. Shaded in each panel per the color bar at bottom is the 1500 UTC Rapid Refresh 3–9-h forecast accumulated precipitation (mm).

For all three cases, the ACM2 ensemble generally has the largest warm and smallest moist bias in the PBL and largest cool and smallest dry bias above the PBL (Fig. 16). The opposite holds for the MYJ and QNSE ensembles. This is all consistent with the PBL vertical mixing differences between these parameterizations highlighted by Hu et al. (2010) and Coniglio et al. (2013). In the aggregate, the YSU ensemble has the lowest ensemble mean error for the available observations.

Fig. 16. Vertical profiles between 1000 and 600 hPa of mean error of (left) ensemble mean temperature (°C) and (right) dewpoint temperature (°C) for (a),(d) RF4 (circles), (b),(e) RF10 (squares), and (c),(f) RF12 (triangles) for the rawinsonde profiles indicated in Fig. 15. ACM2, MYJ, MYNN, QNSE, and YSU ensemble statistics are depicted in orange, blue, green, purple, and red, respectively. Note the nonuniform horizontal axes between cases for the panels in (d)–(f).

For RF4 (Fig. 16, leftmost panels), ensemble mean temperature is warm biased and dewpoint temperature is high biased below 850 hPa for all PBL ensembles. All else being equal, this implies a more unstable environment in each ensemble forecast than was observed, hypothesized to result in the overforecast simulated CI event counts from each ensemble. Note, however, that the geographic extent of the observations entering into this analysis (Fig. 15a) is limited, such that these biases should only be interpreted as local features. The MYJ and QNSE ensembles are associated with the smallest warm temperature biases but largest high dewpoint biases, whereas the ACM2 ensemble is associated with the largest warm temperature bias but smallest high dewpoint bias. This is similarly hypothesized to result in greater overforecast simulated CI event counts for the MYJ and QNSE ensembles and lesser overforecast counts for the ACM2 ensemble. At higher altitudes, ensemble mean dewpoint temperature is low biased above 800 hPa, with the MYJ and QNSE ensembles having the largest and the ACM2 ensemble the smallest biases.

However, these findings do not readily generalize to the other cases given the variability in ensemble mean bias profiles between cases (Fig. 16, center and rightmost panels). Of the other cases, RF10 bears the most similarity in terms of ensemble mean temperature and dewpoint biases. Apart from a small cold surface temperature bias and smaller dewpoint temperature dry biases at and above 800 hPa, ensemble mean temperature and dewpoint temperature bias profiles between RF4 and RF10 are similar. For RF12, ensemble mean temperatures are cold biased below 900 hPa and warm biased above; however, ensemble mean dewpoint temperature biases are generally of like sign, if not like magnitude, to those of RF4 and RF10. Verification thus provides an incomplete picture of why the PBL ensembles uniformly overforecast CI events while providing more insight into differences in the overforecast magnitude between PBL ensembles.

In addition to the aggregate verification statistics presented above, individual rawinsonde profiles are evaluated against individual ensemble member forecasts. In general, ensemble member forecasts are unable to adequately resolve sharp vertical gradients associated with moderate to strong capping inversions (e.g., Fig. 17), consistent with Coniglio et al. (2013) and Clark et al. (2015b). The precise impact this has upon integrated thermodynamic parameters such as CAPE, CIN, and minimum buoyancy Bmin (Trier et al. 2015), and thus CI potential, varies between individual profiles (not shown; Fig. 17 provides an example of negative buoyancy over a deeper vertical layer in ensemble simulations than in observations). This result is independent of PBL parameterization and, in deterministic sensitivity tests using GFS initial and lateral boundary conditions, the number of model vertical levels (n = 50, 70, 90, and 100; not shown). As the Met Office’s semi-Lagrangian Unified Model (Walters et al. 2011) does not exhibit similar behavior (Clark et al. 2015b; Jirak et al. 2015; Kain et al. 2017), implicit damping with the WRF-ARW model-default third-order-accurate finite-difference vertical advection formulation (Skamarock et al. 2008) is hypothesized to lead to the inability to adequately resolve capping inversions. Further investigation with a larger sample of observed rawinsonde profiles and larger sample of cases is needed to better quantify the potential influence this inability may have upon CI overproduction within short-range WRF-ARW model forecasts of warm-season central U.S. convective episodes.

Fig. 17. Skew T–logp diagram valid 1800 UTC 31 May 2013 (during RF10) from Springfield, MO (SGF; 37.23°N, 93.4°W). Temperature (dewpoint temperature) (°C) is depicted in solid (dashed) lines. The observed profile is in black, while 3-h forecast profiles from a randomly selected representative ensemble member are depicted in orange, blue, purple, green, and red for the ACM2, MYJ, MYNN, QNSE, and YSU PBL parameterizations, respectively. At left, the vertical profile of relative humidity (%, per the horizontal axis labeled at top left) is included.

4. Conclusions

This study has evaluated PBL parameterization control on convection-allowing ensemble forecast system skill for short-range (0–15 h) forecasts of warm-season, primarily surface-based CI events during three MPEX cases, each characterized by different prevailing large-scale flow directions. An object-based method applied to both observed and simulated radar reflectivity above the freezing level was used to identify CI events in a consistent manner between the model and observations. Matching modeled to observed CI events using a flow-dependent error metric at specified spatiotemporal thresholds, selected forecast skill measures were computed and evaluated within and between ensembles to quantify the PBL parameterization influence on CI predictive skill. Ensemble mean surface and boundary layer thermodynamic fields were evaluated against METAR and rawinsonde observations to better understand the forecast skill variability between PBL parameterizations.

For the three cases studied, all ensembles overforecast (in the ensemble mean) CI events, particularly during the local daytime and early evening hours, with near-zero mean timing bias and small distance errors for matched events. The probability of detection, false alarm ratio, and bias were highest (lowest) for local (nonlocal) mixing PBL parameterization ensembles. In terms of CSI, however, PBL parameterization exerts minimal influence on forecast skill at the chosen thresholds. For individual cases, there was greater variation in forecast skill between ensembles than between individual ensemble members, implying greater CI forecast skill control by PBL parameterization than by small initial condition differences. In contrast, forecast skill variation between cases indicates that larger-scale features exert greater control upon CI forecast skill than does PBL parameterization. However, given that the cases considered in this work are among those with the highest practical predictability during MPEX (Romine et al. 2016; KERR), it is possible that the results to some degree understate the PBL parameterization control on short-range ensemble CI forecast skill. Ensemble forecasts are not skillful on the meso-β and smaller scales. Forecast skill increases as spatiotemporal matching thresholds are made more lenient; however, the rate of forecast skill increase becomes increasingly small beyond the 100 km/1 h spatiotemporal threshold, with a concomitant drop in forecast utility. These findings are consistent with recent studies (Kain et al. 2013; Burghardt et al. 2014) that quantified convection-allowing deterministic CI forecast skill.

In general, local mixing PBL parameterization ensembles were associated with shallower vertical mixing, and thus cooler and moister conditions within the lower reaches of the PBL, than their nonlocal mixing counterparts, consistent with recent studies (e.g., Coniglio et al. 2013; Clark et al. 2015a). For a case such as RF4, where all ensemble means were warm biased for temperature and moist biased for dewpoint temperature at and near the surface, the resultant impact upon forecast CI event counts was relatively straightforward: more unstable conditions in local (vs nonlocal) mixing PBL parameterization ensembles contributed, all else assumed to be equal, to greater (lesser) CI event overproduction. The extent to which this can be generalized varied between cases, however. Verification against available rawinsonde data indicated that the model simulations struggled to accurately resolve sharp vertical gradients associated with capping inversions found within the preconvective environments of the cases studied, which was hypothesized to result from implicit numerical damping associated with the default third-order-accurate finite-difference scheme for vertical advection used in the simulations herein.

Although the present study advances our understanding of controls upon numerically simulated CI event forecast skill, multiple questions remain for further study. The extent to which modeled CI event identification (and thus forecast skill metrics) is sensitive to data sampling relative to observations (e.g., beam height, terrain blocking, etc.) and to microphysical parameterization (e.g., Stratman et al. 2013; Coniglio et al. 2016) should be studied. Sensitivity in matched-event statistics to the verification metric and approach used also remains to be quantified. Appropriate metrics for verifying probabilistic forecasts of locally rare events are lacking, and this study would benefit from the development of such metrics to better quantify CI ensemble forecast skill. It is clear that the ensemble initial conditions are not free of bias, but the extent to which this influenced CI forecast skill for the cases considered herein could be only partially quantified. Similarly, the limited samples of rawinsonde observations for verification and of cases for study limit the extent to which the results can be generalized. Furthermore, the results herein focused on thermodynamic rather than kinematic influences on CI, although it is acknowledged that PBL parameterization can influence the lower-tropospheric kinematic environment. Further study is needed to identify such controls on CI predictive skill across PBL parameterizations. Finally, preliminary research is under way to better understand the extent to which the simulated capping inversion resolution (or lack thereof) influences CI predictive skill, at least within the context of the WRF-ARW.

Acknowledgments

Ensemble initial and lateral boundary conditions were graciously provided by Glen Romine and Ryan Torn. Code to compute simulated reflectivity at the −10°C level in line with simulation execution was graciously provided by Scott Dembek and Adam Clark. All numerical simulations were conducted on the NCAR Yellowstone supercomputer (CISL 2012). This manuscript benefited from conversations with Kyle Swanson and reviews from three anonymous reviewers. Rapid Refresh 0-h analyses were obtained from the UCAR Research Data Archive. All MPEX observations were obtained from the UCAR Earth Observing Laboratory MPEX Field Catalog (http://catalog.eol.ucar.edu/mpex). This material is based upon work supported by the National Science Foundation under Grant AGS-1347545.

REFERENCES

• Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.
• Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634–642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.
• Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61, 72–83, doi:10.1111/j.1600-0870.2008.00361.x.
• Anderson, J. L., 2012: Localization and sampling error correction in ensemble Kalman filter data assimilation. Mon. Wea. Rev., 140, 2359–2371, doi:10.1175/MWR-D-11-00013.1.
• Anderson, J. L., T. Hoar, K. Raeder, H. Liu, N. Collins, R. Torn, and A. Arellano, 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 1283–1296, doi:10.1175/2009BAMS2618.1.
• Barker, D. M., and Coauthors, 2012: The Weather Research and Forecasting Model’s community variational/ensemble data assimilation system: WRFDA. Bull. Amer. Meteor. Soc., 93, 831–843, doi:10.1175/BAMS-D-11-00167.1.
• Benjamin, S. G., and Coauthors, 2004: An hourly assimilation–forecast cycle: The RUC. Mon. Wea. Rev., 132, 495–518, doi:10.1175/1520-0493(2004)132<0495:AHACTR>2.0.CO;2.
• Burghardt, B., C. Evans, and P. Roebber, 2014: Assessing the predictability of convection initiation across the high plains using an object-based approach. Wea. Forecasting, 29, 403–418, doi:10.1175/WAF-D-13-00089.1.
• Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model description and implementation. Mon. Wea. Rev., 129, 569–585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.
• CISL, 2012: Yellowstone. Computational and Information Systems Laboratory, National Center for Atmospheric Research. [Available online at http://n2t.net/ark:/85065/d7wd3xhc.]
• Clark, A. J., M. C. Coniglio, B. E. Coffer, G. Thompson, M. Xue, and F. Kong, 2015a: Sensitivity of 24-h forecast dryline position and structure to boundary layer parameterizations in convection-allowing WRF model simulations. Wea. Forecasting, 30, 613–638, doi:10.1175/WAF-D-14-00078.1.
• Clark, A. J., and Coauthors, 2015b: Spring Forecasting Experiment 2015: Program overview and operations plan. NOAA/NSSL/Storm Prediction Center, 24 pp. [Available online at http://hwt.nssl.noaa.gov/Spring_2015/HWT_SFE_2015_OPS_plan_final.pdf.]
• Cohen, A. E., S. M. Cavallo, M. C. Coniglio, and H. E. Brooks, 2015: A review of planetary boundary layer parameterization schemes and their sensitivity in simulating southeastern U.S. cold season severe weather environments. Wea. Forecasting, 30, 591–612, doi:10.1175/WAF-D-14-00105.1.
• Coniglio, M. C., J. Correia Jr., P. T. Marsh, and F. Kong, 2013: Verification of convection-allowing WRF model forecasts of the planetary boundary layer using sounding observations. Wea. Forecasting, 28, 842–862, doi:10.1175/WAF-D-12-00103.1.
• Coniglio, M. C., S. M. Hitchcock, and K. H. Knopfmeier, 2016: Impact of assimilating preconvective upsonde observations on short-term forecasts of convection observed during MPEX. Mon. Wea. Rev., 144, 4301–4325, doi:10.1175/MWR-D-16-0091.1.
• Crook, A. N., 1996: Sensitivity of moist convection forced by boundary layer processes to low-level thermodynamic fields. Mon. Wea. Rev., 124, 1767–1785, doi:10.1175/1520-0493(1996)124<1767:SOMCFB>2.0.CO;2.
• Done, J., C. Davis, and M. Weisman, 2004: The next generation of NWP: Explicit forecasts of convection using the Weather Research and Forecasting (WRF) model. Atmos. Sci. Lett., 5, 110–117, doi:10.1002/asl.72.
• Doswell, C. A., III, 1987: The distinction between large-scale and mesoscale contribution to severe convection: A case study example. Wea. Forecasting, 2, 3–16, doi:10.1175/1520-0434(1987)002<0003:TDBLSA>2.0.CO;2.
• Doswell, C. A., III, 2004: Weather forecasting by humans—Heuristics and decision making. Wea. Forecasting, 19, 1115–1126, doi:10.1175/WAF-821.1.
• Duda, J. D., and W. A. Gallus Jr., 2013: The impact of large-scale forcing on skill of simulated convective initiation and upscale evolution with convection-allowing grid spacings in the WRF. Wea. Forecasting, 28, 994–1018, doi:10.1175/WAF-D-13-00005.1.
• Fowle, M. A., and P. J. Roebber, 2003: Short-range (0–48 h) numerical prediction of convective occurrence, mode, and location. Wea. Forecasting, 18, 782–794, doi:10.1175/1520-0434(2003)018<0782:SHNPOC>2.0.CO;2.
• Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, doi:10.1002/qj.49712555417.
• Gremillion, M. S., and R. E. Orville, 1999: Thunderstorm characteristics of cloud-to-ground lightning at the Kennedy Space Center, Florida: A study of lightning initiation signatures as indicated by the WSR-88D. Wea. Forecasting, 14, 640–649, doi:10.1175/1520-0434(1999)014<0640:TCOCTG>2.0.CO;2.
• Hong, S.-Y., and H.-L. Pan, 1996: Nonlocal boundary layer vertical diffusion in a medium-range forecast model. Mon. Wea. Rev., 124, 2322–2339, doi:10.1175/1520-0493(1996)124<2322:NBLVDI>2.0.CO;2.
• Houston, A. L., and D. Niyogi, 2007: The sensitivity of convective initiation to the lapse rate of the active cloud-bearing layer. Mon. Wea. Rev., 135, 3013–3032, doi:10.1175/MWR3449.1.
• Hu, X.-M., J. W. Nielsen-Gammon, and F. Zhang, 2010: Evaluation of three planetary boundary layer schemes in the WRF model. J. Appl. Meteor. Climatol., 49, 1831–1843, doi:10.1175/2010JAMC2432.1.
• Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, doi:10.1029/2008JD009944.
• Janjić, Z. I., 1994: The step-mountain eta coordinate model: Further developments of the convection, viscous sublayer, and turbulence closure schemes. Mon. Wea. Rev., 122, 927–945, doi:10.1175/1520-0493(1994)122<0927:TSMECM>2.0.CO;2.
• Jirak, I., A. Clark, J. Correia, K. Knopfmeier, C. Melick, B. Twiest, M. Coniglio, and S. Weiss, 2015: Spring Forecasting Experiment 2015: Preliminary findings and results. NOAA/NSSL/Storm Prediction Center, 32 pp. [Available online at http://hwt.nssl.noaa.gov/Spring_2015/HWT_SFE_2015_Prelim_Findings_Final.pdf.]
• Jorgensen, D. P., and T. M. Weckwerth, 2003: Forcing and organization of convective systems. Radar and Atmospheric Science: A Collection of Essays in Honor of David Atlas, Meteor. Monogr., No. 52, Amer. Meteor. Soc., 75–103.
• Kain, J. S., S. J. Weiss, J. J. Levit, M. E. Baldwin, and D. R. Bright, 2006: Examination of convection-allowing configurations of the WRF model for the prediction of severe convective weather: The SPC/NSSL Spring Program 2004. Wea. Forecasting, 21, 167–181, doi:10.1175/WAF906.1.
• Kain, J. S., and Coauthors, 2008: Some practical considerations regarding horizontal resolution in the first generation of operational convection-allowing NWP. Wea. Forecasting, 23, 931–952, doi:10.1175/WAF2007106.1.
• Kain, J. S., and Coauthors, 2013: A feasibility study for probabilistic convection initiation forecasts based on explicit numerical guidance. Bull. Amer. Meteor. Soc., 94, 1213–1225, doi:10.1175/BAMS-D-11-00264.1.
• Kain, J. S., and Coauthors, 2017: Collaborative efforts between the U.S. and U.K. to advance prediction of high-impact weather. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-15-00199.1, in press.
• Lakshmanan, V., 2012: Automating the Analysis of Spatial Grids: A Practical Guide to Data Mining Geospatial Images for Human and Environmental Applications. Springer, 320 pp.
• Lakshmanan, V., and T. Smith, 2010: An objective method of evaluating and devising storm-tracking algorithms. Wea. Forecasting, 25, 701–709, doi:10.1175/2009WAF2222330.1.
• Lakshmanan, V., and T. W. Humphrey, 2014: A MapReduce technique to mosaic continental-scale weather radar data in real-time. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 7, 721–732, doi:10.1109/JSTARS.2013.2282040.
• Lakshmanan, V., T. Smith, K. Hondl, G. J. Stumpf, and A. Witt, 2006: A real-time, three-dimensional, rapidly updating, heterogeneous radar merger technique for reflectivity, velocity, and derived products. Wea. Forecasting, 21, 802–823, doi:10.1175/WAF942.1.
• Lakshmanan, V., T. Smith, G. J. Stumpf, and K. Hondl, 2007: The Warning Decision Support System–Integrated Information. Wea. Forecasting, 22, 596–612, doi:10.1175/WAF1009.1.
• Lakshmanan, V., K. Hondl, and R. Rabin, 2009: An efficient, general-purpose technique for identifying storm cells in geospatial images. J. Atmos. Oceanic Technol., 26, 523–537, doi:10.1175/2008JTECHA1153.1.
• Lakshmanan, V., C. Karstens, J. Krause, and L. Tang, 2014: Quality control of weather radar using polarimetric variables. J. Atmos. Oceanic Technol., 31, 1234–1249, doi:10.1175/JTECH-D-13-00073.1.
• Lee, B. D., R. D. Farley, and M. R. Hjelmfelt, 1991: A numerical case study of convection initiation along colliding convergence boundaries in northeast Colorado. J. Atmos. Sci., 48, 2350–2366, doi:10.1175/1520-0469(1991)048<2350:ANCSOC>2.0.CO;2.
• Lilly, D. K., 1990: Numerical prediction of thunderstorms—Has its time come? Quart. J. Roy. Meteor. Soc., 116, 779–798, doi:10.1002/qj.49711649402.
• Lock, N. A., and A. L. Houston, 2015: Spatiotemporal distribution of thunderstorm initiation in the US Great Plains from 2005 to 2007. Int. J. Climatol., 35, 4047–4056, doi:10.1002/joc.4261.
• Markowski, P., and C. Hannon, 2006: Multiple-Doppler radar observations of the evolution of vorticity extrema in a convective boundary layer. Mon. Wea. Rev., 134, 355–374, doi:10.1175/MWR3060.1.
• Markowski, P., and Y. Richardson, 2010: Mesoscale Meteorology in Midlatitudes. Wiley-Blackwell, 397 pp.
• Murphey, H. V., R. M. Wakimoto, C. Flamant, and D. E. Kingsmill, 2006: Dryline on 19 June 2002 during IHOP. Part I: Airborne Doppler and LEANDRE II analyses of the thin line structure and convection initiation. Mon. Wea. Rev., 134, 406–430, doi:10.1175/MWR3063.1.
• Nakanishi, M., and H. Niino, 2009: Development of an improved turbulence closure model for the atmospheric boundary layer. J. Meteor. Soc. Japan, 87, 895–912, doi:10.2151/jmsj.87.895.
• Pleim, J. E., 2007: A combined local and nonlocal closure model for the atmospheric boundary layer. Part I: Model description and testing. J. Appl. Meteor. Climatol., 46, 1383–1395, doi:10.1175/JAM2539.1.
• Roebber, P. J., 2009: Visualizing multiple measures of forecast quality. Wea. Forecasting, 24, 601–608, doi:10.1175/2008WAF2222159.1.
• Roebber, P. J., D. M. Schultz, B. A. Colle, and D. J. Stensrud, 2004: Toward improved prediction: High-resolution and ensemble modeling systems in operations. Wea. Forecasting, 19, 936–949, doi:10.1175/1520-0434(2004)019<0936:TIPHAE>2.0.CO;2.
• Romine, G. S., C. S. Schwartz, C. Snyder, J. L. Anderson, and M. L. Weisman, 2013: Model bias in a continuously cycled assimilation system and its influence on convection-permitting forecasts. Mon. Wea. Rev., 141, 1263–1284, doi:10.1175/MWR-D-12-00112.1.
• Romine, G. S., C. S. Schwartz, J. Berner, K. R. Fossell, C. Snyder, J. L. Anderson, and M. L. Weisman, 2014: Representing forecast error in a convection-permitting ensemble forecast system. Mon. Wea. Rev., 142, 4519–4541, doi:10.1175/MWR-D-14-00100.1.
• Romine, G. S., C. S. Schwartz, R. D. Torn, and M. L. Weisman, 2016: Impact of assimilating dropsonde observations from MPEX on ensemble forecasts of severe weather events. Mon. Wea. Rev., 144, 3799–3823, doi:10.1175/MWR-D-15-0407.1.
• Schumacher, R. S., 2015: Resolution dependence of initiation and upscale growth of deep convection in convection-allowing forecasts of the 31 May–1 June 2013 supercell and MCS. Mon. Wea. Rev., 143, 4331–4354, doi:10.1175/MWR-D-15-0179.1.
• Schwartz, C. S., G. S. Romine, M. L. Weisman, R. A. Sobash, K. R. Fossell, K. W. Manning, and S. B. Trier, 2015: A real-time convection-allowing ensemble prediction system initialized by mesoscale ensemble Kalman filter analyses. Wea. Forecasting, 30, 1158–1181, doi:10.1175/WAF-D-15-0013.1.
• Skamarock, W. C., and M. L. Weisman, 2009: The impact of positive-definite moisture transport on NWP precipitation forecasts. Mon. Wea. Rev., 137, 488–494, doi:10.1175/2008MWR2583.1.
• Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
• Skinner, P. S., L. J. Wicker, D. M. Wheatley, and K. H. Knopfmeier, 2016: Application of two spatial verification methods to ensemble forecasts of low-level rotation. Wea. Forecasting, 31, 713–735, doi:10.1175/WAF-D-15-0129.1.
• Stensrud, D. J., 2007: Parameterization Schemes: Keys to Understanding Numerical Weather Prediction Models. Cambridge University Press, 459 pp.
• Stratman, D. R., M. C. Coniglio, S. E. Koch, and M. Xue, 2013: Use of multiple verification methods to evaluate forecasts of convection from hot- and cold-start convection-allowing models. Wea. Forecasting, 28, 119–138, doi:10.1175/WAF-D-12-00022.1.
• Sukoriansky, S., B. Galperin, and V. Perov, 2005: Application of a new spectral theory of stably stratified turbulence to the atmospheric boundary layer over sea ice. Bound.-Layer Meteor., 117, 231–257, doi:10.1007/s10546-004-6848-4.
• Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, doi:10.1175/2008MWR2387.1.
• Tiedtke, M., 1989: A comprehensive mass flux scheme for cumulus parameterization in large-scale models. Mon. Wea. Rev., 117, 1779–1800, doi:10.1175/1520-0493(1989)117<1779:ACMFSF>2.0.CO;2.
• Torn, R. D., 2010: Performance of a mesoscale ensemble Kalman filter (EnKF) during the NOAA High-Resolution Hurricane test. Mon. Wea. Rev., 138, 4375–4392, doi:10.1175/2010MWR3361.1.
• Torn, R. D., and C. A. Davis, 2012: The influence of shallow convection on tropical cyclone track forecasts. Mon. Wea. Rev., 140, 2188–2197, doi:10.1175/MWR-D-11-00246.1.
• Torn, R. D., and G. S. Romine, 2015: Sensitivity of central Oklahoma convection forecasts to upstream potential vorticity anomalies during two strongly forced cases during MPEX. Mon. Wea. Rev., 143, 4064–4087, doi:10.1175/MWR-D-15-0085.1.
• Torn, R. D., G. J. Hakim, and C. Snyder, 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, doi:10.1175/MWR3187.1.
• Trapp, R. J., D. J. Stensrud, M. C. Coniglio, R. S. Schumacher, M. E. Baldwin, S. Waugh, and D. T. Conlee, 2016: Mobile radiosonde deployments during the Mesoscale Predictability Experiment (MPEX): Rapid and adaptive sampling of upscale convective feedbacks. Bull. Amer. Meteor. Soc., 97, 329–336, doi:10.1175/BAMS-D-14-00258.1.
• Trier, S. B., G. S. Romine, D. A. Ahijevych, R. J. Trapp, R. S. Schumacher, M. C. Coniglio, and D. J. Stensrud, 2015: Mesoscale thermodynamic influences on convection initiation near a surface dryline in a convection-permitting ensemble. Mon. Wea. Rev., 143, 3726–3753, doi:10.1175/MWR-D-15-0133.1.
• Van Klooster, S. L., and P. J. Roebber, 2009: Surface-based convective potential in the contiguous United States in a business-as-usual future climate. J. Climate, 22, 3317–3330, doi:10.1175/2009JCLI2697.1.
• Walters, D. N., and Coauthors, 2011: The Met Office Unified Model Global Atmosphere 3.0/3.1 and JULES Global Land 3.0/3.1 configurations. Geosci. Model Dev., 4, 919–941, doi:10.5194/gmd-4-919-2011.
• Weckwerth, T. M., and D. B. Parsons, 2006: A review of convection initiation and motivation for IHOP_2002. Mon. Wea. Rev., 134, 5–22, doi:10.1175/MWR3067.1.
• Weckwerth, T. M., J. W. Wilson, and R. M. Wakimoto, 1996: Thermodynamic variability within the convective boundary layer due to horizontal convective rolls. Mon. Wea. Rev., 124, 769–784, doi:10.1175/1520-0493(1996)124<0769:TVWTCB>2.0.CO;2.
• Weckwerth, T. M., H. V. Murphey, C. Flamant, J. Goldstein, and C. R. Pettet, 2008: An observational study of convection initiation on 12 June 2002 during IHOP_2002. Mon. Wea. Rev., 136, 2283–2304, doi:10.1175/2007MWR2128.1.
• Weisman, M. L., C. Davis, W. Wang, K. W. Manning, and J. B. Klemp, 2008: Experiences with 0–36-h explicit convective forecasts with the WRF-ARW model. Wea. Forecasting, 23, 407–437, doi:10.1175/2007WAF2007005.1.
• Weisman, M. L., and Coauthors, 2015: The Mesoscale Predictability Experiment (MPEX). Bull. Amer. Meteor. Soc., 96, 2127–2149, doi:10.1175/BAMS-D-13-00281.1.
• Weiss, S., and Coauthors, 2011: Experimental Forecast Program Spring Experiment 2011: Program overview and operations plan. NOAA/NSSL/Storm Prediction Center, 62 pp. [Available online at http://hwt.nssl.noaa.gov/Spring_2011/Spring_Experiment_2011_ops_plan_13May_v5.pdf.]
• Wilks, D. S., 1995: Statistical Methods in the Atmospheric Sciences: An Introduction. Academic Press, 500 pp.
• Wilks, D. S., 2010: Sampling distributions of the Brier score and Brier skill score under serial dependence. Quart. J. Roy. Meteor. Soc., 136, 2109–2118, doi:10.1002/qj.709.
• Wilson, J. W., and R. D. Roberts, 2006: Summary of convective storm initiation and evolution during IHOP: Observational and modeling perspective. Mon. Wea. Rev., 134, 23–47, doi:10.1175/MWR3069.1.
• Xue, M., and W. J. Martin, 2006a: A high-resolution modeling study of the 24 May 2002 dryline case during IHOP. Part I: Numerical simulation and general evolution of the dryline and convection. Mon. Wea. Rev., 134, 149–171, doi:10.1175/MWR3071.1.
• Xue, M., and W. J. Martin, 2006b: A high-resolution modeling study of the 24 May 2002 dryline case during IHOP. Part II: Horizontal convective rolls and convective initiation. Mon. Wea. Rev., 134, 172–191, doi:10.1175/MWR3072.1.
• Zhang, C., Y. Wang, and K. Hamilton, 2011: Improved representation of boundary layer clouds over the southeast Pacific in ARW-WRF using a modified Tiedtke cumulus parameterization scheme. Mon. Wea. Rev., 139, 3489–3513, doi:10.1175/MWR-D-10-05091.1.
• Ziegler, C. L., T. J. Lee, and R. A. Pielke, 1997: Convective initiation at the dryline: A modeling study. Mon. Wea. Rev., 125, 1001–1026, doi:10.1175/1520-0493(1997)125<1001:CIATDA>2.0.CO;2.