Results from a Pseudo-Real-Time Next-Generation 1-km Warn-on-Forecast System Prototype

Christopher A. Kerr,a,b Brian C. Matilla,a,b Yaping Wang,a,b Derek R. Stratman,a,b Thomas A. Jones,a,b,c and Nusrat Yussouf a,b,c

a Cooperative Institute for Severe and High-Impact Weather Research and Operations, University of Oklahoma, Norman, Oklahoma
b NOAA/OAR National Severe Storms Laboratory, Norman, Oklahoma
c School of Meteorology, University of Oklahoma, Norman, Oklahoma

ORCID (C. A. Kerr): https://orcid.org/0000-0002-1237-7740

Abstract

Since 2017, the Warn-on-Forecast System (WoFS) has been tested and evaluated during the Hazardous Weather Testbed Spring Forecasting Experiment (SFE) and summer convective seasons. The system has shown promise in predicting the evolution of individual thunderstorms with high temporal and spatial specificity. However, this baseline version of the WoFS has a 3-km horizontal grid spacing and cannot resolve some convective processes. Efforts are under way to develop a WoFS prototype at a 1-km grid spacing (WoFS-1km) with the goal of improving forecast accuracy. This requires extensive changes to data assimilation specifications and observation processing parameters. A preliminary version of WoFS-1km nested within WoFS at 3 km (WoFS-3km) was developed, tested, and run during the 2021 SFE in pseudo–real time. Ten case studies were successfully completed and provided simulations of a variety of convective modes. The reflectivity and rotation storm objects from WoFS-1km are verified against both WoFS-3km and 1-km forecasts initialized from downscaled WoFS-3km analyses using both neighborhood- and object-based techniques. Neighborhood-based verification suggests WoFS-1km improves reflectivity bias but not spatial placement. The WoFS-1km object-based reflectivity forecast accuracy is higher in most cases, leading to a net improvement. Both the WoFS-1km and downscaled forecasts have ideal reflectivity object frequency biases while the WoFS-3km overpredicts the number of reflectivity objects. The rotation object verification is ambiguous as many cases are negatively impacted by 1-km data assimilation. This initial evaluation of a WoFS-1km prototype is a solid foundation for further development and future testing.

Significance Statement

This study investigates the impacts of performing data assimilation directly on a 1-km WoFS model grid. Most previous studies have only initialized 1-km WoFS forecasts from coarser analyses. The results demonstrate some improvements to reflectivity forecasts through data assimilation on a 1-km model grid, although the finer-resolution data assimilation did not improve rotation forecasts.

© 2023 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Christopher A. Kerr, christopher.kerr@noaa.gov


1. Introduction

The Warn-on-Forecast System (WoFS) provides short-term ensemble forecasts of individual thunderstorms and their associated hazards, including tornadoes, large hail, damaging winds, and flash flooding (Stensrud et al. 2009, 2013). The current baseline WoFS configuration (e.g., Wheatley et al. 2015; Jones et al. 2016; Skinner et al. 2018; Lawson et al. 2018; Yussouf et al. 2020) is convection permitting at 3-km horizontal grid spacing and generates 6- or 3-h forecasts initialized at the top and bottom of the hour, respectively. WoFS will support NOAA's Forecasting a Continuum of Environmental Threats (FACETs; Rothfusz et al. 2018) paradigm of providing a continuous flow of probabilistic hazard information across the watch-to-warning temporal and spatial scales (hours to minutes; mesoscale to storm scale). The baseline WoFS has been tested and evaluated since 2017 during the Hazardous Weather Testbed Spring Forecasting Experiment (SFE) and during summer convective seasons (Clark et al. 2020, 2021). WoFS has improved forecaster situational awareness in these experimental settings (Wilson et al. 2019; Gallo et al. 2022). However, the baseline WoFS at 3-km grid spacing cannot resolve many convective processes necessary to accurately depict storm-scale proxies of convection-related hazards (e.g., Bryan et al. 2003).

Previous studies have investigated the impacts of decreased grid spacing in convection-allowing models, including WoFS. Schwartz et al. (2017) and Schwartz and Sobash (2019) found 1-km precipitation forecasts to be more skillful than 3-km precipitation forecasts, mainly owing to smaller displacement errors of precipitating clouds. Sobash et al. (2019) determined that 1-km forecasts of surrogate severe probabilities were more skillful than 3-km forecasts when verified against local storm reports. For individual convective storms, specifically supercells, finer grid spacing may better capture the cycling of low-level mesocyclones (Adlerman and Droegemeier 2002; Potvin and Flora 2015; Britt et al. 2020). WoFS forecasts downscaled from 3 km (to both 1.5- and 1-km grid spacing) are also more skillful than 3-km forecasts for rotation features (i.e., mesocyclones; Lawson et al. 2021; Miller et al. 2022).

While these previous studies have evaluated downscaled WoFS forecasts (Britt et al. 2020; Lawson et al. 2021; Miller et al. 2022), performing data assimilation (DA) directly on a 1-km model grid is an appropriate next step. Wang et al. (2022) presented two convective and two tropical cyclone cases where DA was performed directly on a 1-km grid. Their main findings included improved heavy precipitation forecasts, while rotation object results were mixed. This study expands upon that work with a larger sample of severe convection cases. During the 2021 SFE, this 1-km WoFS prototype was run in pseudo–real time on select cases to assess its performance; the results of those pseudo-real-time case studies are summarized herein. The next section outlines the model configuration, DA, verification, and case studies. Composite reflectivity results are presented in the third section, followed by rotation feature verification in the fourth section. The conclusions are presented in the final section.

2. Methods

a. Model configuration

WoFS is an ensemble DA and forecast system initialized using the High-Resolution Rapid Refresh Ensemble (HRRRE; Dowell et al. 2016) and consists of 36 WRF-ARW v3.9 (Skamarock et al. 2008) ensemble members with varying physical parameterization schemes (Table 1; Wheatley et al. 2015) and initial conditions. An on-demand 900 km × 900 km domain with 3-km horizontal grid spacing (WoFS-3km) was run during the 2021 SFE. A 402 km × 402 km one-way nested domain with 1-km horizontal grid spacing (WoFS-1km) was placed within the daily WoFS-3km domain at the location where the most consequential convection was anticipated. The parent 3-km grid and nested 1-km grid use 15- and 5-s model time steps, respectively, and share an identical 50-level vertical grid. Similar to the baseline WoFS, this WoFS variant was initialized daily at 1500 UTC with HRRRE initial conditions for both domains and boundary conditions for the parent domain. However, unlike the baseline WoFS traditionally run during the SFE, which uses the RUC land surface model (LSM; Smirnova et al. 2016), both the parent and nested domains used the Noah-MP LSM (Niu et al. 2011; Yang et al. 2011) because of its improved numerical stability and lower rate of vertical Courant–Friedrichs–Lewy condition errors (Kerr et al. 2021). All WoFS ensemble members use the NSSL two-moment microphysics scheme (Mansell et al. 2010), given the desirable performance of multimoment schemes in supercell simulations (Dawson et al. 2010, 2014).
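
As a quick sanity check on the time-step choice, the 15-s/3-km and 5-s/1-km pairs keep the horizontal advective Courant number unchanged. The sketch below illustrates the arithmetic with an assumed 50 m s⁻¹ advecting wind (a round number chosen for illustration, not a value from this study).

```python
# Horizontal advective Courant number C = u * dt / dx for the two WoFS grids.
# The 50 m/s wind speed is an assumed illustrative value.
for dx_m, dt_s in [(3000.0, 15.0), (1000.0, 5.0)]:
    wind = 50.0                       # m/s, assumed strong flow aloft
    courant = wind * dt_s / dx_m
    print(f"dx = {dx_m / 1000:.0f} km, dt = {dt_s:.0f} s -> C = {courant:.2f}")
```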

Table 1. Physics options applied to 36 HRRRE members. Planetary boundary layer schemes used are the Yonsei University (YSU; Hong et al. 2006), Mellor–Yamada–Janjić (MYJ; Janjić 2002), and Mellor–Yamada–Nakanishi–Niino (MYNN; Nakanishi and Niino 2009). Shortwave (SW) radiation schemes are Dudhia (1989) and the Rapid Radiative Transfer Model–Global (RRTMG; Iacono et al. 2008) while longwave (LW) radiation schemes are the Rapid Radiative Transfer Model (RRTM; Mlawer et al. 1997) and the RRTMG.

b. Data assimilation

The baseline 3-km WoFS performs 15-min DA cycling of WSR-88D radar reflectivity and radial velocity (Vr), GOES-R satellite (Minnis et al. 2011; Jones and Stensrud 2015; Jones et al. 2016), and surface observations using an ensemble square root filter (Whitaker and Hamill 2002) via the Gridpoint Statistical Interpolation software (Kleist et al. 2009). The WoFS variant in this study assimilates the same observations on the 3-km parent domain. However, radar and satellite observations assimilated on the 1-km nested domain are preprocessed differently (described below). Assimilation on the two grids occurs simultaneously in parallel (Fig. 1). As in the baseline WoFS, observations are assimilated in 15-min increments on both domains.
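
The parallel cycling in Fig. 1 can be expressed as a minimal Python driver, which may help clarify the workflow; the helper functions, domain names, and scheduling details below are placeholders, not the operational WoFS scripts.

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta

def assimilate(domain, valid_time):
    """Placeholder for the EnSRF update of one domain at one analysis time."""
    print(f"{valid_time:%H%M} UTC: assimilating observations on the {domain} grid")

def advance(domain, valid_time, minutes=15):
    """Placeholder for integrating one domain's ensemble to the next analysis time."""
    print(f"{valid_time:%H%M} UTC: advancing the {domain} ensemble {minutes} min")

cycle = datetime(2021, 5, 19, 15, 0)             # DA cycling begins at 1500 UTC
first_forecast = datetime(2021, 5, 19, 17, 0)    # forecasts launch hourly from 1700 UTC

while cycle < first_forecast:
    # Both grids are updated at the same analysis time, in parallel (Fig. 1).
    with ThreadPoolExecutor(max_workers=2) as pool:
        list(pool.map(lambda d: assimilate(d, cycle), ["WoFS-3km", "WoFS-1km"]))
        list(pool.map(lambda d: advance(d, cycle), ["WoFS-3km", "WoFS-1km"]))
    cycle += timedelta(minutes=15)
```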

Fig. 1. Workflow for parallel DA in WoFS-3km and WoFS-1km where DA cycling begins at 1500 UTC in each case (adapted from Wang et al. 2022).

Observation and assimilation specifications are identical to those of Wang et al. (2022). The radar observations (both reflectivity and Vr) assimilated in WoFS-3km are "superobbed" to a 5-km grid (Cressman 1959; Majcen et al. 2008) at 10 vertical levels. Reflectivity observations less than 15 dBZ are set to 0 dBZ (clear-air reflectivity) and thinned to a 10-km grid (at only two vertical levels), and Vr observations collocated with reflectivity < 20 dBZ are removed. The radar superobservations assimilated in WoFS-1km are on a 3-km grid with 10 vertical levels, and clear-air reflectivity superobservations are on a 6-km grid (four vertical levels). Unlike assimilation on the parent domain, Vr superobservations where reflectivity < 20 dBZ are assimilated; these "clear-air Vr" superobservations are on a 9-km grid because a finer grid spacing provided limited benefit (not shown).
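
A minimal sketch of the superobbing step may clarify how raw observations are reduced to the coarser analysis grids. A simple box average is used here for brevity instead of the Cressman-style objective analysis actually employed, and all observation values and locations are synthetic.

```python
import numpy as np

def superob(x, y, values, grid_spacing):
    """Average point obs (x, y in km) into cells of a grid_spacing-km mesh."""
    ix = np.floor(x / grid_spacing).astype(int)
    iy = np.floor(y / grid_spacing).astype(int)
    sums, counts = {}, {}
    for i, j, val in zip(ix, iy, values):
        sums[(i, j)] = sums.get((i, j), 0.0) + val
        counts[(i, j)] = counts.get((i, j), 0) + 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)  # obs locations (km)
dbz = rng.uniform(-10, 60, 5000)                             # synthetic reflectivity

# Values below 15 dBZ are treated as clear air, reset to 0 dBZ, and thinned onto
# a coarser grid than the in-storm superobs (6 km vs 3 km on the 1-km nest).
clear_air = dbz < 15.0
dbz = np.where(clear_air, 0.0, dbz)

storm_superobs = superob(x[~clear_air], y[~clear_air], dbz[~clear_air], 3.0)
clear_superobs = superob(x[clear_air], y[clear_air], dbz[clear_air], 6.0)
print(len(storm_superobs), "storm superobs;", len(clear_superobs), "clear-air superobs")
```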

Satellite observations from GOES-16 are assimilated on both domains, including cloud water path (CWP; Jones et al. 2016) and clear-sky water vapor band radiances (Jones et al. 2018). For WoFS-3km, CWP observations are reanalyzed to a 15-km grid at dry points; in WoFS-1km, CWP observations are reanalyzed to a 9-km grid at dry points. Brightness temperature (BT) observations from the 6.2- and 7.3-μm channels (BT62 and BT73, respectively) are reanalyzed to the same grids as the CWP observations for WoFS-3km (15 km) and WoFS-1km (9 km). Further DA details are listed in Table 2.

Table 2. Observation types and their observation errors, horizontal localization radii (H-Local), and vertical localization radii (V-Local, in normalized pressure space) for WoFS-3km and WoFS-1km (Wheatley et al. 2015; Jones et al. 2016; Wang et al. 2022). Cloud water path observation error has a range due to nonlinearity (Jones et al. 2016). Surface observations have varying H-Local based on the observation platform (Mesonet, etc.).

The Gaspari and Cohn (1999) localization function was applied to all observations assimilated on both domains (Table 2). Additive noise was used to spin up convection in WoFS-3km only (Dowell and Wicker 2009; Sobash and Wicker 2015). Perturbations of 0.5 standard deviations were added to the temperature, dewpoint temperature, and horizontal wind fields at points where observed reflectivity exceeded 35 dBZ, with length scales of 9 and 3 km in the horizontal and vertical, respectively. The use of additive noise on the finer grid (WoFS-1km) led to more spurious convection without improving storm spinup (not shown). Adaptive prior inflation was used to maintain ensemble spread on both domains (Anderson and Collins 2007).
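
The additive-noise procedure can be sketched as follows. Only the 0.5-standard-deviation amplitude, the 35-dBZ mask, and the 9-km/3-km length scales are taken from the text; the grid dimensions, field values, and the use of a Gaussian filter to impose those length scales are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

dx, dz = 3.0, 0.5                  # assumed horizontal (km) and vertical (km) spacing
nz, ny, nx = 50, 100, 100          # 50 vertical levels as in the text; nx, ny illustrative
rng = np.random.default_rng(42)

obs_refl = rng.uniform(0, 60, (ny, nx))        # synthetic observed composite reflectivity
mask = obs_refl > 35.0                          # perturb only where storms are observed

def additive_noise(std_frac=0.5, l_h=9.0, l_z=3.0):
    """Smoothed Gaussian noise with ~9-km/3-km length scales and 0.5-sigma amplitude."""
    raw = rng.standard_normal((nz, ny, nx))
    smooth = gaussian_filter(raw, sigma=(l_z / dz, l_h / dx, l_h / dx))
    smooth *= std_frac / smooth.std()           # rescale to 0.5 standard deviations
    return np.where(mask[None, :, :], smooth, 0.0)

temperature = 290.0 + rng.standard_normal((nz, ny, nx))
temperature += additive_noise()                 # repeat for dewpoint and horizontal winds
```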

c. 2021 pseudo-real-time runs and forecast experiments

The prototype was run in pseudo–real time during the 2021 SFE for 10 cases spanning all convective hazards associated with various convective modes (Table 3). The 3-h ensemble forecasts were initialized hourly beginning at 1700 UTC to allow for sufficient DA cycling (i.e., 2 h of cycling from 1500 UTC). The last forecast in some cases was initialized at 0000 UTC since the feature(s) of interest had exited the WoFS-1km domain. Computational constraints precluded WoFS-1km real-time runs during the 2021 SFE (Table 4). However, forecast output was available the day following each event.

Table 3. Ten case studies from the 2021 SFE when WoFS-1km was run. Note that "MCS" stands for mesoscale convective system.

Table 4. Computational resources used and runtimes for components of the baseline WoFS and the combined WoFS-3km and WoFS-1km. Core hours are the number of processors used multiplied by the runtime.

Three experiments are used to assess the impact of DA performed directly on the 1-km grid. The 1-km forecasts for each case initialized from 1-km DA analyses are hereafter 1KM_DA (i.e., WoFS-1km). The control, 1KM_DOWN, consists of 1-km forecasts initialized from downscaled 3-km DA analyses, as in Wang et al. (2022). Last, the WoFS-3km forecasts collocated with the 1-km nest are hereafter 3KM.

d. Verification

Composite reflectivity (hereafter DZ) and 2–5-km updraft helicity (hereafter UH25) forecasts are first verified using a neighborhood-based approach against gridded Multi-Radar Multi-Sensor (MRMS; Smith et al. 2016) composite reflectivity and 2–5-km azimuthal wind shear (AWS) data, respectively. Ensemble fractions skill scores (eFSS; Roberts and Lean 2008; Duc et al. 2013) are computed for various percentile intensity thresholds and neighborhood radii, with scores ranging from 0 (no skill) to 1 (perfect). Percentile thresholds are computed via two methods. In the first method, percentile thresholds are calculated for each individual case and forecast output time, which removes all bias from the computation. In the second method, percentile thresholds are calculated from an aggregation of all cases and forecast output times, which removes only climatological bias while individual-case bias remains. For 3KM, only the subset of the domain overlapping with the 1-km nest is included in all computations.
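
A compact sketch of the fractions skill score calculation with a percentile threshold is given below. Averaging the member fraction fields to form the ensemble score is a simplification of the eFSS formulation of Duc et al. (2013), and the input fields are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions(field, threshold, radius_px):
    """Neighborhood fraction of grid points at or above the threshold."""
    binary = (field >= threshold).astype(float)
    return uniform_filter(binary, size=2 * radius_px + 1, mode="constant")

def efss(members, obs, percentile=99.0, radius_px=5):
    # Separate forecast and observed thresholds remove intensity bias; computing
    # them per case/time (vs aggregated over all cases) controls how much bias remains.
    f_thresh = np.percentile(members, percentile)
    o_thresh = np.percentile(obs, percentile)
    f_frac = np.mean([fractions(m, f_thresh, radius_px) for m in members], axis=0)
    o_frac = fractions(obs, o_thresh, radius_px)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(1)
members = rng.gamma(2.0, 10.0, (18, 402, 402))   # synthetic DZ ensemble (dBZ)
obs = rng.gamma(2.0, 10.0, (402, 402))           # synthetic MRMS DZ
print(round(efss(members, obs), 3))
```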

Object-based surrogates for individual thunderstorms are created from DZ and UH25 as in several recent WoFS studies (Skinner et al. 2018; Jones et al. 2018; Flora et al. 2019; Kerr et al. 2021; Miller et al. 2022). This is done in 5-min increments for ensemble forecast output over the 3-h WoFS forecasts and for the gridded MRMS data. Identifying objects within the forecast output and MRMS data requires object intensity thresholds, which are based on the 99th percentile value for DZ and the 99.9th percentile value for UH25/MRMS AWS over the 10 events, meaning individual-case bias remains (Skinner et al. 2018). As in the neighborhood-based verification, only the WoFS-3km area overlapping with the 1-km nest is considered in 3KM to prevent biased thresholds and subsequent object counts. Quality control is applied to the various data, requiring that objects have a minimum area of 90 and 108 km² for rotation and DZ objects, respectively. Objects whose boundaries are separated by <10 km are merged. Forecast and observed objects are then matched based on the criteria outlined by Skinner et al. (2018), where objects are matched in both space (<32 km) and time (<20 min). The object matching allows various statistical metrics to be computed, as matched forecast objects are hits, unmatched forecast objects are false alarms, and unmatched MRMS objects are misses (Fig. 2). Contingency table metrics of probability of detection (POD), false alarm ratio (FAR), frequency bias, and critical success index (CSI) are computed.
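
The core of the object identification, matching, and contingency-table step can be sketched as follows. Time matching, object merging, and ensemble handling are omitted for brevity, and all names and fields are illustrative rather than the actual WoFS verification code.

```python
import numpy as np
from scipy import ndimage

DX = 1.0          # grid spacing (km), as on the 1-km nest
MIN_AREA = 108.0  # km^2, minimum DZ object area
MAX_DIST = 32.0   # km, maximum centroid separation for a spatial match

def find_objects(field, percentile=99.0):
    """Label contiguous regions above the percentile threshold; return centroids (km)."""
    labels, n = ndimage.label(field >= np.percentile(field, percentile))
    centroids = []
    for k in range(1, n + 1):
        if np.sum(labels == k) * DX ** 2 >= MIN_AREA:
            centroids.append(ndimage.center_of_mass(labels == k))
    return [np.array(c) * DX for c in centroids]

def contingency(fcst_objs, obs_objs):
    """Greedy nearest-centroid matching followed by POD/FAR/CSI/frequency bias."""
    hits, unmatched_obs = 0, list(obs_objs)
    for f in fcst_objs:
        if unmatched_obs:
            dists = [np.hypot(*(f - o)) for o in unmatched_obs]
            if min(dists) <= MAX_DIST:
                hits += 1
                unmatched_obs.pop(int(np.argmin(dists)))
    misses, false_alarms = len(unmatched_obs), len(fcst_objs) - hits
    pod = hits / max(hits + misses, 1)
    far = false_alarms / max(hits + false_alarms, 1)
    csi = hits / max(hits + misses + false_alarms, 1)
    bias = (hits + false_alarms) / max(hits + misses, 1)
    return pod, far, csi, bias

rng = np.random.default_rng(2)
fcst = ndimage.gaussian_filter(rng.gamma(2.0, 10.0, (402, 402)), 8)  # synthetic DZ
obs = ndimage.gaussian_filter(rng.gamma(2.0, 10.0, (402, 402)), 8)   # synthetic MRMS DZ
print(contingency(find_objects(fcst), find_objects(obs)))
```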

Fig. 2. Illustration of forecast object matching and verification where (a) forecast objects and (d) observed objects are identified after (b),(e) quality control and (c) combined. Forecast and observed objects may be matched (hits), unmatched forecast objects are false alarms, and unmatched observed objects are misses. (f) The contingency table metrics are then calculated. Adapted from Skinner et al. (2018).

Statistical significance of metric differences between forecast sets, for both the neighborhood- and object-based methods, is determined following the resampling method of Hamill (1999). The metrics for two compared forecast sets are randomly resampled 1000 times, creating null distributions of score differences. If a score difference exceeds the 97.5th percentile or falls below the 2.5th percentile of the null distribution, the difference is statistically significant at the 95% confidence level. In this study, the statistical significance of score differences between 1KM_DA and 1KM_DOWN is computed. One limitation of this significance testing method is the assumption that forecasts are uncorrelated, which is not true for adjacent forecasts within a given case.
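
A minimal version of the resampling test is sketched below. The per-forecast scores are synthetic, and the pairing and swapping details of the operational implementation may differ from this simplified form.

```python
import numpy as np

def resample_test(scores_a, scores_b, n_resamples=1000, seed=0):
    """Hamill (1999)-style test: build a null distribution of score differences
    by randomly swapping which experiment each paired score is assigned to."""
    rng = np.random.default_rng(seed)
    actual_diff = np.mean(scores_a) - np.mean(scores_b)
    null_diffs = np.empty(n_resamples)
    for i in range(n_resamples):
        swap = rng.random(scores_a.size) < 0.5
        a = np.where(swap, scores_b, scores_a)
        b = np.where(swap, scores_a, scores_b)
        null_diffs[i] = np.mean(a) - np.mean(b)
    lo, hi = np.percentile(null_diffs, [2.5, 97.5])
    return actual_diff, bool(actual_diff < lo or actual_diff > hi)

rng = np.random.default_rng(3)
csi_1km_da = rng.normal(0.45, 0.05, 30)      # synthetic per-forecast CSI values
csi_1km_down = rng.normal(0.40, 0.05, 30)
diff, significant = resample_test(csi_1km_da, csi_1km_down)
print(f"difference = {diff:.3f}, significant at 95%: {significant}")
```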

3. Composite reflectivity verification

Previous studies have found that storm motion is more accurately depicted as grid spacing decreases, particularly in MCS cases (VandenBerg et al. 2014). An example of improved storm motion is the 18 May Texas case, in which an MCS impacted the Houston metropolitan area (Fig. 3). In this example, decreasing the grid spacing alone does not correct the storm motion bias. The 3KM and 1KM_DOWN forecasts place the leading edge ∼50 km farther west than the observed location, particularly at 2–3-h lead times, compared to the corresponding 1KM_DA forecast. The better placement of the leading edge in 1KM_DA is important since the system impacts a densely populated area at the 3-h forecast valid time. A similar trend is apparent in the 4 May MCS case, where 1KM_DA depicts damaging wind gusts in better spatial agreement with subsequent local storm reports (not shown) than 3KM and 1KM_DOWN.

Fig. 3. DZ (dBZ) forecast paintball (≥40 dBZ) plots (each color represents an ensemble member) for each experiment and MRMS DZ at 1-, 2-, and 3-h lead times initialized at 0100 UTC 19 May 2021. The black line indicates the observed (MRMS) location of the MCS leading edge.

a. Neighborhood-based verification

Various DZ intensity percentile thresholds are computed using all cases for the forecast and MRMS data (Table 5). These percentile thresholds are used to calculate eFSS that retains individual-case bias (Fig. 4). The DZ eFSS with individual-case bias is generally higher than the unbiased eFSS. While there are negligible differences among the experiments in unbiased eFSS computed using percentiles for each forecast output time, the inclusion of individual-case bias reveals significant differences, mainly for 1KM_DA at the highest percentile. This indicates that 1KM_DA and 1KM_DOWN have similar spatial placement while 1KM_DA has improved bias. The 1KM_DA eFSS is larger than that of 3KM and 1KM_DOWN for most of the forecast period at the 99th percentile for all neighborhood radii. This is consistent with Wang et al. (2022), who found differences between 1KM_DA and 1KM_DOWN at higher percentiles, and Lawson et al. (2021), who determined that higher reflectivity thresholds yielded larger score differences between 3-km and downscaled forecasts.

Fig. 4. DZ eFSS with (solid lines) and without (dashed lines) individual-case bias for various percentiles (in rows) and neighborhood radii (in columns) with forecast lead time aggregated over all cases, all forecasts in 5-min intervals. Statistical significance at the 95% confidence interval of differences between 1KM_DA and 1KM_DOWN is illustrated by circles (biased) and triangles (unbiased) in 10-min increments.

Table 5. DZ percentile thresholds (dBZ) for each forecast experiment and MRMS aggregated over all cases, forecast output times, and ensemble members (forecasts only).

b. Object-based verification

Object-based metrics outlined in section 2d are computed for DZ objects using the 99th percentile aggregated over all cases and 3-h forecasts (Table 5) for 3KM, 1KM_DOWN, and 1KM_DA (Fig. 5). The POD in 3KM and 1KM_DOWN decreases with increasing lead time, as in previous studies (Skinner et al. 2018; Britt et al. 2020; Miller et al. 2022); however, 1KM_DA has a relatively consistent POD across all lead times (Fig. 5a). While 3KM has the highest POD at lead times less than 90 min, 1KM_DA and 3KM have comparable POD beyond 90 min. There is clearer separation in FAR among the three experiments at all lead times, and all three experiments show the expected increase in FAR with lead time (Fig. 5b). The higher POD of 3KM comes at the cost of higher FAR at lead times less than 90 min, but the 1KM_DA FAR is 0.2–0.3 lower when the lead time exceeds 90 min. The lower POD and FAR in 1KM_DOWN compared to 3KM translate to similar CSI throughout the forecast period (Fig. 5c). This is consistent with Miller et al. (2022), who found downscaled DZ forecasts had similar CSI; however, they found POD and FAR to also be similar. The comparable POD between 1KM_DA and 3KM in the latter half of the forecast period, combined with the lower FAR in 1KM_DA, yields higher CSI for 1KM_DA when lead times exceed ∼45 min.

Fig. 5. Object-based DZ (a) POD, (b) FAR, (c) CSI, and (d) frequency bias with forecast lead time aggregated over all cases, all forecasts for 3KM (blue), 1KM_DOWN (red), and 1KM_DA (green). Object identification is based on the 99th percentile DZ value. Thin lines indicate individual ensemble members while the corresponding bold line is the mean of the member statistics. The shading around the 1KM_DA bold lines represents the 95% confidence interval that 1KM_DA is different than 1KM_DOWN (Hamill 1999).

The frequency biases for each experiment reveal some sources of the differences in POD, FAR, and CSI among 3KM, 1KM_DOWN, and 1KM_DA (Fig. 5d). The 3KM bias throughout the forecast period indicates there are ∼50% more forecast objects than observed objects. This surplus of forecast objects increases both POD and FAR compared to the respective metrics of 1KM_DOWN and 1KM_DA, which have smaller and often more desirable (i.e., near 1.0 beyond ∼90 min) frequency biases. However, the low 1KM_DA frequency bias before 45 min induces a relatively low POD during this time span, indicative of more convection suppression in 1KM_DA, possibly due to a higher density of clear-air reflectivity observations. Given that 1KM_DOWN and 1KM_DA have similar biases after 45 min, their differences in POD, FAR, and CSI are driven by forecast object placement. This suggests 1KM_DOWN has more objects with larger displacement errors than 1KM_DA, limiting the number of forecast objects matched with observed objects. The individual ensemble members' metrics also vary more in 1KM_DOWN than in either 3KM or 1KM_DA. It is possible the downscaled member forecasts are noisier since their analyses and forecasts have different grid spacings.

While Fig. 5 presents aggregated metrics, each case does not have an equal impact since the numbers of forecast and observed objects vary. Performance diagrams allow each case to be displayed individually (Fig. 6; Roebber 2009). At both 60 and 120 min into the forecast, 1KM_DA has a higher DZ CSI than 1KM_DOWN for eight cases, including some where the difference in CSI is >0.2 (e.g., 18 May and 27 May). One notable exception is 17 May, where the 60-min forecast CSI is 0.1 lower for 1KM_DA. Overall, these results demonstrate that the aggregated statistics in Fig. 5 are not dominated by a minority of cases.
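
For readers unfamiliar with the format, the performance-diagram background of Fig. 6 (CSI contours and frequency-bias rays in success ratio versus POD space) can be reproduced with a few lines of matplotlib; the plotted points below are placeholders, not values from this study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Success ratio SR = 1 - FAR on the x axis, POD on the y axis (Roebber 2009):
# CSI = 1 / (1/SR + 1/POD - 1), frequency bias = POD / SR.
sr = np.linspace(0.01, 1, 200)
pod = np.linspace(0.01, 1, 200)
SR, POD = np.meshgrid(sr, pod)
CSI = 1.0 / (1.0 / SR + 1.0 / POD - 1.0)

fig, ax = plt.subplots(figsize=(5, 5))
cs = ax.contour(SR, POD, CSI, levels=np.arange(0.1, 1.0, 0.1), colors="gray")
ax.clabel(cs, fmt="%.1f")
for b in [0.5, 1.0, 1.5, 2.0]:                      # frequency-bias rays
    ax.plot(sr, np.clip(b * sr, 0, 1), ls="--", color="lightgray")

ax.scatter([0.55], [0.50], color="red", label="1KM_DOWN (placeholder)")
ax.scatter([0.65], [0.55], marker="^", color="green", label="1KM_DA (placeholder)")
ax.set(xlabel="Success ratio (1 - FAR)", ylabel="POD", xlim=(0, 1), ylim=(0, 1))
ax.legend(loc="lower right")
plt.show()
```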

Fig. 6. Object-based DZ performance diagrams displaying 1KM_DA (green triangles) and 1KM_DOWN (red circles) for each case at 60- and 120-min forecast lead times. Object identification is based on the 99th percentile DZ value. Arrows signal the movement from 1KM_DOWN to 1KM_DA for each case. The background diagonal lines represent frequency bias while curved lines denote CSI.

The mean object area over all cases is over-depicted and largest in 3KM owing to its coarser grid spacing, and the mean area increases throughout the 3-h forecast window, suggesting that convection tends to grow upscale or that individual storms increase in size as the forecasts integrate forward (Fig. 7a). This trend is less apparent in 1KM_DA and 1KM_DOWN. 1KM_DOWN under-depicts the mean object area while 1KM_DA is closest to the observed area. The observed objects do not tend to increase in areal coverage with increasing lead time. These differences in mean object area are mostly consistent from case to case, with 1KM_DOWN under-depicting object size in most cases (Fig. 7b).

Fig. 7. (a) Mean DZ object area with lead time averaged over all cases, all forecasts, all ensemble members for 3KM (blue), 1KM_DOWN (red), 1KM_DA (green), and MRMS (gray). (b) Mean DZ object area for each case over all forecasts and all lead times with same color designations.

4. Storm rotation verification

The 3 May hail event in Oklahoma and Texas is predicted with greater confidence by the ensembles using 1-km grid spacing (Fig. 8). Both 1KM_DOWN and 1KM_DA indicate higher confidence than 3KM in predicting strong midlevel rotation, a proxy for hail potential, in both southern Oklahoma and northern Texas. Later-initialized 3-km forecasts indicate higher confidence in these events (not shown); however, the 1-km forecasts remain more confident at longer lead times (2–3 h) in these later forecasts. In this example, 1KM_DA has no apparent advantage over 1KM_DOWN.
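
For reference, updraft helicity is commonly computed as the layer integral of the product of updraft speed and vertical vorticity (a standard definition, not restated in this paper); because vertical vorticity is better resolved at 1 km than at 3 km, UH25 magnitudes are systematically larger on the 1-km grid, which is a likely reason the exceedance thresholds in Fig. 8 differ (60 versus 300 m² s⁻²) between the 3- and 1-km ensembles:

$$\mathrm{UH}_{2\text{--}5} = \int_{2\,\mathrm{km}}^{5\,\mathrm{km}} w\,\zeta\,dz, \qquad \zeta = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}.$$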

Fig. 8. The 0–3-h forecasts of UH25 exceeding 60 m² s⁻² within 27 km of a point for 3KM and 300 m² s⁻² for 1KM_DOWN and 1KM_DA, initialized at 2000 UTC 3 May 2021. Hail reports are denoted by green circles.

Following the procedure for DZ, percentiles for the forecast and observed storm rotation fields are calculated from the 10 cases in aggregate (Table 6). Unlike the DZ eFSS, the unbiased eFSS is generally higher than the eFSS using aggregated percentiles (Fig. 9). There is no consistent difference among the experiment sets across percentiles other than a slight increase in the 1KM_DA unbiased eFSS around the 2-h lead time.

Fig. 9. As in Fig. 4, but for UH25.

Table 6. Forecast experiment UH25 and MRMS AWS percentiles aggregated over all cases, forecast output times, and ensemble members (UH25 only).

The object-based contingency-table metrics for midlevel rotation objects are more inconsistent among cases than for DZ objects (Fig. 10). This inconsistency occurs in both supercell and non-supercell cases, with CSI increasing, decreasing, or remaining unchanged between 1KM_DOWN and 1KM_DA. The case-to-case inconsistency results in indistinguishable aggregated UH25 metrics between 1KM_DOWN and 1KM_DA, akin to Fig. 5 for DZ (not shown). The decrease in forecast accuracy from 1KM_DOWN to 1KM_DA in many cases is consistent with Wang et al. (2022), who found degraded forecast mesocyclone accuracy in a springtime convection case.

Fig. 10. As in Fig. 6, but for UH25 at 90 min using the 99.9th percentile (Table 6) where green triangles and red circles represent 1KM_DA and 1KM_DOWN, respectively. The supercell cases correspond to numbers 1, 2, 4, 5, 7, 8, and 9.

5. Conclusions and summary

Building upon the findings of Wang et al. (2022), 10 case studies were successfully tested during the 2021 SFE using an experimental 1-km WoFS. A neighborhood-based verification approach suggests high-reflectivity features are better depicted by 1KM_DA when using 1KM_DOWN and 3KM as baselines. 1KM_DA also outperformed both 1KM_DOWN and 3KM in portraying composite reflectivity objects, predominantly through false alarm reduction. This is a substantial improvement to WoFS relative to previous studies focused on altering DA techniques on the 3-km grid (Jones et al. 2018; Kerr et al. 2021) or simply downscaling 3-km analyses (Lawson et al. 2021; Miller et al. 2022). For reflectivity, neighborhood-based verification suggests that 1KM_DA slightly outperforms 1KM_DOWN because of improved bias rather than altered spatial placement. Negative object-based impacts of performing DA directly on the 1-km grid appear to be an exception, as this consistently occurred in only one case (17 May). The relative reduction in object false alarms in 1KM_DA may be an effect of assimilating finer-scale radar and satellite observations onto a finer model grid; assimilating finer-scale observations on a coarser model grid (i.e., the 3-km grid) does not have the same extensive impact on forecasts (Kerr et al. 2021). Given that 1KM_DA and 1KM_DOWN have similar object frequency biases, 1KM_DA has smaller forecast object displacement errors, a conclusion that differs from the neighborhood-based results.

Rotation verification reveals negligible differences among the three experiment sets using both neighborhood- and object-based verification techniques. This is inconsistent with Miller et al. (2022), in which the 3-km and downscaled 1.5-km forecast skill scores had statistically significant object-based differences.

Research will continue in future years to develop WoFS-1km, in which the model configuration and DA will be modified to improve forecast accuracy and skill while optimizing computational efficiency. The 1-km domain in this study must also be enlarged in the future to encompass a larger geographical area, given the uncertainty in convection initiation location. More sophisticated planetary boundary layer, microphysics, and stochastic parameterization schemes (Berner et al. 2017), along with the assimilation of new and emerging boundary layer observations such as uncrewed aerial system (Chilson et al. 2019), Atmospheric Emitted Radiance Interferometer, and Doppler wind lidar retrievals (Hu et al. 2019), will likely improve the forecasts. For example, recent preliminary work has shown that using multiple stochastic physics perturbation schemes, such as the stochastic kinetic energy backscatter scheme (Berner et al. 2011) and the physically based stochastic perturbation scheme (Kober and Craig 2016), along with stochastically perturbing physics scheme parameters, can lead to improved ensemble forecasts and can potentially allow for a single-physics ensemble framework.

Acknowledgments.

The authors are supported by the NOAA–University of Oklahoma Cooperative Agreements NA16OAR4320115 and NA21OAR4320204. The authors thank Craig Schwartz and two anonymous reviewers for feedback that greatly improved the manuscript. The authors also thank Joshua Martin (CIWRO/NSSL) for radar observation processing assistance, Kent Knopfmeier (CIWRO/NSSL) for WoFS software assistance, David Dowell (NOAA GSL) for Jet supercomputer guidance, Tony Reinhart (NOAA NSSL) for MRMS data assistance, Ryan Sobash (NCAR) for object verification discussions, and Pat Skinner for object verification insight and manuscript review.

Data availability statement.

The model data are not available in a public repository. The data are available upon request.

REFERENCES

• Adlerman, E. J., and K. K. Droegemeier, 2002: The sensitivity of numerically simulated cyclic mesocyclogenesis to variations in model physical and computational parameters. Mon. Wea. Rev., 130, 2671–2691, https://doi.org/10.1175/1520-0493(2002)130<2671:TSONSC>2.0.CO;2.
• Anderson, J. L., and N. Collins, 2007: Scalable implementations of ensemble filter algorithms for data assimilation. J. Atmos. Oceanic Technol., 24, 1452–1463, https://doi.org/10.1175/JTECH2049.1.
• Berner, J., S.-Y. Ha, J. P. Hacker, A. Fournier, and C. Snyder, 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 1972–1995, https://doi.org/10.1175/2010MWR3595.1.
• Berner, J., and Coauthors, 2017: Stochastic parameterization: Toward a new view of weather and climate models. Bull. Amer. Meteor. Soc., 98, 565–588, https://doi.org/10.1175/BAMS-D-15-00268.1.
• Britt, K. C., P. S. Skinner, P. L. Heinselman, and K. H. Knopfmeier, 2020: Effects of horizontal grid spacing and inflow environment on forecasts of cyclic mesocyclogenesis in NSSL’s Warn-on-Forecast System (WoFS). Wea. Forecasting, 35, 2423–2444, https://doi.org/10.1175/WAF-D-20-0094.1.
• Bryan, G. H., J. C. Wyngaard, and J. M. Fritsch, 2003: Resolution requirements for the simulation of deep moist convection. Mon. Wea. Rev., 131, 2394–2416, https://doi.org/10.1175/1520-0493(2003)131<2394:RRFTSO>2.0.CO;2.
• Chilson, P. B., and Coauthors, 2019: Moving towards a network of autonomous UAS atmospheric profiling stations for observations in the Earth’s lower atmosphere: The 3D mesonet concept. Sensors, 19, 2720, https://doi.org/10.3390/s19122720.
• Clark, A. J., and Coauthors, 2020: A real-time simulated forecasting experiment for advancing the prediction of hazardous convective weather. Bull. Amer. Meteor. Soc., 101, E2022–E2024, https://doi.org/10.1175/BAMS-D-19-0298.1.
• Clark, A. J., and Coauthors, 2021: A real-time, virtual spring forecasting experiment to advance severe weather prediction. Bull. Amer. Meteor. Soc., 102, E814–E816, https://doi.org/10.1175/BAMS-D-20-0268.1.
• Cressman, G. P., 1959: An operational objective analysis system. Mon. Wea. Rev., 87, 367–374, https://doi.org/10.1175/1520-0493(1959)087<0367:AOOAS>2.0.CO;2.
• Dawson, D. T., II, M. Xue, J. A. Milbrandt, and M. K. Yau, 2010: Comparison of evaporation and cold pool development between single-moment and multimoment bulk microphysics schemes in idealized simulations of tornadic thunderstorms. Mon. Wea. Rev., 138, 1152–1171, https://doi.org/10.1175/2009MWR2956.1.
• Dawson, D. T., II, E. R. Mansell, Y. Jung, L. J. Wicker, M. R. Kumjian, and M. Xue, 2014: Low-level ZDR signatures in supercell forward flanks: The role of size sorting and melting of hail. J. Atmos. Sci., 71, 276–299, https://doi.org/10.1175/JAS-D-13-0118.1.
• Dowell, D. C., and L. J. Wicker, 2009: Additive noise for storm-scale ensemble forecasting and data assimilation. J. Atmos. Oceanic Technol., 26, 911–927, https://doi.org/10.1175/2008JTECHA1156.1.
• Dowell, D. C., and Coauthors, 2016: Development of a High-Resolution Rapid Refresh Ensemble (HRRRE) for severe weather forecasting. 28th Conf. on Severe Local Storms, Portland, OR, Amer. Meteor. Soc., 8B.2, https://ams.confex.com/ams/28SLS/webprogram/Paper301555.html.
• Duc, L., K. Saito, and H. Seko, 2013: Spatial–temporal fractions verification for high-resolution ensemble forecasts. Tellus, 65A, 18171, https://doi.org/10.3402/tellusa.v65i0.18171.
• Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107, https://doi.org/10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.
• Flora, M. L., P. S. Skinner, C. K. Potvin, A. E. Reinhart, T. A. Jones, N. Yussouf, and K. H. Knopfmeier, 2019: Object-based verification of short-term, storm-scale probabilistic mesocyclone guidance from an experimental Warn-on-Forecast System. Wea. Forecasting, 34, 1721–1739, https://doi.org/10.1175/WAF-D-19-0094.1.
• Gallo, B. T., and Coauthors, 2022: Exploring the watch-to-warning space: Experimental outlook performance during the 2019 Spring Forecasting Experiment in NOAA’s Hazardous Weather Testbed. Wea. Forecasting, 37, 617–637, https://doi.org/10.1175/WAF-D-21-0171.1.
• Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, https://doi.org/10.1002/qj.49712555417.
• Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, https://doi.org/10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.
• Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341, https://doi.org/10.1175/MWR3199.1.
• Hu, J., N. Yussouf, D. D. Turner, T. A. Jones, and X. Wang, 2019: Impact of ground-based remote sensing boundary layer observations on short-term probabilistic forecasts of a tornadic supercell event. Wea. Forecasting, 34, 1453–1476, https://doi.org/10.1175/WAF-D-18-0200.1.
• Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944.
• Janjić, Z. I., 2002: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, 61 pp., http://www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf.
• Jones, T. A., and D. J. Stensrud, 2015: Assimilating cloud water path as a function of model cloud microphysics in an idealized simulation. Mon. Wea. Rev., 143, 2052–2081, https://doi.org/10.1175/MWR-D-14-00266.1.
• Jones, T. A., K. Knopfmeier, D. Wheatley, G. Creager, P. Minnis, and R. Palikondo, 2016: Storm-scale data assimilation and ensemble forecasting with the NSSL Experimental Warn-on-Forecast System. Part II: Combined radar and satellite data experiments. Wea. Forecasting, 31, 297–327, https://doi.org/10.1175/WAF-D-15-0107.1.
• Jones, T. A., X. Wang, P. S. Skinner, A. Johnson, and Y. Wang, 2018: Assimilation of GOES-13 imager clear-sky water vapor (6.5 μm) radiances into a Warn-on-Forecast System. Mon. Wea. Rev., 146, 1077–1107, https://doi.org/10.1175/MWR-D-17-0280.1.
• Kerr, C. A., L. J. Wicker, and P. S. Skinner, 2021: Updraft-based adaptive assimilation of radial velocity observations in a Warn-on-Forecast System. Wea. Forecasting, 36, 21–37, https://doi.org/10.1175/WAF-D-19-0251.1.
• Kleist, D. T., D. F. Parrish, J. C. Derber, R. Treadon, W.-S. Wu, and S. Lord, 2009: Introduction of the GSI into the NCEP global data assimilation system. Wea. Forecasting, 24, 1691–1705, https://doi.org/10.1175/2009WAF2222201.1.
• Kober, K., and G. C. Craig, 2016: Physically-based stochastic perturbations (PSP) in the boundary layer to represent uncertainty in convective initiation. J. Atmos. Sci., 73, 2893–2911, https://doi.org/10.1175/JAS-D-15-0144.1.
• Lawson, J. R., J. S. Kain, N. Yussouf, D. C. Dowell, D. M. Wheatley, K. H. Knopfmeier, and T. A. Jones, 2018: Advancing from convection-allowing NWP to Warn-on-Forecast: Evidence of progress. Wea. Forecasting, 33, 599–607, https://doi.org/10.1175/WAF-D-17-0145.1.
• Lawson, J. R., C. K. Potvin, P. S. Skinner, and A. E. Reinhart, 2021: The vice and virtue of increased horizontal resolution in ensemble forecasts of tornadic thunderstorms in low-CAPE, high-shear environments. Mon. Wea. Rev., 149, 921–944, https://doi.org/10.1175/MWR-D-20-0281.1.
• Majcen, M., P. Markowski, Y. Richardson, D. Dowell, and J. Wurman, 2008: Multipass objective analyses of Doppler radar data. J. Atmos. Oceanic Technol., 25, 1845–1858, https://doi.org/10.1175/2008JTECHA1089.1.
• Mansell, E. R., C. L. Ziegler, and E. C. Bruning, 2010: Simulated electrification of a small thunderstorm with two-moment bulk microphysics. J. Atmos. Sci., 67, 171–194, https://doi.org/10.1175/2009JAS2965.1.
• Miller, W. J. S., and Coauthors, 2022: Exploring the usefulness of downscaling free forecasts from the Warn-on-Forecast System. Wea. Forecasting, 37, 181–203, https://doi.org/10.1175/WAF-D-21-0079.1.
• Minnis, P., and Coauthors, 2011: CERES edition-2 cloud property retrievals using TRMM VIRS and Terra and Aqua MODIS data—Part I: Algorithms. IEEE Trans. Geosci. Remote Sens., 49, 4374–4400, https://doi.org/10.1109/TGRS.2011.2144601.
• Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, https://doi.org/10.1029/97JD00237.
• Nakanishi, M., and H. Niino, 2009: Development of an improved turbulence closure model for the atmospheric boundary layer. J. Meteor. Soc. Japan, 87, 895–912, https://doi.org/10.2151/jmsj.87.895.
• Niu, G.-Y., and Coauthors, 2011: The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. J. Geophys. Res., 116, D12109, https://doi.org/10.1029/2010JD015139.
• Potvin, C. K., and M. L. Flora, 2015: Sensitivity of idealized supercell simulations to horizontal grid spacing: Implications for Warn-on-Forecast. Mon. Wea. Rev., 143, 2998–3024, https://doi.org/10.1175/MWR-D-14-00416.1.
• Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1.
• Roebber, P. J., 2009: Visualizing multiple measures of forecast quality. Wea. Forecasting, 24, 601–608, https://doi.org/10.1175/2008WAF2222159.1.
• Rothfusz, L. P., R. Schneider, D. Novak, K. Klockow-McClain, A. E. Gerard, C. Karstens, G. J. Stumpf, and T. M. Smith, 2018: FACETs: A proposed next-generation paradigm for high-impact weather forecasting. Bull. Amer. Meteor. Soc., 99, 2025–2043, https://doi.org/10.1175/BAMS-D-16-0100.1.
• Schwartz, C. S., and R. A. Sobash, 2019: Revisiting sensitivity to horizontal grid spacing in convection-allowing models over the central and eastern United States. Mon. Wea. Rev., 147, 4411–4435, https://doi.org/10.1175/MWR-D-19-0115.1.
• Schwartz, C. S., G. S. Romine, K. R. Fossell, R. A. Sobash, and M. L. Weisman, 2017: Toward 1-km ensemble forecasts over large domains. Mon. Wea. Rev., 145, 2943–2969, https://doi.org/10.1175/MWR-D-16-0410.1.
• Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., https://doi.org/10.5065/D68S4MVH.
• Skinner, P. S., and Coauthors, 2018: Object-based verification of a prototype Warn-on-Forecast System. Wea. Forecasting, 33, 1225–1250, https://doi.org/10.1175/WAF-D-18-0020.1.
• Smirnova, T. G., J. M. Brown, S. G. Benjamin, and J. S. Kenyon, 2016: Modifications to the Rapid Update Cycle Land Surface Model (RUC LSM) available in the Weather Research and Forecasting (WRF) Model. Mon. Wea. Rev., 144, 1851–1865, https://doi.org/10.1175/MWR-D-15-0198.1.
• Smith, T. M., and Coauthors, 2016: Multi-Radar Multi-Sensor (MRMS) severe weather and aviation products: Initial operating capabilities. Bull. Amer. Meteor. Soc., 97, 1617–1630, https://doi.org/10.1175/BAMS-D-14-00173.1.
• Sobash, R. A., and L. J. Wicker, 2015: On the impact of additive noise in storm-scale EnKF experiments. Mon. Wea. Rev., 143, 3067–3086, https://doi.org/10.1175/MWR-D-14-00323.1.
• Sobash, R. A., C. S. Schwartz, G. S. Romine, and M. L. Weisman, 2019: Next-day prediction of tornadoes using convection-allowing models with 1-km horizontal grid spacing. Wea. Forecasting, 34, 1117–1135, https://doi.org/10.1175/WAF-D-19-0044.1.
• Stensrud, D. J., and Coauthors, 2009: Convective-scale Warn-on-Forecast System: A vision for 2020. Bull. Amer. Meteor. Soc., 90, 1487–1500, https://doi.org/10.1175/2009BAMS2795.1.
• Stensrud, D. J., and Coauthors, 2013: Progress and challenges with Warn-on-Forecast. Atmos. Res., 123, 2–16, https://doi.org/10.1016/j.atmosres.2012.04.004.
• VandenBerg, M. A., M. C. Coniglio, and A. J. Clark, 2014: Comparison of next-day convection-allowing forecasts of storm motion on 1- and 4-km grids. Wea. Forecasting, 29, 878–893, https://doi.org/10.1175/WAF-D-14-00011.1.
• Wang, Y., N. Yussouf, C. A. Kerr, D. R. Stratman, and B. C. Matilla, 2022: An experimental 1-km Warn-on-Forecast System for hazardous weather events. Mon. Wea. Rev., 150, 3081–3102, https://doi.org/10.1175/MWR-D-22-0094.1.
• Wheatley, D. M., K. H. Knopfmeier, T. A. Jones, and G. J. Creager, 2015: Storm-scale data assimilation and ensemble forecasting with the NSSL experimental Warn-on-Forecast System. Part I: Radar data experiments. Wea. Forecasting, 30, 1795–1817, https://doi.org/10.1175/WAF-D-15-0043.1.
• Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924, https://doi.org/10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.
• Wilson, K. A., and Coauthors, 2019: Exploring applications of storm-scale probabilistic warn-on-forecast guidance in weather forecasting. HCII 2019: Virtual, Augmented and Mixed Reality. Applications and Case Studies, J. Chen and G. Fragomeni, Eds., Lecture Notes in Computer Science, Vol. 11575, Springer, 557–572, https://doi.org/10.1007/978-3-030-21565-1_39.
• Yang, Z.-L., and Coauthors, 2011: The Community Noah land surface model with multi-parameterization options (Noah-MP): 2. Evaluation over global river basins. J. Geophys. Res., 116, D12110, https://doi.org/10.1029/2010JD015140.
• Yussouf, N., K. A. Wilson, S. M. Martinaitis, H. Vergara, P. L. Heinselman, and J. J. Gourley, 2020: The coupling of NSSL Warn-on-Forecast and FLASH systems for probabilistic flash flood prediction. J. Hydrometeor., 21, 123–141, https://doi.org/10.1175/JHM-D-19-0131.1.
Save
  • Adlerman, E. J., and K. K. Droegemeier, 2002: The sensitivity of numerically simulated cyclic mesocyclogenesis to variations in model physical and computational parameters. Mon. Wea. Rev., 130, 26712691, https://doi.org/10.1175/1520-0493(2002)130<2671:TSONSC>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Anderson, J. L., and N. Collins, 2007: Scalable implementations of ensemble filter algorithms for data assimilation. J. Atmos. Oceanic Technol., 24, 14521463, https://doi.org/10.1175/JTECH2049.1.

    • Search Google Scholar
    • Export Citation
  • Berner, J., S.-Y. Ha, J. P. Hacker, A. Fournier, and C. Snyder, 2011: Model uncertainty in a mesoscale ensemble prediction system: Stochastic versus multiphysics representations. Mon. Wea. Rev., 139, 19721995, https://doi.org/10.1175/2010MWR3595.1.

    • Search Google Scholar
    • Export Citation
  • Berner, J., and Coauthors, 2017: Stochastic parameterization: Toward a new view of weather and climate models. Bull. Amer. Meteor. Soc., 98, 565588, https://doi.org/10.1175/BAMS-D-15-00268.1.

    • Search Google Scholar
    • Export Citation
  • Britt, K. C., P. S. Skinner, P. L. Heinselman, and K. H. Knopfmeier, 2020: Effects of horizontal grid spacing and inflow environment on forecasts of cyclic mesocyclogenesis in NSSL’s Warn-on-Forecast System (WoFS). Wea. Forecasting, 35, 24232444, https://doi.org/10.1175/WAF-D-20-0094.1.

    • Search Google Scholar
    • Export Citation
  • Bryan, G. H., J. C. Wyngaard, and J. M. Fritsch, 2003: Resolution requirements for the simulation of deep moist convection. Mon. Wea. Rev., 131, 23942416, https://doi.org/10.1175/1520-0493(2003)131<2394:RRFTSO>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Chilson, P. B., and Coauthors, 2019: Moving towards a network of autonomous UAS atmospheric profiling stations for observations in the Earth’s lower atmosphere: The 3D mesonet concept. Sensors, 19, 2720, https://doi.org/10.3390/s19122720.

    • Search Google Scholar
    • Export Citation
  • Clark, A. J., and Coauthors, 2020: A real-time simulated forecasting experiment for advancing the prediction of hazardous convective weather. Bull. Amer. Meteor. Soc., 101, E2022E2024, https://doi.org/10.1175/BAMS-D-19-0298.1.

    • Search Google Scholar
    • Export Citation
  • Clark, A. J., and Coauthors, 2021: A real-time, virtual spring forecasting experiment to advance severe weather prediction. Bull. Amer. Meteor. Soc., 102, E814E816, https://doi.org/10.1175/BAMS-D-20-0268.1.

    • Search Google Scholar
    • Export Citation
  • Cressman, G. P., 1959: An operational objective analysis system. Mon. Wea. Rev., 87, 367374, https://doi.org/10.1175/1520-0493(1959)087<0367:AOOAS>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Dawson, D. T., II, M. Xue, J. A. Milbrandt, and M. K. Yau, 2010: Comparison of evaporation and cold pool development between single-moment and multimoment bulk microphysics schemes in idealized simulations of tornadic thunderstorms. Mon. Wea. Rev., 138, 11521171, https://doi.org/10.1175/2009MWR2956.1.

    • Search Google Scholar
    • Export Citation
  • Dawson, D. T., II, E. R. Mansell, Y. Jung, L. J. Wicker, M. R. Kumjian, and M. Xue, 2014: Low-level ZDR signatures in supercell forward flanks: The role of size sorting and melting of hail. J. Atmos. Sci., 71, 276299, https://doi.org/10.1175/JAS-D-13-0118.1.

    • Search Google Scholar
    • Export Citation
  • Dowell, D. C., and L. J. Wicker, 2009: Additive noise for storm-scale ensemble forecasting and data assimilation. J. Atmos. Oceanic Technol., 26, 911927, https://doi.org/10.1175/2008JTECHA1156.1.

    • Search Google Scholar
    • Export Citation
  • Dowell, D. C., and Coauthors, 2016: Development of a High-Resolution Rapid Refresh Ensemble (HRRRE) for severe weather forecasting. 28th Conf. on Severe Local Storms, Portland, OR, Amer. Meteor. Soc., 8B.2, https://ams.confex.com/ams/28SLS/webprogram/Paper301555.html.

  • Duc, L., K. Saito, and H. Seko, 2013: Spatial–temporal fractions verification for high-resolution ensemble forecasts. Tellus, 65A, 18171, https://doi.org/10.3402/tellusa.v65i0.18171.

    • Search Google Scholar
    • Export Citation
  • Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 30773107, https://doi.org/10.1175/1520-0469(1989)046<3077:NSOCOD>2.0.CO;2.

  • Flora, M. L., P. S. Skinner, C. K. Potvin, A. E. Reinhart, T. A. Jones, N. Yussouf, and K. H. Knopfmeier, 2019: Object-based verification of short-term, storm-scale probabilistic mesocyclone guidance from an experimental Warn-on-Forecast System. Wea. Forecasting, 34, 1721–1739, https://doi.org/10.1175/WAF-D-19-0094.1.

  • Gallo, B. T., and Coauthors, 2022: Exploring the watch-to-warning space: Experimental outlook performance during the 2019 Spring Forecasting Experiment in NOAA’s Hazardous Weather Testbed. Wea. Forecasting, 37, 617–637, https://doi.org/10.1175/WAF-D-21-0171.1.

  • Gaspari, G., and S. E. Cohn, 1999: Construction of correlation functions in two and three dimensions. Quart. J. Roy. Meteor. Soc., 125, 723–757, https://doi.org/10.1002/qj.49712555417.

  • Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, https://doi.org/10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.

  • Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341, https://doi.org/10.1175/MWR3199.1.

  • Hu, J., N. Yussouf, D. D. Turner, T. A. Jones, and X. Wang, 2019: Impact of ground-based remote sensing boundary layer observations on short-term probabilistic forecasts of a tornadic supercell event. Wea. Forecasting, 34, 1453–1476, https://doi.org/10.1175/WAF-D-18-0200.1.

  • Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944.

  • Janjić, Z. I., 2002: Nonsingular implementation of the Mellor–Yamada level 2.5 scheme in the NCEP Meso model. NCEP Office Note 437, 61 pp., http://www.emc.ncep.noaa.gov/officenotes/newernotes/on437.pdf.

  • Jones, T. A., and D. J. Stensrud, 2015: Assimilating cloud water path as a function of model cloud microphysics in an idealized simulation. Mon. Wea. Rev., 143, 2052–2081, https://doi.org/10.1175/MWR-D-14-00266.1.

  • Jones, T. A., K. Knopfmeier, D. Wheatley, G. Creager, P. Minnis, and R. Palikonda, 2016: Storm-scale data assimilation and ensemble forecasting with the NSSL Experimental Warn-on-Forecast System. Part II: Combined radar and satellite data experiments. Wea. Forecasting, 31, 297–327, https://doi.org/10.1175/WAF-D-15-0107.1.

  • Jones, T. A., X. Wang, P. S. Skinner, A. Johnson, and Y. Wang, 2018: Assimilation of GOES-13 imager clear-sky water vapor (6.5 μm) radiances into a Warn-on-Forecast System. Mon. Wea. Rev., 146, 1077–1107, https://doi.org/10.1175/MWR-D-17-0280.1.

  • Kerr, C. A., L. J. Wicker, and P. S. Skinner, 2021: Updraft-based adaptive assimilation of radial velocity observations in a Warn-on-Forecast System. Wea. Forecasting, 36, 21–37, https://doi.org/10.1175/WAF-D-19-0251.1.

  • Kleist, D. T., D. F. Parrish, J. C. Derber, R. Treadon, W.-S. Wu, and S. Lord, 2009: Introduction of the GSI into the NCEP global data assimilation system. Wea. Forecasting, 24, 1691–1705, https://doi.org/10.1175/2009WAF2222201.1.

  • Kober, K., and G. C. Craig, 2016: Physically-based stochastic perturbations (PSP) in the boundary layer to represent uncertainty in convective initiation. J. Atmos. Sci., 73, 2893–2911, https://doi.org/10.1175/JAS-D-15-0144.1.

  • Lawson, J. R., J. S. Kain, N. Yussouf, D. C. Dowell, D. M. Wheatley, K. H. Knopfmeier, and T. A. Jones, 2018: Advancing from convection-allowing NWP to Warn-on-Forecast: Evidence of progress. Wea. Forecasting, 33, 599–607, https://doi.org/10.1175/WAF-D-17-0145.1.

  • Lawson, J. R., C. K. Potvin, P. S. Skinner, and A. E. Reinhart, 2021: The vice and virtue of increased horizontal resolution in ensemble forecasts of tornadic thunderstorms in low-CAPE, high-shear environments. Mon. Wea. Rev., 149, 921–944, https://doi.org/10.1175/MWR-D-20-0281.1.

  • Majcen, M., P. Markowski, Y. Richardson, D. Dowell, and J. Wurman, 2008: Multipass objective analyses of Doppler radar data. J. Atmos. Oceanic Technol., 25, 1845–1858, https://doi.org/10.1175/2008JTECHA1089.1.

  • Mansell, E. R., C. L. Ziegler, and E. C. Bruning, 2010: Simulated electrification of a small thunderstorm with two-moment bulk microphysics. J. Atmos. Sci., 67, 171–194, https://doi.org/10.1175/2009JAS2965.1.

  • Miller, W. J. S., and Coauthors, 2022: Exploring the usefulness of downscaling free forecasts from the Warn-on-Forecast System. Wea. Forecasting, 37, 181–203, https://doi.org/10.1175/WAF-D-21-0079.1.

  • Minnis, P., and Coauthors, 2011: CERES edition-2 cloud property retrievals using TRMM VIRS and Terra and Aqua MODIS data—Part I: Algorithms. IEEE Trans. Geosci. Remote Sens., 49, 4374–4400, https://doi.org/10.1109/TGRS.2011.2144601.

  • Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, https://doi.org/10.1029/97JD00237.

  • Nakanishi, M., and H. Niino, 2009: Development of an improved turbulence closure model for the atmospheric boundary layer. J. Meteor. Soc. Japan, 87, 895–912, https://doi.org/10.2151/jmsj.87.895.

  • Niu, G.-Y., and Coauthors, 2011: The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. J. Geophys. Res., 116, D12109, https://doi.org/10.1029/2010JD015139.

  • Potvin, C. K., and M. L. Flora, 2015: Sensitivity of idealized supercell simulations to horizontal grid spacing: Implications for Warn-on-Forecast. Mon. Wea. Rev., 143, 2998–3024, https://doi.org/10.1175/MWR-D-14-00416.1.

  • Roberts, N. M., and H. W. Lean, 2008: Scale-selective verification of rainfall accumulations from high-resolution forecasts of convective events. Mon. Wea. Rev., 136, 78–97, https://doi.org/10.1175/2007MWR2123.1.

  • Roebber, P. J., 2009: Visualizing multiple measures of forecast quality. Wea. Forecasting, 24, 601–608, https://doi.org/10.1175/2008WAF2222159.1.

  • Rothfusz, L. P., R. Schneider, D. Novak, K. Klockow-McClain, A. E. Gerard, C. Karstens, G. J. Stumpf, and T. M. Smith, 2018: FACETs: A proposed next-generation paradigm for high-impact weather forecasting. Bull. Amer. Meteor. Soc., 99, 2025–2043, https://doi.org/10.1175/BAMS-D-16-0100.1.

  • Schwartz, C. S., and R. A. Sobash, 2019: Revisiting sensitivity to horizontal grid spacing in convection-allowing models over the central and eastern United States. Mon. Wea. Rev., 147, 4411–4435, https://doi.org/10.1175/MWR-D-19-0115.1.

  • Schwartz, C. S., G. S. Romine, K. R. Fossell, R. A. Sobash, and M. L. Weisman, 2017: Toward 1-km ensemble forecasts over large domains. Mon. Wea. Rev., 145, 2943–2969, https://doi.org/10.1175/MWR-D-16-0410.1.

  • Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., https://doi.org/10.5065/D68S4MVH.

  • Skinner, P. S., and Coauthors, 2018: Object-based verification of a prototype Warn-on-Forecast System. Wea. Forecasting, 33, 1225–1250, https://doi.org/10.1175/WAF-D-18-0020.1.

  • Smirnova, T. G., J. M. Brown, S. G. Benjamin, and J. S. Kenyon, 2016: Modifications to the Rapid Update Cycle Land Surface Model (RUC LSM) available in the Weather Research and Forecasting (WRF) Model. Mon. Wea. Rev., 144, 1851–1865, https://doi.org/10.1175/MWR-D-15-0198.1.

  • Smith, T. M., and Coauthors, 2016: Multi-Radar Multi-Sensor (MRMS) severe weather and aviation products: Initial operating capabilities. Bull. Amer. Meteor. Soc., 97, 1617–1630, https://doi.org/10.1175/BAMS-D-14-00173.1.

  • Sobash, R. A., and L. J. Wicker, 2015: On the impact of additive noise in storm-scale EnKF experiments. Mon. Wea. Rev., 143, 3067–3086, https://doi.org/10.1175/MWR-D-14-00323.1.

  • Sobash, R. A., C. S. Schwartz, G. S. Romine, and M. L. Weisman, 2019: Next-day prediction of tornadoes using convection-allowing models with 1-km horizontal grid spacing. Wea. Forecasting, 34, 1117–1135, https://doi.org/10.1175/WAF-D-19-0044.1.

  • Stensrud, D. J., and Coauthors, 2009: Convective-scale Warn-on-Forecast System: A vision for 2020. Bull. Amer. Meteor. Soc., 90, 1487–1500, https://doi.org/10.1175/2009BAMS2795.1.

  • Stensrud, D. J., and Coauthors, 2013: Progress and challenges with Warn-on-Forecast. Atmos. Res., 123, 2–16, https://doi.org/10.1016/j.atmosres.2012.04.004.

  • VandenBerg, M. A., M. C. Coniglio, and A. J. Clark, 2014: Comparison of next-day convection-allowing forecasts of storm motion on 1- and 4-km grids. Wea. Forecasting, 29, 878–893, https://doi.org/10.1175/WAF-D-14-00011.1.

  • Wang, Y., N. Yussouf, C. A. Kerr, D. R. Stratman, and B. C. Matilla, 2022: An experimental 1-km Warn-on-Forecast System for hazardous weather events. Mon. Wea. Rev., 150, 3081–3102, https://doi.org/10.1175/MWR-D-22-0094.1.

  • Wheatley, D. M., K. H. Knopfmeier, T. A. Jones, and G. J. Creager, 2015: Storm-scale data assimilation and ensemble forecasting with the NSSL experimental Warn-on-Forecast System. Part I: Radar data experiments. Wea. Forecasting, 30, 1795–1817, https://doi.org/10.1175/WAF-D-15-0043.1.

  • Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924, https://doi.org/10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.

  • Wilson, K. A., and Coauthors, 2019: Exploring applications of storm-scale probabilistic warn-on-forecast guidance in weather forecasting. HCII 2019: Virtual, Augmented and Mixed Reality. Applications and Case Studies, J. Chen and G. Fragomeni, Eds., Lecture Notes in Computer Science, Vol. 11575, Springer, 557–572, https://doi.org/10.1007/978-3-030-21565-1_39.

  • Yang, Z.-L., and Coauthors, 2011: The Community Noah land surface model with multi-parameterization options (Noah-MP): 2. Evaluation over global river basins. J. Geophys. Res., 116, D12110, https://doi.org/10.1029/2010JD015140.

  • Yussouf, N., K. A. Wilson, S. M. Martinaitis, H. Vergara, P. L. Heinselman, and J. J. Gourley, 2020: The coupling of NSSL Warn-on-Forecast and FLASH systems for probabilistic flash flood prediction. J. Hydrometeor., 21, 123–141, https://doi.org/10.1175/JHM-D-19-0131.1.

  • Fig. 1.

    Workflow for parallel DA in WoFS-3km and WoFS-1km; DA cycling begins at 1500 UTC for each case (adapted from Wang et al. 2022).

  • Fig. 2.

    Illustration of forecast object matching and verification: (a) forecast objects and (d) observed objects are identified, (b),(e) quality controlled, and (c) combined. Matched forecast and observed objects are hits, unmatched forecast objects are false alarms, and unmatched observed objects are misses. (f) The contingency table metrics are then calculated (the metric definitions are sketched below). Adapted from Skinner et al. (2018).
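    For reference, the object-based scores shown in Figs. 5 and 6 follow the standard contingency-table definitions. A minimal Python sketch is given below; the function and variable names are illustrative only, and it assumes the hit, miss, and false-alarm counts have already been obtained from the object matching above, so this is not the WoFS verification code itself.

    # Standard contingency-table metrics from object counts (illustrative sketch).
    def contingency_metrics(hits, misses, false_alarms):
        pod = hits / (hits + misses)                          # probability of detection
        far = false_alarms / (hits + false_alarms)            # false alarm ratio
        csi = hits / (hits + misses + false_alarms)           # critical success index
        freq_bias = (hits + false_alarms) / (hits + misses)   # frequency bias
        return pod, far, csi, freq_bias

    # Example: 40 matched objects, 10 misses, and 15 false alarms.
    print(contingency_metrics(40, 10, 15))

    A frequency bias of 1 and a CSI of 1 correspond to the upper-right corner of the performance diagrams in Fig. 6.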

  • Fig. 3.

    DZ (dBZ) forecast paintball (≥40 dBZ) plots (each color represents an ensemble member) for each experiment and MRMS DZ at 1-, 2-, and 3-h lead times initialized at 0100 UTC 19 May 2021. The black line indicates the observed (MRMS) location of the MCS leading edge.

  • Fig. 4.

    DZ eFSS with (solid lines) and without (dashed lines) individual-case bias for various percentiles (in rows) and neighborhood radii (in columns) as a function of forecast lead time, aggregated over all cases and all forecasts at 5-min intervals. Statistical significance of the differences between 1KM_DA and 1KM_DOWN at the 95% confidence level is illustrated by circles (biased) and triangles (unbiased) in 10-min increments (a single-member FSS sketch is given below).
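    The eFSS is the neighborhood-based fractions skill score extended to ensemble forecasts (Duc et al. 2013). As context only, a minimal single-member, single-radius FSS sketch in the sense of Roberts and Lean (2008) is given below; the array names, the percentile-based threshold, and the use of SciPy's uniform filter are illustrative assumptions rather than the verification code used here.

    # Minimal fractions skill score (FSS) sketch for one member, one threshold,
    # and one square neighborhood; illustrative only.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def fss(forecast, observed, threshold, radius_gridpoints):
        # Binary exceedance fields (e.g., DZ at or above a chosen percentile value).
        fcst_binary = (forecast >= threshold).astype(float)
        obs_binary = (observed >= threshold).astype(float)

        # Neighborhood fractions: mean exceedance within a (2r + 1)-point window.
        size = 2 * radius_gridpoints + 1
        fcst_frac = uniform_filter(fcst_binary, size=size, mode="constant")
        obs_frac = uniform_filter(obs_binary, size=size, mode="constant")

        mse = np.mean((fcst_frac - obs_frac) ** 2)
        mse_ref = np.mean(fcst_frac ** 2) + np.mean(obs_frac ** 2)
        return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

    An FSS of 1 indicates a perfect match of the neighborhood fractions, and scores generally increase toward 1 as the neighborhood radius grows.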

  • Fig. 5.

    Object-based DZ (a) POD, (b) FAR, (c) CSI, and (d) frequency bias as a function of forecast lead time, aggregated over all cases and all forecasts, for 3KM (blue), 1KM_DOWN (red), and 1KM_DA (green). Object identification is based on the 99th percentile DZ value. Thin lines indicate individual ensemble members while the corresponding bold line is the mean of the member statistics. The shading around the 1KM_DA bold lines represents the 95% confidence interval for the difference between 1KM_DA and 1KM_DOWN (Hamill 1999).

  • Fig. 6.

    Object-based DZ performance diagrams displaying 1KM_DA (green triangles) and 1KM_DOWN (red circles) for each case at 60- and 120-min forecast lead times. Object identification is based on the 99th percentile DZ value. Arrows indicate the change from 1KM_DOWN to 1KM_DA for each case. The background diagonal lines represent frequency bias, while the curved lines denote CSI.