Search Results

You are looking at items 11–20 of 68 for:

  • Author or Editor: Adam J. Clark
Russ S. Schumacher, Adam J. Clark, Ming Xue, and Fanyou Kong

Abstract

From 9 to 11 June 2010, a mesoscale convective vortex (MCV) was associated with several periods of heavy rainfall that led to flash flooding. During the overnight hours, mesoscale convective systems (MCSs) developed that moved slowly and produced heavy rainfall over small areas in south-central Texas on 9 June, north Texas on 10 June, and western Arkansas on 11 June. In this study, forecasts of this event from the Center for Analysis and Prediction of Storms' Storm-Scale Ensemble Forecast system are examined. This ensemble, with 26 members at 4-km horizontal grid spacing, included a few members that very accurately predicted the development, maintenance, and evolution of the heavy-rain-producing MCSs, along with a majority of members that had substantial errors in their precipitation forecasts. The processes favorable for the initiation, organization, and maintenance of these heavy-rain-producing MCSs are diagnosed by comparing ensemble members with accurate and inaccurate forecasts. Even within a synoptic environment known to be conducive to extreme local rainfall, there was considerable spread in the ensemble's rainfall predictions. Because all ensemble members included an anomalously moist environment, the precipitation predictions were insensitive to the atmospheric moisture. However, the development of heavy precipitation overnight was very sensitive to the intensity and evolution of convection the previous day. Convective influences on the strength of the MCV and its associated dome of cold air at low levels determined whether subsequent deep convection was initiated and maintained. In all, this ensemble provides quantitative and qualitative information about the mesoscale processes that are most favorable (or unfavorable) for localized extreme rainfall.
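The member-comparison approach described above lends itself to a simple illustration. Below is a minimal sketch, not the authors' code, of how ensemble members might be ranked into accurate and inaccurate groups by scoring each member's accumulated rainfall against an observed analysis; the array contents and the 25-mm threshold are hypothetical placeholders.

```python
# Hypothetical sketch: score each ensemble member's rainfall forecast with an
# equitable threat score (ETS) and split members into accurate/inaccurate sets.
import numpy as np

def ets(fcst, obs, thresh):
    """Equitable threat score for one member at one rainfall threshold (mm)."""
    f, o = fcst >= thresh, obs >= thresh
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    hits_random = (hits + false_alarms) * (hits + misses) / f.size
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

# members: (n_members, ny, nx) accumulated rainfall; analysis: (ny, nx)
members = np.random.gamma(0.5, 10.0, size=(26, 100, 100))  # stand-in data
analysis = np.random.gamma(0.5, 10.0, size=(100, 100))

scores = np.array([ets(m, analysis, 25.0) for m in members])
ranked = np.argsort(scores)[::-1]                          # best members first
print("most accurate members:", ranked[:5], "least accurate:", ranked[-5:])
```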

Full access
Eric D. Loken, Adam J. Clark, Ming Xue, and Fanyou Kong

Abstract

Spread and skill of mixed- and single-physics convection-allowing ensemble forecasts that share the same set of perturbed initial and lateral boundary conditions are investigated at a variety of spatial scales. Forecast spread is assessed for 2-m temperature, 2-m dewpoint, 500-hPa geopotential height, and hourly accumulated precipitation both before and after a bias-correction procedure is applied. Time series indicate that the mixed-physics ensemble forecasts generally have greater variance than comparable single-physics forecasts. While the differences tend to be small, they are greatest at the smallest spatial scales and when the ensembles are not calibrated for bias. Although differences between the mixed- and single-physics ensemble variances are smaller for the larger spatial scales, variance ratios suggest that the mixed-physics ensemble generates more spread relative to the single-physics ensemble at larger spatial scales. Forecast skill is evaluated for 2-m temperature, dewpoint temperature, and bias-corrected 6-h accumulated precipitation. The mixed-physics ensemble generally has lower 2-m temperature and dewpoint root-mean-square error (RMSE) compared to the single-physics ensemble. However, little difference in skill or reliability is found between the mixed- and single-physics bias-corrected precipitation forecasts. Overall, given that mixed- and single-physics ensembles have similar spread and skill, developers may prefer to implement single- as opposed to mixed-physics convection-allowing ensembles in future operational systems, while accounting for model error using stochastic methods.
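As a rough illustration of the spread and skill measures discussed here, the sketch below computes domain-mean ensemble variance for two hypothetical ensembles, their variance ratio, and the RMSE of each ensemble mean. Shapes, values, and the 2-m temperature framing are assumptions, not the paper's configuration.

```python
# Assumed shapes: (n_members, ny, nx) forecasts and an (ny, nx) analysis.
import numpy as np

mixed = np.random.normal(288.0, 1.2, size=(10, 120, 120))   # stand-in 2-m T (K)
single = np.random.normal(288.0, 0.9, size=(10, 120, 120))
obs = np.random.normal(288.0, 1.0, size=(120, 120))

# Domain-mean ensemble variance and the mixed/single variance ratio.
var_mixed = mixed.var(axis=0, ddof=1).mean()
var_single = single.var(axis=0, ddof=1).mean()
print("variance ratio (mixed/single):", var_mixed / var_single)

# RMSE of each ensemble mean against the analysis.
for name, ens in (("mixed", mixed), ("single", single)):
    rmse = np.sqrt(np.mean((ens.mean(axis=0) - obs) ** 2))
    print(name, "ensemble-mean RMSE:", round(float(rmse), 3))
```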

Full access
Eric D. Loken, Adam J. Clark, Ming Xue, and Fanyou Kong

Abstract

Given increasing computing power, an important question is whether additional computational resources would be better spent reducing the horizontal grid spacing of a convection-allowing model (CAM) or adding members to form CAM ensembles. The present study investigates this question as it applies to CAM-derived next-day probabilistic severe weather forecasts created by using forecast updraft helicity as a severe weather proxy for 63 days of the 2010 and 2011 NOAA Hazardous Weather Testbed Spring Forecasting Experiments. Forecasts derived from three sets of Weather Research and Forecasting Model configurations are tested: a 1-km deterministic model, a 4-km deterministic model, and an 11-member, 4-km ensemble. Forecast quality is evaluated using relative operating characteristic (ROC) curves, attributes diagrams, and performance diagrams, and forecasts from five representative cases are analyzed to investigate their relative quality and value in a variety of situations. While no statistically significant differences exist between the 4- and 1-km deterministic forecasts in terms of area under ROC curves, the 4-km ensemble forecasts offer weakly significant improvements over the 4-km deterministic forecasts over the entire 63-day dataset. Further, the 4-km ensemble forecasts generally provide greater forecast quality relative to either of the deterministic forecasts on an individual day. Collectively, these results suggest that, for purposes of improving next-day CAM-derived probabilistic severe weather forecasts, additional computing resources may be better spent on adding members to form CAM ensembles than on reducing the horizontal grid spacing of a deterministic model below 4 km.
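A schematic of the general forecast-derivation and verification chain named above: the fraction of members exceeding an updraft-helicity threshold is smoothed into a probability field and scored with ROC area. The threshold, smoothing width, and grids are illustrative stand-ins, not the study's settings.

```python
# Illustrative values only: UH threshold, smoothing width, and grids are made up.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.metrics import roc_auc_score

uh = np.random.gamma(2.0, 15.0, size=(11, 80, 80))   # (members, ny, nx) UH proxy
observed = np.random.rand(80, 80) < 0.05             # stand-in severe-report grid

exceed = (uh >= 75.0).mean(axis=0)                   # fraction of members >= 75
probs = gaussian_filter(exceed, sigma=5.0)           # sigma in grid points

print("ROC area:", round(roc_auc_score(observed.ravel(), probs.ravel()), 3))
```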

Full access
Burkely T. Gallo, Adam J. Clark, and Scott R. Dembek
Full access
Nusrat Yussouf, John S. Kain, and Adam J. Clark

Abstract

A continuous-update-cycle storm-scale ensemble data assimilation (DA) and prediction system using the ARW model and DART software is used to generate retrospective 0–6-h ensemble forecasts of the 31 May 2013 tornado and flash flood event over central Oklahoma, with a focus on the prediction of heavy rainfall. Results indicate that the model-predicted probabilities of strong low-level mesocyclones correspond well with the locations of observed mesocyclones and with the observed damage track. The ensemble-mean quantitative precipitation forecast (QPF) from the radar DA experiment matches NCEP’s stage IV analyses reasonably well in terms of location and amount of rainfall, particularly during the 0–3-h forecast period. In contrast, significant displacement errors and lower rainfall totals are evident in a control experiment that withholds radar data during the DA. The ensemble-derived probabilistic QPF (PQPF) from the radar DA experiment is more skillful than the PQPF from the no-radar experiment, based on visual inspection and probabilistic verification metrics. A novel object-based storm-tracking algorithm provides additional insight, suggesting that explicit assimilation and 1–2-h prediction of the dominant supercell is remarkably skillful in the radar experiment. The skill in both experiments is substantially higher during the 0–3-h forecast period than in the 3–6-h period. Furthermore, the difference in skill between the two forecasts decreases sharply during the latter period, indicating that the impact of radar DA is greatest during early forecast hours. Overall, the results demonstrate the potential for a frequently updated, high-resolution ensemble system to extend probabilistic low-level mesocyclone and flash flood forecast lead times and improve the accuracy of convective precipitation nowcasting.
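The PQPF idea referenced here can be sketched in a few lines: the forecast probability at each grid point is the fraction of ensemble members exceeding a rainfall threshold, verified with a Brier score against an analysis grid. Everything below (member count, threshold, data) is a placeholder, not the experiment's actual setup.

```python
# Placeholder setup: 36 members, 3-h accumulations in mm, 1-in. threshold.
import numpy as np

qpf = np.random.gamma(0.6, 8.0, size=(36, 90, 90))   # (members, ny, nx)
analysis = np.random.gamma(0.6, 8.0, size=(90, 90))  # stage IV-like grid

thresh = 25.4                                        # mm (1 in.) in 3 h
pqpf = (qpf >= thresh).mean(axis=0)                  # probability of exceedance
obs_binary = (analysis >= thresh).astype(float)

brier = np.mean((pqpf - obs_binary) ** 2)            # lower is better
print("Brier score at", thresh, "mm:", round(float(brier), 4))
```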

Full access
Burkely T. Gallo, Adam J. Clark, and Scott R. Dembek

Abstract

Hourly maximum fields of simulated storm diagnostics from experimental versions of convection-permitting models (CPMs) provide valuable information regarding severe weather potential. While past studies have focused on predicting any type of severe weather, this study uses a CPM-based Weather Research and Forecasting (WRF) Model ensemble initialized daily at the National Severe Storms Laboratory (NSSL) to derive tornado probabilities using a combination of simulated storm diagnostics and environmental parameters. Daily probabilistic tornado forecasts are developed from the NSSL-WRF ensemble using updraft helicity (UH) as a tornado proxy. The UH fields are combined with simulated environmental fields such as lifted condensation level (LCL) height, most unstable and surface-based CAPE (MUCAPE and SBCAPE, respectively), and multifield severe weather parameters such as the significant tornado parameter (STP). Varying thresholds of 2–5-km updraft helicity were tested with differing values of σ in the Gaussian smoother that was used to derive forecast probabilities, as well as different environmental information, with the aim of maximizing both forecast skill and reliability. The addition of environmental information improved the reliability and the critical success index (CSI) while slightly degrading the area under the receiver operating characteristic (ROC) curve across all UH thresholds and σ values. The probabilities accurately reflected the location of tornado reports, and three case studies demonstrate value to forecasters. Based on initial tests, four sets of tornado probabilities were chosen for evaluation by participants in the National Oceanic and Atmospheric Administration’s Hazardous Weather Testbed Spring Forecasting Experiment, which ran from 4 May to 5 June 2015. Participants found the probabilities useful and noted an overforecasting tendency.
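A minimal sketch of the kind of UH-plus-environment probability derivation described above, under assumed thresholds: a grid point counts toward the ensemble fraction only where UH exceedance coincides with favorable SBCAPE and LCL values, and the fraction is then smoothed with a Gaussian kernel of width σ. None of the numbers below are the values tested in the study.

```python
# Assumed thresholds throughout; not the study's tested values.
import numpy as np
from scipy.ndimage import gaussian_filter

uh = np.random.gamma(2.0, 12.0, size=(10, 100, 100))        # (members, ny, nx)
sbcape = np.random.gamma(2.0, 500.0, size=(10, 100, 100))   # J kg^-1
lcl = np.random.gamma(4.0, 300.0, size=(10, 100, 100))      # m AGL

# Tornado proxy: UH exceedance only where the low-level environment is favorable.
proxy = (uh >= 60.0) & (sbcape >= 100.0) & (lcl <= 1500.0)

probs = gaussian_filter(proxy.mean(axis=0), sigma=1.5)      # sigma in grid points
print("max tornado probability:", round(float(probs.max()), 3))
```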

Full access
Eswar R. Iyer, Adam J. Clark, Ming Xue, and Fanyou Kong

Abstract

Previous studies examining convection-allowing models (CAMs), as well as NOAA Hazardous Weather Testbed Spring Forecasting Experiments (SFEs), have typically emphasized “day 1” (12–36 h) forecast guidance. These studies find a distinct advantage in CAMs relative to models that parameterize convection, especially for fields strongly tied to convection like precipitation. During the 2014 SFE, “day 2” (36–60 h) forecast products from a CAM ensemble provided by the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma were examined. Quantitative precipitation forecasts (QPFs) from the CAPS ensemble, known as the Storm Scale Ensemble Forecast (SSEF) system, are compared to NCEP’s operational Short-Range Ensemble Forecast (SREF) system, which provides lateral boundary conditions for the SSEF, to see if the CAM ensemble outperforms the SREF through forecast hours 36–60. Equitable threat scores (ETSs) were computed for precipitation thresholds ranging from 0.10 to 0.75 in. for each SSEF and SREF member, as well as ensemble means, for 3-h accumulation periods. The ETS difference between the SSEF and SREF peaked during hours 36–42. Probabilistic forecasts were evaluated using the area under the receiver operating characteristic curve (ROC area). The SSEF had higher values of ROC area, especially at thresholds ≥ 0.50 in. Additionally, time–longitude diagrams of diurnally averaged rainfall were constructed for each SSEF/SREF ensemble member. Spatial correlation coefficients between forecasts and observations in time–longitude space indicated that the SSEF depicted the diurnal cycle much better than the SREF, which underforecast precipitation with a peak that had a 3-h phase lag. A minority of SREF members performed well.
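The time–longitude (Hovmöller-style) comparison mentioned above reduces to two averaging steps plus a correlation, as the sketch below shows with invented arrays: rainfall is averaged over days and latitude to give hour-of-day versus longitude diagrams, and the forecast and observed diagrams are compared with a spatial correlation coefficient.

```python
# Invented arrays: (n_days, n_hours, ny, nx) 3-h rainfall accumulations.
import numpy as np

rain_fcst = np.random.gamma(0.5, 4.0, size=(30, 8, 60, 120))
rain_obs = np.random.gamma(0.5, 4.0, size=(30, 8, 60, 120))

# Average over days and latitude -> (hour of day, longitude) diagrams.
hov_fcst = rain_fcst.mean(axis=(0, 2))
hov_obs = rain_obs.mean(axis=(0, 2))

# Spatial correlation between forecast and observed diagrams.
corr = np.corrcoef(hov_fcst.ravel(), hov_obs.ravel())[0, 1]
print("time-longitude correlation:", round(float(corr), 3))
```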

Full access
Yunsung Hwang, Adam J. Clark, Valliappa Lakshmanan, and Steven E. Koch

Abstract

Planning and managing commercial airplane routes to avoid thunderstorms requires very skillful and frequently updated 0–8-h forecasts of convection. The National Oceanic and Atmospheric Administration’s High-Resolution Rapid Refresh (HRRR) model is well suited for this purpose, being initialized hourly and providing explicit forecasts of convection out to 15 h. However, because of difficulties with depicting convection at the time of model initialization and shortly thereafter (i.e., during model spinup), relatively simple extrapolation techniques, on average, perform better than the HRRR at 0–2-h lead times. Thus, recently developed nowcasting techniques blend extrapolation-based forecasts with numerical weather prediction (NWP)-based forecasts, heavily weighting the extrapolation forecasts at 0–2-h lead times and transitioning emphasis to the NWP-based forecasts at the later lead times. In this study, a new approach to applying different weights to blend extrapolation and model forecasts based on intensities and forecast times is applied and tested. An image-processing method of morphing between extrapolation and model forecasts to create nowcasts is described, and its skill is compared to that of extrapolation forecasts and forecasts from the HRRR. The new approach is called salient cross dissolve (Sal CD), which is compared to a commonly used method called linear cross dissolve (Lin CD). Examination of forecasts and observations of the maximum altitude of echo-top heights ≥18 dBZ, along with measurement of forecast skill using neighborhood-based methods, shows that Sal CD significantly improves upon Lin CD, as well as upon the HRRR at 2–5-h lead times.
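For concreteness, here is a minimal sketch of the linear cross-dissolve (Lin CD) baseline that Sal CD is compared against: a lead-time-dependent weighted average that trusts extrapolation at short leads and the NWP forecast later. The 0–8-h linear ramp and the fields are assumptions for illustration; Sal CD's intensity-dependent weighting is the paper's refinement and is not reproduced here.

```python
# The 0-8-h linear ramp below is an assumed weighting, not the paper's blend spec.
import numpy as np

def lin_cross_dissolve(extrap, model, lead_hr, ramp_end=8.0):
    """Blend two forecast grids; weight shifts linearly toward the model."""
    w_model = np.clip(lead_hr / ramp_end, 0.0, 1.0)
    return (1.0 - w_model) * extrap + w_model * model

extrap_fcst = np.random.gamma(2.0, 8.0, size=(100, 100))  # stand-in echo tops
model_fcst = np.random.gamma(2.0, 8.0, size=(100, 100))

nowcast_3h = lin_cross_dissolve(extrap_fcst, model_fcst, lead_hr=3.0)
print("blend weight on model at 3 h:", 3.0 / 8.0)
```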

Full access
Adam J. Clark, William A. Gallus Jr., Ming Xue, and Fanyou Kong

Abstract

During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, a 10-member 4-km grid-spacing Storm-Scale Ensemble Forecast (SSEF) system was run in real time to provide experimental severe weather forecasting guidance. Five SSEF system members used perturbed initial and lateral boundary conditions (ICs and LBCs) and mixed physics (ENS4), and five members used only mixed physics (ENS4phys). This ensemble configuration facilitates a comparison of ensemble spread generated by a combination of perturbed ICs/LBCs and mixed physics to that generated by only mixed physics, which is examined herein. In addition, spread growth and spread-error metrics for the two SSEF system configurations are compared to similarly configured 20-km grid-spacing convection-parameterizing ensembles (ENS20 and ENS20phys). Twelve forecast fields are examined for 20 cases.

For most fields, ENS4 mean spread growth rates are higher than those of ENS20 for ensemble configurations with both sets of perturbations, which is expected as smaller scales of motion are resolved at higher resolution. However, when ensembles with only mixed physics are compared, mass-related fields (i.e., geopotential height and mean sea level pressure) in ENS20phys have slightly higher spread growth rates than ENS4phys, likely resulting from the additional physics uncertainty in ENS20phys from varied cumulus parameterizations that were not used at 4-km grid spacing. For 4- and 20-km configurations, the proportion of spread generated by mixed physics in ENS4 and ENS20 increased with increasing forecast lead time. In addition, low-level fields (e.g., 2-m temperature) had a higher proportion of spread generated by mixed physics than mass-related fields. Spread-error analyses revealed that ensemble variance from the current uncalibrated ensemble systems was not a reliable indicator of forecast uncertainty. Furthermore, ENS4 had better statistical consistency than ENS20 for some mass-related fields, wind-related fields, precipitation, and most unstable convective available potential energy (MUCAPE), with no noticeable differences for low-level temperature and dewpoint fields. The variety of results obtained for the different types of fields examined suggests that future ensemble design should give careful consideration to the specific types of forecasts desired by the user.
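The two diagnostics used throughout this abstract, spread growth and spread-error consistency, can be sketched as follows with hypothetical arrays: ensemble variance is computed per lead time, its growth examined, and the variance compared against the mean squared error of the ensemble mean.

```python
# Hypothetical arrays: ens is (n_members, n_leads, ny, nx); obs is (n_leads, ny, nx).
import numpy as np

growth = np.linspace(0.5, 2.0, 12)                   # imposed spread growth
ens = np.random.normal(size=(10, 12, 50, 50)) * growth[None, :, None, None]
obs = np.random.normal(scale=1.5, size=(12, 50, 50))

spread = ens.var(axis=0, ddof=1).mean(axis=(1, 2))   # ensemble variance per lead
mse = ((ens.mean(axis=0) - obs) ** 2).mean(axis=(1, 2))

print("spread growth between leads:", np.diff(spread))
print("spread/MSE ratio (near 1 = statistically consistent):", spread / mse)
```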

Full access
Eric D. Loken, Adam J. Clark, and Amy McGovern

Abstract

Recent research has shown that random forests (RFs) can create skillful probabilistic severe weather hazard forecasts from numerical weather prediction (NWP) ensemble data. However, it remains unclear how RFs use NWP data and how predictors should be generated from NWP ensembles. This paper compares two methods for creating RFs for next-day severe weather prediction using simulated forecast data from the convection-allowing High-Resolution Ensemble Forecast System, version 2.1 (HREFv2.1). The first method uses predictors from individual ensemble members (IM) at the point of prediction, while the second uses ensemble mean (EM) predictors at multiple spatial points. IM and EM RFs are trained with all predictors as well as predictor subsets, and the Python module treeinterpreter (TI) is used to assess RF variable importance and the relationships learned by the RFs. Results show that EM RFs have better objective skill compared to similarly configured IM RFs for all hazards, presumably because EM predictors contain less noise. In both IM and EM RFs, storm variables are found to be most important, followed by index and environment variables. Interestingly, RFs created from storm and index variables tend to produce forecasts with skill greater than or equal to that of the all-predictor RFs. TI analysis shows that the RFs emphasize different predictors for different hazards in a way that makes physical sense. Further, TI shows that RFs create calibrated hazard probabilities based on complex, multivariate relationships that go well beyond thresholding 2–5-km updraft helicity.
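A hedged sketch of the EM-style workflow described above, using scikit-learn and the treeinterpreter package: a random forest is trained on stand-in ensemble-mean predictors, and one prediction is decomposed into per-predictor contributions. The feature names and data are invented; only the TI call reflects the package's actual interface.

```python
# Invented features/data; ti.predict is the treeinterpreter package's interface.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))           # stand-ins: mean UH, SBCAPE, shear, LCL
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1.5).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TI decomposes each prediction as bias + sum of per-feature contributions.
pred, bias, contributions = ti.predict(rf, X[:1])
for name, c in zip(["UH", "SBCAPE", "SHEAR", "LCL"], contributions[0, :, 1]):
    print(name, "contribution to hazard probability:", round(float(c), 3))
```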

Full access