Search Results

Showing items 11–20 of 43 for Author or Editor: Adam J. Clark in Weather and Forecasting.
Burkely T. Gallo, Adam J. Clark, and Scott R. Dembek

Abstract

Hourly maximum fields of simulated storm diagnostics from experimental versions of convection-permitting models (CPMs) provide valuable information regarding severe weather potential. While past studies have focused on predicting any type of severe weather, this study uses a CPM-based Weather Research and Forecasting (WRF) Model ensemble initialized daily at the National Severe Storms Laboratory (NSSL) to derive tornado probabilities using a combination of simulated storm diagnostics and environmental parameters. Daily probabilistic tornado forecasts are developed from the NSSL-WRF ensemble using updraft helicity (UH) as a tornado proxy. The UH fields are combined with simulated environmental fields such as lifted condensation level (LCL) height, most unstable and surface-based CAPE (MUCAPE and SBCAPE, respectively), and multifield severe weather parameters such as the significant tornado parameter (STP). Varying thresholds of 2–5-km updraft helicity were tested with differing values of σ in the Gaussian smoother that was used to derive forecast probabilities, as well as different environmental information, with the aim of maximizing both forecast skill and reliability. The addition of environmental information improved the reliability and the critical success index (CSI) while slightly degrading the area under the receiver operating characteristic (ROC) curve across all UH thresholds and σ values. The probabilities accurately reflected the location of tornado reports, and three case studies demonstrate value to forecasters. Based on initial tests, four sets of tornado probabilities were chosen for evaluation by participants in the 2015 National Oceanic and Atmospheric Administration’s Hazardous Weather Testbed Spring Forecasting Experiment from 4 May to 5 June 2015. Participants found the probabilities useful and noted an overforecasting tendency.
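
A minimal sketch of the probability-generation step described above: ensemble relative frequency of a UH proxy, filtered by an environmental parameter (STP here) and spread with a Gaussian smoother. The thresholds, σ, and STP mask value are illustrative assumptions, not the study's tuned settings.

```python
import numpy as np

def gaussian_smooth(field, sigma):
    """Separable Gaussian smoothing with a truncated kernel (numpy-only)."""
    n = int(3 * sigma)
    x = np.arange(-n, n + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, field)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def tornado_probabilities(uh, stp, uh_thresh=75.0, stp_thresh=1.0,
                          sigma_km=120.0, dx_km=4.0):
    """Smoothed tornado probabilities from ensemble hourly-max UH, retained
    only where the simulated environment (STP) supports tornadoes.
    All threshold values here are assumptions for illustration."""
    # ensemble relative frequency of the UH proxy in supportive environments
    freq = ((uh >= uh_thresh) & (stp >= stp_thresh)).mean(axis=0)
    # sigma converted from km to grid points before smoothing
    return gaussian_smooth(freq, sigma_km / dx_km)
```

Raising the UH threshold or shrinking σ sharpens the probabilities at the cost of reliability, which is the trade-off the study tunes against.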

Full access
Eswar R. Iyer, Adam J. Clark, Ming Xue, and Fanyou Kong

Abstract

Previous studies examining convection-allowing models (CAMs), as well as NOAA/Hazardous Weather Testbed Spring Forecasting Experiments (SFEs) have typically emphasized “day 1” (12–36 h) forecast guidance. These studies find a distinct advantage in CAMs relative to models that parameterize convection, especially for fields strongly tied to convection like precipitation. During the 2014 SFE, “day 2” (36–60 h) forecast products from a CAM ensemble provided by the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma were examined. Quantitative precipitation forecasts (QPFs) from the CAPS ensemble, known as the Storm Scale Ensemble Forecast (SSEF) system, are compared to NCEP’s operational Short Range Ensemble Forecast (SREF) system, which provides lateral boundary conditions for the SSEF, to see if the CAM ensemble outperforms the SREF through forecast hours 36–60. Equitable threat scores (ETSs) were computed for precipitation thresholds ranging from 0.10 to 0.75 in. for each SSEF and SREF member, as well as ensemble means, for 3-h accumulation periods. The ETS difference between the SSEF and SREF peaked during hours 36–42. Probabilistic forecasts were evaluated using the area under the receiver operating characteristic curve (ROC area). The SSEF had higher values of ROC area, especially at thresholds ≥ 0.50 in. Additionally, time–longitude diagrams of diurnally averaged rainfall were constructed for each SSEF/SREF ensemble member. Spatial correlation coefficients between forecasts and observations in time–longitude space indicated that the SSEF depicted the diurnal cycle much better than the SREF, which underforecasted precipitation with a peak that had a 3-h phase lag. A minority of SREF members performed well.
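
The ETS used above follows the standard 2x2 contingency-table form; a sketch for a single accumulation threshold (field names are illustrative):

```python
import numpy as np

def equitable_threat_score(fcst, obs, thresh):
    """ETS for one precipitation threshold: the threat score corrected for
    hits expected from a random forecast with the same event frequencies."""
    f, o = fcst >= thresh, obs >= thresh
    hits = int(np.sum(f & o))
    false_alarms = int(np.sum(f & ~o))
    misses = int(np.sum(~f & o))
    # expected random hits given marginal event counts
    hits_random = (hits + misses) * (hits + false_alarms) / f.size
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom > 0 else 0.0
```

A perfect forecast scores 1, a random one 0, so positive ETS differences between the SSEF and SREF indicate genuine skill rather than frequency bias.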

Full access
Yunsung Hwang, Adam J. Clark, Valliappa Lakshmanan, and Steven E. Koch

Abstract

Planning and managing commercial airplane routes to avoid thunderstorms requires very skillful and frequently updated 0–8-h forecasts of convection. The National Oceanic and Atmospheric Administration’s High-Resolution Rapid Refresh (HRRR) model is well suited for this purpose, being initialized hourly and providing explicit forecasts of convection out to 15 h. However, because of difficulties with depicting convection at the time of model initialization and shortly thereafter (i.e., during model spinup), relatively simple extrapolation techniques, on average, perform better than the HRRR at 0–2-h lead times. Thus, recently developed nowcasting techniques blend extrapolation-based forecasts with numerical weather prediction (NWP)-based forecasts, heavily weighting the extrapolation forecasts at 0–2-h lead times and transitioning emphasis to the NWP-based forecasts at the later lead times. In this study, a new approach to applying different weights to blend extrapolation and model forecasts based on intensities and forecast times is applied and tested. An image-processing method of morphing between extrapolation and model forecasts to create nowcasts is described and the skill is compared to extrapolation forecasts and forecasts from the HRRR. The new approach is called salient cross dissolve (Sal CD), which is compared to a commonly used method called linear cross dissolve (Lin CD). Examinations of forecasts and observations of the maximum altitude of echo-top heights ≥18 dBZ and measurement of forecast skill using neighborhood-based methods show that Sal CD significantly improves upon both Lin CD and the HRRR at 2–5-h lead times.
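
The blending idea can be sketched as follows. Lin CD applies a single time-dependent weight to the whole field; Sal CD additionally considers intensities. This simplified stand-in boosts extrapolation weight where echoes are salient (≥18 dBZ), but the published method also morphs features spatially, which is not reproduced here; the ramp length and boost value are assumptions.

```python
import numpy as np

def lin_cd(extrap, nwp, lead_h, t_max=8.0):
    """Linear cross dissolve: one time-dependent weight for the whole field
    (t_max, the ramp length, is an assumption, not the operational setting)."""
    w = float(np.clip(1.0 - lead_h / t_max, 0.0, 1.0))  # favor extrapolation early
    return w * extrap + (1.0 - w) * nwp

def sal_cd_like(extrap, nwp, lead_h, t_max=8.0, thresh=18.0, boost=0.2):
    """Intensity-aware blend in the spirit of Sal CD: salient echoes keep
    extra extrapolation weight. Illustrative simplification only."""
    w = float(np.clip(1.0 - lead_h / t_max, 0.0, 1.0))
    salient = np.maximum(extrap, nwp) >= thresh
    w_field = np.where(salient, np.minimum(w + boost, 1.0), w)
    return w_field * extrap + (1.0 - w_field) * nwp
```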

Full access
Adam J. Clark, William A. Gallus Jr., Ming Xue, and Fanyou Kong

Abstract

During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, a 10-member 4-km grid-spacing Storm-Scale Ensemble Forecast (SSEF) system was run in real time to provide experimental severe weather forecasting guidance. Five SSEF system members used perturbed initial and lateral boundary conditions (ICs and LBCs) and mixed physics (ENS4), and five members used only mixed physics (ENS4phys). This ensemble configuration facilitates a comparison of ensemble spread generated by a combination of perturbed ICs/LBCs and mixed physics to that generated by only mixed physics, which is examined herein. In addition, spread growth and spread-error metrics for the two SSEF system configurations are compared to similarly configured 20-km grid-spacing convection-parameterizing ensembles (ENS20 and ENS20phys). Twelve forecast fields are examined for 20 cases.

For most fields, ENS4 mean spread growth rates are higher than ENS20 for ensemble configurations with both sets of perturbations, which is expected as smaller scales of motion are resolved at higher resolution. However, when ensembles with only mixed physics are compared, mass-related fields (i.e., geopotential height and mean sea level pressure) in ENS20phys have slightly higher spread growth rates than ENS4phys, likely resulting from the additional physics uncertainty in ENS20phys from varied cumulus parameterizations that were not used at 4-km grid spacing. For 4- and 20-km configurations, the proportion of spread generated by mixed physics in ENS4 and ENS20 increased with increasing forecast lead time. In addition, low-level fields (e.g., 2-m temperature) had a higher proportion of spread generated by mixed physics than mass-related fields. Spread-error analyses revealed that ensemble variance from the current uncalibrated ensemble systems was not a reliable indicator of forecast uncertainty. Furthermore, ENS4 had better statistical consistency than ENS20 for some mass-related fields, wind-related fields, precipitation, and most unstable convective available potential energy (MUCAPE) with no noticeable differences for low-level temperature and dewpoint fields. The variety of results obtained for the different types of fields examined suggests that future ensemble design should give careful consideration to the specific types of forecasts desired by the user.
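
The spread-error comparison above rests on two domain-averaged quantities; a minimal sketch (the pairing of spread with ensemble-mean RMSE is the standard statistical-consistency check, though the study's exact averaging choices may differ):

```python
import numpy as np

def spread_error_pair(ens, obs):
    """Domain-averaged ensemble spread and ensemble-mean RMSE.
    Statistical consistency asks that spread and error match on average."""
    mean = ens.mean(axis=0)
    spread = ens.std(axis=0, ddof=1).mean()        # member-to-member variability
    rmse = np.sqrt(np.mean((mean - obs) ** 2))     # error of the ensemble mean
    return spread, rmse
```

An uncalibrated ensemble whose spread sits well below (above) its error is underdispersive (overdispersive), which is the sense in which variance was found to be an unreliable uncertainty indicator.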

Full access
Eric D. Loken, Adam J. Clark, and Amy McGovern

Abstract

Recent research has shown that random forests (RFs) can create skillful probabilistic severe weather hazard forecasts from numerical weather prediction (NWP) ensemble data. However, it remains unclear how RFs use NWP data and how predictors should be generated from NWP ensembles. This paper compares two methods for creating RFs for next-day severe weather prediction using simulated forecast data from the convection-allowing High-Resolution Ensemble Forecast System, version 2.1 (HREFv2.1). The first method uses predictors from individual ensemble members (IM) at the point of prediction, while the second uses ensemble mean (EM) predictors at multiple spatial points. IM and EM RFs are trained with all predictors as well as predictor subsets, and the Python module tree interpreter (TI) is used to assess RF variable importance and the relationships learned by the RFs. Results show that EM RFs have better objective skill compared to similarly configured IM RFs for all hazards, presumably because EM predictors contain less noise. In both IM and EM RFs, storm variables are found to be most important, followed by index and environment variables. Interestingly, RFs created from storm and index variables tend to produce forecasts with greater or equal skill than those from the all-predictor RFs. TI analysis shows that the RFs emphasize different predictors for different hazards in a way that makes physical sense. Further, TI shows that RFs create calibrated hazard probabilities based on complex, multivariate relationships that go well beyond thresholding 2–5-km updraft helicity.
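
The two predictor-generation methods compared above can be sketched directly; the offset stencil for the EM method is illustrative, not the paper's configuration:

```python
import numpy as np

def im_predictors(ens_field, i, j):
    """IM method: one predictor per ensemble member, taken at the
    point of prediction."""
    return ens_field[:, i, j]

def em_predictors(ens_field, i, j,
                  offsets=((0, 0), (-2, 0), (2, 0), (0, -2), (0, 2))):
    """EM method: ensemble-mean predictors sampled at several surrounding
    spatial points (the stencil here is an assumption)."""
    mean = ens_field.mean(axis=0)
    return np.array([mean[i + di, j + dj] for di, dj in offsets])
```

Averaging across members before sampling is why EM predictors carry less member-to-member noise, consistent with the skill advantage reported for EM RFs.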

Full access
Eric D. Loken, Adam J. Clark, and Christopher D. Karstens

Abstract

Extracting explicit severe weather forecast guidance from convection-allowing ensembles (CAEs) is challenging since CAEs cannot directly simulate individual severe weather hazards. Currently, CAE-based severe weather probabilities must be inferred from one or more storm-related variables, which may require extensive calibration and/or contain limited information. Machine learning (ML) offers a way to obtain severe weather forecast probabilities from CAEs by relating CAE forecast variables to observed severe weather reports. This paper develops and verifies a random forest (RF)-based ML method for creating day 1 (1200–1200 UTC) severe weather hazard probabilities and categorical outlooks based on 0000 UTC Storm-Scale Ensemble of Opportunity (SSEO) forecast data and observed Storm Prediction Center (SPC) storm reports. RF forecast probabilities are compared against severe weather forecasts from calibrated SSEO 2–5-km updraft helicity (UH) forecasts and SPC convective outlooks issued at 0600 UTC. Continuous RF probabilities routinely have the highest Brier skill scores (BSSs), regardless of whether the forecasts are evaluated over the full domain or regional/seasonal subsets. Even when RF probabilities are truncated at the probability levels issued by the SPC, the RF forecasts often have BSSs better than or comparable to corresponding UH and SPC forecasts. Relative to the UH and SPC forecasts, the RF approach performs best for severe wind and hail prediction during the spring and summer (i.e., March–August). Overall, it is concluded that the RF method presented here provides skillful, reliable CAE-derived severe weather probabilities that may be useful to severe weather forecasters and decision-makers.
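
The Brier skill score used for the comparisons above has a compact standard form; a sketch against a reference forecast such as climatology:

```python
import numpy as np

def brier_skill_score(prob, obs, ref_prob):
    """BSS of probability forecasts against binary outcomes (0/1), relative
    to a reference forecast; positive values beat the reference."""
    bs = np.mean((prob - obs) ** 2)          # Brier score of the forecast
    bs_ref = np.mean((ref_prob - obs) ** 2)  # Brier score of the reference
    return 1.0 - bs / bs_ref
```

A perfect probabilistic forecast yields BSS = 1 and the reference itself yields 0, so "highest BSS" above means closest to perfectly sharp and reliable probabilities.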

Free access
Adam J. Clark, William A. Gallus Jr., and Morris L. Weisman

Abstract

Since 2003 the National Center for Atmospheric Research (NCAR) has been running various experimental convection-allowing configurations of the Weather Research and Forecasting Model (WRF) for domains covering a large portion of the central United States during the warm season (April–July). In this study, the skill of 3-hourly accumulated precipitation forecasts from a large sample of these convection-allowing simulations conducted during 2004–05 and 2007–08 is compared to that from operational North American Mesoscale (NAM) model forecasts using a neighborhood-based equitable threat score (ETS). Separate analyses were conducted for simulations run before and after the implementation in 2007 of positive-definite (PD) moisture transport for the NCAR-WRF simulations. The neighborhood-based ETS (denoted 〈ETS〉r) relaxes the criteria for “hits” (i.e., correct forecasts) by considering grid points within a specified radius r. It is shown that 〈ETS〉r is more useful than the traditional ETS because 〈ETS〉r can be used to diagnose differences in precipitation forecast skill between different models as a function of spatial scale, whereas the traditional ETS only considers the spatial scale of the verification grid. It was found that differences in 〈ETS〉r between NCAR-WRF and NAM generally increased with increasing r, with NCAR-WRF having higher scores. Examining time series of 〈ETS〉r for r = 100 and r = 0 km (which simply reduces to the “traditional” ETS), statistically significant differences between NCAR-WRF and NAM were found at many forecast lead times for 〈ETS〉100 but only a few times for 〈ETS〉0. Larger and more statistically significant differences occurred with the 2007–08 cases relative to the 2004–05 cases. Because of differences in model configurations and dominant large-scale weather regimes, a more controlled experiment would have been needed to diagnose the reason for the larger differences that occurred with the 2007–08 cases.
Finally, a compositing technique was used to diagnose the differences in the spatial distribution of the forecasts. This technique implied westward displacement errors for NAM model forecasts in both sets of cases and in NCAR-WRF model forecasts for the 2007–08 cases. Generally, the results are encouraging because they imply that advantages in convection-allowing relative to convection-parameterizing simulations noted in recent studies are reflected in an objective neighborhood-based metric.
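
The neighborhood relaxation can be sketched as follows: an event at a grid point counts as a hit if a matching event lies within r grid points in the other field, and r = 0 recovers the traditional score. This square-neighborhood, brute-force version is a simplification for illustration; the paper defines the exact contingency rules.

```python
import numpy as np

def neighborhood_ets(fcst, obs, thresh, r):
    """Neighborhood-based ETS, <ETS>_r (illustrative sketch)."""
    f, o = fcst >= thresh, obs >= thresh
    ny, nx = f.shape

    def near(mask, i, j):
        # any event within r grid points (square neighborhood, clipped at edges)
        return mask[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1].any()

    hits = false_alarms = misses = 0
    for i in range(ny):
        for j in range(nx):
            if f[i, j] and near(o, i, j):
                hits += 1
            elif f[i, j]:
                false_alarms += 1
            elif o[i, j] and not near(f, i, j):
                misses += 1
    hits_random = (hits + misses) * (hits + false_alarms) / f.size
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom > 0 else 0.0
```

A forecast displaced by one grid point scores poorly at r = 0 but perfectly at r = 1, which is exactly how the metric separates displacement error from outright failure.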

Full access
Michael A. VandenBerg, Michael C. Coniglio, and Adam J. Clark

Abstract

This study compares next-day forecasts of storm motion from convection-allowing models with 1- and 4-km grid spacing. A tracking algorithm is used to determine the motion of discrete storms in both the model forecasts and an analysis of radar observations. The distributions of both the raw storm motions and the deviations of these motions from the environmental flow are examined to determine the overall biases of the 1- and 4-km forecasts and how they compare to the observed storm motions. The mean storm speeds for the 1-km forecasts are significantly closer to the observed mean than those for the 4-km forecasts when viewed relative to the environmental flow/shear, but mostly for the shorter-lived storms. For storm directions, the 1-km forecast storms move similarly to the 4-km forecast storms on average. However, for the raw storm motions and those relative to the 0–6-km shear, results suggest that the 1-km forecasts may alleviate some of a clockwise (rightward) bias of the 4-km forecasts, particularly for those that do not deviate strongly from the 0–6-km shear vector. This improvement in a clockwise bias also is seen for the longer-lived storms, but is not seen when viewing the storm motions relative to the 850–300-hPa mean wind or Bunkers motion vector. These results suggest that a reduction from 4- to 1-km grid spacing can potentially improve forecasts of storm motion, but further analysis of closer storm analogs is needed to confirm these results and to explore specific hypotheses for their differences.
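
The directional deviations analyzed above reduce to a signed angle between the storm-motion vector and an environmental reference vector (0–6-km shear, 850–300-hPa mean wind, or Bunkers motion). A sketch in math convention, where negative values are clockwise (rightward) deviations; the sign convention is this sketch's choice:

```python
import numpy as np

def deviation_deg(storm_u, storm_v, ref_u, ref_v):
    """Signed angle (degrees) from a reference vector to the storm motion,
    wrapped to [-180, 180); negative = clockwise (rightward) deviation."""
    ang = np.degrees(np.arctan2(storm_v, storm_u) - np.arctan2(ref_v, ref_u))
    return (ang + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
```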

Full access
Adam J. Clark, Christopher J. Schaffer, William A. Gallus Jr., and Kaj Johnson-O’Mara

Abstract

Using quasigeostrophic arguments and numerical simulations, past works have developed conceptual models of vertical circulations induced by linear and curved jet streaks. Because jet-induced vertical motion could influence the development of severe weather, these conceptual models, especially the “four quadrant” model for linear jet streaks, are often applied by operational forecasters. The present study examines the climatology of tornado, hail, and severe wind reports relative to upper-level jet streaks, along with temporal trends in storm report frequencies and changes in report distributions for different jet streak directions. In addition, composite fields (e.g., divergence, vertical velocity) are analyzed for jet streak regions to examine whether the fields correspond to what is expected from conceptual models of curved or linear jet streaks, and whether the fields help explain the storm report distributions.

During the period analyzed, 84% of storm reports were associated with upper-level jet streaks, with June–August having the lowest percentages. In March and April the left-exit quadrant had the most storm reports, while after April the right-entrance quadrant was associated with the most reports. Composites revealed that tornado and hail reports are concentrated in the jet-exit region along the major jet axis and in the right-entrance quadrant. Wind reports have similar maxima, but the right-entrance quadrant maximum is more pronounced. Upper-level composite divergence fields generally correspond to what would be expected from the four-quadrant model, but differences in the magnitudes of the vertical velocity between the quadrants and locations of divergent–convergent centers may have resulted from jet curvature. The maxima in the storm report distributions are not well collocated with the maxima in the upper-level divergence fields, but are much better collocated with low-level convergence maxima that exist in both exit regions and extend into the right-entrance region. Composites of divergence–convergence with linear, cyclonic, and anticyclonic jet streaks also generally matched conceptual models for curved jet streaks, and it was found that wind reports have a notable maximum in the right-entrance quadrant of both anticyclonic and linear jet streaks. Finally, it was found that the upper-level divergence and vertical velocity in all jet quadrants have a tendency to decrease as jet streak directions shift from SSW to NNW.
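
The quadrant bookkeeping underlying the climatology reduces to projecting each report's position into a jet-relative frame. A sketch, with the jet streak core placed at the origin and the jet direction given in math convention (0° = eastward, counterclockwise positive); the coordinate setup is this sketch's assumption:

```python
import numpy as np

def jet_quadrant(x, y, jet_dir_deg):
    """Classify a point relative to a jet streak core at the origin into the
    four-quadrant model: left/right entrance/exit."""
    th = np.radians(jet_dir_deg)
    along = x * np.cos(th) + y * np.sin(th)    # downstream distance
    cross = -x * np.sin(th) + y * np.cos(th)   # positive to the left, looking downstream
    side = "left" if cross > 0 else "right"
    region = "exit" if along > 0 else "entrance"
    return f"{side}-{region}"
```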

Full access
Aaron Johnson, Xuguang Wang, Yongming Wang, Anthony Reinhart, Adam J. Clark, and Israel L. Jirak

Abstract

An object-based probabilistic (OBPROB) forecasting framework is developed and applied, together with a more traditional neighborhood-based framework, to convection-permitting ensemble forecasts produced by the University of Oklahoma (OU) Multiscale data Assimilation and Predictability (MAP) laboratory during the 2017 and 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiments. Case studies from 2017 are used for parameter tuning and demonstration of methodology, while the 2018 ensemble forecasts are systematically verified. The 2017 case study demonstrates that the OBPROB forecast product can provide a unique tool to operational forecasters that includes convective-scale details such as storm mode and morphology, which are typically lost in neighborhood-based methods, while also providing quantitative ensemble probabilistic guidance about those details in a more easily interpretable format than the more commonly used paintball plots. The case study also demonstrates that objective verification metrics reveal different relative performance of the ensemble at different forecast lead times depending on the verification framework (i.e., object versus neighborhood) because of the different features emphasized by object- and neighborhood-based evaluations. Both frameworks are then used for a systematic evaluation of 26 forecasts from the spring of 2018. The OBPROB forecast verification as configured in this study shows less sensitivity to forecast lead time than the neighborhood forecasts. Both frameworks indicate a need for probabilistic calibration to improve ensemble reliability. However, lower ensemble discrimination for OBPROB than the neighborhood-based forecasts is also noted.
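
The core of an object-based probability is the fraction of members containing an object that matches a target object. In this heavily simplified sketch, objects are centroids and the match criterion is a distance threshold; the actual OBPROB framework also compares attributes such as storm mode and morphology, and its matching rules are not reproduced here:

```python
def object_probability(member_objects, target, max_dist=40.0):
    """Fraction of ensemble members containing an object matching the target
    (objects given as (x, y) centroids; max_dist is an assumed criterion)."""
    def matches(obj):
        # centroid distance as a stand-in for full object matching
        return ((obj[0] - target[0]) ** 2 + (obj[1] - target[1]) ** 2) ** 0.5 <= max_dist

    hit = sum(any(matches(o) for o in objs) for objs in member_objects)
    return hit / len(member_objects)
```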

Free access