Search Results

Showing 1–10 of 42 items for:

  • Author or Editor: Adam J. Clark
  • Weather and Forecasting
Adam J. Clark

Abstract

Methods for generating ensemble mean precipitation forecasts from convection-allowing model (CAM) ensembles based on a simple average of all members at each grid point can have limited utility because of amplitude reduction and overprediction of light precipitation areas caused by averaging complex spatial fields with strong gradients and high-amplitude features. To combat these issues with the simple ensemble mean, a method known as probability matching is commonly used to replace the ensemble mean amounts with amounts sampled from the distribution of ensemble member forecasts, which results in a field whose bias is approximately equal to the average bias of the ensemble members. Thus, the probability matched mean (PM mean hereafter) is viewed as a better representation of the ensemble members than the simple mean, and previous studies find that it is more skillful than any of the individual members. Herein, using nearly a year’s worth of data from a CAM-based ensemble running in real time at the National Severe Storms Laboratory, evidence is provided that the superior performance of the PM mean is at least partially an artifact of the spatial redistribution of precipitation amounts that occurs when the PM mean is computed over a large domain. Specifically, the PM mean enlarges large areas of heavy precipitation and shrinks or even eliminates smaller ones. An alternative approach for the PM mean is developed that restricts the grid points used to those within a specified radius of influence. The new approach has an improved spatial representation of precipitation and is found to perform more skillfully than the standard PM mean at large scales when evaluated with neighborhood-based verification metrics.
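The full-domain probability matching step described in this abstract can be sketched in a few lines; this is a minimal numpy illustration of the standard PM mean (pool all member values, then reassign them by the rank of the simple mean), not the paper's code, and it omits the radius-of-influence variant the paper proposes:

```python
import numpy as np

def pm_mean(ensemble):
    """Probability-matched mean of an ensemble array (n_members, ny, nx).

    The simple ensemble mean supplies the spatial pattern; its amounts
    are replaced by values sampled from the pooled member distribution.
    """
    n_members, ny, nx = ensemble.shape
    mean = ensemble.mean(axis=0).ravel()
    # Pool all member values, sort descending, keep every n-th value so
    # the sample size matches the number of grid points.
    pooled = np.sort(ensemble.ravel())[::-1][::n_members]
    out = np.empty_like(mean)
    # The largest pooled amount goes to the grid point with the largest
    # mean, the second largest to the second-ranked point, and so on.
    out[np.argsort(mean)[::-1]] = pooled
    return out.reshape(ny, nx)
```

Because the pooling is done over the whole grid, amounts can migrate far from where any member placed them, which is the spatial-redistribution artifact the abstract describes.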

Full access
Adam J. Clark

Abstract

This study compares ensemble precipitation forecasts from 10-member, 3-km grid-spacing, CONUS-domain single- and multicore ensembles that were part of the 2016 Community Leveraged Unified Ensemble (CLUE), which was run for the 2016 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. The main results are that a 10-member ARW ensemble was significantly more skillful than a 10-member NMMB ensemble, and a 10-member MIX ensemble (5 ARW and 5 NMMB members) performed about the same as the 10-member ARW ensemble. Skill was measured by the area under the relative operating characteristic curve (AUC) and the fractions skill score (FSS). Rank histograms in the ARW ensemble were flatter than in the NMMB ensemble, indicating that the envelope of ARW members better encompassed observations (i.e., better reliability). Rank histograms in the MIX ensemble were similar to those in the ARW ensemble. In the context of NOAA’s plans for a Unified Forecast System featuring a CAM ensemble with a single core, the results are positive and indicate that it should be possible to develop a single-core system that performs as well as or better than the current operational CAM ensemble, which is known as the High-Resolution Ensemble Forecast System (HREF). However, as new modeling applications are developed and incremental changes move HREF toward a single-core system, more thorough testing and evaluation should be conducted.
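The fractions skill score used in this comparison verifies neighborhood event fractions rather than point matches; a minimal sketch, assuming a simple uniform (boxcar) neighborhood rather than whatever kernel the study used:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions skill score for one threshold and neighborhood size.

    Both fields are converted to binary exceedance grids, averaged over
    a window x window neighborhood, and compared: 1 is perfect, 0 is
    no skill relative to the reference MSE.
    """
    pf = uniform_filter((forecast >= threshold).astype(float), size=window)
    po = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((pf - po) ** 2)
    ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / ref if ref > 0 else np.nan
```

Evaluating FSS across a range of window sizes is what allows the scale-dependent skill statements made in abstracts like this one.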

Full access
Adam J. Clark
and
Eric D. Loken

Abstract

Severe weather probabilities are derived from the Warn-on-Forecast System (WoFS) run by NOAA’s National Severe Storms Laboratory (NSSL) during spring 2018 using the random forest (RF) machine learning algorithm. Recent work has shown this method generates skillful and reliable forecasts when applied to convection-allowing model ensembles for the “Day 1” time range (i.e., 12–36-h lead times), but it has been tested in only one other study for lead times relevant to WoFS (e.g., 0–6 h). Thus, in this paper, various sets of WoFS predictors, which include both environmental and storm-based fields, are input into a RF algorithm and trained using the occurrence of severe weather reports within 39 km of a point to produce severe weather probabilities at 0–3-h lead times. We analyze the skill and reliability of these forecasts, sensitivity to different sets of predictors, and avenues for further improvements. The RF algorithm produced very skillful and reliable severe weather probabilities and significantly outperformed baseline probabilities calculated by finding the best performing updraft helicity (UH) threshold and smoothing parameter. Experiments where different sets of predictors were used to derive RF probabilities revealed 1) storm attribute fields contributed significantly more skill than environmental fields, 2) 2–5 km AGL UH and maximum updraft speed were the best performing storm attribute fields, 3) the most skillful ensemble summary metric was a smoothed mean, and 4) the most skillful forecasts were obtained when smoothed UH fields from individual ensemble members were used as predictors.
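The RF setup described here can be sketched with scikit-learn; the predictors and labels below are entirely synthetic stand-ins for the WoFS fields and the reports-within-39-km labels (only the overall train-then-predict-probabilities shape matches the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical per-point predictors: e.g., smoothed 2-5-km UH, maximum
# updraft speed, and one environmental field. Real WoFS inputs differ.
X = rng.random((1000, 3))
# Synthetic labels: "severe report nearby" made more likely at high UH.
y = (X[:, 0] + 0.2 * rng.standard_normal(1000)) > 0.7

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = rf.predict_proba(X)[:, 1]  # severe weather probabilities in [0, 1]
```

Predictor-set experiments of the kind listed in the abstract amount to refitting this model with different column subsets and comparing verification scores.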

Free access
Shih-Yu Wang
and
Adam J. Clark

Abstract

Using a composite procedure, North American Mesoscale Model (NAM) forecast and observed environments associated with zonally oriented, quasi-stationary surface fronts for 64 cases during July–August 2006–08 were examined for a large region encompassing the central United States. NAM adequately simulated the general synoptic features associated with the frontal environments (e.g., patterns in the low-level wind fields) as well as the positions of the fronts. However, kinematic fields important to frontogenesis such as horizontal deformation and convergence were overpredicted. Surface-based convective available potential energy (CAPE) and precipitable water were also overpredicted, which was likely related to the overprediction of the kinematic fields through convergence of water vapor flux. In addition, a spurious coherence between forecast deformation and precipitation was found using spatial correlation coefficients. Composite precipitation forecasts featured a broad area of rainfall stretched parallel to the composite front, whereas the composite observed precipitation covered a smaller area and had a WNW–ESE orientation relative to the front, consistent with mesoscale convective systems (MCSs) propagating at a slight right angle relative to the thermal gradient. Thus, deficiencies in the NAM precipitation forecasts may at least partially result from the inability to depict MCSs properly. It was observed that errors in the precipitation forecasts appeared to lag those of the kinematic fields, and so it seems likely that deficiencies in the precipitation forecasts are related to the overprediction of the kinematic fields such as deformation. However, no attempts were made to establish whether the overpredicted kinematic fields actually contributed to the errors in the precipitation forecasts or whether the overpredicted kinematic fields were simply an artifact of the precipitation errors. Regardless of the relationship between such errors, recognition of typical warm-season environments associated with these errors should be useful to operational forecasters.
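The kinematic fields in question follow from the gridded wind components; a minimal finite-difference sketch (grid spacing and field names are illustrative, not the study's diagnostics code):

```python
import numpy as np

def deformation_convergence(u, v, dx, dy):
    """Total horizontal deformation and convergence from gridded winds.

    u, v: 2D wind components on a regular (ny, nx) grid.
    Deformation combines the stretching (du/dx - dv/dy) and shearing
    (dv/dx + du/dy) terms; convergence is -(du/dx + dv/dy).
    """
    dudy, dudx = np.gradient(u, dy, dx)
    dvdy, dvdx = np.gradient(v, dy, dx)
    stretching = dudx - dvdy
    shearing = dvdx + dudy
    deformation = np.hypot(stretching, shearing)
    convergence = -(dudx + dvdy)
    return deformation, convergence
```

For a pure deformation flow such as u = x, v = -y, this returns a uniform deformation field of 2 and zero convergence, which is a convenient sanity check.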

Full access
Eric D. Loken
,
Adam J. Clark
,
Ming Xue
, and
Fanyou Kong

Abstract

Given increasing computing power, an important question is whether additional computational resources would be better spent reducing the horizontal grid spacing of a convection-allowing model (CAM) or adding members to form CAM ensembles. The present study investigates this question as it applies to CAM-derived next-day probabilistic severe weather forecasts created by using forecast updraft helicity as a severe weather proxy for 63 days of the 2010 and 2011 NOAA Hazardous Weather Testbed Spring Forecasting Experiments. Forecasts derived from three sets of Weather Research and Forecasting Model configurations are tested: a 1-km deterministic model, a 4-km deterministic model, and an 11-member, 4-km ensemble. Forecast quality is evaluated using relative operating characteristic (ROC) curves, attributes diagrams, and performance diagrams, and forecasts from five representative cases are analyzed to investigate their relative quality and value in a variety of situations. While no statistically significant differences exist between the 4- and 1-km deterministic forecasts in terms of area under ROC curves, the 4-km ensemble forecasts offer weakly significant improvements over the 4-km deterministic forecasts over the entire 63-day dataset. Further, the 4-km ensemble forecasts generally provide greater forecast quality relative to either of the deterministic forecasts on an individual day. Collectively, these results suggest that, for purposes of improving next-day CAM-derived probabilistic severe weather forecasts, additional computing resources may be better spent on adding members to form CAM ensembles than on reducing the horizontal grid spacing of a deterministic model below 4 km.

Full access
Adam J. Clark
,
William A. Gallus Jr.
,
Ming Xue
, and
Fanyou Kong

Abstract

During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, a 10-member 4-km grid-spacing Storm-Scale Ensemble Forecast (SSEF) system was run in real time to provide experimental severe weather forecasting guidance. Five SSEF system members used perturbed initial and lateral boundary conditions (ICs and LBCs) and mixed physics (ENS4), and five members used only mixed physics (ENS4phys). This ensemble configuration facilitates a comparison of ensemble spread generated by a combination of perturbed ICs/LBCs and mixed physics to that generated by only mixed physics, which is examined herein. In addition, spread growth and spread-error metrics for the two SSEF system configurations are compared to similarly configured 20-km grid-spacing convection-parameterizing ensembles (ENS20 and ENS20phys). Twelve forecast fields are examined for 20 cases.

For most fields, ENS4 mean spread growth rates are higher than ENS20 for ensemble configurations with both sets of perturbations, which is expected as smaller scales of motion are resolved at higher resolution. However, when ensembles with only mixed physics are compared, mass-related fields (i.e., geopotential height and mean sea level pressure) in ENS20phys have slightly higher spread growth rates than ENS4phys, likely resulting from the additional physics uncertainty in ENS20phys from varied cumulus parameterizations that were not used at 4-km grid spacing. For 4- and 20-km configurations, the proportion of spread generated by mixed physics in ENS4 and ENS20 increased with increasing forecast lead time. In addition, low-level fields (e.g., 2-m temperature) had a higher proportion of spread generated by mixed physics than mass-related fields. Spread-error analyses revealed that ensemble variance from the current uncalibrated ensemble systems was not a reliable indicator of forecast uncertainty. Furthermore, ENS4 had better statistical consistency than ENS20 for some mass-related fields, wind-related fields, precipitation, and most unstable convective available potential energy (MUCAPE) with no noticeable differences for low-level temperature and dewpoint fields. The variety of results obtained for the different types of fields examined suggests that future ensemble design should give careful consideration to the specific types of forecasts desired by the user.
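The spread-error analysis mentioned above checks whether ensemble variance tracks forecast error; a minimal sketch (a statistically consistent ensemble has mean spread close to the RMSE of its mean, ignoring the small-sample correction for brevity):

```python
import numpy as np

def spread_error(ensemble, obs):
    """Spread and error of an ensemble over many cases.

    ensemble: (n_cases, n_members, npts); obs: (n_cases, npts).
    Returns (mean ensemble spread, RMSE of the ensemble mean); these
    should be roughly equal for a statistically consistent ensemble.
    """
    mean = ensemble.mean(axis=1)
    spread = np.sqrt(ensemble.var(axis=1, ddof=1).mean())
    rmse = np.sqrt(((mean - obs) ** 2).mean())
    return spread, rmse
```

A spread/RMSE ratio well below 1, as is typical of uncalibrated CAM ensembles, indicates underdispersion, i.e., the variance is not a reliable indicator of forecast uncertainty.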

Full access
Eswar R. Iyer
,
Adam J. Clark
,
Ming Xue
, and
Fanyou Kong

Abstract

Previous studies examining convection-allowing models (CAMs), as well as NOAA/Hazardous Weather Testbed Spring Forecasting Experiments (SFEs) have typically emphasized “day 1” (12–36 h) forecast guidance. These studies find a distinct advantage in CAMs relative to models that parameterize convection, especially for fields strongly tied to convection like precipitation. During the 2014 SFE, “day 2” (36–60 h) forecast products from a CAM ensemble provided by the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma were examined. Quantitative precipitation forecasts (QPFs) from the CAPS ensemble, known as the Storm Scale Ensemble Forecast (SSEF) system, are compared to NCEP’s operational Short Range Ensemble Forecast (SREF) system, which provides lateral boundary conditions for the SSEF, to see if the CAM ensemble outperforms the SREF through forecast hours 36–60. Equitable threat scores (ETSs) were computed for precipitation thresholds ranging from 0.10 to 0.75 in. for each SSEF and SREF member, as well as ensemble means, for 3-h accumulation periods. The ETS difference between the SSEF and SREF peaked during hours 36–42. Probabilistic forecasts were evaluated using the area under the receiver operating characteristic curve (ROC area). The SSEF had higher values of ROC area, especially at thresholds ≥ 0.50 in. Additionally, time–longitude diagrams of diurnally averaged rainfall were constructed for each SSEF/SREF ensemble member. Spatial correlation coefficients between forecasts and observations in time–longitude space indicated that the SSEF depicted the diurnal cycle much better than the SREF, which underforecasted precipitation with a peak that had a 3-h phase lag. A minority of SREF members performed well.
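The equitable threat score used in this comparison comes from a 2 x 2 contingency table with a correction for hits expected by chance; a minimal sketch for one threshold:

```python
import numpy as np

def ets(forecast, observed, threshold):
    """Equitable threat score for one precipitation threshold."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    n = f.size
    # Hits expected by chance for a random forecast with the same
    # event frequencies as the forecast and observations.
    chance = (hits + false_alarms) * (hits + misses) / n
    denom = hits + false_alarms + misses - chance
    return (hits - chance) / denom if denom != 0 else np.nan
```

ETS is 1 for a perfect forecast and 0 for a forecast no better than chance, which is why the SSEF-minus-SREF ETS difference is a direct measure of the CAM ensemble's added value.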

Full access
Burkely T. Gallo
,
Adam J. Clark
, and
Scott R. Dembek

Abstract

Hourly maximum fields of simulated storm diagnostics from experimental versions of convection-permitting models (CPMs) provide valuable information regarding severe weather potential. While past studies have focused on predicting any type of severe weather, this study uses a CPM-based Weather Research and Forecasting (WRF) Model ensemble initialized daily at the National Severe Storms Laboratory (NSSL) to derive tornado probabilities using a combination of simulated storm diagnostics and environmental parameters. Daily probabilistic tornado forecasts are developed from the NSSL-WRF ensemble using updraft helicity (UH) as a tornado proxy. The UH fields are combined with simulated environmental fields such as lifted condensation level (LCL) height, most unstable and surface-based CAPE (MUCAPE and SBCAPE, respectively), and multifield severe weather parameters such as the significant tornado parameter (STP). Varying thresholds of 2–5-km updraft helicity were tested with differing values of σ in the Gaussian smoother that was used to derive forecast probabilities, as well as different environmental information, with the aim of maximizing both forecast skill and reliability. The addition of environmental information improved the reliability and the critical success index (CSI) while slightly degrading the area under the receiver operating characteristic (ROC) curve across all UH thresholds and σ values. The probabilities accurately reflected the location of tornado reports, and three case studies demonstrate value to forecasters. Based on initial tests, four sets of tornado probabilities were chosen for evaluation by participants in the 2015 National Oceanic and Atmospheric Administration’s Hazardous Weather Testbed Spring Forecasting Experiment from 4 May to 5 June 2015. Participants found the probabilities useful and noted an overforecasting tendency.
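The UH-threshold-plus-Gaussian-smoother procedure for deriving probabilities can be sketched as follows (grid size, threshold, and σ below are illustrative; the study tested several combinations and added environmental filters not shown here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def uh_probabilities(uh_members, threshold, sigma):
    """Neighborhood tornado-proxy probabilities from ensemble UH.

    uh_members: (n_members, ny, nx) hourly maximum 2-5-km updraft
    helicity fields. Each member is converted to a binary exceedance
    field; the ensemble fraction is then smoothed with a Gaussian
    kernel of width sigma (in grid points) to yield probabilities.
    """
    exceed = (uh_members >= threshold).astype(float)
    return gaussian_filter(exceed.mean(axis=0), sigma=sigma)
```

Conditioning the exceedance fields on environmental parameters (e.g., zeroing points with high LCLs or low STP) is the kind of refinement the abstract reports as improving reliability and CSI.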

Full access
Yunsung Hwang
,
Adam J. Clark
,
Valliappa Lakshmanan
, and
Steven E. Koch

Abstract

Planning and managing commercial airplane routes to avoid thunderstorms requires very skillful and frequently updated 0–8-h forecasts of convection. The National Oceanic and Atmospheric Administration’s High-Resolution Rapid Refresh (HRRR) model is well suited for this purpose, being initialized hourly and providing explicit forecasts of convection out to 15 h. However, because of difficulties with depicting convection at the time of model initialization and shortly thereafter (i.e., during model spinup), relatively simple extrapolation techniques, on average, perform better than the HRRR at 0–2-h lead times. Thus, recently developed nowcasting techniques blend extrapolation-based forecasts with numerical weather prediction (NWP)-based forecasts, heavily weighting the extrapolation forecasts at 0–2-h lead times and transitioning emphasis to the NWP-based forecasts at later lead times. In this study, a new approach that applies different blending weights based on intensity and forecast time is developed and tested. An image-processing method of morphing between extrapolation and model forecasts to create nowcasts is described, and its skill is compared to extrapolation forecasts and forecasts from the HRRR. The new approach is called salient cross dissolve (Sal CD), which is compared to a commonly used method called linear cross dissolve (Lin CD). Examination of forecasts and observations of the maximum altitude of echo-top heights ≥18 dBZ and measurement of forecast skill using neighborhood-based methods show that Sal CD significantly improves upon Lin CD, as well as the HRRR at 2–5-h lead times.
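Linear cross dissolve amounts to a lead-time-dependent weighted average of the two forecast sources; a minimal sketch with hypothetical ramp endpoints (the study's actual weighting curve, and Sal CD's additional intensity dependence, are not shown):

```python
import numpy as np

def lin_cd(extrap, nwp, lead_hours, ramp=(2.0, 5.0)):
    """Linear cross dissolve: blend extrapolation and NWP forecasts.

    The NWP weight ramps linearly from 0 before ramp[0] hours (all
    extrapolation) to 1 after ramp[1] hours (all NWP). The ramp
    endpoints here are illustrative assumptions.
    """
    t0, t1 = ramp
    w = np.clip((lead_hours - t0) / (t1 - t0), 0.0, 1.0)  # NWP weight
    return (1.0 - w) * extrap + w * nwp
```

Because the same weight is applied to every pixel regardless of echo intensity, Lin CD can smear strong cores during the transition, which is the behavior an intensity-aware scheme like Sal CD aims to avoid.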

Full access
Michael A. VandenBerg
,
Michael C. Coniglio
, and
Adam J. Clark

Abstract

This study compares next-day forecasts of storm motion from convection-allowing models with 1- and 4-km grid spacing. A tracking algorithm is used to determine the motion of discrete storms in both the model forecasts and an analysis of radar observations. The distributions of both the raw storm motions and the deviations of these motions from the environmental flow are examined to determine the overall biases of the 1- and 4-km forecasts and how they compare to the observed storm motions. The mean storm speeds for the 1-km forecasts are significantly closer to the observed mean than those for the 4-km forecasts when viewed relative to the environmental flow/shear, but mostly for the shorter-lived storms. For storm directions, the 1-km forecast storms move similarly to the 4-km forecast storms on average. However, for the raw storm motions and those relative to the 0–6-km shear, results suggest that the 1-km forecasts may alleviate some of a clockwise (rightward) bias of the 4-km forecasts, particularly for those that do not deviate strongly from the 0–6-km shear vector. This improvement in the clockwise bias also is seen for the longer-lived storms, but is not seen when viewing the storm motions relative to the 850–300-hPa mean wind or the Bunkers motion vector. These results suggest that a reduction from 4- to 1-km grid spacing can potentially improve forecasts of storm motion, but further analysis of closer storm analogs is needed to confirm these results and to explore specific hypotheses for the differences.

Full access