Search Results

You are looking at 1–10 of 56 items for:

  • Author or Editor: Adam J. Clark
  • Refine by Access: All Content
Adam J. Clark

Abstract

This study compares ensemble precipitation forecasts from 10-member, 3-km grid-spacing, CONUS domain single- and multicore ensembles that were part of the 2016 Community Leveraged Unified Ensemble (CLUE) run for the 2016 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. The main results are that a 10-member ARW ensemble was significantly more skillful than a 10-member NMMB ensemble, and a 10-member MIX ensemble (5 ARW and 5 NMMB members) performed about the same as the 10-member ARW ensemble. Skill was measured by area under the relative operating characteristic curve (AUC) and fractions skill score (FSS). Rank histograms in the ARW ensemble were flatter than those in the NMMB ensemble, indicating that the envelope of ensemble members better encompassed observations (i.e., better reliability) in the ARW. Rank histograms in the MIX ensemble were similar to those in the ARW ensemble. In the context of NOAA’s plans for a Unified Forecast System featuring a CAM ensemble with a single core, the results are positive and indicate that it should be possible to develop a single-core system that performs as well as or better than the current operational CAM ensemble, which is known as the High-Resolution Ensemble Forecast System (HREF). However, as new modeling applications are developed and incremental changes that move HREF toward a single-core system become possible, more thorough testing and evaluation should be conducted.
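
For orientation, a minimal sketch of the fractions skill score (FSS) used as one of the skill measures above is shown below. The threshold, neighborhood width, and synthetic fields are illustrative assumptions, not values from the study.

```python
# Minimal FSS sketch: binarize forecast/observed precipitation at a threshold,
# compute neighborhood fractions, then compare the two fraction fields.
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst, obs, threshold, neighborhood):
    """FSS for one precipitation threshold and a square neighborhood (grid points)."""
    fcst_binary = (fcst >= threshold).astype(float)
    obs_binary = (obs >= threshold).astype(float)
    # Fraction of exceeding points within each neighborhood window.
    fcst_frac = uniform_filter(fcst_binary, size=neighborhood, mode="constant")
    obs_frac = uniform_filter(obs_binary, size=neighborhood, mode="constant")
    mse = np.mean((fcst_frac - obs_frac) ** 2)
    mse_ref = np.mean(fcst_frac ** 2) + np.mean(obs_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Example with synthetic fields: ~12.7 mm (0.5 in.) threshold, 15-point window.
rng = np.random.default_rng(0)
fcst = rng.gamma(0.5, 4.0, size=(200, 200))
obs = rng.gamma(0.5, 4.0, size=(200, 200))
print(fractions_skill_score(fcst, obs, threshold=12.7, neighborhood=15))
```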

Full access
Adam J. Clark

Abstract

Methods for generating ensemble mean precipitation forecasts from convection-allowing model (CAM) ensembles based on a simple average of all members at each grid point can have limited utility because of amplitude reduction and overprediction of light precipitation areas caused by averaging complex spatial fields with strong gradients and high-amplitude features. To combat these issues with the simple ensemble mean, a method known as probability matching is commonly used to replace the ensemble mean amounts with amounts sampled from the distribution of ensemble member forecasts, which results in a field that has a bias approximately equal to the average bias of the ensemble members. Thus, the probability matched mean (PM mean hereafter) is viewed as a better representation of the ensemble members compared to the mean, and previous studies find that it is more skillful than any of the individual members. Herein, using nearly a year’s worth of data from a CAM-based ensemble running in real time at the National Severe Storms Laboratory, evidence is provided that the superior performance of the PM mean is at least partially an artifact of the spatial redistribution of precipitation amounts that occurs when the PM mean is computed over a large domain. Specifically, the PM mean enlarges large areas of heavy precipitation and shrinks or even eliminates smaller ones. An alternative approach for the PM mean is developed that restricts the grid points used to those within a specified radius of influence. The new approach has an improved spatial representation of precipitation and is found to perform more skillfully than the PM mean at large scales when using neighborhood-based verification metrics.
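
A minimal sketch of the standard, full-domain probability-matched mean described above is given below; the localized variant proposed in the abstract would restrict the pooling to grid points within a radius of influence. The member count and grid size are illustrative assumptions.

```python
# Standard PM mean sketch: keep the spatial pattern of the simple ensemble mean,
# but replace its amounts with a resampled version of the pooled member amounts.
import numpy as np

def probability_matched_mean(members):
    """members: array of shape (n_members, ny, nx) of precipitation amounts."""
    n_members, ny, nx = members.shape
    simple_mean = members.mean(axis=0)
    # Pool all member values, sort descending, and keep every n_members-th value
    # so the resampled distribution has exactly ny * nx values.
    pooled = np.sort(members.ravel())[::-1][::n_members]
    # Reassign the resampled amounts in rank order of the simple mean field.
    pm_mean = np.empty(ny * nx)
    rank_order = np.argsort(simple_mean.ravel())[::-1]  # largest mean value first
    pm_mean[rank_order] = pooled
    return pm_mean.reshape(ny, nx)

# Example with a synthetic 10-member ensemble on a 300 x 400 grid.
rng = np.random.default_rng(1)
members = rng.gamma(0.4, 5.0, size=(10, 300, 400))
print(probability_matched_mean(members).max())
```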

Full access
Russ S. Schumacher
and
Adam J. Clark

Abstract

This study investigates probabilistic forecasts made using different convection-allowing ensemble configurations for a three-day period in June 2010 when numerous heavy-rain-producing mesoscale convective systems (MCSs) occurred in the United States. These MCSs developed both along a baroclinic zone in the Great Plains, and in association with a long-lived mesoscale convective vortex (MCV) in Texas and Arkansas. Four different ensemble configurations were developed using an ensemble-based data assimilation system. Two configurations used continuously cycled data assimilation, and two started the assimilation 24 h prior to the initialization of each forecast. Each configuration was run with both a single set of physical parameterizations and a mixture of physical parameterizations. These four ensemble forecasts were also compared with an ensemble run in real time by the Center for the Analysis and Prediction of Storms (CAPS). All five of these ensemble systems produced skillful probabilistic forecasts of the heavy-rain-producing MCSs, with the ensembles using mixed physics providing forecasts with greater skill and less overall bias compared to the single-physics ensembles. The forecasts using ensemble-based assimilation systems generally outperformed the real-time CAPS ensemble at lead times of 6–18 h, whereas the CAPS ensemble was the most skillful at forecast hours 24–30, though it also exhibited a wet bias. The differences between the ensemble precipitation forecasts were found to be related in part to differences in the analysis of the MCV and its environment, which in turn affected the evolution of errors in the forecasts of the MCSs. These results underscore the importance of representing model error in convection-allowing ensemble analysis and prediction systems.

Full access
Adam J. Clark
and
Eric D. Loken

Abstract

Severe weather probabilities are derived from the Warn-on-Forecast System (WoFS) run by NOAA’s National Severe Storms Laboratory (NSSL) during spring 2018 using the random forest (RF) machine learning algorithm. Recent work has shown this method generates skillful and reliable forecasts when applied to convection-allowing model ensembles for the “Day 1” time range (i.e., 12–36-h lead times), but it has been tested in only one other study for lead times relevant to WoFS (e.g., 0–6 h). Thus, in this paper, various sets of WoFS predictors, which include both environment and storm-based fields, are input into a RF algorithm and trained using the occurrence of severe weather reports within 39 km of a point to produce severe weather probabilities at 0–3-h lead times. We analyze the skill and reliability of these forecasts, sensitivity to different sets of predictors, and avenues for further improvements. The RF algorithm produced very skillful and reliable severe weather probabilities and significantly outperformed baseline probabilities calculated by finding the best performing updraft helicity (UH) threshold and smoothing parameter. Experiments where different sets of predictors were used to derive RF probabilities revealed 1) storm attribute fields contributed significantly more skill than environmental fields, 2) 2–5 km AGL UH and maximum updraft speed were the best performing storm attribute fields, 3) the most skillful ensemble summary metric was a smoothed mean, and 4) the most skillful forecasts were obtained when smoothed UH from individual ensemble members were used as predictors.
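
A minimal sketch of the random forest workflow described above is shown below: gridpoint predictors built from storm-attribute and environment fields, with a binary label for a severe report within 39 km. The predictor names, hyperparameters, and synthetic data are illustrative assumptions, not the study's configuration.

```python
# Random forest sketch for gridpoint severe weather probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_points = 5000
# Example predictors: smoothed 2-5-km UH and max updraft speed (storm attributes),
# plus CAPE and deep-layer shear (environment).
X = np.column_stack([
    rng.gamma(1.0, 20.0, n_points),   # ensemble-mean smoothed 2-5-km UH (m2 s-2)
    rng.gamma(2.0, 8.0, n_points),    # ensemble-mean max updraft speed (m s-1)
    rng.gamma(1.5, 800.0, n_points),  # CAPE (J kg-1)
    rng.normal(20.0, 8.0, n_points),  # 0-6-km shear (m s-1)
])
y = (rng.random(n_points) < 0.05).astype(int)  # severe report within 39 km (0/1)

rf = RandomForestClassifier(n_estimators=200, max_depth=15, random_state=0)
rf.fit(X[:4000], y[:4000])
# Severe weather probabilities for held-out grid points.
probs = rf.predict_proba(X[4000:])[:, 1]
print(probs[:5])
```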

Free access
Adam J. Clark
,
William A. Gallus Jr.
, and
Tsing-Chang Chen

Abstract

The diurnal cycles of rainfall in 5-km grid-spacing convection-resolving and 22-km grid-spacing non-convection-resolving configurations of the Weather Research and Forecasting (WRF) model are compared to see if significant improvements can be obtained by using fine enough grid spacing to explicitly resolve convection. Diurnally averaged Hovmöller diagrams, spatial correlation coefficients computed in Hovmöller space, equitable threat scores (ETSs), and biases for forecasts conducted from 1 April to 25 July 2005 over a large portion of the central United States are used for the comparisons. A subjective comparison using Hovmöller diagrams of diurnally averaged rainfall shows that the diurnal cycle representation in the 5-km configuration is clearly superior to that in the 22-km configuration during forecast hours 24–48. The superiority of the 5-km configuration is validated by spatial correlation coefficients that are much higher than those for the 22-km configuration. During the first 24 forecast hours the 5-km model forecasts appear to be more adversely affected by model “spinup” processes than the 22-km model forecasts, and it is less clear, subjectively, which configuration has the better diurnal cycle representation, although spatial correlation coefficients are slightly higher in the 22-km configuration. ETSs in both configurations have diurnal oscillations, with relative maxima occurring at forecast hours corresponding to 0000–0300 LST, while biases also have diurnal oscillations with relative maxima (largest errors) in the 22-km (5-km) configuration occurring at forecast hours corresponding to 1200 (1800) LST. At all forecast hours, ETSs from the 22-km configuration are higher than those in the 5-km configuration. This inconsistency with some of the results obtained using the aforementioned spatial correlation coefficients reinforces discussion in past literature that cautions against using “traditional” verification statistics, such as ETS, to compare high- to low-resolution forecasts.
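
For reference, a minimal sketch of the equitable threat score (ETS) used in the comparison above is shown below; the threshold and synthetic fields are illustrative assumptions.

```python
# ETS (Gilbert skill score) sketch from a 2 x 2 contingency table at one threshold.
import numpy as np

def equitable_threat_score(fcst, obs, threshold):
    """ETS for the binary events fcst >= threshold and obs >= threshold."""
    f = fcst >= threshold
    o = obs >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    total = f.size
    hits_random = (hits + misses) * (hits + false_alarms) / total
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom != 0 else np.nan

rng = np.random.default_rng(3)
fcst = rng.gamma(0.5, 4.0, size=(100, 100))
obs = rng.gamma(0.5, 4.0, size=(100, 100))
print(equitable_threat_score(fcst, obs, threshold=2.5))
```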

Full access
Adam J. Clark
,
William A. Gallus Jr.
, and
Tsing-Chang Chen

Abstract

An experiment is described that is designed to examine the contributions of model, initial condition (IC), and lateral boundary condition (LBC) errors to the spread and skill of precipitation forecasts from two regional eight-member 15-km grid-spacing Weather Research and Forecasting (WRF) ensembles covering a 1575 km × 1800 km domain. It is widely recognized that a skillful ensemble [i.e., an ensemble with a probability distribution function (PDF) that generates forecast probabilities with high resolution and reliability] should account for both error sources. Previous work suggests that model errors make a larger contribution than IC and LBC errors to forecast uncertainty in the short range before synoptic-scale error growth becomes nonlinear. However, in a regional model with unperturbed LBCs, the infiltration of unperturbed information from the lateral boundaries will negate increasing spread. To obtain a better understanding of the contributions to the forecast errors in precipitation and to examine the window of forecast lead time before unperturbed ICs and LBCs begin to cause degradation in ensemble forecast skill, the “perfect model” assumption is made in an ensemble that uses perturbed ICs and LBCs (PILB ensemble), and the “perfect analysis” assumption is made in another ensemble that uses mixed physics–dynamic cores (MP ensemble), thus isolating the error contributions. For the domain and time period used in this study, unperturbed ICs and LBCs in the MP ensemble begin to negate increasing spread around forecast hour 24, and ensemble forecast skill as measured by relative operating characteristic curves (ROC scores) becomes lower in the MP ensemble than in the PILB ensemble, with statistical significance beginning after forecast hour 69. However, degradation in forecast skill in the MP ensemble relative to the PILB ensemble is not observed in an analysis of deterministic forecasts calculated from each ensemble using the probability matching method. Both ensembles were found to lack statistical consistency (i.e., to be underdispersive), with the PILB ensemble (MP ensemble) exhibiting more (less) statistical consistency with respect to forecast lead time. Spread ratios in the PILB ensemble are greater than those in the MP ensemble at all forecast lead times and thresholds examined; however, ensemble variance in the MP ensemble is greater than that in the PILB ensemble during the first 24 h of the forecast. This discrepancy in spread measures likely results from greater bias in the MP ensemble leading to an increase in ensemble variance and decrease in spread ratio relative to the PILB ensemble.
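
A minimal sketch of the ROC-based skill measure referenced above is given below: ensemble exceedance probabilities are verified against binary observed events and scored by the area under the ROC curve. The member count, threshold, and synthetic data are illustrative assumptions.

```python
# ROC area sketch for ensemble precipitation exceedance probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
members = rng.gamma(0.4, 5.0, size=(8, 20000))  # 8 members, flattened grid points
obs = rng.gamma(0.4, 5.0, size=20000)

threshold = 2.5  # mm per accumulation period
event_prob = (members >= threshold).mean(axis=0)  # fraction of members exceeding
event_obs = (obs >= threshold).astype(int)        # observed exceedance (0/1)

print(roc_auc_score(event_obs, event_prob))       # ROC area (0.5 = no skill)
```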

Full access
Adam J. Clark
,
William A. Gallus Jr.
,
Ming Xue
, and
Fanyou Kong

Abstract

An experiment has been designed to evaluate and compare precipitation forecasts from a 5-member, 4-km grid-spacing (ENS4) and a 15-member, 20-km grid-spacing (ENS20) Weather Research and Forecasting (WRF) model ensemble, which cover a similar domain over the central United States. The ensemble forecasts are initialized at 2100 UTC on 23 different dates and cover forecast lead times up to 33 h. Previous work has demonstrated that simulations using convection-allowing resolution (CAR; dx ∼ 4 km) have a better representation of the spatial and temporal statistical properties of convective precipitation than coarser models using convective parameterizations. In addition, higher resolution should lead to greater ensemble spread as smaller scales of motion are resolved. Thus, CAR ensembles should provide more accurate and reliable probabilistic forecasts than parameterized-convection resolution (PCR) ensembles.

Computation of various precipitation skill metrics for probabilistic and deterministic forecasts reveals that ENS4 generally provides more accurate precipitation forecasts than ENS20, with the differences tending to be statistically significant for precipitation thresholds above 0.25 in. at forecast lead times of 9–21 h (0600–1800 UTC) for all accumulation intervals analyzed (1, 3, and 6 h). In addition, an analysis of rank histograms and statistical consistency reveals that faster error growth in ENS4 eventually leads to more reliable precipitation forecasts in ENS4 than in ENS20. For the cases examined, these results imply that the skill gained by increasing to CAR outweighs the skill lost by decreasing the ensemble size. Thus, when computational capabilities become available, it will be highly desirable to increase the ensemble resolution from PCR to CAR, even if the size of the ensemble has to be reduced.
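
A minimal sketch of the rank (Talagrand) histogram used to assess statistical consistency above is shown below: each observation is ranked within the sorted member values at its grid point, and a flat histogram indicates a statistically consistent ensemble. The data sizes are illustrative assumptions.

```python
# Rank histogram sketch; ties (e.g., many zero-precipitation points) are ignored
# here for brevity, although they are usually broken randomly in practice.
import numpy as np

def rank_histogram(members, obs):
    """members: (n_members, n_points); obs: (n_points,). Returns counts per rank bin."""
    n_members = members.shape[0]
    # Rank of each observation among the member values (0 .. n_members).
    ranks = np.sum(members < obs[np.newaxis, :], axis=0)
    return np.bincount(ranks, minlength=n_members + 1)

rng = np.random.default_rng(5)
members = rng.gamma(0.4, 5.0, size=(5, 10000))
obs = rng.gamma(0.4, 5.0, size=10000)
print(rank_histogram(members, obs))
```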

Full access
Adam J. Clark
,
William A. Gallus Jr.
,
Ming Xue
, and
Fanyou Kong

Abstract

An analysis of a regional severe weather outbreak that was related to a mesoscale convective vortex (MCV) is performed. The MCV-spawning mesoscale convective system (MCS) formed in northwest Kansas along the southern periphery of a large cutoff 500-hPa low centered over western South Dakota. As the MCS propagated into eastern Kansas during the early morning of 1 June 2007, an MCV that became evident from multiple data sources [e.g., Weather Surveillance Radar-1988 Doppler (WSR-88D) network, visible satellite imagery, wind-profiler data, Rapid Update Cycle 1-hourly analyses] tracked through northwest Missouri and central Iowa and manifested itself as a well-defined midlevel short-wave trough. Downstream of the MCV in southeast Iowa and northwest Illinois, southwesterly 500-hPa winds increased to around 25 m s−1 over an area with southeasterly surface winds and 500–1500 J kg−1 of surface-based convective available potential energy (CAPE), creating a favorable environment for severe weather. In the favorable region, multiple tornadoes occurred, including one rated as a category 3 storm on the enhanced Fujita scale (EF3) that caused considerable damage. In the analysis, emphasis is placed on the role of the MCV in leading to a favorable environment for severe weather. In addition, convection-allowing forecasts of the MCV and associated environmental conditions from the 10-member Storm-Scale Ensemble Forecast (SSEF) system produced for the 2007 NOAA Hazardous Weather Testbed Spring Experiment are compared to those from a similarly configured, but coarser, 30-member convection-parameterizing ensemble. It was found that forecasts of the MCV track and associated environmental conditions (e.g., midlevel winds, low-level wind shear, and instability) were much better in the convection-allowing ensemble. Errors in the MCV track from convection-parameterizing members likely resulted from westward displacement errors in the incipient MCS. Furthermore, poor depiction of MCV structure and maintenance in convection-parameterizing members, which was diagnosed through a vorticity budget analysis, likely led to the relatively poor forecasts of the associated environmental conditions. The results appear to be very encouraging for convection-allowing ensembles, especially when environmental conditions lead to a high degree of predictability for MCSs, which appeared to be the case for this particular event.
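
The abstract above diagnoses MCV structure and maintenance through a vorticity budget; for reference, a standard frictionless pressure-coordinate form of that budget is sketched below. This is a generic textbook form, shown here as an assumption, and is not necessarily the exact formulation used in the study.

$$
\frac{\partial \zeta}{\partial t}
= -\mathbf{V}\cdot\nabla_{p}(\zeta + f)
\;-\; \omega\,\frac{\partial \zeta}{\partial p}
\;-\; (\zeta + f)\,\nabla_{p}\cdot\mathbf{V}
\;+\; \mathbf{k}\cdot\!\left(\frac{\partial \mathbf{V}}{\partial p} \times \nabla_{p}\,\omega\right),
$$

where ζ is the relative vorticity, f the Coriolis parameter, V the horizontal wind, and ω the pressure vertical velocity; the right-hand-side terms represent horizontal advection of absolute vorticity, vertical advection, stretching, and tilting.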

Full access
Russ S. Schumacher
,
Adam J. Clark
,
Ming Xue
, and
Fanyou Kong

Abstract

From 9 to 11 June 2010, a mesoscale convective vortex (MCV) was associated with several periods of heavy rainfall that led to flash flooding. During the overnight hours, mesoscale convective systems (MCSs) developed that moved slowly and produced heavy rainfall over small areas in south-central Texas on 9 June, north Texas on 10 June, and western Arkansas on 11 June. In this study, forecasts of this event from the Center for the Analysis and Prediction of Storms' Storm-Scale Ensemble Forecast system are examined. This ensemble, with 26 members at 4-km horizontal grid spacing, included a few members that very accurately predicted the development, maintenance, and evolution of the heavy-rain-producing MCSs, along with a majority of members that had substantial errors in their precipitation forecasts. The processes favorable for the initiation, organization, and maintenance of these heavy-rain-producing MCSs are diagnosed by comparing ensemble members with accurate and inaccurate forecasts. Even within a synoptic environment known to be conducive to extreme local rainfall, there was considerable spread in the ensemble's rainfall predictions. Because all ensemble members included an anomalously moist environment, the precipitation predictions were insensitive to the atmospheric moisture. However, the development of heavy precipitation overnight was very sensitive to the intensity and evolution of convection the previous day. Convective influences on the strength of the MCV and its associated dome of cold air at low levels determined whether subsequent deep convection was initiated and maintained. In all, this ensemble provides quantitative and qualitative information about the mesoscale processes that are most favorable (or unfavorable) for localized extreme rainfall.

Full access
Eric D. Loken
,
Adam J. Clark
,
Ming Xue
, and
Fanyou Kong

Abstract

Spread and skill of mixed- and single-physics convection-allowing ensemble forecasts that share the same set of perturbed initial and lateral boundary conditions are investigated at a variety of spatial scales. Forecast spread is assessed for 2-m temperature, 2-m dewpoint, 500-hPa geopotential height, and hourly accumulated precipitation both before and after a bias-correction procedure is applied. Time series indicate that the mixed-physics ensemble forecasts generally have greater variance than comparable single-physics forecasts. While the differences tend to be small, they are greatest at the smallest spatial scales and when the ensembles are not calibrated for bias. Although differences between the mixed- and single-physics ensemble variances are smaller for the larger spatial scales, variance ratios suggest that the mixed-physics ensemble generates more spread relative to the single-physics ensemble at larger spatial scales. Forecast skill is evaluated for 2-m temperature, dewpoint temperature, and bias-corrected 6-h accumulated precipitation. The mixed-physics ensemble generally has lower 2-m temperature and dewpoint root-mean-square error (RMSE) compared to the single-physics ensemble. However, little difference in skill or reliability is found between the mixed- and single-physics bias-corrected precipitation forecasts. Overall, given that mixed- and single-physics ensembles have similar spread and skill, developers may prefer to implement single- as opposed to mixed-physics convection-allowing ensembles in future operational systems, while accounting for model error using stochastic methods.
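
A minimal sketch of the kind of evaluation described above is given below: a simple mean bias correction applied to each member's 2-m temperature forecast, followed by the ensemble-mean RMSE. The correction method and synthetic data are illustrative assumptions and not necessarily the procedure used in the study.

```python
# Simple per-member bias correction and ensemble-mean RMSE sketch.
import numpy as np

rng = np.random.default_rng(6)
truth = rng.normal(295.0, 5.0, size=(30, 2000))                          # (cases, points), K
members = truth[np.newaxis] + rng.normal(1.0, 2.0, size=(8, 30, 2000))   # warm-biased members

# Estimate each member's mean bias on a training subset of cases, then remove it.
train, verif = slice(0, 20), slice(20, 30)
bias = (members[:, train] - truth[np.newaxis, train]).mean(axis=(1, 2), keepdims=True)
corrected = members - bias

for name, ens in (("raw", members), ("bias corrected", corrected)):
    rmse = np.sqrt(np.mean((ens[:, verif].mean(axis=0) - truth[verif]) ** 2))
    print(f"{name} ensemble-mean RMSE: {rmse:.2f} K")
```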

Full access