Search Results

You are looking at 1–10 of 11 items for:

  • Author or Editor: Scott R. Dembek
  • Refine by Access: Content accessible to me
John H. E. Clark
and
Scott R. Dembek

Abstract

The Catalina eddy that existed from 5 July to 12 July 1987 during FIRE (First ISCCP Regional Experiment) over offshore California is analyzed. There were two stages to the eddy's life cycle. During the first, from 5 July to 1200 UTC 9 July, the eddy formed just south of Santa Barbara and drifted southeastward parallel to the coastline. This motion is attributed to an equivalent β effect associated with gradients of marine layer depth perpendicular to the coast. The eddy's thermal structure was characterized by an elevated marine inversion, with surface temperatures 2°–4°C higher than those beyond the periphery. Over offshore regions a sharp edge to the eddy was noted, marked by a sudden change in mixed layer depth, wind speed and direction, and temperature. The eddy's influence on coastal winds was most notable during the nighttime and early morning; during the daytime, the strong local sea-breeze circulation overwhelmed the coastal eddy circulation. A pronounced diurnal wind fluctuation was observed at San Nicolas Island during this period, associated with a perturbation wind parallel to the California coastline. We conclude that this fluctuation is due to either an extended coastal sea-breeze influence (latitudes in this region are close to the critical latitude according to linear theory) or northward-propagating coastally trapped Kelvin waves. The eddy's second stage was initiated on 9 July by the formation of a cutoff low in the middle troposphere immediately above the eddy. During this period the eddy expanded horizontally, moved southwestward away from the coastline, and eventually weakened. For a brief time, a coherent meso-α structure existed from the surface to about 500 hPa.
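
For orientation, the equivalent β effect invoked here follows from shallow-water potential vorticity conservation (a standard result stated for context, not a formula quoted from the paper):

\[
q=\frac{\zeta+f}{h}=\mathrm{const}
\quad\Longrightarrow\quad
\beta_{\mathrm{eff}}\approx-\frac{f_0}{h_0}\,\frac{\partial h}{\partial n},
\]

where h is the marine layer depth, h_0 and f_0 are reference values, and n is the cross-shore coordinate. A gradient of layer depth perpendicular to the coast thus plays the same dynamical role for the eddy's drift as the meridional gradient of planetary vorticity does for vortex motion on a β plane.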

Eddy formation was triggered by the passage of a low-level trough that strengthened the northerly flow across the mountains north of Santa Barbara. Froude numbers at the time of eddy formation suggest considerable lee troughing as the airflow was forced over, and possibly around, the topography.
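
For reference, the mountain Froude number used in such blocking arguments is conventionally defined as (a standard form; the paper's exact definition may differ):

\[
\mathrm{Fr}=\frac{U}{N h_m},
\qquad
N=\sqrt{\frac{g}{\theta_0}\frac{\partial\theta}{\partial z}},
\]

where U is the upstream wind speed, N the Brunt–Väisälä frequency, and h_m the barrier height. Fr < 1 implies that low-level air lacks the kinetic energy to surmount the barrier and is partly deflected around it, consistent with the combined over- and around-mountain flow described above.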

Full access
Burkely T. Gallo
,
Adam J. Clark
, and
Scott R. Dembek

Abstract

Hourly maximum fields of simulated storm diagnostics from experimental versions of convection-permitting models (CPMs) provide valuable information regarding severe weather potential. While past studies have focused on predicting any type of severe weather, this study uses a CPM-based Weather Research and Forecasting (WRF) Model ensemble initialized daily at the National Severe Storms Laboratory (NSSL) to derive tornado probabilities from a combination of simulated storm diagnostics and environmental parameters. Daily probabilistic tornado forecasts are developed from the NSSL-WRF ensemble using updraft helicity (UH) as a tornado proxy. The UH fields are combined with simulated environmental fields such as lifted condensation level (LCL) height, most unstable and surface-based CAPE (MUCAPE and SBCAPE, respectively), and multifield severe weather parameters such as the significant tornado parameter (STP). Varying thresholds of 2–5-km updraft helicity were tested with differing values of σ in the Gaussian smoother used to derive forecast probabilities, as well as with different environmental information, with the aim of maximizing both forecast skill and reliability. The addition of environmental information improved the reliability and the critical success index (CSI) while slightly degrading the area under the receiver operating characteristic (ROC) curve across all UH thresholds and σ values. The probabilities accurately reflected the location of tornado reports, and three case studies demonstrate their value to forecasters. Based on initial tests, four sets of tornado probabilities were chosen for evaluation by participants in the 2015 National Oceanic and Atmospheric Administration Hazardous Weather Testbed Spring Forecasting Experiment, held from 4 May to 5 June. Participants found the probabilities useful and noted an overforecasting tendency.
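
As a concrete illustration of the probability-generation step described above, the sketch below thresholds a gridded UH field and applies a Gaussian smoother (a minimal sketch: the threshold, σ, and grid spacing are placeholder values, not those tested in the study, and the study additionally conditions on environmental fields such as STP and LCL height):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def surrogate_tornado_probability(uh, threshold=75.0, sigma_km=120.0, dx_km=4.0):
        """Flag grid points whose hourly-max 2-5-km UH (m^2 s^-2) exceeds a
        threshold, then spread them with a Gaussian smoother whose standard
        deviation is specified in kilometers."""
        exceedance = (uh >= threshold).astype(float)      # binary proxy-event grid
        return gaussian_filter(exceedance, sigma=sigma_km / dx_km)

Varying threshold and sigma_km in such a scheme reproduces the kind of skill/reliability trade-off explored in the study.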

Full access
Alexandre O. Fierro
,
Jidong Gao
,
Conrad L. Ziegler
,
Edward R. Mansell
,
Donald R. MacGorman
, and
Scott R. Dembek

Abstract

This work evaluates the short-term forecast (≤6 h) of the 29–30 June 2012 derecho event from the Advanced Research core of the Weather Research and Forecasting Model (WRF-ARW) when using two distinct data assimilation techniques at cloud-resolving scales (3-km horizontal grid). The first technique assimilates total lightning data using a smooth nudging function. The second method is a three-dimensional variational technique (3DVAR) that assimilates radar reflectivity and radial velocity data. A suite of sensitivity experiments revealed that the lightning assimilation was better able to capture the placement and intensity of the derecho up to 6 h into the forecast. All the simulations employing 3DVAR, however, represented the storm’s radar reflectivity structure best at the analysis time. Detailed analysis revealed that a small feature in the velocity field from one of the six selected radars in the original 3DVAR experiment led to the development of spurious convection ahead of the parent mesoscale convective system, which significantly degraded the forecast. Thus, the relatively simple nudging scheme using lightning data complements the more complex variational technique. The much lower computational cost of the lightning scheme may permit its use alongside variational techniques in improving severe weather forecasts on days favorable for the development of outflow-dominated mesoscale convective systems.
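
The general flavor of a smooth lightning-nudging scheme can be sketched as follows (an illustrative sketch only: the functional form and all constants are placeholders, not the published coefficients of the scheme evaluated here): where flashes are observed, low- to midlevel water vapor is pushed toward a flash-rate-dependent fraction of saturation, with a tanh keeping the adjustment smooth.

    import numpy as np

    def nudge_water_vapor(qv, qv_sat, flash_rate, base=0.85, boost=0.15, scale=0.01):
        """Illustrative smooth nudging of water vapor mixing ratio (kg/kg)
        toward saturation in columns with observed total lightning.
        flash_rate: gridded flash counts per assimilation window (placeholder units).
        Only moistens (never dries), and only where lightning occurred."""
        target = qv_sat * (base + boost * np.tanh(scale * flash_rate))
        return np.where((flash_rate > 0.0) & (qv < target), target, qv)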

Full access
Alexandre O. Fierro
,
Adam J. Clark
,
Edward R. Mansell
,
Donald R. MacGorman
,
Scott R. Dembek
, and
Conrad L. Ziegler

Abstract

This work evaluates the performance of a recently developed cloud-scale lightning data assimilation technique implemented within the Weather Research and Forecasting Model running at convection-allowing scales (4-km grid spacing). Data provided by the Earth Networks Total Lightning Network for the contiguous United States (CONUS) were assimilated in real time over 67 days spanning the 2013 warm season (May–July). The lightning data were assimilated during the first 2 h of simulations each day. Bias-corrected, neighborhood-based equitable threat scores (BC-ETSs) served as the chief metric for quantifying the skill of the forecasts using this assimilation scheme. Owing to inferior observational data quality over mountainous terrain, this evaluation focused on the eastern two-thirds of the United States.

During the first 3 h following the assimilation (i.e., 3-h forecasts), all the simulations suffered from a pronounced wet bias in forecast accumulated precipitation (APCP), particularly the lightning assimilation run (LIGHT). Forecasts produced by LIGHT, however, showed a noticeable, statistically significant (α = 0.05) improvement over those from the control run (CTRL) up to 6 h into the forecast, with BC-ETS differences often exceeding 0.4. This improvement was seen independently of the APCP threshold (ranging from 2.5 to 50 mm) and the neighborhood radius (ranging from 0 to 40 km) selected. Past 6 h of the forecast, the APCP fields from LIGHT progressively converged to those of CTRL, probably because the longer-term evolution is bounded by the large-scale model environment. Thus, this computationally inexpensive lightning assimilation scheme shows considerable promise for routinely improving short-term (≤6 h) forecasts of high-impact weather by convection-allowing forecast models.
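
For reference, the unmodified equitable threat score underlying the BC-ETS is computed from a standard 2 × 2 contingency table (the bias correction and neighborhood matching used in the study are additional steps not shown here):

    def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
        """Standard ETS: threat score adjusted for hits expected by chance."""
        n = hits + misses + false_alarms + correct_negatives
        hits_random = (hits + misses) * (hits + false_alarms) / n
        return (hits - hits_random) / (hits + misses + false_alarms - hits_random)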

Full access
Lewis Grasso
,
Daniel T. Lindsey
,
Kyo-Sun Sunny Lim
,
Adam Clark
,
Dan Bikos
, and
Scott R. Dembek

Abstract

Synthetic satellite imagery can be employed to evaluate simulated cloud fields. Past studies have revealed that the Weather Research and Forecasting (WRF) single-moment 6-class (WSM6) microphysics scheme in the Advanced Research WRF (WRF-ARW) produces fewer upper-level ice clouds in synthetic images than are seen in observations. Synthetic Geostationary Operational Environmental Satellite-13 (GOES-13) imagery at 10.7 μm of simulated cloud fields from the 4-km National Severe Storms Laboratory (NSSL) WRF-ARW is compared to observed GOES-13 imagery. Histograms suggest that too few points contain upper-level simulated ice clouds. In particular, side-by-side examples are shown of synthetic and observed anvils. Such images illustrate the lack of anvil cloud associated with convection produced by the 4-km NSSL WRF-ARW. A vertical profile of simulated hydrometeors suggests that too much cloud water mass may be converted into graupel mass, effectively reducing the main source of ice mass in a simulated anvil. Further, excessive accretion of ice by snow removes ice from an anvil by precipitation settling. Idealized sensitivity tests reveal that a 50% reduction of the accretion rate of ice by snow results in a significant increase in anvil ice of a simulated storm. Such results provide guidance as to which conversions could be reformulated, in a more physical manner, to increase simulated ice mass in the upper troposphere.
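
The idealized sensitivity test described above amounts to scaling one conversion term before it is applied to the hydrometeor budgets, as in the following sketch (illustrative only: WSM6 itself is Fortran, and the names used here are not its internal variable names):

    ACCRETION_SCALE = 0.5  # the 50% reduction tested in the sensitivity experiments

    def accrete_ice_by_snow(q_ice, q_snow, accretion_rate, dt):
        """Transfer cloud-ice mass (kg/kg, scalars at one grid point) to snow
        at a scaled accretion rate. accretion_rate: unscaled tendency from the
        microphysics scheme (kg/kg/s). Scaling it down leaves more ice aloft
        to form the simulated anvil."""
        transfer = min(ACCRETION_SCALE * accretion_rate * dt, q_ice)  # bounded by available ice
        return q_ice - transfer, q_snow + transfer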

Full access
Rebecca D. Adams-Selin
,
Adam J. Clark
,
Christopher J. Melick
,
Scott R. Dembek
,
Israel L. Jirak
, and
Conrad L. Ziegler

Abstract

Four different versions of the HAILCAST hail model have been tested as part of the 2014–16 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments. HAILCAST was run as part of the National Severe Storms Laboratory (NSSL) WRF Ensemble during 2014–16 and the Community Leveraged Unified Ensemble (CLUE) in 2016. Objective verification against the Multi-Radar Multi-Sensor maximum expected size of hail (MRMS MESH) product was conducted using both object-based and neighborhood grid-based methods. Subjective verification and feedback were provided by HWT participants. Hourly maximum storm surrogate fields at a variety of thresholds and Storm Prediction Center (SPC) convective outlooks were also evaluated for comparison. HAILCAST improved with each version in response to feedback from the 2014–16 HWTs. The 2016 version of HAILCAST equaled or exceeded the skill of the tested storm surrogates across a variety of thresholds. The post-2016 version of HAILCAST improved 50-mm hail forecasts as measured by object-based verification, but its 25-mm hail forecasting ability declined as measured by neighborhood grid-based verification. The skill of the storm surrogate fields varied widely as the threshold values used to determine hail size were varied. HAILCAST was found not to require such tuning, producing consistent results even across different model configurations and horizontal grid spacings. Additionally, the storm surrogate fields performed with varying skill when forecasting 25- versus 50-mm hail, hinting at the different convective modes typically associated with small versus large hail. HAILCAST matched the best-performing storm surrogate field relatively consistently across multiple hail size thresholds.

Full access
Burkely T. Gallo
,
Adam J. Clark
,
Bryan T. Smith
,
Richard L. Thompson
,
Israel Jirak
, and
Scott R. Dembek

Abstract

Probabilistic ensemble-derived tornado forecasts generated from convection-allowing models often use hourly maximum updraft helicity (UH) alone, or in combination with environmental parameters, as a proxy for right-moving (RM) supercells. However, when UH occurrence is a condition for generating tornado probabilities, false alarm areas can arise from UH swaths associated with nocturnal mesoscale convective systems, which climatologically produce fewer tornadoes than RM supercells. This study incorporates UH timing information with the forecast near-storm significant tornado parameter (STP) to calibrate the forecast tornado probability. To generate the probabilistic forecasts, three sets of observed climatological tornado frequencies, given an RM supercell and STP value, are combined with the model output; two of these use UH timing information. One method uses the observed climatological tornado frequency for the 3-h window in which the UH occurs. Another scales that climatological frequency by the ratio of hail, wind, and tornado reports observed in that 3-h window to the maximum number of reports in any 3-h window. The final method is independent of when the UH occurs and uses the observed climatological tornado frequency encompassing all hours. The normalized probabilities reduce the false alarm area compared to the other methods, but they have a smaller area under the ROC curve and require that a much higher percentile of the STP distribution be used in probability generation to become reliable. Case studies demonstrate that the normalized probabilities highlight the most likely area for evening RM supercellular tornadoes, decreasing nocturnal false alarm area under the assumption that nocturnal UH swaths reflect a linear convective mode.
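
The normalization step described above can be sketched as follows (function and variable names are hypothetical placeholders):

    def normalized_tornado_probability(climo_freq_given_stp, reports_in_window, max_reports_any_window):
        """Scale the climatological tornado frequency (given an RM supercell
        and its forecast STP) by the 3-h window's share of observed severe
        reports, so UH swaths in report-sparse nocturnal windows contribute
        less probability."""
        return climo_freq_given_stp * (reports_in_window / max_reports_any_window)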

Full access
Burkely T. Gallo
,
Adam J. Clark
,
Bryan T. Smith
,
Richard L. Thompson
,
Israel Jirak
, and
Scott R. Dembek

Abstract

Attempts at probabilistic tornado forecasting using convection-allowing models (CAMs) have thus far used thresholds of CAM attributes [e.g., hourly maximum 2–5-km updraft helicity (UH)], treating them as binary events—either a grid point exceeds a given threshold or it does not. This study approaches these attributes probabilistically, using empirical observations of storm environment attributes and the subsequent climatological tornado occurrence frequency to assign a probability that a point will be within 40 km of a tornado, given the model-derived storm environment attributes. Combining empirical frequencies and forecast attributes produces better forecasts than using mid- or low-level UH alone, even if the UH is filtered using environmental parameter thresholds. Empirical tornado frequencies were derived from severe right-moving supercellular storms associated with a local storm report (LSR) of a tornado, severe wind, or severe hail for a given significant tornado parameter (STP) value, using Storm Prediction Center (SPC) mesoanalysis grids from 2014–15. The NSSL–WRF ensemble produced the forecast STP values and simulated right-moving supercells, which were identified using a UH exceedance threshold. Model-derived probabilities are verified using tornado segment data from right-moving supercells only and from all tornadoes, as are the SPC-issued 0600 UTC tornado probabilities from the initial day 1 forecast valid 1200–1159 UTC the following day. The STP-based probabilistic forecasts perform comparably to SPC tornado probability forecasts in many skill metrics (e.g., reliability) and thus could serve as first-guess forecasts. Comparison with prior methodologies shows that probabilistic environmental information improves CAM-based tornado forecasts.
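
The lookup idea can be sketched as a binned mapping from forecast STP at simulated RM-supercell points to an empirical probability of a tornado within 40 km (bin edges and frequencies below are placeholders, not the values derived from the 2014–15 SPC mesoanalysis climatology):

    import numpy as np

    # Hypothetical STP bin edges and per-bin empirical tornado frequencies
    STP_BIN_EDGES = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    EMPIRICAL_FREQ = np.array([0.02, 0.05, 0.10, 0.18, 0.30])

    def tornado_probability(stp_values):
        """Map forecast STP at identified RM-supercell points to empirical
        P(tornado within 40 km | supercell, STP bin) via a binned lookup."""
        idx = np.digitize(stp_values, STP_BIN_EDGES) - 1
        return EMPIRICAL_FREQ[np.clip(idx, 0, len(EMPIRICAL_FREQ) - 1)]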

Full access
John S. Kain
,
Scott R. Dembek
,
Steven J. Weiss
,
Jonathan L. Case
,
Jason J. Levit
, and
Ryan A. Sobash

Abstract

A new strategy for generating and presenting model diagnostic fields from convection-allowing forecast models is introduced. The fields are produced by computing temporal-maximum values for selected diagnostics at each horizontal grid point between scheduled output times. The two-dimensional arrays containing these maximum values are saved at the scheduled output times. The additional fields have minimal impact on the size of the output files, and the computation of most diagnostic quantities can be done very efficiently during integration of the Weather Research and Forecasting Model. Results show that these unique output fields facilitate the examination of features associated with convective storms, which can change dramatically within typical output intervals of 1–3 h.
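
The bookkeeping this strategy requires is simple, as the following sketch shows (illustrative names; the actual implementation lives inside the WRF source): a running maximum is updated at every model time step and reset at each scheduled output time.

    import numpy as np

    class HourlyMaxDiagnostic:
        """Track the temporal maximum of a 2D diagnostic at every horizontal
        grid point between scheduled output times."""

        def __init__(self, shape):
            self.running_max = np.full(shape, -np.inf)

        def update(self, field):
            # Called every model time step during integration.
            np.maximum(self.running_max, field, out=self.running_max)

        def flush(self):
            # Called at each scheduled output time: emit and reset.
            out = self.running_max.copy()
            self.running_max.fill(-np.inf)
            return out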

Full access
John S. Kain
,
Michael C. Coniglio
,
James Correia
,
Adam J. Clark
,
Patrick T. Marsh
,
Conrad L. Ziegler
,
Valliappa Lakshmanan
,
Stuart D. Miller Jr.
,
Scott R. Dembek
,
Steven J. Weiss
,
Fanyou Kong
,
Ming Xue
,
Ryan A. Sobash
,
Andrew R. Dean
,
Israel L. Jirak
, and
Christopher J. Melick

The 2011 Spring Forecasting Experiment in the NOAA Hazardous Weather Testbed (HWT) featured a significant component on convection initiation (CI). As in previous HWT experiments, the CI study was a collaborative effort between forecasters and researchers, with equal emphasis on experimental forecasting strategies and evaluation of prototype model guidance products. The overarching goal of the CI effort was to identify the primary challenges of the CI forecasting problem and to establish a framework for additional studies and possible routine forecasting of CI. This study confirms that convection-allowing models with ~4-km grid spacing represent many aspects of the formation and development of deep convective clouds explicitly and with predictive utility. Further, it shows that automated algorithms can skillfully identify the CI process during model integration. However, it also reveals that automated detection of individual convective cells, by itself, provides inadequate guidance on the disruptive potential of deep convective activity. Thus, future work on the CI forecasting problem should be couched in terms of convective-event prediction rather than the detection and prediction of individual convective cells.

Full access