Search Results

You are looking at results for

  • Author or Editor: Greg Stumpf
John P. Monteverdi, Warren Blier, Greg Stumpf, Wilfred Pi, and Karl Anderson

Abstract

On 4 May 1998, a pair of tornadoes occurred in the San Francisco Bay Area in the cities of Sunnyvale (F2 on the Fujita scale) and Los Altos (F1). The parent thunderstorm was anticyclonically rotating and produced tornadoes that were documented photographically to be anticyclonic as well, making for an extremely rare event. The tornadic thunderstorm was one of several “pulse type” thunderstorms that developed on outflow boundaries on the left flank of an earlier-occurring thunderstorm east of San Jose. Satellite imagery showed that the tornadic storm moved northwestward along a sea-breeze boundary and ahead of the outflow boundary associated with the prior thunderstorms. The shear environment into which the storm propagated was characterized by a straight hodograph with some cyclonic curvature, and by shear and buoyancy profiles that were favorable for anticyclonically rotating updrafts. Mesoanticyclones were detected in the Monterey (KMUX) radar data in association with each tornado by the National Severe Storms Laboratory's (NSSL) new Mesocyclone Detection Algorithm (MDA), making this the only documented case of a tornadic mesoanticyclone in the United States that has been captured with WSR-88D level-II data. Analysis of the radar data indicates that the initial (Sunnyvale) tornado was not associated with a mesoanticyclone. The satellite evidence suggests that this tornado may have occurred as the storm ingested, tilted, and stretched solenoidally induced vorticity associated with a sea-breeze boundary, giving the initial tornado nonsupercellular characteristics, even though the parent thunderstorm itself was an anticyclonic supercell. The radar-depicted evolution of the second (Los Altos) tornado suggests that it was associated with a mesoanticyclone, although the role of the sea-breeze boundary in its tornadogenesis cannot be discounted.

Full access
Paul Joe, Don Burgess, Rod Potts, Tom Keenan, Greg Stumpf, and Andrew Treloar

Abstract

One of the main goals of the Sydney 2000 Forecast Demonstration Project was to demonstrate the efficacy and utility of automated severe weather detection radar algorithms. As a contribution to this goal, this paper describes the radar-based severe weather algorithms used in the project, their performance, and related radar issues. Participants in this part of the project included the National Severe Storms Laboratory (NSSL) Warning Decision Support System (WDSS), the Meteorological Service of Canada Canadian Radar Decision Support (CARDS) system, the National Center for Atmospheric Research Thunderstorm Initiation, Tracking, Analysis, and Nowcasting (TITAN) system, and a precipitation-typing algorithm from the Bureau of Meteorology Research Centre C-band polarimetric (C-Pol) radar. Three radars were available: the S-band reflectivity-only operational radar, the C-band Doppler Kurnell radar, and the C-band Doppler polarimetric C-Pol radar.

The radar algorithms attempt to diagnose the presence of storm cells; provide storm tracks; identify mesocyclone circulations, downbursts and/or microbursts, and hail; and provide storm ranking. The tracking and identification of cells were undertaken using TITAN and WDSS. Three versions of TITAN were employed to track weak and strong cells. Results show that TITAN cell detection thresholds influence the ability of the algorithm to clearly identify storm cells and to correctly track them. WDSS algorithms were set up with lower volume thresholds and provided many more tracks. WDSS and CARDS circulation algorithms were adapted to the Southern Hemisphere. CARDS had lower detection thresholds and, hence, detected more circulations than WDSS. Radial-velocity-based and reflectivity-based downburst algorithms were available from CARDS. Because the reflectivity-based algorithm relied on features aloft, it provided an earlier indication of strong surface winds. Three different hail algorithms from WDSS, CARDS, and C-Pol provided output on the presence, the probability, and the size of hail. Although the algorithms differed considerably, they provided similar results. Hail size distributions were similar to observations. WDSS also provided a ranking algorithm to identify the most severe storm.
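
The threshold sensitivity described above can be illustrated with a minimal Python sketch of reflectivity-threshold cell identification on a 2D grid. The 35-dBZ threshold, the 10-pixel minimum area, and the connected-component approach are illustrative assumptions only and do not represent the actual TITAN or WDSS configurations.

    # Illustrative only: identify contiguous "cells" in a 2D reflectivity grid (dBZ)
    # by thresholding and connected-component labeling. The 35-dBZ threshold and
    # 10-pixel minimum area are assumptions, not the TITAN or WDSS settings.
    import numpy as np
    from scipy import ndimage

    def identify_cells(refl_dbz, threshold_dbz=35.0, min_pixels=10):
        """Return a list of (label, pixel_count, centroid) for each detected cell."""
        mask = refl_dbz >= threshold_dbz              # pixels exceeding the threshold
        labels, n_features = ndimage.label(mask)      # group contiguous pixels into cells
        cells = []
        for lab in range(1, n_features + 1):
            pixels = labels == lab
            count = int(pixels.sum())
            if count < min_pixels:                    # discard cells smaller than the minimum area
                continue
            cy, cx = ndimage.center_of_mass(pixels)   # cell centroid (grid coordinates)
            cells.append((lab, count, (cy, cx)))
        return cells

    # Example: a synthetic 100 x 100 dBZ field with one embedded strong cell
    field = np.random.uniform(0, 20, size=(100, 100))
    field[40:50, 60:72] = 50.0
    print(identify_cells(field))

Lowering threshold_dbz or min_pixels in this sketch yields more, weaker cells, mirroring the way the detection thresholds affect how clearly storm cells are identified and tracked.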

Many of the algorithms had been adapted and altered to account for differences in radar technology, configuration, and meteorological regime. The various combinations of different algorithms and different radars provided an unprecedented opportunity to study the impact of radar technology on the performance of the severe weather algorithms. The algorithms were able to operate on both single- and dual-pulse repetition frequency Doppler radars and on C- and S-band radars with minimal changes. The biggest influence on the algorithms was data quality. Beamwidth smoothing limited the effective range of the algorithms, and ground clutter and ground clutter filtering affected the quality of the low-level radial velocities and the detection of low-level downbursts. The cycle time of the volume scans significantly affected the tracking results.

Full access
Hongli Jiang, Steve Albers, Yuanfu Xie, Zoltan Toth, Isidora Jankov, Michael Scotten, Joseph Picca, Greg Stumpf, Darrel Kingfield, Daniel Birkenheuer, and Brian Motta

Abstract

The accurate and timely depiction of the state of the atmosphere on multiple scales is critical to enhance forecaster situational awareness and to initialize very short-range numerical forecasts in support of nowcasting activities. The Local Analysis and Prediction System (LAPS) of the Earth System Research Laboratory (ESRL)/Global Systems Division (GSD) is a numerical data assimilation and forecast system designed to serve such very finescale applications. LAPS is used operationally by more than 20 national and international agencies, including the NWS, where it has been operational in the Advanced Weather Interactive Processing System (AWIPS) since 1995.

Using computationally efficient and scientifically advanced methods such as a multigrid technique that adds observational information on progressively finer scales in successive iterations, GSD recently introduced a new, variational version of LAPS (vLAPS). Surface and 3D analyses generated by vLAPS were tested in the Hazardous Weather Testbed (HWT) to gauge their utility in both situational awareness and nowcasting applications. On a number of occasions, forecasters found that the vLAPS analyses and ensuing very short-range forecasts provided useful guidance for the development of severe weather events, including tornadic storms, while in other cases the guidance was less useful.
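
As a rough illustration of the multigrid idea described above (observational information added on progressively finer scales in successive iterations), the following Python sketch performs a simple successive-correction analysis in one dimension with a shrinking influence radius on each pass. The Gaussian weighting, the radii, and the synthetic observations are assumptions for illustration only and do not represent the vLAPS variational formulation.

    # Illustrative sketch of a multigrid-style successive-correction analysis in 1D:
    # each pass spreads observation increments with a smaller influence radius, so
    # coarse-scale information is added first and finer scales in later passes.
    # Gaussian weights and radii are assumptions, not the vLAPS cost-function method.
    import numpy as np

    def multiscale_analysis(background, obs_locs, obs_vals, radii=(20.0, 10.0, 5.0)):
        x = np.arange(background.size, dtype=float)
        analysis = background.copy()
        for r in radii:                               # progressively finer scales
            innovations = obs_vals - np.interp(obs_locs, x, analysis)
            increment = np.zeros_like(analysis)
            weight_sum = np.zeros_like(analysis)
            for loc, innov in zip(obs_locs, innovations):
                w = np.exp(-0.5 * ((x - loc) / r) ** 2)   # Gaussian influence of each ob
                increment += w * innov
                weight_sum += w
            analysis += increment / np.maximum(weight_sum, 1e-6)
        return analysis

    # Example: flat background plus three point observations
    bg = np.zeros(100)
    obs_x = np.array([20.0, 50.0, 80.0])
    obs_y = np.array([1.0, -0.5, 2.0])
    print(multiscale_analysis(bg, obs_x, obs_y)[::10])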

Full access
Christopher D. Karstens, Greg Stumpf, Chen Ling, Lesheng Hua, Darrel Kingfield, Travis M. Smith, James Correia Jr., Kristin Calhoun, Kiel Ortega, Chris Melick, and Lans P. Rothfusz

Abstract

A proposed new method for hazard identification and prediction was evaluated with forecasters in the National Oceanic and Atmospheric Administration Hazardous Weather Testbed during 2014. This method combines hazard-following objects with forecaster-issued trends of exceedance probabilities to produce probabilistic hazard information, as opposed to the static, deterministic polygon and attendant text product methodology presently employed by the National Weather Service to issue severe thunderstorm and tornado warnings. Three components of the test bed activities are discussed: usage of the new tools, verification of storm-based warnings and probabilistic forecasts from a control–test experiment, and subjective feedback on the proposed paradigm change. Forecasters were able to quickly adapt to the new tools and concepts and ultimately produced probabilistic hazard information in a timely manner. The probabilistic forecasts from two severe hail events tested in a control–test experiment were more skillful than storm-based warnings and were found to have reliability in the low-probability spectrum. False alarm area decreased while the traditional verification metrics degraded with increasing probability thresholds. The latter finding is attributable to a limitation in applying the current verification methodology to probabilistic forecasts. Relaxation of on-the-fence decisions exposed a need to provide information for hazard areas below the decision-point thresholds of current warnings. Automated guidance information was helpful in combating potential workload issues, and forecasters raised a need for improved guidance and training to inform consistent and reliable forecasts.
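
The threshold dependence noted above, in which verification statistics change as a probabilistic forecast is converted to a yes/no decision at increasing probability thresholds, can be illustrated with a basic contingency-table calculation in Python. The forecast probabilities, observations, and metric choices (POD, FAR, CSI) below are generic illustrations, not the experiment's data or its verification methodology.

    # Illustrative only: probability of detection (POD), false alarm ratio (FAR), and
    # critical success index (CSI) from a 2x2 contingency table, evaluated as a
    # probabilistic forecast is thresholded at increasing probability levels.
    # The probabilities and observations below are synthetic, not HWT results.
    import numpy as np

    def contingency_scores(forecast_prob, observed, threshold):
        yes_fcst = forecast_prob >= threshold
        hits = np.sum(yes_fcst & observed)
        false_alarms = np.sum(yes_fcst & ~observed)
        misses = np.sum(~yes_fcst & observed)
        pod = hits / (hits + misses) if hits + misses else np.nan
        far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
        csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
        return pod, far, csi

    rng = np.random.default_rng(0)
    prob = rng.uniform(0, 1, 1000)             # synthetic forecast probabilities
    obs = rng.uniform(0, 1, 1000) < prob       # events drawn so the forecast is reliable
    for thr in (0.1, 0.3, 0.5, 0.7, 0.9):
        pod, far, csi = contingency_scores(prob, obs, thr)
        print(f"threshold={thr:.1f}  POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")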

Full access
Kristin M. Calhoun, Kodi L. Berry, Darrel M. Kingfield, Tiffany Meyer, Makenzie J. Krocak, Travis M. Smith, Greg Stumpf, and Alan Gerard

Abstract

NOAA’s Hazardous Weather Testbed (HWT) is a physical space and research framework to foster collaboration and evaluate emerging tools, technology, and products for NWS operations. The HWT’s Experimental Warning Program (EWP) focuses on research, technology, and communication that may improve severe and hazardous weather warnings and societal response. The EWP was established with three fundamental hypotheses: 1) collaboration with operational meteorologists increases the speed of the transition process and rate of adoption of beneficial applications and technology, 2) the transition of knowledge between research and operations benefits both the research and operational communities, and 3) including end users in experiments generates outcomes that are more reliable and useful for society. The EWP is designed to mimic the operations of any NWS Forecast Office, providing the opportunity for experiments to leverage live and archived severe weather activity anywhere in the United States. During the first decade of activity in the EWP, 15 experiments covered topics including new radar and satellite applications, storm-scale numerical models and data assimilation, total lightning use in severe weather forecasting, and multiple social science and end-user topics. The experiments range from exploratory and conceptual research to more controlled experimental design to establish statistical patterns and causal relationships. The EWP brought more than 400 NWS forecasters, 60 emergency managers, and 30 broadcast meteorologists to the HWT to participate in live demonstrations, archived events, and data-denial experiments, influencing today’s operational warning environment and shaping the future of warning research, technology, and communication for years to come.

Full access
Donald Burgess, Kiel Ortega, Greg Stumpf, Gabe Garfield, Chris Karstens, Tiffany Meyer, Brandon Smith, Doug Speheger, Jim Ladue, Rick Smith, and Tim Marshall

Abstract

The tornado that affected Moore, Oklahoma, and the surrounding area on 20 May 2013 was an extreme event. It traveled 23 km, and the damage path was up to 1.7 km wide. The tornado killed 24 people, injured over 200 others, and damaged many structures. A team of surveyors from the Norman, Oklahoma, National Weather Center and two private companies performed a detailed survey of all damaged objects and structures to provide better documentation than is normally done, in part to aid future studies of the event. The team began surveying tornado damage on the morning of 21 May and continued the survey process for the next several weeks. Extensive ground surveys were performed and were aided by high-resolution aerial and satellite imagery. The survey process used the enhanced Fujita (EF) scale and was facilitated by a National Weather Service (NWS) software package, the Damage Assessment Toolkit (DAT). The survey team defined the criteria for a “well built” house, a prerequisite for an EF5 rating. Survey results document 4253 objects damaged by the tornado, 4222 of them EF-scale damage indicators (DIs). Of the total DIs, about 50% were associated with EF0 ratings. Excluding EF0 damage, 38% were associated with EF1, 24% with EF2, 21% with EF3, 17% with EF4, and only 0.4% with EF5. For the strongest level of damage (EF5), only nine homes were found. Survey results are similar to those for other documented tornadoes, but the amount of EF1 damage is greater than in other cases. Also discussed are the use of damaged non-DI objects and ways to improve future surveys.
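
As a simple illustration of the arithmetic behind the damage-indicator (DI) percentages quoted above, the following Python sketch tallies a list of EF-scale ratings and reports the distribution both overall and with EF0 excluded. The counts used here are hypothetical and do not reproduce the survey data.

    # Illustrative only: tally EF-scale ratings assigned to damage indicators (DIs)
    # and report percentages overall and with EF0 excluded, as in the survey summary.
    # The ratings list here is synthetic, not the Moore survey data.
    from collections import Counter

    ratings = ["EF0"] * 2100 + ["EF1"] * 800 + ["EF2"] * 510 + \
              ["EF3"] * 440 + ["EF4"] * 360 + ["EF5"] * 9      # hypothetical counts

    counts = Counter(ratings)
    total = sum(counts.values())
    total_no_ef0 = total - counts["EF0"]

    for ef in ("EF0", "EF1", "EF2", "EF3", "EF4", "EF5"):
        pct_all = 100.0 * counts[ef] / total
        pct_excl = 100.0 * counts[ef] / total_no_ef0 if ef != "EF0" else float("nan")
        print(f"{ef}: {counts[ef]:5d} DIs  {pct_all:5.1f}% of all  {pct_excl:5.1f}% excluding EF0")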

Full access