Search Results
You are looking at 1–5 of 5 items for
- Author or Editor: James G. Ladue
Abstract
The relationship between automated low-level velocity derived from WSR-88D severe storm algorithms and two groups of tornado intensity was evaluated using a 4-yr climatology of 1975 tornado events spawned from 1655 supercells and 320 quasi-linear convective systems (QLCSs). A comparison of peak velocity from groups of detections from the Mesocyclone Detection Algorithm and Tornado Detection Algorithm for each tornado track found overlapping distributions when discriminating between weak [rated as category 0 or 1 on the enhanced Fujita scale (EF0 and EF1)] and strong (EF2–5) events for both rotational and delta velocities. Thresholding the dataset by estimated affected population lowered the range of observed velocities, particularly for weak tornadoes, while retaining a greater frequency of events for strong tornadoes. Heidke skill scores for strength discrimination depended on algorithm, velocity parameter, population threshold, and convective mode, and varied from 0.23 to 0.66. Bootstrapping the skill scores for each algorithm showed a wide range of low-level velocities (at least 7 m s−1 in width) providing equivalent optimal skill at discriminating between weak and strong tornadoes. This ultimately limits identification of a single threshold for optimal strength discrimination, but the results match closely with larger prior manual studies of low-level velocities.
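The Heidke skill score used here is the standard measure built from a 2 × 2 contingency table. A minimal sketch of how a single velocity threshold could be scored and how bootstrapping reveals a band of near-optimal thresholds; the function names and resampling details are illustrative assumptions, not the authors' code:

```python
import random

def heidke_skill_score(hits, misses, false_alarms, correct_negatives):
    """HSS for a 2x2 contingency table:
    HSS = 2(ad - bc) / [(a + c)(c + d) + (a + b)(b + d)],
    with a = hits, b = false alarms, c = misses, d = correct negatives."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    denom = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / denom if denom else 0.0

def hss_for_threshold(velocities, is_strong, threshold):
    """Score one threshold: forecast 'strong' (EF2-5) when the peak
    low-level velocity meets the threshold, verify against the label."""
    a = b = c = d = 0
    for v, strong in zip(velocities, is_strong):
        if v >= threshold:
            if strong: a += 1      # hit
            else: b += 1           # false alarm
        else:
            if strong: c += 1      # miss
            else: d += 1           # correct negative
    return heidke_skill_score(a, c, b, d)

def bootstrap_best_thresholds(velocities, is_strong, thresholds, n_boot=1000):
    """Resample event/label pairs with replacement and record the
    best-scoring threshold of each replicate; the spread of these
    values shows how wide the band of near-optimal thresholds is."""
    pairs = list(zip(velocities, is_strong))
    best = []
    for _ in range(n_boot):
        sample = random.choices(pairs, k=len(pairs))
        vs, ls = zip(*sample)
        best.append(max(thresholds, key=lambda t: hss_for_threshold(vs, ls, t)))
    return best
```

A wide spread in the returned thresholds, as the abstract reports (at least 7 m s−1), indicates that no single cutoff is clearly optimal.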
Abstract
The relationship between cloud-to-ground (CG) lightning polarity and surface equivalent potential temperature (θe) is examined for the 26 April 1991 Andover–Wichita, Kansas; the 13 March 1990 Hesston, Kansas; and the 28 August 1990 Plainfield, Illinois, tornadic storm events. The majority of thunderstorms whose CG lightning activity was dominated by negative flashes (labeled “negative storms”) formed in regions of weak θe gradient and downstream of a θe maximum. The majority of thunderstorms whose initial CG lightning activity was dominated by positive flashes formed in regions of strong θe gradient, upstream of a θe maximum. Some of these storms moved adjacent to the θe maximum and were dominated by positive CG lightning throughout their lifetimes (labeled “positive storms”). The other initially positive storms moved through the θe maximum, where their updrafts appeared to undergo intensification. The storms’ dominant CG polarity switched from positive to negative after they crossed the θe maximum (labeled “reversal storms”). Summary statistics based on this storm classification show that all the reversal storms examined for these three events were severe and half of them produced tornadoes of F3–F5 intensity. By comparison, only 58% of the negative storms produced severe weather and only 10% produced tornadoes of F3–F5 intensity. It is suggested that the CG lightning reversal process may be initiated by rapid updraft intensification brought about by an increase in the buoyancy of low-level inflow air as initially positive storms pass through mesoscale regions of high θe. As these storms move out of a θe maximum, massive precipitation fallout may occur when their updrafts weaken and can no longer support the mass of liquid water and ice aloft. The fallout may in turn cause a major redistribution of the electrical charge within the storm, resulting in polarity reversal and/or downdraft-induced tornadogenesis.
Abstract
A storm-intercept team from the University of Oklahoma, using the Los Alamos National Laboratory portable, continuous wave/frequency modulated–continuous wave, 3-cm Doppler radar, collected close-range data at and below cloud base in six supercell tornadoes in the southern plains during the springs of 1990 and 1991. Data collection and analysis techniques are described. Wind spectra from five weak-to-strong tornadoes and from one violent tornado are presented and discussed in conjunction with simultaneous boresighted video documentation, photogrammetric analysis, and damage surveys.
Maximum Doppler wind speeds of 55–105 m s−1 were found in five of the tornadoes; wind speeds as high as 120–125 m s−1 were found in a large tornado during an outbreak on 26 April 1991. These may be the highest wind speeds ever measured by Doppler radar and the first radar measurements of F-5 intensity wind speeds. The variation in the spectrum across the 26 April 1991 tornado is presented. Standard and mobile soundings, and surface data, used to determine the “thermodynamic speed limit” indicate that it was usually exceeded by 50%–100%. A comparison of actual Doppler spectra with simulated spectra suggests that the maximum in radar reflectivity in supercell tornadoes lies well outside the core.
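The “thermodynamic speed limit” is conventionally the parcel-theory bound on wind speed derived from convective available potential energy (CAPE). A sketch under that assumption, since the abstract does not spell out the formula used:

```python
import math

def thermodynamic_speed_limit(cape):
    """Parcel-theory upper bound on vertical (and, by extension,
    vortex) wind speed: v_max = sqrt(2 * CAPE), with CAPE in
    J kg^-1 and the result in m s^-1."""
    return math.sqrt(2.0 * cape)

# With CAPE near 3000 J kg^-1 the bound is about 77 m s^-1, so a
# measured 120-125 m s^-1 would exceed it by roughly 55%-60%,
# consistent with the 50%-100% exceedance reported above.
```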
During the early to middle 2000s, in response to demand for more detail in wind damage surveying and recordkeeping, a team of atmospheric scientists and wind engineers developed the enhanced Fujita (EF) scale. The EF scale, codified officially into National Weather Service (NWS) use in February 2007, offers wind speed estimates for a range of degrees of damage (DoDs) across each of 28 damage indicators (DIs). In practice, this has increased the precision of damage surveys for tornado and thunderstorm-wind events. Still, concerns remain about both the representativeness of DoDs and the sufficiency of DIs, including the following: How dependable are the wind speed ranges for certain DoDs? What other DIs can be included? How can recent advances in mapping and documentation tools be integrated into the surveying process and the storm records? What changes should be made to the existing scale: why, how, and by whom? What alternative methods may be included or adapted for estimating tornado intensity?
To begin coordinated discussion on these and related topics, interested scientists and engineers (including some involved in EF scale development) organized a national EF Scale Stakeholders' Meeting, held on 2–3 March 2010 in Norman, Oklahoma. This article presents more detailed background information, summarizes the meeting, presents possibilities for the future of the EF scale and damage surveys, and solicits ideas from the engineering and atmospheric science communities.
Abstract
Scientists at NOAA are testing a new tool that allows forecasters to communicate estimated probabilities of severe hazards (tornadoes, severe wind, and hail) as part of the Forecasting a Continuum of Environmental Threats (FACETs) framework. In this study, we employ embedded systems theory (EST)—a communication framework that analyzes small-group workplace practices as products of group, organizational, and local dynamics—to understand how probabilistic hazard information (PHI) is produced and negotiated among multiple NWS weather forecast offices in an experimental setting. Gathering feedback from NWS meteorologists who participated in the 2020 Hazard Services (HS)-PHI Interoffice Collaboration experiment, we explored implications of local and interoffice collaboration while using this experimental tool. Through qualitative thematic analysis, we found that differing probability thresholds, forecasting styles, social dynamics, and workload are social factors that developers should consider as they bring PHI toward operational readiness. Warning operations in this new paradigm were also incorporated into the EST model to create a communication ecosystem for future weather hazard communication research.
Significance Statement
Meteorologists are currently exploring how to use probabilities to communicate life-saving information. From tornadoes to hail, a new type of probabilistic hazard information could fundamentally change the way that NWS meteorologists collaborate with one another when issuing weather products, especially near and along the boundaries of County Warning Areas. To explore potential collaboration challenges and solutions, we applied a communication framework and examined perceptions that NWS meteorologists had while using this new tool in an experimental setting. NWS meteorologists expressed that differing ways of communicating hazard information between offices, along with forecasting styles and workload, would change how they produce and deliver critical hazard information to the public.