Search Results

6 items for Author or Editor: Michael Steinberg (all content)
In May 2003 there was a very destructive extended outbreak of tornadoes across the central and eastern United States. More than a dozen tornadoes struck each day from 3 to 11 May 2003. This outbreak caused 41 fatalities, 642 injuries, and approximately $829 million in property damage. The outbreak set a record for the most tornadoes ever reported in a week (334 between 4 and 10 May), and strong tornadoes (F2 or greater) occurred on nine consecutive days. Fortunately, despite this being one of the largest extended outbreaks of tornadoes on record, it did not cause as many fatalities as the few comparable past outbreaks, due in large measure to the warning efforts of National Weather Service, television, and private-company forecasters, and to the smaller number of violent (F4–F5) tornadoes. This event was also relatively predictable; the onset of the outbreak was forecast skillfully many days in advance.
An unusually persistent upper-level trough over the Intermountain West and sustained low-level southerly winds through the southern Great Plains produced the extended period of tornado-favorable conditions. Three other extended outbreaks in the past 88 years were statistically comparable to this outbreak, and two short-duration events (the 1965 Palm Sunday outbreak and the 1974 Superoutbreak) were comparable in the overall number of strong tornadoes. An analysis of tornado statistics and environmental conditions indicates that extended outbreaks of this character occur roughly every 10 to 100 years.
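The recurrence estimate can be sketched from the numbers in the abstract: four statistically comparable extended outbreaks (the May 2003 event plus the three others) in an 88-year record. Treating the outbreaks as arrivals of a simple Poisson process is an assumption of this sketch, not a method claimed by the study:

```python
import math

def return_period_years(n_events, record_years):
    """Mean recurrence interval: years of record per observed event."""
    return record_years / n_events

def prob_at_least_one(n_events, record_years, horizon_years):
    """P(>= 1 event in the next horizon_years), assuming a Poisson process
    with rate estimated from the historical record."""
    rate = n_events / record_years
    return 1.0 - math.exp(-rate * horizon_years)

# Four comparable extended outbreaks in 88 years:
print(return_period_years(4, 88))                 # 22.0 years
print(round(prob_at_least_one(4, 88, 10), 2))     # 0.37
```

The ~22-year point estimate sits comfortably inside the "roughly every 10 to 100 years" range quoted above; the wide range reflects the small sample of only four events.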
Abstract
There is a great need for gridded daily precipitation datasets to support a wide variety of disciplines in science and industry. Production of such datasets faces many challenges, from station data ingest to gridded dataset distribution. The quality of the dataset is directly related to its information content, and each step in the production process provides an opportunity to maximize that content. The first opportunity is maximizing station density from a variety of sources and assuring high quality through intensive screening, including manual review. To accommodate varying data latency times, the Parameter-Elevation Regressions on Independent Slopes Model (PRISM) Climate Group releases eight versions of a day’s precipitation grid, from 24 h after day’s end to 6 months of elapsed time. The second opportunity is to distribute the station data to a grid using methods that add information and minimize the smoothing effect of interpolation. We use two competing methods: one that exploits the information in long-term precipitation climatologies, and one that uses weather radar return patterns. Finally, maintaining consistency among different time scales (monthly vs. daily) affords the opportunity to exploit information available at each scale. Maintaining temporal consistency over longer time scales is at cross purposes with maximizing information content. We therefore produce two datasets: one that maximizes data sources, and a second that includes only networks with long-term stations and no radar (a short-term data source). Further work is under way to improve station metadata, refine interpolation methods by producing climatologies targeted to specific storm conditions, and employ higher-resolution radar products.
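The first interpolation method mentioned above, using long-term climatologies to add information to the daily analysis, can be illustrated with a minimal sketch: interpolate the ratio of each station's daily observation to its climatological value, then rescale the interpolated ratio field by the climatology grid. This is only the general idea of climatologically aided interpolation; PRISM's actual method is far more elaborate (elevation regressions, physiographic station weighting, radar blending), and all names below are invented for illustration:

```python
import numpy as np

def daily_grid_from_climatology(stations_xy, daily_precip, clim_at_stations,
                                clim_grid, power=2.0):
    """Spread daily station precipitation onto a grid by interpolating
    station-to-climatology ratios, then rescaling by the climatology grid.

    stations_xy      -- list of (x, y) grid coordinates of the stations
    daily_precip     -- observed daily totals at those stations
    clim_at_stations -- long-term climatological values at those stations
    clim_grid        -- 2D long-term climatology grid
    """
    # Ratio of today's observation to the long-term climatology at each station.
    ratios = daily_precip / np.maximum(clim_at_stations, 1e-6)

    ny, nx = clim_grid.shape
    yy, xx = np.mgrid[0:ny, 0:nx]

    # Inverse-distance-weighted interpolation of the ratio field.
    num = np.zeros((ny, nx))
    den = np.zeros((ny, nx))
    for (sx, sy), r in zip(stations_xy, ratios):
        d2 = (xx - sx) ** 2 + (yy - sy) ** 2
        w = 1.0 / np.maximum(d2, 1e-6) ** (power / 2.0)
        num += w * r
        den += w
    ratio_grid = num / den

    # Rescaling by the climatology lets the spatial pattern of the
    # long-term mean inform the daily analysis between stations.
    return ratio_grid * clim_grid
```

The key design point this sketch shows is why ratios are interpolated rather than raw amounts: the smooth interpolated field carries only the day's anomaly, while the fine spatial structure (e.g., orographic gradients) is reintroduced from the climatology.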
Abstract
The Australian marine research, industry, and stakeholder community has recently undertaken an extensive collaborative process to identify the highest national priorities for wind-waves research. This was undertaken under the auspices of the Forum for Operational Oceanography Surface Waves Working Group. The main steps in the process were, first, soliciting possible research questions from the community via an online survey; second, reviewing the questions at a face-to-face workshop; and third, online ranking of the research questions by individuals. This process resulted in 15 identified priorities, covering research activities and the development of infrastructure. The top five priorities are 1) enhanced and updated nearshore and coastal bathymetry; 2) improved understanding of extreme sea states; 3) maintenance and enhancement of the in situ buoy network; 4) improved data access and sharing; and 5) ensemble and probabilistic wave modeling and forecasting. In this paper, each of the 15 priorities is discussed in detail, providing insight into why each priority is important, and the current state of the art, both nationally and internationally, where relevant. While this process has been driven by Australian needs, it is likely that the results will be relevant to other marine-focused nations.
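The final step of the process, turning many individuals' online rankings into a single priority list, can be done in several standard ways. The abstract does not state which scheme the Working Group used, so the sketch below shows one common choice, a Borda count, with made-up ballot data:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine individual priority rankings into a consensus ordering.

    `rankings` is a list of ballots, each an ordered list (best first).
    On a ballot of n items, the item in position p scores n - p points;
    items are returned sorted by total score, highest first.
    """
    scores = defaultdict(int)
    for ballot in rankings:
        n = len(ballot)
        for pos, item in enumerate(ballot):
            scores[item] += n - pos
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ballots over three of the priorities named above:
ballots = [
    ["bathymetry", "extreme seas", "buoy network"],
    ["extreme seas", "bathymetry", "buoy network"],
    ["bathymetry", "buoy network", "extreme seas"],
]
print(borda_aggregate(ballots))  # ['bathymetry', 'extreme seas', 'buoy network']
```

A positional scheme like this rewards broad support across many respondents, which suits a community prioritization exercise better than simply counting first-place votes.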