Search Results
You are looking at 1–6 of 6 items for
- Author or Editor: A. McGovern
Abstract
Moisture boundaries, or drylines, are common over the southern U.S. high plains and are one of the most important airmass boundaries for convective initiation over this region. In favorable environments, drylines can initiate storms that produce strong and violent tornadoes, large hail, lightning, and heavy rainfall. Despite their importance, there are few studies documenting climatological dryline location and frequency, or performing systematic dryline forecast evaluation, which likely stems from difficulties in objectively identifying drylines over large datasets. Previous studies have employed tedious manual identification procedures. This study aims to streamline dryline identification by developing an automated, multiparameter algorithm, which applies image-processing and pattern-recognition techniques to various meteorological fields and their gradients to identify drylines. The algorithm is applied to five years of high-resolution 24-h forecasts from Weather Research and Forecasting (WRF) Model simulations valid April–June 2007–11. Manually identified dryline positions, which were available from a previous study using the same dataset, are used as truth to evaluate the algorithm's performance. Generally, the algorithm performed very well. High probability of detection (POD) scores indicated that the majority of drylines were identified by the method. However, a relatively high false alarm ratio (FAR) was also found, indicating that a large number of nondryline features were also identified. Preliminary use of random forests (a machine learning technique) significantly decreased the FAR while minimally impacting the POD. The algorithm lays the groundwork for applications including model evaluation and operational forecasting, and should enable efficient analysis of drylines from very large datasets.
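A minimal sketch of how a gradient-based candidate detector followed by a random-forest screen might be structured in Python. The field names, the gradient threshold, and the object features below are illustrative assumptions, not the published algorithm's parameters:

```python
# Hedged sketch: detect dryline candidates from a dewpoint gradient field,
# then screen them with a random forest to reduce false alarms, in the
# spirit of the abstract above. Thresholds and features are assumptions.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def dryline_candidates(dewpoint, grad_thresh=3.0):
    """Label contiguous regions of strong horizontal dewpoint gradient.

    dewpoint: 2D array (degC) on a model grid; grad_thresh is in degC
    per grid cell (np.gradient uses unit spacing by default).
    """
    gy, gx = np.gradient(dewpoint)
    grad_mag = np.hypot(gx, gy)
    mask = grad_mag > grad_thresh        # strong moisture-gradient pixels
    labels, n = ndimage.label(mask)      # connected-component candidates
    return labels, n, grad_mag

def candidate_features(labels, n, grad_mag):
    """One feature row per candidate: size, mean/max gradient, elongation."""
    feats = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        # Rough N-S vs E-W aspect ratio; drylines tend to be elongated N-S.
        elongation = (np.ptp(ys) + 1) / (np.ptp(xs) + 1)
        feats.append([ys.size,
                      grad_mag[ys, xs].mean(),
                      grad_mag[ys, xs].max(),
                      elongation])
    return np.asarray(feats)

# Training would pair candidate features (X) with manually verified dryline
# labels (y); rf.predict_proba(X_new)[:, 1] then screens out nondryline
# features, lowering FAR while preserving POD.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
```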
Abstract
Oklahoma Mesonet surface data and North American Regional Reanalysis data were integrated with the tracks of over 900 tornadic and nontornadic supercell thunderstorms in Oklahoma from 1994 to 2003 to observe the evolution of near-storm environments with data currently available to operational forecasters. These data were used to train a complex data-mining algorithm that can analyze the variability of meteorological data in both space and time and produce a probabilistic prediction of tornadogenesis given variables describing the near-storm environment. The algorithm was assessed for utility in four ways. First, its probability forecasts were scored. The algorithm did produce some useful skill in discriminating between tornadic and nontornadic supercells as well as in producing reliable probabilities. Second, its selection of relevant attributes was assessed for physical significance. Surface thermodynamic parameters, instability, and bulk wind shear were among the most significant attributes. Third, the algorithm’s skill was compared with the skill of single variables commonly used for tornado prediction. The algorithm did noticeably outperform all of the single variables, including composite parameters. Fourth, the situational variations of the predictions from the algorithm were shown in case studies. They revealed instances both in which the algorithm excelled and in which the algorithm was limited.
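To illustrate the assessment workflow (probability forecasts, probabilistic scoring, attribute relevance), the sketch below substitutes a generic scikit-learn classifier for the paper's spatiotemporal data-mining algorithm. The feature names and the synthetic data are assumptions, stand-ins for the Mesonet and reanalysis attributes the abstract cites:

```python
# Hedged stand-in for the probabilistic tornadogenesis workflow: train a
# classifier on near-storm environment attributes, score its probabilities,
# and inspect attribute importance. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["sfc_theta_e", "mlcape", "bulk_shear_0_6km", "lcl_height"]

# Placeholder near-storm environment data; real inputs would come from
# Oklahoma Mesonet observations and North American Regional Reanalysis fields.
X = rng.normal(size=(900, len(feature_names)))
y = rng.integers(0, 2, size=900)   # 1 = tornadic supercell, 0 = nontornadic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

p_tor = clf.predict_proba(X_te)[:, 1]          # probability of tornadogenesis
print("Brier score:", brier_score_loss(y_te, p_tor))
for name, imp in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")                # attribute relevance check
```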
Abstract
Forecasting severe hail accurately requires predicting how well atmospheric conditions support the development of thunderstorms, the growth of large hail, and the minimal loss of hail mass to melting before reaching the surface. Existing hail forecasting techniques incorporate information about these processes from proximity soundings and numerical weather prediction models, but they make many simplifying assumptions, are sensitive to differences in numerical model configuration, and are often not calibrated to observations. In this paper, a storm-based probabilistic machine learning hail forecasting method is developed to overcome the deficiencies of existing methods. An object identification and tracking algorithm locates potential hailstorms in convection-allowing model output and gridded radar data. Forecast storms are matched with observed storms to determine hail occurrence and the parameters of the radar-estimated hail size distribution. The database of forecast storms contains information about storm properties and the conditions of the prestorm environment. Machine learning models are used to synthesize that information to predict the probability of a storm producing hail and the radar-estimated hail size distribution parameters for each forecast storm. Forecasts from the machine learning models are produced using two convection-allowing ensemble systems and the results are compared to other hail forecasting methods. The machine learning forecasts have a higher critical success index (CSI) at most probability thresholds and greater reliability for predicting both severe and significant hail.
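One plausible reading of this method is a two-stage model: a classifier for hail occurrence and a regressor for the size-distribution parameters, verified with CSI. The sketch below assumes that structure; the model choices, feature layout, and the gamma-parameter example are assumptions, with object identification, tracking, and storm matching presumed done upstream:

```python
# Hedged two-stage sketch of a storm-based hail forecast: classify hail
# occurrence, regress radar-estimated size-distribution parameters, and
# verify with a critical success index helper at a probability threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def critical_success_index(y_true, y_prob, threshold):
    """CSI = hits / (hits + misses + false alarms) at one threshold."""
    pred = y_prob >= threshold
    hits = np.sum(pred & (y_true == 1))
    misses = np.sum(~pred & (y_true == 1))
    false_alarms = np.sum(pred & (y_true == 0))
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

# X: one row per matched forecast storm (storm properties plus prestorm
# environment). y_hail: observed hail occurrence. y_dist: radar-estimated
# size-distribution parameters (e.g., gamma shape/scale), defined only for
# hail-producing storms.
hail_clf = RandomForestClassifier(n_estimators=300, random_state=0)
dist_reg = RandomForestRegressor(n_estimators=300, random_state=0)
# hail_clf.fit(X, y_hail); dist_reg.fit(X[y_hail == 1], y_dist)
# p = hail_clf.predict_proba(X_new)[:, 1]
# csi = critical_success_index(obs, p, threshold=0.5)
```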
Abstract
As states, cities, tribes, and private interests cope with climate damages and seek to increase preparedness and resilience, they will need to navigate myriad choices and options available to them. Making these choices in ways that identify pathways for climate action that support their development objectives will require constructive public dialogue, community participation, and flexible and ongoing access to science- and experience-based knowledge. In 2016, a Federal Advisory Committee (FAC) was convened to recommend how to conduct a sustained National Climate Assessment (NCA) to increase the relevance and usability of assessments for informing action. The FAC was disbanded in 2017, but members and additional experts reconvened to complete the report that is presented here. A key recommendation is establishing a new nonfederal “climate assessment consortium” to increase the role of state/local/tribal government and civil society in assessments. The expanded process would 1) focus on applied problems faced by practitioners, 2) organize sustained partnerships for collaborative learning across similar projects and case studies to identify effective tested practices, and 3) assess and improve knowledge-based methods for project implementation. Specific recommendations include evaluating climate models and data using user-defined metrics; improving benefit–cost assessment and supporting decision-making under uncertainty; and accelerating application of tools and methods such as citizen science, artificial intelligence, indicators, and geospatial analysis. The recommendations are the result of broad consultation and present an ambitious agenda for federal agencies, state/local/tribal jurisdictions, universities and the research sector, professional associations, nongovernmental and community-based organizations, and private-sector firms.