Search Results
You are looking at 11–17 of 17 items for Author or Editor: James Correia Jr.
Abstract
A proposed new method for hazard identification and prediction was evaluated with forecasters in the National Oceanic and Atmospheric Administration Hazardous Weather Testbed during 2014. This method combines hazard-following objects with forecaster-issued trends of exceedance probabilities to produce probabilistic hazard information, as opposed to the static, deterministic polygon and attendant text product methodology presently employed by the National Weather Service to issue severe thunderstorm and tornado warnings. Three components of the testbed activities are discussed: usage of the new tools, verification of storm-based warnings and probabilistic forecasts from a control–test experiment, and subjective feedback on the proposed paradigm change. Forecasters were able to quickly adapt to the new tools and concepts and ultimately produced probabilistic hazard information in a timely manner. The probabilistic forecasts from two severe hail events tested in a control–test experiment were more skillful than storm-based warnings and were found to have reliability in the low-probability spectrum. False alarm area decreased while the traditional verification metrics degraded with increasing probability thresholds. The latter finding is attributable to a limitation in applying the current verification methodology to probabilistic forecasts. Relaxation of on-the-fence decisions exposed a need to provide information for hazard areas below the decision-point thresholds of current warnings. Automated guidance information was helpful in combating potential workload issues, and forecasters raised a need for improved guidance and training to inform consistent and reliable forecasts.
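The sketch below is a minimal illustration (not the prototype software used in the testbed) of the core idea in this abstract: a hazard-following object is swept across a grid while a forecaster-issued trend of exceedance probabilities is applied along its track, producing a probabilistic hazard swath. The function name, grid layout, motion vector, and probability values are all assumptions made for illustration.

import numpy as np

def phi_swath(grid_shape, start_xy, motion_xy, radius, prob_trend, dt=1.0):
    """Max-probability swath from a circular hazard-following object.

    grid_shape : (ny, nx) grid dimensions (grid units)
    start_xy   : (x, y) initial object centroid
    motion_xy  : (u, v) object motion per time step (grid units/step)
    radius     : object radius (grid units)
    prob_trend : forecaster-issued exceedance probability at each time step
    """
    ny, nx = grid_shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    swath = np.zeros(grid_shape)
    x, y = start_xy
    for p in prob_trend:
        inside = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
        # Keep the highest probability each grid point sees as the object passes.
        swath[inside] = np.maximum(swath[inside], p)
        x += motion_xy[0] * dt
        y += motion_xy[1] * dt
    return swath

# Example: an object moving northeast while the forecaster ramps the probability down.
swath = phi_swath((100, 100), start_xy=(20, 20), motion_xy=(2, 1),
                  radius=8, prob_trend=[0.7, 0.6, 0.5, 0.4, 0.3, 0.2])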
Abstract
Providing advance warning for impending severe convective weather events (i.e., tornadoes, hail, wind) fundamentally requires an ability to predict and/or detect these hazards and subsequently communicate their potential threat in real time. The National Weather Service (NWS) provides advance warning for severe convective weather through the issuance of tornado and severe thunderstorm warnings, a system that has remained relatively unchanged for approximately the past 65 years. Forecasting a Continuum of Environmental Threats (FACETs) proposes a reinvention of this system, transitioning from a deterministic product-centric paradigm to one based on probabilistic hazard information (PHI) for hazardous weather events. Four years of iterative development and rapid prototyping in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) with NWS forecasters and partners has yielded insights into this new paradigm by discovering efficient ways to generate, inform, and utilize a continuous flow of information through the development of a human–machine mix. Forecasters conditionally used automated object-based guidance within four levels of automation to issue deterministic products containing PHI. Forecasters accomplished this task in a timely manner while focusing on communication and conveying forecast confidence, elements considered necessary by emergency managers. Observed annual increases in the usage of first-guess probabilistic guidance by forecasters were related to improvements made to the prototyped software, guidance, and techniques. However, increasing usage of automation requires improvements in guidance, data integration, and data visualization to garner trust more effectively. Additional opportunities exist to address limitations in procedures for motion derivation and geospatial mapping of subjective probability.
The North American Regional Climate Change Assessment Program (NARCCAP) is an international effort designed to investigate the uncertainties in regional-scale projections of future climate and produce high-resolution climate change scenarios using multiple regional climate models (RCMs) nested within atmosphere–ocean general circulation models (AOGCMs) forced with the Special Report on Emissions Scenarios (SRES) A2 scenario, with a common domain covering the conterminous United States, northern Mexico, and most of Canada. The program also includes an evaluation component (phase I) wherein the participating RCMs, with a grid spacing of 50 km, are nested within 25 years of National Centers for Environmental Prediction–Department of Energy (NCEP–DOE) Reanalysis II.
This paper provides an overview of evaluations of the phase I domain-wide simulations focusing on monthly and seasonal temperature and precipitation, as well as more detailed investigation of four subregions. The overall quality of the simulations is determined, comparing the model performances with each other as well as with other regional model evaluations over North America. The metrics used herein do differentiate among the models but, as found in previous studies, it is not possible to determine a “best” model among them. The ensemble average of the six models does not perform best for all measures, as has been reported in a number of global climate model studies. The subset ensemble of the two models using spectral nudging is more often successful for domain-wide root-mean-square error (RMSE), especially for temperature. This evaluation phase of NARCCAP will inform later program elements concerning differentially weighting the models for use in producing robust regional probabilities of future climate change.
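As a point of reference for the evaluation approach described above, the sketch below shows the kind of domain-wide RMSE comparison involved: each RCM, the six-model ensemble mean, and a two-model subset mean are scored against a common reference field. The arrays, model names, and error magnitudes are placeholders, not NARCCAP data.

import numpy as np

def rmse(field, reference):
    """Domain-wide root-mean-square error between a model field and a reference field."""
    return float(np.sqrt(np.mean((field - reference) ** 2)))

# Hypothetical seasonal-mean fields on a common (ny, nx) grid; the reference stands
# in for the reanalysis-driven evaluation target.
ny, nx = 60, 100
reference = np.random.randn(ny, nx)
rcms = {name: reference + 0.5 * np.random.randn(ny, nx)
        for name in ["RCM_A", "RCM_B", "RCM_C", "RCM_D", "RCM_E", "RCM_F"]}

scores = {name: rmse(field, reference) for name, field in rcms.items()}
# The six-model ensemble mean is scored the same way as any individual model...
scores["6-model mean"] = rmse(np.mean(list(rcms.values()), axis=0), reference)
# ...as is a subset mean (e.g., the two members using spectral nudging).
scores["2-model subset mean"] = rmse(np.mean([rcms["RCM_A"], rcms["RCM_B"]], axis=0), reference)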
Abstract
Led by NOAA’s Storm Prediction Center and National Severe Storms Laboratory, annual spring forecasting experiments (SFEs) in the Hazardous Weather Testbed test and evaluate cutting-edge technologies and concepts for improving severe weather prediction through intensive real-time forecasting and evaluation activities. Experimental forecast guidance is provided through collaborations with several U.S. government and academic institutions, as well as the Met Office. The purpose of this article is to summarize activities, insights, and preliminary findings from recent SFEs, emphasizing SFE 2015. Several innovative aspects of recent experiments are discussed, including the 1) use of convection-allowing model (CAM) ensembles with advanced ensemble data assimilation, 2) generation of severe weather outlooks valid at time periods shorter than those issued operationally (e.g., 1–4 h), 3) use of CAMs to issue outlooks beyond the day 1 period, 4) increased interaction through software allowing participants to create individual severe weather outlooks, and 5) tests of newly developed storm-attribute-based diagnostics for predicting tornadoes and hail size. Additionally, plans for future experiments will be discussed, including the creation of a Community Leveraged Unified Ensemble (CLUE) system, which will test various strategies for CAM ensemble design using carefully designed sets of ensemble members contributed by different agencies to drive evidence-based decision-making for near-future operational systems.
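As background for the storm-attribute-based guidance discussed above, the sketch below shows one common way a CAM ensemble is converted into probabilistic severe weather guidance: the fraction of members exceeding a storm-attribute threshold within a neighborhood, lightly smoothed. It is a generic illustration, not any specific SFE product; the function name, threshold, and neighborhood size are assumptions.

import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def neighborhood_ensemble_prob(members, threshold, radius_pts):
    """members: array (n_members, ny, nx) of a storm attribute (e.g., updraft helicity).
    Returns a (ny, nx) probability field."""
    # For each member, mark points where the threshold is exceeded anywhere
    # within the surrounding square neighborhood.
    exceed = np.stack([maximum_filter(m, size=2 * radius_pts + 1) >= threshold
                       for m in members])
    prob = exceed.mean(axis=0)                             # ensemble exceedance fraction
    return uniform_filter(prob, size=2 * radius_pts + 1)   # light spatial smoothing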
The 2011 Spring Forecasting Experiment in the NOAA Hazardous Weather Testbed (HWT) featured a significant component on convection initiation (CI). As in previous HWT experiments, the CI study was a collaborative effort between forecasters and researchers, with equal emphasis on experimental forecasting strategies and evaluation of prototype model guidance products. The overarching goal of the CI effort was to identify the primary challenges of the CI forecasting problem and to establish a framework for additional studies and possible routine forecasting of CI. This study confirms that convection-allowing models with grid spacing ~4 km represent many aspects of the formation and development of deep convection clouds explicitly and with predictive utility. Further, it shows that automated algorithms can skillfully identify the CI process during model integration. However, it also reveals that automated detection of individual convection cells, by itself, provides inadequate guidance for the disruptive potential of deep convection activity. Thus, future work on the CI forecasting problem should be couched in terms of convection-event prediction rather than detection and prediction of individual convection cells.
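A minimal sketch of the kind of automated CI identification discussed above: grid points where simulated reflectivity first exceeds a threshold are flagged and grouped into cells. The 35-dBZ threshold, function name, and data layout are illustrative assumptions, not the algorithm evaluated in the experiment.

import numpy as np
from scipy.ndimage import label

def detect_ci(refl_by_time, threshold=35.0):
    """refl_by_time: array (n_times, ny, nx) of simulated composite reflectivity (dBZ).
    Returns a list of (time_index, object_count) for newly initiated convection."""
    events = []
    seen = np.zeros(refl_by_time.shape[1:], dtype=bool)  # points already convective
    for t, refl in enumerate(refl_by_time):
        active = refl >= threshold
        new = active & ~seen                 # first-time exceedances = candidate CI points
        labeled, n_objects = label(new)      # group contiguous new points into cells
        if n_objects:
            events.append((t, n_objects))
        seen |= active
    return events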
The NOAA Hazardous Weather Testbed (HWT) conducts annual spring forecasting experiments organized by the Storm Prediction Center and National Severe Storms Laboratory to test and evaluate emerging scientific concepts and technologies for improved analysis and prediction of hazardous mesoscale weather. A primary goal is to accelerate the transfer of promising new scientific concepts and tools from research to operations through the use of intensive real-time experimental forecasting and evaluation activities conducted during the spring and early summer convective storm period. The 2010 NOAA/HWT Spring Forecasting Experiment (SE2010), conducted 17 May through 18 June, had a broad focus, with emphases on heavy rainfall and aviation weather, through collaboration with the Hydrometeorological Prediction Center (HPC) and the Aviation Weather Center (AWC), respectively. In addition, using the computing resources of the National Institute for Computational Sciences at the University of Tennessee, the Center for Analysis and Prediction of Storms at the University of Oklahoma provided unprecedented real-time conterminous United States (CONUS) forecasts from a multimodel Storm-Scale Ensemble Forecast (SSEF) system with 4-km grid spacing and 26 members and from a 1-km grid spacing configuration of the Weather Research and Forecasting model. Several other organizations provided additional experimental high-resolution model output. This article summarizes the activities, insights, and preliminary findings from SE2010, emphasizing the use of the SSEF system and the successful collaboration with the HPC and AWC.
A supplement to this article is available online (DOI:10.1175/BAMS-D-11-00040.2)