Search Results

You are looking at 1–10 of 10 items for:

  • Author or Editor: James Correia Jr.
  • Weather and Forecasting
Harold E. Brooks and James Correia Jr.

Abstract

Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various performance metrics in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance took place in 2012, when the default warning duration decreased and there was an apparent increased emphasis on reducing false alarms. As a result, lead time, probability of detection, and false alarm ratio all decreased in 2012.

Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
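The lead time, probability of detection (POD), and false alarm ratio (FAR) discussed above follow from standard contingency-table counts. A minimal sketch of those definitions (the function names and all counts below are invented for illustration, not values from the study):

```python
# Standard contingency-table warning metrics. All counts below are
# illustrative only, not values from the study.
def warning_metrics(hits, misses, false_alarms):
    """Return (POD, FAR) for a set of tornado warnings.

    hits         -- tornadoes warned in advance
    misses       -- tornadoes with no advance warning
    false_alarms -- warnings verified by no tornado
    """
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    return pod, far

def mean_lead_time(lead_times_min):
    """Mean lead time over tornadoes warned in advance only (lead
    time > 0), matching the convention described in the abstract."""
    warned = [t for t in lead_times_min if t > 0]
    return sum(warned) / len(warned) if warned else 0.0

pod, far = warning_metrics(hits=70, misses=30, false_alarms=175)
print(f"POD={pod:.2f} FAR={far:.2f}")        # POD=0.70 FAR=0.71
print(mean_lead_time([13, 0, 18, 0, 8]))     # 13.0
```

Lowering the issuance threshold raises POD but also raises FAR; the signal detection framing in the abstract makes that trade-off explicit without conflating it with changes in the quality of the warning system itself.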

Full access
William A. Gallus Jr., James Correia Jr., and Isidora Jankov

Abstract

Warm season convective system rainfall forecasts remain a particularly difficult forecast challenge. For these events, it is possible that ensemble forecasts would provide helpful information unavailable in a single deterministic forecast. In this study, an intense derecho event accompanied by a well-organized band of heavy rainfall is used to show that for some situations, the predictability of rainfall even within a 12–24-h period is so low that a wide range of simulations using different models, different physical parameterizations, and different initial conditions all fail to provide even a small signal that the event will occur. The failure of a wide range of models and parameterizations to depict the event might suggest inadequate representation of the initial conditions. However, a range of different initial conditions also failed to lead to a well-simulated event, suggesting that some events are unlikely to be predictable with the current observational network, and ensemble guidance for such cases may provide limited additional information useful to a forecaster.

Full access
Michael C. Coniglio, James Correia Jr., Patrick T. Marsh, and Fanyou Kong

Abstract

This study evaluates forecasts of thermodynamic variables from five convection-allowing configurations of the Weather Research and Forecasting Model (WRF) with the Advanced Research core (WRF-ARW). The forecasts vary only in their planetary boundary layer (PBL) scheme, including three “local” schemes [Mellor–Yamada–Janjić (MYJ), quasi-normal scale elimination (QNSE), and Mellor–Yamada–Nakanishi–Niino (MYNN)] and two schemes that include “nonlocal” mixing [the asymmetric cloud model version 2 (ACM2) and the Yonsei University (YSU) scheme]. The forecasts are compared to springtime radiosonde observations upstream from deep convection to gain a better understanding of the thermodynamic characteristics of these PBL schemes in this regime. The morning PBLs are all too cool and dry despite having little bias in PBL depth (except for YSU). In the evening, the local schemes produce shallower PBLs that are often too shallow and too moist compared to nonlocal schemes. However, MYNN is nearly unbiased in PBL depth, moisture, and potential temperature, which is comparable to the background North American Mesoscale model (NAM) forecasts. This result gives confidence in the use of the MYNN scheme in convection-allowing configurations of WRF-ARW to alleviate the typical cool, moist bias of the MYJ scheme in convective boundary layers upstream from convection. The morning cool and dry biases lead to an underprediction of mixed-layer CAPE (MLCAPE) and an overprediction of mixed-layer convective inhibition (MLCIN) at that time in all schemes. MLCAPE and MLCIN forecasts improve in the evening, with MYJ, QNSE, and MYNN having small mean errors, but ACM2 and YSU having a somewhat low bias. Strong observed capping inversions tend to be associated with an underprediction of MLCIN in the evening, as the model profiles are too smooth. MLCAPE tends to be overpredicted (underpredicted) by MYJ and QNSE (MYNN, ACM2, and YSU) when the observed MLCAPE is relatively small (large).
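The bias comparisons described above reduce to forecast-minus-observed differences averaged over matched radiosonde sites. A minimal sketch of that computation (the function name and all temperature values are invented for illustration; the study's actual matching and averaging procedure is more involved):

```python
# Mean bias (forecast minus observed) of a PBL-scheme variable against
# radiosonde observations. All values are invented for illustration.
def mean_bias(forecast, observed):
    """Average forecast-minus-observed difference over matched pairs."""
    return sum(f - o for f, o in zip(forecast, observed)) / len(forecast)

# Hypothetical evening potential temperatures (K) from one scheme,
# paired with the corresponding radiosonde-derived values.
theta_forecast = [304.2, 305.1, 303.8, 306.0]
theta_observed = [304.9, 305.6, 304.5, 306.4]
bias = mean_bias(theta_forecast, theta_observed)
print(f"{bias:+.2f} K")  # a negative sign indicates a cool bias
```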

Full access
Joseph J. J. James, Chen Ling, Christopher D. Karstens, James Correia Jr., Kristin Calhoun, Tiffany Meyer, and Daphne LaDue

Abstract

During spring 2016 the Probabilistic Hazard Information (PHI) prototype experiment was run in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) as part of the Forecasting a Continuum of Environmental Threats (FACETs) program. Nine National Weather Service forecasters were trained to use the web-based PHI prototype tool to produce dynamic PHI for severe weather threats. Archived and real-time weather scenarios were used to test this new paradigm of issuing probabilistic, rather than deterministic, information. The forecasters’ mental workload was evaluated after each scenario using the NASA Task Load Index (NASA-TLX) questionnaire. This study summarizes the analysis of the mental workload experienced by forecasters while using the PHI prototype. Six subdimensions of mental workload (mental demand, physical demand, temporal demand, performance, effort, and frustration) were analyzed to identify the top contributing factors to workload. Average mental workload was 46.6 (out of 100; standard deviation: 19; range: 70.8). Top contributing factors to workload included using automated guidance, the number of PHI objects, multiple displays, and formulating probabilities in the new paradigm. Automated guidance supported forecasters in maintaining situational awareness and managing increased numbers of threats. The results of this study provide an understanding of forecasters’ mental workload and task strategies and offer insights for improving the usability of the PHI prototype tool.
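The NASA-TLX score referenced above is commonly computed, in its unweighted ("raw TLX") form, as the mean of the six subdimension ratings. A minimal sketch (the ratings below are invented and the study's exact weighting procedure is not specified here):

```python
# Unweighted ("raw") NASA-TLX workload score: the mean of the six
# subdimension ratings, each on a 0-100 scale. The ratings below are
# invented; they are not data from the experiment.
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    subscales = [mental, physical, temporal, performance, effort, frustration]
    return sum(subscales) / len(subscales)

score = raw_tlx(mental=60, physical=20, temporal=55,
                performance=40, effort=65, frustration=40)
print(round(score, 1))  # 46.7
```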

Free access
Adam J. Clark, John S. Kain, Patrick T. Marsh, James Correia Jr., Ming Xue, and Fanyou Kong

Abstract

A three-dimensional (in space and time) object identification algorithm is applied to high-resolution forecasts of hourly maximum updraft helicity (UH)—a diagnostic that identifies simulated rotating storms—with the goal of diagnosing the relationship between forecast UH objects and observed tornado pathlengths. UH objects are contiguous swaths of UH exceeding a specified threshold. Including time allows tracks to span multiple hours and entire life cycles of simulated rotating storms. The object algorithm is applied to 3 yr of 36-h forecasts initialized daily from a 4-km grid-spacing version of the Weather Research and Forecasting Model (WRF) run in real time at the National Severe Storms Laboratory (NSSL), and forecasts from the Storm Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms for the 2010 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. Methods for visualizing UH object attributes are presented, and the relationship between pathlengths of UH objects and tornadoes for corresponding 18- or 24-h periods is examined. For deterministic NSSL-WRF UH forecasts, the relationship of UH pathlengths to tornadoes was much stronger during spring (March–May) than in summer (June–August). Filtering UH track segments produced by high-based and/or elevated storms improved the UH–tornado pathlength correlations. The best ensemble results were obtained after filtering high-based and/or elevated UH track segments for the 20 cases in April–May 2010, during which correlation coefficients were as high as 0.91. The results indicate that forecast UH pathlengths during spring could be a very skillful predictor for the severity of tornado outbreaks as measured by total pathlength.
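The UH–tornado pathlength relationship summarized above rests on correlating aggregate pathlengths, for example via a Pearson correlation coefficient. A minimal sketch (all pathlength values below are invented; the study's object identification and filtering steps are not reproduced):

```python
import math

# Pearson correlation between per-period forecast UH-object pathlength
# totals and observed tornado pathlength totals. All values are invented;
# the study's object identification and filtering are not reproduced.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

uh_pathlength_km      = [120.0, 310.0, 45.0, 500.0, 80.0]  # forecast swaths
tornado_pathlength_km = [60.0, 180.0, 10.0, 340.0, 30.0]   # observed tracks
r = pearson_r(uh_pathlength_km, tornado_pathlength_km)
print(f"r = {r:.2f}")
```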

Full access
Dan Bikos, Daniel T. Lindsey, Jason Otkin, Justin Sieglaff, Louie Grasso, Chris Siewert, James Correia Jr., Michael Coniglio, Robert Rabin, John S. Kain, and Scott Dembek

Abstract

Output from a real-time high-resolution numerical model is used to generate synthetic infrared satellite imagery. It is shown that this imagery helps to characterize model-simulated large-scale precursors to the formation of deep-convective storms as well as the subsequent development of storm systems. A strategy for using this imagery in the forecasting of severe convective weather is presented. This strategy involves comparing model-simulated precursors to their observed counterparts to help anticipate model errors in the timing and location of storm formation, while using the simulated storm evolution as guidance.

Full access
Adam J. Clark, Jidong Gao, Patrick T. Marsh, Travis Smith, John S. Kain, James Correia Jr., Ming Xue, and Fanyou Kong

Abstract

Examining forecasts from the Storm Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms for the 2010 NOAA/Hazardous Weather Testbed Spring Forecasting Experiment, recent research diagnosed a strong relationship between the cumulative pathlengths of simulated rotating storms (measured using a three-dimensional object identification algorithm applied to forecast updraft helicity) and the cumulative pathlengths of tornadoes. This paper updates those results by including data from the 2011 SSEF system, and illustrates forecast examples from three major 2011 tornado outbreaks—16 and 27 April, and 24 May—as well as two forecast failure cases from June 2010. Finally, analysis updraft helicity (UH) from 27 April 2011 is computed using a three-dimensional variational data assimilation system to obtain 1.25-km grid-spacing analyses at 5-min intervals and compared to forecast UH from individual SSEF members.

Full access
Christopher D. Karstens, Greg Stumpf, Chen Ling, Lesheng Hua, Darrel Kingfield, Travis M. Smith, James Correia Jr., Kristin Calhoun, Kiel Ortega, Chris Melick, and Lans P. Rothfusz

Abstract

A proposed new method for hazard identification and prediction was evaluated with forecasters in the National Oceanic and Atmospheric Administration Hazardous Weather Testbed during 2014. This method combines hazard-following objects with forecaster-issued trends of exceedance probabilities to produce probabilistic hazard information, as opposed to the static, deterministic polygon and attendant text product methodology presently employed by the National Weather Service to issue severe thunderstorm and tornado warnings. Three components of the testbed activities are discussed: usage of the new tools, verification of storm-based warnings and probabilistic forecasts from a control–test experiment, and subjective feedback on the proposed paradigm change. Forecasters were able to quickly adapt to the new tools and concepts and ultimately produced probabilistic hazard information in a timely manner. The probabilistic forecasts from two severe hail events tested in the control–test experiment were more skillful than storm-based warnings and were found to have reliability in the low-probability spectrum. False alarm area decreased while the traditional verification metrics degraded with increasing probability thresholds. The latter finding is attributable to a limitation in applying the current verification methodology to probabilistic forecasts. Relaxation of on-the-fence decisions exposed a need to provide information for hazard areas below the decision-point thresholds of current warnings. Automated guidance information was helpful in combating potential workload issues, and forecasters raised a need for improved guidance and training to inform consistent and reliable forecasts.
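The finding that traditional verification metrics degrade with increasing probability thresholds can be illustrated by converting a probabilistic forecast to a yes/no forecast at successive thresholds and recomputing 2 × 2 contingency metrics at each one. A minimal sketch (all probabilities and outcomes below are invented):

```python
# Convert a probabilistic forecast to yes/no at a threshold and recompute
# traditional 2x2 metrics. All probabilities and outcomes are invented.
def verify_at_threshold(probs, observed, threshold):
    """probs: forecast probabilities; observed: 1 if the hazard occurred."""
    hits = sum(1 for p, o in zip(probs, observed) if p >= threshold and o)
    false_alarms = sum(1 for p, o in zip(probs, observed)
                       if p >= threshold and not o)
    misses = sum(1 for p, o in zip(probs, observed) if p < threshold and o)
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return pod, far

probs    = [0.05, 0.15, 0.35, 0.60, 0.80, 0.10, 0.45, 0.90]
observed = [0,    0,    1,    1,    1,    0,    0,    1]
for thr in (0.1, 0.3, 0.5, 0.7):
    pod, far = verify_at_threshold(probs, observed, thr)
    print(f"threshold={thr:.1f}  POD={pod:.2f}  FAR={far:.2f}")
```

As the threshold rises, events forecast with moderate probabilities become misses, so POD falls even though the underlying probabilistic forecast is unchanged, which is one way the traditional metrics can degrade.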

Full access
Christopher D. Karstens, James Correia Jr., Daphne S. LaDue, Jonathan Wolfe, Tiffany C. Meyer, David R. Harrison, John L. Cintineo, Kristin M. Calhoun, Travis M. Smith, Alan E. Gerard, and Lans P. Rothfusz

Abstract

Providing advance warning for impending severe convective weather events (i.e., tornadoes, hail, wind) fundamentally requires an ability to predict and/or detect these hazards and subsequently communicate their potential threat in real time. The National Weather Service (NWS) provides advance warning for severe convective weather through the issuance of tornado and severe thunderstorm warnings, a system that has remained relatively unchanged for approximately the past 65 years. Forecasting a Continuum of Environmental Threats (FACETs) proposes a reinvention of this system, transitioning from a deterministic product-centric paradigm to one based on probabilistic hazard information (PHI) for hazardous weather events. Four years of iterative development and rapid prototyping in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) with NWS forecasters and partners has yielded insights into this new paradigm by discovering efficient ways to generate, inform, and utilize a continuous flow of information through the development of a human–machine mix. Forecasters conditionally used automated object-based guidance within four levels of automation to issue deterministic products containing PHI. Forecasters accomplished this task in a timely manner while focusing on communication and conveying forecast confidence, elements considered necessary by emergency managers. Observed annual increases in the usage of first-guess probabilistic guidance by forecasters were related to improvements made to the prototyped software, guidance, and techniques. However, increasing usage of automation requires improvements in guidance, data integration, and data visualization to garner trust more effectively. Additional opportunities exist to address limitations in procedures for motion derivation and geospatial mapping of subjective probability.

Full access
Burkely T. Gallo, Adam J. Clark, Israel Jirak, John S. Kain, Steven J. Weiss, Michael Coniglio, Kent Knopfmeier, James Correia Jr., Christopher J. Melick, Christopher D. Karstens, Eswar Iyer, Andrew R. Dean, Ming Xue, Fanyou Kong, Youngsun Jung, Feifei Shen, Kevin W. Thomas, Keith Brewster, Derek Stratman, Gregory W. Carbin, William Line, Rebecca Adams-Selin, and Steve Willington

Abstract

Led by NOAA’s Storm Prediction Center and National Severe Storms Laboratory, annual spring forecasting experiments (SFEs) in the Hazardous Weather Testbed test and evaluate cutting-edge technologies and concepts for improving severe weather prediction through intensive real-time forecasting and evaluation activities. Experimental forecast guidance is provided through collaborations with several U.S. government and academic institutions, as well as the Met Office. The purpose of this article is to summarize activities, insights, and preliminary findings from recent SFEs, emphasizing SFE 2015. Several innovative aspects of recent experiments are discussed, including the 1) use of convection-allowing model (CAM) ensembles with advanced ensemble data assimilation, 2) generation of severe weather outlooks valid at time periods shorter than those issued operationally (e.g., 1–4 h), 3) use of CAMs to issue outlooks beyond the day 1 period, 4) increased interaction through software allowing participants to create individual severe weather outlooks, and 5) tests of newly developed storm-attribute-based diagnostics for predicting tornadoes and hail size. Additionally, plans for future experiments will be discussed, including the creation of a Community Leveraged Unified Ensemble (CLUE) system, which will test various strategies for CAM ensemble design using carefully designed sets of ensemble members contributed by different agencies to drive evidence-based decision-making for near-future operational systems.

Full access