Browse

You are looking at 161–170 of 2,812 items for:

  • Weather and Forecasting
  • Access: All Content
David A. Lavers, N. Bruce Ingleby, Aneesh C. Subramanian, David S. Richardson, F. Martin Ralph, James D. Doyle, Carolyn A. Reynolds, Ryan D. Torn, Mark J. Rodwell, Vijay Tallapragada, and Florian Pappenberger

Abstract

A key aim of observational campaigns is to sample atmosphere–ocean phenomena to improve understanding of these phenomena and, in turn, numerical weather prediction. In early 2018 and 2019, the Atmospheric River Reconnaissance (AR Recon) campaign released dropsondes and radiosondes into atmospheric rivers (ARs) over the northeast Pacific Ocean to collect unique observations of temperature, winds, and moisture in ARs. These narrow regions of atmospheric water vapor transport, like rivers in the sky, can be associated with extreme precipitation and flooding events in the midlatitudes. This study uses the dropsonde observations collected during the AR Recon campaign and the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) to evaluate forecasts of ARs. Results show that ECMWF IFS forecasts 1) were colder than observations by up to 0.6 K throughout the troposphere; 2) had a dry bias in the lower troposphere, which, along with weaker winds below 950 hPa, resulted in weaker horizontal water vapor fluxes in the 950–1000-hPa layer; and 3) were underdispersive in the water vapor flux, a deficiency that largely arises from model representativeness errors associated with dropsondes. Four U.S. West Coast radiosonde sites confirm the IFS cold bias throughout winter. These issues are likely to affect the model’s hydrological cycle and hence its precipitation forecasts.
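
The layer water vapor flux evaluated here can be illustrated with a short computation: (1/g) times the integral of q·|V| across the pressure layer. The Python sketch below applies this to a hypothetical dropsonde-like profile over the 950–1000-hPa layer; the function name and all profile values are illustrative assumptions, not the authors' verification code or data.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def layer_water_vapor_flux(p_hpa, q_kgkg, u_ms, v_ms, p_bot=1000.0, p_top=950.0):
    """Horizontal water vapor flux magnitude integrated over a pressure layer,
    (1/g) * integral of q * |V| dp, in kg m^-1 s^-1."""
    p = np.asarray(p_hpa, dtype=float)
    mask = (p <= p_bot) & (p >= p_top)
    p_pa = p[mask] * 100.0                              # hPa -> Pa
    flux = np.asarray(q_kgkg)[mask] * np.hypot(u_ms, v_ms)[mask]
    order = np.argsort(p_pa)                            # integrate with increasing p
    dp = np.diff(p_pa[order])
    mid = 0.5 * (flux[order][1:] + flux[order][:-1])    # trapezoidal rule
    return np.sum(mid * dp) / G

# Hypothetical dropsonde-like profile (illustrative values only).
p = np.array([1000.0, 990.0, 975.0, 960.0, 950.0])      # pressure (hPa)
q = np.array([9.0, 8.8, 8.5, 8.2, 8.0]) * 1e-3          # specific humidity (kg/kg)
u = np.array([18.0, 19.0, 20.0, 21.0, 22.0])            # zonal wind (m/s)
v = np.array([8.0, 8.0, 9.0, 9.0, 10.0])                # meridional wind (m/s)
print(f"950-1000-hPa water vapor flux: {layer_water_vapor_flux(p, q, u, v):.1f} kg m-1 s-1")
```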

Open access
Joseph J. J. James, Chen Ling, Christopher D. Karstens, James Correia Jr., Kristin Calhoun, Tiffany Meyer, and Daphne LaDue

Abstract

During spring 2016, the Probabilistic Hazard Information (PHI) prototype experiment was run in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) as part of the Forecasting a Continuum of Environmental Threats (FACETs) program. Nine National Weather Service forecasters were trained to use the web-based PHI prototype tool to produce dynamic PHI for severe weather threats. Archived and real-time weather scenarios were used to test this new paradigm of issuing probabilistic, rather than deterministic, information. The forecasters’ mental workload was evaluated after each scenario using the NASA Task Load Index (TLX) questionnaire. This study summarizes the mental workload experienced by forecasters while using the PHI prototype. Six subdimensions of mental workload (mental demand, physical demand, temporal demand, performance, effort, and frustration) were analyzed to identify the top contributing factors to workload. Average mental workload was 46.6 out of 100 (standard deviation 19, range 70.8). Top contributors to workload included using automated guidance, the number of PHI objects, multiple displays, and formulating probabilities in the new paradigm. Automated guidance supported forecasters in maintaining situational awareness and managing increased quantities of threats. The results provide an understanding of forecasters’ mental workload and task strategies and yield insights for improving the usability of the PHI prototype tool.
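
For readers unfamiliar with the NASA-TLX, the weighted workload score combines six subscale ratings (each 0–100) with weights derived from 15 pairwise comparisons between the subscales. A minimal sketch follows; the ratings and weights are made-up illustrations, not data from this experiment.

```python
# Weighted NASA-TLX: ratings are 0-100; the pairwise-comparison weights
# (0-5 per subscale) sum to 15. All numbers below are illustrative only.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_weighted(ratings, weights):
    """Weighted NASA-TLX workload: sum(rating * weight) / 15, on a 0-100 scale."""
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(f"Weighted TLX workload: {tlx_weighted(ratings, weights):.1f} / 100")
```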

Free access
Eric D. Loken, Adam J. Clark, and Christopher D. Karstens

Abstract

Extracting explicit severe weather forecast guidance from convection-allowing ensembles (CAEs) is challenging since CAEs cannot directly simulate individual severe weather hazards. Currently, CAE-based severe weather probabilities must be inferred from one or more storm-related variables, which may require extensive calibration and/or contain limited information. Machine learning (ML) offers a way to obtain severe weather forecast probabilities from CAEs by relating CAE forecast variables to observed severe weather reports. This paper develops and verifies a random forest (RF)-based ML method for creating day 1 (1200–1200 UTC) severe weather hazard probabilities and categorical outlooks based on 0000 UTC Storm-Scale Ensemble of Opportunity (SSEO) forecast data and observed Storm Prediction Center (SPC) storm reports. RF forecast probabilities are compared against severe weather forecasts from calibrated SSEO 2–5-km updraft helicity (UH) forecasts and SPC convective outlooks issued at 0600 UTC. Continuous RF probabilities routinely have the highest Brier skill scores (BSSs), regardless of whether the forecasts are evaluated over the full domain or regional/seasonal subsets. Even when RF probabilities are truncated at the probability levels issued by the SPC, the RF forecasts often have BSSs better than or comparable to corresponding UH and SPC forecasts. Relative to the UH and SPC forecasts, the RF approach performs best for severe wind and hail prediction during the spring and summer (i.e., March–August). Overall, it is concluded that the RF method presented here provides skillful, reliable CAE-derived severe weather probabilities that may be useful to severe weather forecasters and decision-makers.
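
As a rough illustration of the approach, the sketch below trains a random forest on synthetic stand-ins for CAE predictors and scores its probabilities with a Brier skill score against a climatological reference. The predictors, data, and hyperparameters are assumptions for demonstration, not the SSEO fields or the configuration used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(42)
n = 5000
# Hypothetical CAE-derived predictors (e.g., UH, CAPE, shear, precip proxies).
X = rng.normal(size=(n, 4))
# Synthetic "severe report nearby" labels loosely tied to the first predictors.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X_train, y_train)
prob = rf.predict_proba(X_test)[:, 1]          # forecast severe probability

bs = brier_score_loss(y_test, prob)            # Brier score of the RF forecasts
bs_clim = brier_score_loss(y_test, np.full_like(prob, y_train.mean()))
print(f"BSS vs climatology: {1.0 - bs / bs_clim:.3f}")
```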

Free access
Jen Henderson, Erik R. Nielsen, Gregory R. Herman, and Russ S. Schumacher

Abstract

The U.S. weather warning system is designed to help operational forecasters identify hazards and issue alerts that assist people in taking life-saving actions. Assessing risks for separate hazards, such as flash flooding, can be challenging for individuals, depending on their contexts, resources, and abilities. When two or more hazards co-occur in time and space, such as tornadoes and flash floods, which we call TORFFs, risk assessment and the actions available to people to stay safe become increasingly complex and potentially dangerous. TORFF advice can suggest contradictory actions: get low for a tornado but seek higher ground for a flash flood. The origin of risk information about such threats is the National Weather Service (NWS) Weather Forecast Office. This article contributes to an understanding of the warning and forecast system through a naturalistic study of the NWS during a TORFF event in the southeastern United States. Drawing on the literature of the Social Amplification of Risk Framework, this article argues that during TORFFs, elements of NWS warning operations can unintentionally amplify or attenuate one threat over the other. Our results reveal three ways this amplification or attenuation might occur: 1) underlying assumptions that forecasters understandably make about the danger of different threats; 2) threat terminology and coordination with national offices that shape the communication of risks during a multihazard event; and 3) organizational arrangements of space and forecaster expertise during operations. We conclude with suggestions for rethinking sites of amplification and attenuation and additional areas for future study.

Free access
Robinson Wallace, Katja Friedrich, Wiebke Deierling, Evan A. Kalina, and Paul Schlatter

Abstract

Thunderstorms that produce hail accumulations at the surface can impact residents by obstructing roadways, closing airports, and causing localized flooding from hail-clogged drainages. These storms have recently gained increased interest within the scientific community. However, the differences observable in real time between these storms and storms that produce nonimpactful hail accumulations have yet to be documented. Similarly, the characteristics within a single storm that are useful for quantifying or predicting hail accumulations are not fully understood. This study uses lightning and dual-polarization radar data to characterize hail accumulations from three storms that occurred on the same day along the Colorado–Wyoming Front Range. Each storm’s characteristics are verified against radar-derived hail accumulation maps and in situ observations. The storms differed in maximum accumulation, producing 22 cm, 7 cm, or no accumulation. The magnitude of surface hail accumulation is found to depend on a combination of in-cloud hail production, storm translation speed, and hailstone melting. The optimal combination for substantial hail accumulations is enhanced in-cloud hail production, slow storm speed, and limited hailstone melting. However, during periods of similar in-cloud hail production, lesser accumulations result when storm speed and/or hailstone melting, identified from the radar presentation, is sufficiently large. These results will aid forecasters in identifying in real time when hail accumulations are occurring.

Free access
Qidong Yang, Chia-Ying Lee, and Michael K. Tippett

Abstract

Rapid intensification (RI) is an outstanding source of error in tropical cyclone (TC) intensity predictions. RI is generally defined as a 24-h increase in TC maximum sustained surface wind speed greater than some threshold, typically 25, 30, or 35 kt (1 kt ≈ 0.51 m s⁻¹). Here, a long short-term memory (LSTM) model for probabilistic RI prediction is developed and evaluated. The model’s variables (features) include storm characteristics (e.g., storm intensity) and environmental variables (e.g., vertical shear) over the previous 48 h. A basin-aware RI prediction model is trained (1981–2009), validated (2010–13), and tested (2014–17) on global data. Models are trained on overlapping 48-h sequences, which provides multiple training examples per storm. A challenge is that the data are highly unbalanced, with many more non-RI cases than RI cases. To cope with this imbalance, the synthetic minority oversampling technique (SMOTE) is used to balance the training data by generating artificial RI cases. Model ensembling is also applied to further improve prediction skill. On the independent test data, the model’s Brier skill scores in the Atlantic and eastern North Pacific are higher than those of operational predictions for RI thresholds of 25 and 30 kt and comparable for the 35-kt threshold. Composites of the features associated with RI and non-RI situations provide physical insight into how the model discriminates between RI and non-RI cases. Prediction case studies are presented for some recent storms.
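
A minimal sketch of a balancing-plus-LSTM pipeline of this kind is given below, assuming 6-hourly features over 48 h and a rare positive class. The shapes, class rate, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

rng = np.random.default_rng(0)
n, steps, feats = 2000, 9, 6              # e.g., 9 6-hourly times x 6 features
X = rng.normal(size=(n, steps, feats)).astype("float32")
y = (rng.random(n) < 0.06).astype(int)    # RI cases are rare (~6% here)

# SMOTE operates on 2D arrays, so flatten each sequence before resampling,
# then restore the (samples, timesteps, features) shape for the LSTM.
X2d, y_bal = SMOTE(random_state=0).fit_resample(X.reshape(n, -1), y)
X_bal = X2d.reshape(-1, steps, feats)

model = keras.Sequential([
    keras.layers.Input(shape=(steps, feats)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),   # outputs P(RI)
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_bal, y_bal, epochs=3, batch_size=64, verbose=0)
print(model.predict(X[:5], verbose=0).ravel())     # probabilistic RI forecasts
```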

Free access
Maria J. Molina, John T. Allen, and Andreas F. Prein

Abstract

The tornado outbreak of 21–23 January 2017 caused 20 fatalities, more than 200 injuries, and over a billion dollars in damage in the southeastern United States. The event occurred concurrently with record-breaking warmth in the Gulf of Mexico (GoM) basin. This article explores the influence that warm GoM sea surface temperatures (SSTs) had on the tornado outbreak. Backward trajectory analysis, combined with a Lagrangian moisture-attribution algorithm, reveals that the outbreak’s moisture predominantly originated from the southeast GoM and the northwest Caribbean Sea. We used the WRF Model to generate a control simulation of the event and to explore the response to perturbed SSTs. With the aid of a tornadic storm proxy derived from updraft helicity, we show that the outbreak exhibits sensitivity to upstream SSTs during the first day of the event. Warmer SSTs across the remote moisture sources and adjacent waters increase tornado frequency, whereas cooler SSTs reduce tornado activity. The upstream SST sensitivity is reduced once convection is ongoing and is itself modifying local moisture and instability. Our results highlight the importance of air–sea interactions before airmass advection toward the continental United States. The complex and nonlinear nature of the relationship between upstream SSTs and local precursor environments is also discussed.
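
The updraft helicity underlying such tornadic storm proxies is conventionally the vertical integral of vertical velocity times vertical vorticity over the 2–5-km layer AGL. The sketch below evaluates that integral on a hypothetical model column; the exact proxy used in the paper is not specified here, and all profile values are invented for illustration.

```python
import numpy as np

def updraft_helicity(z_m, w_ms, zeta_s, z_bot=2000.0, z_top=5000.0):
    """2-5-km updraft helicity: trapezoidal integral of w * zeta over
    z_bot..z_top (m AGL), in m^2 s^-2."""
    z = np.asarray(z_m, dtype=float)
    prod = np.asarray(w_ms) * np.asarray(zeta_s)
    m = (z >= z_bot) & (z <= z_top)
    dz = np.diff(z[m])
    return np.sum(0.5 * (prod[m][1:] + prod[m][:-1]) * dz)

# Hypothetical column through a rotating updraft (illustrative values only).
z = np.array([2000.0, 3000.0, 4000.0, 5000.0])   # height AGL (m)
w = np.array([12.0, 20.0, 25.0, 18.0])           # vertical velocity (m/s)
zeta = np.array([0.008, 0.012, 0.010, 0.006])    # vertical vorticity (1/s)
print(f"2-5-km UH: {updraft_helicity(z, w, zeta):.0f} m2 s-2")
```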

Free access
John L. Cintineo, Michael J. Pavolonis, Justin M. Sieglaff, Lee Cronce, and Jason Brunner

Abstract

Severe convective storms are hazardous to both life and property, and thus their accurate and timely prediction is imperative. In response to this critical need, and to help fulfill the mission of the National Oceanic and Atmospheric Administration (NOAA), NOAA and the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin (UW) have developed NOAA ProbSevere, an operational short-term forecasting subsystem within the Multi-Radar Multi-Sensor (MRMS) system that provides storm-based probabilistic guidance on severe convective hazards. ProbSevere extracts and integrates pertinent data from a variety of meteorological sources via multiplatform, multiscale storm identification and tracking in order to compute severe hazard probabilities in a statistical framework using naïve Bayesian classifiers. Version 1 of ProbSevere (PSv1) employed a single model, the probability of any severe hazard, trained on U.S. National Weather Service (NWS) severe criteria. Version 2 (PSv2) implements four models: three naïve Bayesian classifiers trained for specific hazards, namely 1) severe hail, 2) severe straight-line wind gusts, and 3) tornadoes, plus a combined model for any of these hazards, which takes the maximum probability of the three classifiers. This paper gives an overview of the ProbSevere system and details the construction and selection of predictors for the models. An evaluation of the four models demonstrated that PSv2 is more skillful than PSv1 for each severe hazard, with higher critical success index scores, and that the optimal probability threshold varies by region of the United States. The discussion highlights PSv2 in NOAA’s Hazardous Weather Testbed (HWT) and current and future research on convective nowcasting.
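
The naïve Bayesian combination at the core of classifiers like these can be sketched in a few lines: under an assumption of conditionally independent predictors, the posterior severe probability is proportional to the prior times the product of per-predictor likelihoods. The predictor set and likelihood values below are hypothetical, not ProbSevere's trained distributions.

```python
import numpy as np

def naive_bayes_prob(prior, lik_severe, lik_null):
    """Posterior P(severe | predictors) from per-predictor class-conditional
    likelihoods, assuming conditional independence (naive Bayes)."""
    post_s = prior * np.prod(lik_severe)          # P(severe) * prod P(f_i | severe)
    post_n = (1.0 - prior) * np.prod(lik_null)    # P(null) * prod P(f_i | null)
    return post_s / (post_s + post_n)

# Example: three predictors (e.g., radar hail size, satellite growth rate,
# lightning flash rate), each with a likelihood from trained histograms.
prior = 0.05                                  # climatological severe frequency
lik_severe = np.array([0.30, 0.25, 0.40])     # P(f_i | severe), hypothetical
lik_null = np.array([0.05, 0.10, 0.08])       # P(f_i | not severe), hypothetical
print(f"P(severe hazard) = {naive_bayes_prob(prior, lik_severe, lik_null):.2f}")
```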

Free access
Michael F. Sessa and Robert J. Trapp

Abstract

In a previous study, idealized model simulations of supercell thunderstorms were used to demonstrate support for the hypothesis that wide, intense tornadoes should form more readily out of wide, rotating updrafts. Observational data were used herein to test the generality of this hypothesis, especially for tornado-bearing convective morphologies such as quasi-linear convective systems (QLCSs) and within environments such as those found in the southeastern United States during boreal spring and autumn. A new radar dataset was assembled that focuses explicitly on the pretornadic characteristics of the mesocyclone, such as width and differential velocity; the pretornadic focus eliminates the effects of the tornado itself on the mesocyclone characteristics. GR2Analyst was used to manually analyze 102 tornadic events during the period 27 April 2011–1 May 2019. The corresponding tornadoes had damage (EF) ratings ranging from EF0 to EF5, and all occurred within 100 km of a WSR-88D. A key finding is that the linear regression between the mean pretornadic mesocyclone width and the EF rating of the corresponding tornado yields a coefficient of determination (R²) of 0.75. This linear relationship is stronger for discrete (supercell) cases (R² = 0.82) and weaker for QLCS cases (R² = 0.37). Overall, we have found that pretornadic mesocyclone width tends to be a persistent, relatively time-invariant characteristic that is a good predictor of potential tornado intensity. In contrast, pretornadic mesocyclone intensity (differential velocity) tends to exhibit considerable time variability and thus offers less reliability in anticipating tornado intensity.
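
The quoted R² values come from a simple linear regression of EF rating on mean pretornadic mesocyclone width. A sketch of that computation follows, with made-up width/EF pairs standing in for the 102-event dataset.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical mean pretornadic mesocyclone widths (km) and the EF ratings
# of the corresponding tornadoes; illustrative values, not the study's data.
width_km = np.array([2.1, 2.8, 3.5, 4.2, 5.0, 5.9, 6.8, 7.5])
ef_rating = np.array([0, 0, 1, 2, 2, 3, 4, 5])

fit = linregress(width_km, ef_rating)          # least-squares linear fit
print(f"EF ~ {fit.slope:.2f} * width + {fit.intercept:.2f}")
print(f"R^2 = {fit.rvalue**2:.2f}")            # coefficient of determination
```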

Free access
Paula Maldonado, Juan Ruiz, and Celeste Saulo

Abstract

Specification of suitable initial conditions to accurately forecast high-impact weather events associated with intense thunderstorms still poses a significant challenge for convective-scale forecasting. Radar data assimilation has shown encouraging results in producing accurate estimates of the state of the atmosphere at the mesoscale, as it combines high-spatiotemporal-resolution observations with convection-permitting numerical weather prediction models. However, many open questions remain regarding the configuration of state-of-the-art data assimilation systems at the mesoscale and their potential impact on short-range weather forecasts. In this work, several observing system simulation experiments of a mesoscale convective system were performed to assess the sensitivity of the local ensemble transform Kalman filter to both relaxation-to-prior-spread (RTPS) inflation and horizontal localization of the error covariance matrix. Realistic large-scale forcing and model errors were taken into account in the simulation of reflectivity and Doppler velocity observations. Overall, the most accurate analyses in terms of RMSE were produced with a relatively small horizontal localization cutoff radius (~3.6–7.3 km) and a large RTPS inflation parameter (~0.9–0.95). Additionally, horizontal localization had a larger impact on the short-range ensemble forecasts than inflation did, almost doubling the lead times over which the benefit of initializing the forecast from a more accurate state persisted.
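
For context, RTPS inflation rescales the analysis perturbations so that the ensemble spread relaxes toward the prior spread by a factor α. A minimal sketch on a toy ensemble follows; the ensemble sizes and the artificial spread shrinkage are illustrative assumptions, not the experiments' configuration.

```python
import numpy as np

def rtps_inflate(prior_ens, post_ens, alpha):
    """Relaxation-to-prior-spread inflation: scale posterior perturbations by
    1 + alpha * (sigma_b - sigma_a) / sigma_a, elementwise in the state.
    prior_ens, post_ens: arrays of shape (n_members, n_state); alpha in [0, 1]."""
    sigma_b = prior_ens.std(axis=0, ddof=1)        # prior (background) spread
    sigma_a = post_ens.std(axis=0, ddof=1)         # analysis spread
    mean_a = post_ens.mean(axis=0)
    factor = 1.0 + alpha * (sigma_b - sigma_a) / sigma_a
    return mean_a + (post_ens - mean_a) * factor   # inflated analysis ensemble

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=(40, 100))       # toy 40-member ensemble
post = prior * 0.4                                 # assimilation shrank the spread
inflated = rtps_inflate(prior, post, alpha=0.9)    # alpha ~0.9, as tuned above
print("analysis spread:", post.std(axis=0, ddof=1).mean())
print("inflated spread:", inflated.std(axis=0, ddof=1).mean())
```

With α = 0.9 the inflated spread lands at roughly 0.9σ_b + 0.1σ_a, i.e., most of the way back toward the prior spread, which matches the large inflation parameters found optimal in the study.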

Free access