Weather and Forecasting

Showing items 31–40 of 2,678.
Ethan Collins, Zachary J. Lebo, Robert Cox, Christopher Hammer, Matthew Brothers, Bart Geerts, Robert Capella, and Sarah McCorkle

Abstract

Strong wind events cause significant societal damage ranging from loss of property and disruption of commerce to loss of life. Over portions of the United States, the strongest winds occur in the cold season and may be driven by interactions with the terrain (downslope winds, gap flow, and mountain wave activity). In the first part of this two-part series, we evaluate the High-Resolution Rapid Refresh (HRRR) model wind speed and gust forecasts for the 2016–22 winter months over Wyoming and Colorado, an area prone to downslope windstorms and gap flows due to its complex topography. The HRRR model exhibits a positive bias for low wind speeds/gusts and a large negative bias for strong wind speeds/gusts. In general, the model misses many strong wind events, but when it does predict strong winds, there is a high false alarm probability. An analysis of proxies for surface winds is conducted. Specifically, 700- and 850-mb (1 mb = 1 hPa) geopotential height gradients are found to be good proxies for strong wind speeds and gusts at two wind-prone locations in Wyoming. Given the good agreement between low-level height gradients and surface wind speeds yet a strong negative bias for strong wind speeds and gusts, there is a potential shortcoming in the boundary layer physics in the HRRR model with regard to predicting strong winds over complex terrain, which is the focus of the second part of this two-part study. Last, the sites with the largest strong wind speed bias are found to mostly sit on the leeward side of high mountains, suggesting that the HRRR model performs poorly in the prediction of downslope windstorms.
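The conditional bias pattern described above (positive for weak winds, negative for strong winds) can be sketched as a simple paired-sample computation. This is an illustrative sketch only, not the authors' verification code; the 15 m s−1 threshold and the sample values are hypothetical.

```python
# Illustrative conditional wind speed bias: mean forecast-minus-observed
# speed, computed separately for weak and strong observed winds.
# Threshold and data are hypothetical, not from the study.
import numpy as np

def conditional_bias(forecast, observed, threshold=15.0):
    f, o = np.asarray(forecast, float), np.asarray(observed, float)
    weak, strong = o < threshold, o >= threshold
    return (f[weak] - o[weak]).mean(), (f[strong] - o[strong]).mean()

weak_bias, strong_bias = conditional_bias(
    forecast=[6.0, 8.0, 14.0, 12.0], observed=[4.0, 6.0, 20.0, 18.0])
# weak_bias > 0 (weak winds overforecast); strong_bias < 0 (strong winds
# underforecast), the signature the abstract reports for the HRRR
```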

Significance Statement

We investigate the performance of the High-Resolution Rapid Refresh (HRRR) model with respect to strong wintertime wind speeds and gusts over the complex terrain of Wyoming and Colorado. We show that the overall performance of the HRRR model is low with regard to strong wind speed and wind gust forecasts across the investigated winter seasons, with a large negative bias in predicted strong wind speeds and gusts and a small positive bias for weak wind speeds and gusts. The largest biases are found to be on the leeward side of high mountains, indicating poor prediction of downslope winds. This study also utilizes National Weather Service forecasting metrics to understand their performance with respect to strong wind forecasts, and we find that they provide skill in forecasting these events.

Restricted access
Ethan Collins, Zachary J. Lebo, Robert Cox, Christopher Hammer, Matthew Brothers, Bart Geerts, Robert Capella, and Sarah McCorkle

Abstract

Strong wind events can cause severe economic loss and societal impacts. Peak winds over Wyoming and Colorado occur during the wintertime months, and in the first part of this two-part series, it was shown that the High-Resolution Rapid Refresh (HRRR) model displays large negative biases with respect to strong wind events. In this part of the study, we address two questions: 1) does increasing the horizontal resolution improve the representation of strong wind events over this region, and 2) are the biases in HRRR-forecasted winds related to the selected planetary boundary layer (PBL), surface layer (SL), and/or land surface model (LSM) parameterizations? We conduct Weather Research and Forecasting (WRF) Model simulations to address these two main questions. Increasing the horizontal resolution leads to a small improvement in the simulation of strong wind speeds over the complex terrain of Wyoming and Colorado. In general, changes to the PBL, SL, and LSM parameterizations produce much larger changes in simulated wind speeds than increasing the model resolution alone. Specifically, changing from the Mellor–Yamada–Nakanishi–Niino scheme to the Mellor–Yamada–Janjić PBL and SL schemes results in nearly no change in the r² values, but there is a decrease in the magnitude of the strong wind speed bias (from −12.52 to −10.16 m s−1). We attribute these differences to differences in the diagnosis of surface wind speeds and mixing in the boundary layer. Further analysis is conducted to determine the value of 1-km forecasts of strong winds compared with wind speed diagnostics commonly used by the National Weather Service.

Significance Statement

Motivated by prior studies showing low skill in the prediction of strong winds using state-of-the-art weather forecast models, in this study, we aim to investigate two questions: 1) does increasing the horizontal resolution improve the prediction of strong wind events over the complex terrain of Wyoming and Colorado, and 2) are the biases in High-Resolution Rapid Refresh (HRRR) forecasted winds related to the selected planetary boundary layer, surface layer, and/or land surface model parameterizations? We find that increasing the horizontal resolution provides a slight improvement in the prediction of strong winds. Further, considerable improvement in the prediction of strong winds is found for varying boundary layer, surface layer, and land surface parameterizations.

Restricted access
Gregory J. Stumpf and Sarah M. Stough

Abstract

Legacy National Weather Service verification techniques, when applied to current static severe convective warnings, exhibit limitations, particularly in accounting for the precise spatial and temporal aspects of warnings and severe convective events. Consequently, they are not particularly well suited for application to some proposed future National Weather Service warning delivery methods considered under the Forecasting a Continuum of Environmental Threats (FACETs) initiative. These methods include threats-in-motion (TIM), wherein warning polygons move nearly continuously with convective hazards, and probabilistic hazard information (PHI), a concept that involves augmenting warnings with rapidly updating probabilistic plumes. A new geospatial verification method was developed and evaluated, by which warnings and observations are placed on equivalent grids within a common reference frame, with each grid cell being represented as a hit, miss, false alarm, or correct null for each minute. New measures are computed, including false alarm area and location-specific lead time, departure time, and false alarm time. Using the 27 April 2011 tornado event, we applied the TIM and PHI warning techniques to demonstrate the benefits of rapidly updating warning areas, showcase the application of the geospatial verification method within this novel warning framework, and highlight the impact of varying probabilistic warning thresholds on warning performance. Additionally, the geospatial verification method was tested on a storm-based warning dataset (2008–22) to derive annual, monthly, and hourly statistics.
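The core of the geospatial method described above is a minute-by-minute classification of each grid cell into one of four contingency categories. The sketch below is a hedged illustration of that idea, not the authors' implementation; the grid contents are hypothetical.

```python
# Hedged sketch of per-minute geospatial verification: warning and
# observation masks on a common grid, each cell scored as hit, miss,
# false alarm, or correct null for that minute. Not the authors' code.
import numpy as np

HIT, MISS, FALSE_ALARM, CORRECT_NULL = 0, 1, 2, 3

def score_minute(warned, observed):
    """warned, observed: boolean 2D arrays on the same grid for one minute."""
    out = np.full(warned.shape, CORRECT_NULL, dtype=int)
    out[warned & observed] = HIT           # warned and hazard present
    out[~warned & observed] = MISS         # hazard present, no warning
    out[warned & ~observed] = FALSE_ALARM  # warned, no hazard
    return out

warned = np.array([[True, True], [False, False]])
observed = np.array([[True, False], [True, False]])
grid = score_minute(warned, observed)
```

Accumulating these per-minute grids over an event is what enables the location-specific measures named above (lead time, departure time, false alarm time): each cell carries its own time series of categories.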

Open access
Andrew Hazelton, Xiaomin Chen, Ghassan J. Alaka Jr., George R. Alvey III, Sundararaman Gopalakrishnan, and Frank Marks

Abstract

Understanding how model physics impact tropical cyclone (TC) structure, motion, and evolution is critical for the development of TC forecast models. This study examines the impacts of microphysics and planetary boundary layer (PBL) physics on forecasts using the Hurricane Analysis and Forecast System (HAFS), which is newly operational in 2023. The “HAFS-B” version is specifically evaluated, and three sensitivity tests (for over 400 cases in 15 Atlantic TCs) are compared with retrospective HAFS-B runs. Sensitivity tests are generated by 1) changing the microphysics in HAFS-B from Thompson to GFDL, 2) turning off the TC-specific PBL modifications that have been implemented in operational HAFS-B, and 3) combining the PBL and microphysics modifications. The forecasts are compared through standard verification metrics and through examination of composite structure. Verification results show that Thompson microphysics slightly degrades the days 3–4 forecast track in HAFS-B, but improves forecasts of long-term intensity. The TC-specific PBL changes lead to a reduction in a negative intensity bias and improvement in rapid intensification (RI) skill, but cause some degradation in prediction of 34-kt (1 kt ≈ 0.51 m s−1) wind radii. Composites illustrate slightly deeper vortices in runs with the Thompson microphysics, and stronger PBL inflow with the TC-specific PBL modifications. These combined results demonstrate the critical role of model physics in regulating TC structure and intensity, and point to the need to continue to develop improvements to HAFS physics. The study also shows that the combination of both PBL and microphysics modifications (which are both included in one of the two versions of HAFS in the first operational implementation) leads to the best overall results.

Significance Statement

A new hurricane model, the Hurricane Analysis and Forecast System (HAFS), is being introduced for operational prediction during the 2023 hurricane season. One of the most important parts of any forecast model is the “physics parameterizations,” or approximations of physical processes that govern things like turbulence, cloud formation, etc. In this study, we tested these approximations in one configuration of HAFS, HAFS-B. Specifically, we looked at two different versions of the microphysics (modeling the growth of water and ice in clouds) and boundary layer physics (the approximations for turbulence in the lowest level of the atmosphere). We found that both of these sets of model physics had important effects on the forecasts from HAFS. The microphysics had notable impacts on the track forecasts, and also changed the vertical depth of the model hurricanes. The boundary layer physics, including some of our changes based on observed hurricanes and turbulence-resolving models, helped the model better predict rapid intensification (periods where the wind speed increases quickly). Work is ongoing to improve the model physics for better forecasts of rapid intensification and overall storm structure, including storm size. The study also shows that the combination of both PBL and microphysics modifications overall leads to the best results and thus was used as one of the two first operational implementations of HAFS.

Restricted access
Jingyi Wen, Zhiyong Meng, Lanqiang Bai, and Ruilin Zhou

Abstract

This study documents the features of tornadoes, their parent storms, and the environments of the only two documented tornado outbreak events in China. The two events were associated with Tropical Cyclone (TC) Yagi on 12 August 2018 with 11 tornadoes and with an extratropical cyclone (EC) on 11 July 2021 (EC 711) with 13 tornadoes. Most tornadoes in TC Yagi were spawned from discrete minisupercells, while a majority of tornadoes in EC 711 were produced from supercells embedded in QLCSs or cloud clusters. In both events, the high-tornado-density area was better collocated with the K index rather than MLCAPE, and with entraining rather than non-entraining parameters, possibly due to their sensitivity to midlevel moisture. EC 711 had a larger displacement between maximum entraining CAPE and vertical wind shear than TC Yagi, with the maximum entraining CAPE better collocated with the high-tornado-density area than vertical wind shear. Relative to TC Yagi, EC 711 had stronger entraining CAPE, 0–1-km storm-relative helicity, 0–6-km vertical wind shear, and composite parameters such as an entraining significant tornado parameter, which caused its generally stronger tornado vortex signatures (TVSs) and mesocyclones with a larger diameter and longer life span. The composite parameters of these two events did not differ significantly from U.S. statistics. Although obvious dry air intrusions were observed in both events, no apparent impact was observed on the potential for tornado outbreak in EC 711. In TC Yagi, however, the dry air intrusion may have aided the tornado outbreak through cloudiness erosion and thus the increase in surface temperature and low-level lapse rate.

Restricted access
Katherine E. McKeown, Casey E. Davenport, Matthew D. Eastin, Sarah M. Purpura, and Roger R. Riggin IV

Abstract

The evolution of supercell thunderstorms traversing complex terrain is not well understood and remains a short-term forecast challenge across the Appalachian Mountains of the eastern United States. Although case studies have been conducted, there has been no large multicase observational analysis focusing on the central and southern Appalachians. To address this gap, we analyzed 62 isolated warm-season supercells that occurred in this region. Each supercell was categorized as either crossing (∼40%) or noncrossing (∼60%) based on their maintenance of supercellular structure while traversing prominent terrain. The structural evolution of each storm was analyzed via operationally relevant parameters extracted from WSR-88D radar data. The most significant differences in radar-observed structure among storm categories were associated with the mesocyclone; crossing storms exhibited stronger, wider, and deeper mesocyclones, along with more prominent and persistent hook echoes. Crossing storms also moved faster. Among the supercells that crossed the most prominent peaks and ridges, significant increases in base reflectivity, vertically integrated liquid, echo tops, and mesocyclone intensity/depth were observed, in conjunction with more frequent large hail and tornado reports, as the storms ascended windward slopes. Then, as the supercells descended leeward slopes, significant increases in mesocyclone depth and tornado frequency were observed. Such results reinforce the notion that supercell evolution can be modulated substantially by passage through and over complex terrain.

Significance Statement

Understanding of thunderstorm evolution and severe weather production in regions of complex terrain remains limited, particularly for storms with rotating updrafts known as supercell thunderstorms. This study provides a systematic analysis of numerous warm season supercell storms that moved through the central and southern Appalachian Mountains. We focus on operationally relevant radar characteristics and differences among storms that maintain supercellular structure as they traverse the terrain (crossing) versus those that do not (noncrossing). Our results identify radar characteristics useful in distinguishing between crossing and noncrossing storms, along with typical supercell evolution and severe weather production as storms cross the more prominent peaks and ridges of the central and southern Appalachian Mountains.

Restricted access
Xi Liu, Yu Zheng, Xiaoran Zhuang, Yaqiang Wang, Xin Li, Zhang Bei, and Wenhua Zhang

Abstract

The accurate prediction of short-term rainfall, and in particular the forecast of hourly heavy rainfall (HHR) probability, remains challenging for numerical weather prediction (NWP) models. Here, we introduce a deep learning (DL) model, PredRNNv2-AWS, a convolutional recurrent neural network designed for deterministic short-term rainfall forecasting. This model integrates surface rainfall observations and atmospheric variables simulated by the Precision Weather Analysis and Forecasting System (PWAFS). Our DL model produces realistic hourly rainfall forecasts for the next 13 h. Quantitative evaluations show that the use of surface rainfall observations as one of the predictors achieves higher performance (threat score), with 263% and 186% relative improvements over NWP simulations for the first 3 h and the entire forecast period, respectively, at a threshold of 5 mm h−1. Although the optical-flow method also performs well in the initial hours, its predictions worsen quickly in the later hours compared to the other experiments. The machine learning model LightGBM is then integrated to classify HHR from the predicted hourly rainfall of PredRNNv2-AWS. The results show that PredRNNv2-AWS can better reflect actual HHR conditions compared with PredRNNv2 and PWAFS. A representative case demonstrates the superiority of PredRNNv2-AWS in predicting the evolution of the rainy system, which substantially improves the accuracy of the HHR prediction. A test case involving the extreme flood event in Zhengzhou exemplifies the generalizability of our proposed model. Our model offers a reliable framework to predict target variables that can be obtained from numerical simulations and observations, e.g., visibility, wind power, solar energy, and air pollution.
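For reference, the threat score (critical success index) used in the evaluation above reduces to a simple contingency count at a fixed threshold. The sketch below is illustrative only; the arrays are hypothetical and not the study's data.

```python
# Illustrative threat score (CSI) for rainfall exceedance at a 5 mm/h
# threshold, the metric named in the abstract. Data are hypothetical.
import numpy as np

def threat_score(forecast, observed, threshold=5.0):
    f = np.asarray(forecast) >= threshold
    o = np.asarray(observed) >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    return hits / (hits + misses + false_alarms)

ts = threat_score([6.0, 2.0, 7.5, 1.0], [5.5, 6.0, 7.0, 0.5])
```

A relative improvement such as the 263% quoted above would then be `(ts_model / ts_baseline - 1) * 100` for two such scores.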

Open access
John Krause and Vinzent Klaus

Abstract

A novel differential reflectivity (ZDR) column detection method, the hotspot technique, has been developed. Utilizing constant-altitude plan position indicators (CAPPIs) of ZDR, reflectivity, and a proxy for circular depolarization ratio at the height of the −10°C isotherm, the method identifies the location of the base of the ZDR column rather than the entire ZDR column depth. The new method is compared to two other existing ZDR column detection methods and shown to be an improvement in regions where there is a ZDR bias.

Significance Statement

Thunderstorm updrafts are the area of a storm where precipitation grows, electrification is initiated, and tornadoes may form. Therefore, accurate detection and quantification of updraft properties using weather radar data is of great importance for assessing a storm’s damage potential in real time. Current methods to automatically detect updraft areas, however, are error-prone due to common deficiencies in radar measurements. We present a novel algorithmic approach to identify storm updrafts that eliminates some of the known shortcomings of existing methods. In the future, our method could be used to develop new hail detection algorithms, or to improve short-term weather forecasting models.

Restricted access
H. Christophersen, J. Nachamkin, and W. Davis

Abstract

This study assesses the accuracy of the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS) forecasts for clouds within stable and unstable environments (hereafter referred to as “stable” and “unstable” clouds). This evaluation is conducted by comparing these forecasts against satellite retrievals through a combination of traditional, spatial, and object-based methods. To facilitate this assessment, the Model Evaluation Tools (MET) community tool is employed. The findings underscore the significance of fine-tuning the MET parameters to achieve a more accurate representation of the features under scrutiny. The study’s results reveal that when employing traditional pointwise statistics (e.g., frequency bias and equitable threat score), there is consistency in the results whether calculated from Method for Object-Based Diagnostic Evaluation (MODE)-based objects or derived from the complete fields. Furthermore, the object-based statistics offer valuable insights, indicating that COAMPS generally predicts cloud object locations accurately, though the spread of these predicted locations tends to increase with time. COAMPS tends to overpredict the object area for unstable clouds while underpredicting it for stable clouds over time. These results are in alignment with the traditional pointwise bias scores for the entire grid. Overall, the spatial metrics provided by the object-based verification methods emerge as crucial and practical tools for the validation of cloud forecasts.

Significance Statement

As the general Navy meteorological and oceanographic (METOC) community engages in collaboration with the broader scientific community, our goal is to harness community tools like MET for the systematic evaluation of weather forecasts, with a specific focus on variables crucial to the Navy. Clouds, given their significant impact on visibility, hold particular importance in our investigations. Cloud forecasts pose unique challenges, primarily attributable to the intricate physics governing cloud development and the complexity of representing these processes within numerical models. Cloud observations are also constrained by limitations, arising from both top-down satellite measurements and bottom-up ground-based measurements. This study illustrates that, with a comprehensive understanding of community tools, cloud forecasts can be consistently verified. This verification encompasses traditional evaluation methods, measuring general qualities such as bias and root-mean-squared error, as well as newer techniques like spatial and object-based methods designed to account for displacement errors.

Restricted access
Shu-Chih Yang, Yi-Pin Chang, Hsiang-Wen Cheng, Kuan-Jen Lin, Ya-Ting Tsai, Jing-Shan Hong, and Yu-Chi Li

Abstract

In this study, we investigate the impact of assimilating densely distributed Global Navigation Satellite System (GNSS) zenith total delay (ZTD) and surface station (SFC) data on the prediction of very short-term heavy rainfall associated with afternoon thunderstorm (AT) events in the Taipei basin. Under weak synoptic-scale conditions, four cases characterized by different rainfall features are chosen for investigation. Experiments are conducted with a 3-h assimilation period, followed by 3-h forecasts. Also, various experiments are performed to explore the sensitivity of AT initialization. Data assimilation experiments are conducted with a convective-scale Weather Research and Forecasting–local ensemble transform Kalman filter (WRF-LETKF) system. The results show that ZTD assimilation can provide effective moisture corrections. Assimilating SFC wind and temperature data could additionally improve the near-surface convergence and cold bias, further increasing the impact of ZTD assimilation. Frequently assimilating SFC data every 10 min provides the best forecast performance, especially for rainfall intensity predictions. Such a benefit could still be identified in the earlier forecast initialized 2 h before the start of the event. Detailed analysis of a case on 22 July 2019 reveals that frequent assimilation provides initial conditions that can lead to fast vertical expansion of the convection and trigger an intense AT. This study proposes a new metric using the fraction skill score to construct an informative diagram that evaluates the location and intensity of heavy rainfall forecasts and displays clear characteristics of the different cases. Issues of how assimilation strategies affect the impact of ground-based observations in a convective ensemble data assimilation system and AT development are also discussed.
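The fraction skill score underlying the diagram mentioned above compares neighborhood exceedance fractions rather than pointwise matches. The following is a hedged sketch of a standard FSS computation, not the authors' metric or configuration; the grids, the 5 mm h−1 threshold, and the 3×3 neighborhood are illustrative choices.

```python
# Hedged sketch of a neighborhood fraction skill score (FSS).
# Grids, threshold, and neighborhood size are illustrative only.
import numpy as np

def neighborhood_fractions(binary, n):
    """Fraction of exceedance cells in an n x n (n odd) box around each cell."""
    padded = np.pad(binary.astype(float), n // 2)  # zero padding at edges
    out = np.empty(binary.shape, dtype=float)
    for i in range(binary.shape[0]):
        for j in range(binary.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(forecast, observed, threshold=5.0, n=3):
    pf = neighborhood_fractions(np.asarray(forecast) >= threshold, n)
    po = neighborhood_fractions(np.asarray(observed) >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

obs = np.zeros((5, 5)); obs[2, 2] = 6.0
shifted = np.zeros((5, 5)); shifted[2, 3] = 6.0
score = fss(shifted, obs)  # > 0: neighborhood scoring rewards a near miss
```

Because the displaced forecast still earns partial credit, FSS lends itself to a joint location-and-intensity diagram of the kind the abstract proposes.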

Significance Statement

In this study, we investigate the impact of frequently assimilating densely distributed ground-based observations on predicting four afternoon thunderstorm events in the Taipei basin. While assimilating GNSS-ZTD data can improve the moisture fields for initializing convection, assimilating surface station data improves the prediction of rainfall location and intensity, particularly when surface data are assimilated at a very high frequency of 10 min.

Open access