Browse

You are looking at 1–10 of 3,079 items for: Weather and Forecasting

Gregory J. Stumpf and Sarah M. Stough

Abstract

Legacy National Weather Service verification techniques, when applied to current static severe convective warnings, exhibit limitations, particularly in accounting for the precise spatial and temporal aspects of warnings and severe convective events. Consequently, they are not particularly well suited for application to some proposed future National Weather Service warning delivery methods considered under the Forecasting a Continuum of Environmental Threats (FACETs) initiative. These methods include threats-in-motion (TIM), wherein warning polygons move nearly continuously with convective hazards, and probabilistic hazard information (PHI), a concept that involves augmenting warnings with rapidly updating probabilistic plumes. A new geospatial verification method was developed and evaluated, by which warnings and observations are placed on equivalent grids within a common reference frame, with each grid cell being represented as a hit, miss, false alarm, or correct null for each minute. New measures are computed, including false alarm area and location-specific lead time, departure time, and false alarm time. Using the 27 April 2011 tornado event, we applied the TIM and PHI warning techniques to demonstrate the benefits of rapidly updating warning areas, showcase the application of the geospatial verification method within this novel warning framework, and highlight the impact of varying probabilistic warning thresholds on warning performance. Additionally, the geospatial verification method was tested on a storm-based warning dataset (2008–22) to derive annual, monthly, and hourly statistics.
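
As a rough illustration of the grid-cell bookkeeping described above, the NumPy sketch below scores each cell of a common grid for one minute and derives location-specific lead time and false alarm area from boolean warning and observation stacks. The function names, array layout (minutes, ny, nx), and NaN conventions are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def first_true_minute(stack):
    """Minute index of the first True along the time axis (axis 0); NaN where never True."""
    ever = stack.any(axis=0)
    first = stack.argmax(axis=0).astype(float)
    first[~ever] = np.nan
    return first

def verify_minute(warned, observed):
    """Classify every grid cell for a single minute.
    warned, observed: boolean 2D arrays on the same common grid."""
    return {
        "hit":          int(np.sum(warned & observed)),
        "miss":         int(np.sum(~warned & observed)),
        "false_alarm":  int(np.sum(warned & ~observed)),
        "correct_null": int(np.sum(~warned & ~observed)),
    }

def location_lead_time(warn_stack, obs_stack):
    """Location-specific lead time (minutes) per grid cell: first observed
    hazard minute minus first warned minute; NaN where either never occurs
    or the warning arrives after the hazard."""
    lead = first_true_minute(obs_stack) - first_true_minute(warn_stack)
    return np.where(lead >= 0, lead, np.nan)

def false_alarm_area(warn_stack, obs_stack, cell_area_km2=1.0):
    """Total area (km^2) that was warned at some time but never observed."""
    warned_ever = warn_stack.any(axis=0)
    observed_ever = obs_stack.any(axis=0)
    return float(np.sum(warned_ever & ~observed_ever)) * cell_area_km2
```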

Open access
Xi Liu, Yu Zheng, Xiaoran Zhuang, Yaqiang Wang, Xin Li, Zhang Bei, and Wenhua Zhang

Abstract

The accurate prediction of short-term rainfall, and in particular the forecast of hourly heavy rainfall (HHR) probability, remains challenging for numerical weather prediction (NWP) models. Here, we introduce a deep learning (DL) model, PredRNNv2-AWS, a convolutional recurrent neural network designed for deterministic short-term rainfall forecasting. This model integrates surface rainfall observations and atmospheric variables simulated by the Precision Weather Analysis and Forecasting System (PWAFS). Our DL model produces realistic hourly rainfall forecasts for the next 13 h. Quantitative evaluations show that using surface rainfall observations as one of the predictors yields higher threat scores, with 263% and 186% relative improvements over NWP simulations for the first 3 h and the full forecast period, respectively, at a threshold of 5 mm h−1. Although the optical-flow method also performs well in the initial hours, its predictions worsen quickly in the later hours compared with the other experiments. The machine learning model LightGBM is then integrated to classify HHR from the hourly rainfall predicted by PredRNNv2-AWS. The results show that PredRNNv2-AWS reflects actual HHR conditions better than PredRNNv2 and PWAFS. A representative case demonstrates the superiority of PredRNNv2-AWS in predicting the evolution of the rainy system, which substantially improves the accuracy of the HHR prediction. A test case involving the extreme flood event in Zhengzhou exemplifies the generalizability of our proposed model. Our model offers a reliable framework to predict target variables that can be obtained from numerical simulations and observations, e.g., visibility, wind power, solar energy, and air pollution.
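
The threat score used in the evaluation is the standard contingency-table metric; a minimal sketch of how it could be computed on gridded rain rates at the 5 mm h−1 threshold is shown below. The relative-improvement convention in the second function is an assumption about how the 263% and 186% figures might be derived, not something stated in the abstract.

```python
import numpy as np

def threat_score(forecast, observed, threshold=5.0):
    """Threat score (critical success index) of a gridded hourly rainfall
    forecast at a rain-rate threshold in mm/h."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    denom = hits + misses + false_alarms
    return float(hits) / denom if denom > 0 else np.nan

def relative_improvement(ts_model, ts_reference):
    """Relative improvement of one threat score over a reference score, in percent."""
    return 100.0 * (ts_model - ts_reference) / ts_reference
```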

Open access
Shu-Chih Yang, Yi-Pin Chang, Hsiang-Wen Cheng, Kuan-Jen Lin, Ya-Ting Tsai, Jing-Shan Hong, and Yu-Chi Li

Abstract

In this study, we investigate the impact of assimilating densely distributed Global Navigation Satellite System (GNSS) zenith total delay (ZTD) and surface station (SFC) data on the prediction of very short-term heavy rainfall associated with afternoon thunderstorm (AT) events in the Taipei basin. Under weak synoptic-scale conditions, four cases characterized by different rainfall features are chosen for investigation. Experiments are conducted with a 3-h assimilation period followed by 3-h forecasts, and additional experiments explore the sensitivity of AT initialization. Data assimilation experiments are conducted with a convective-scale Weather Research and Forecasting–local ensemble transform Kalman filter (WRF-LETKF) system. The results show that ZTD assimilation can provide effective moisture corrections. Assimilating SFC wind and temperature data can additionally improve the near-surface convergence and cold bias, further increasing the impact of ZTD assimilation. Assimilating SFC data frequently, every 10 min, provides the best forecast performance, especially for rainfall intensity predictions. Such a benefit can still be identified in an earlier forecast initialized 2 h before the start of the event. Detailed analysis of a case on 22 July 2019 reveals that frequent assimilation provides initial conditions that lead to rapid vertical growth of the convection and trigger an intense AT. This study also proposes a new metric based on the fraction skill score to construct an informative diagram that evaluates the location and intensity of heavy rainfall forecasts and displays clear characteristics of the different cases. How assimilation strategies affect the impact of ground-based observations in a convective ensemble data assimilation system and AT development is also discussed.
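
The new diagram mentioned above is built from the fraction skill score (FSS). The sketch below shows only the standard FSS computation for one forecast–observation pair on a neighborhood window; the function names and the SciPy-based neighborhood smoothing are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, window):
    """Standard fraction skill score for one forecast-observation pair.
    forecast, observed: 2D rainfall grids (e.g., mm accumulations).
    threshold: event threshold (e.g., a heavy-rain amount).
    window: neighborhood width in grid points."""
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window, mode="constant")
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window, mode="constant")
    mse = np.mean((f_frac - o_frac) ** 2)          # mean squared fraction error
    mse_ref = np.mean(f_frac ** 2 + o_frac ** 2)   # reference (no-skill) error
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```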

Significance Statement

In this study, we investigate the impact of frequently assimilating densely distributed ground-based observations on predicting four afternoon thunderstorm events in the Taipei basin. While assimilating GNSS-ZTD data can improve the moisture fields for initializing convection, assimilating surface station data improves the prediction of rainfall location and intensity, particularly when surface data are assimilated at a very high frequency of 10 min.

Open access
Peter J. Marinescu, Daniel Abdi, Kyle Hilburn, Isidora Jankov, and Liao-Fan Lin

Abstract

Estimates of soil moisture from two National Oceanic and Atmospheric Administration (NOAA) models are compared to in situ observations. The estimates are from a high-resolution atmospheric model with a land surface model [High-Resolution Rapid Refresh (HRRR) model] and a hydrologic model from the NOAA Climate Prediction Center (CPC). Both models produce wetter soils in dry regions and drier soils in wet regions, as compared to the in situ observations. These soil moisture differences occur at most soil depths but are larger at deeper depths (100 cm below the surface). Comparisons of soil moisture variability are also assessed as a function of soil moisture regime. Both models have lower standard deviations than the in situ observations for all soil moisture regimes. The HRRR model’s soil moisture is better correlated with in situ observations for drier soils than for wetter soils, a trend that was not present in the CPC model comparisons. In terms of seasonality, soil moisture comparisons vary depending on the metric, time of year, and soil moisture regime. Therefore, consideration of both the seasonality and the soil moisture regime is needed to accurately determine model biases. These NOAA soil moisture estimates are used for a variety of forecasting and societal applications, and understanding their differences provides important context for their applications and can lead to model improvements.
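
A minimal sketch of the kind of regime-stratified comparison described above is given below: bias, standard deviation, and correlation of modeled versus in situ soil moisture are computed within dry, intermediate, and wet regimes. The regime thresholds and function names are placeholders for illustration, not the values used in the study.

```python
import numpy as np

def regime_statistics(model_sm, obs_sm, dry_max=0.20, wet_min=0.35):
    """Bias, standard deviation, and correlation of modeled vs. in situ
    volumetric soil moisture (m^3 m^-3), stratified by soil moisture regime.
    The regime boundaries are illustrative placeholders."""
    regimes = {
        "dry":          obs_sm < dry_max,
        "intermediate": (obs_sm >= dry_max) & (obs_sm < wet_min),
        "wet":          obs_sm >= wet_min,
    }
    stats = {}
    for name, mask in regimes.items():
        m, o = model_sm[mask], obs_sm[mask]
        if m.size < 2:
            continue  # not enough samples in this regime
        stats[name] = {
            "bias":        float(np.mean(m - o)),
            "model_std":   float(np.std(m)),
            "obs_std":     float(np.std(o)),
            "correlation": float(np.corrcoef(m, o)[0, 1]),
        }
    return stats
```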

Significance Statement

Soil moisture is an essential variable coupling the land surface to the atmosphere. Accurate estimates of soil moisture are important for forecasting near-surface temperature and moisture, predicting where clouds will form, and assessing drought and fire risks. There are multiple estimates of soil moisture available, and in this study, we compare soil moisture estimates from two different National Oceanic and Atmospheric Administration (NOAA) models to in situ observations. These comparisons include both soil moisture amount and variability and are conducted at several soil depths, in different soil moisture regimes, and for different seasons and years. This comprehensive approach allows for an accurate assessment of biases within these models that would be missed when conducting analyses more broadly.

Open access
Temple R. Lee, Sandip Pal, Ronald D. Leeper, Tim Wilson, Howard J. Diamond, Tilden P. Meyers, and David D. Turner

Abstract

The scientific literature has many studies evaluating numerical weather prediction (NWP) models. However, many of those studies average across a myriad of different atmospheric conditions and surface forcings, which can obscure when NWP models perform well and when they perform inadequately. To help isolate these different scenarios, we used observations from the U.S. Climate Reference Network (USCRN) obtained between 1 January and 31 December 2021 to distinguish among different near-surface atmospheric conditions (i.e., different near-surface heating rates (dT/dt), incoming shortwave radiation (SWd) regimes, and 5-cm soil moisture (SM05)) to evaluate the High-Resolution Rapid Refresh (HRRR) model, a 3-km model used for operational weather forecasting in the United States. On days with small (large) dT/dt, we found afternoon T biases of about 2°C (−1°C) and afternoon SWd biases of up to 170 W m−2 (100 W m−2), but negligible impacts on SM05 biases. On days with small (large) SWd, we found daytime temperature biases of about 3°C (−2.5°C) and daytime SWd biases of up to 190 W m−2 (80 W m−2). Whereas different SM05 had little impact on T and SWd biases, dry (wet) conditions had positive (negative) SM05 biases. We argue that the proper evaluation of weather forecasting models requires careful consideration of different near-surface atmospheric conditions and is critical to better identifying model deficiencies, which supports improvements to the parameterization schemes used therein. A similar, regime-specific model verification approach may also be used to help evaluate other geophysical models.
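
A hypothetical sketch of the regime-specific verification idea follows: days are split into "small" and "large" regimes of a stratifying variable (such as dT/dt or SWd), and the model-minus-observation bias is computed within each. The quantile-based regime definition and all names are assumptions for illustration; the study's own regime thresholds are not reproduced here.

```python
import numpy as np

def stratified_bias(model, obs, regime_var, quantile=0.25):
    """Mean model-minus-observation bias for days falling in the 'small' and
    'large' regimes of a stratifying variable (e.g., daytime dT/dt or SWd),
    with regimes defined here by illustrative quantiles of that variable.
    model, obs, regime_var: 1D arrays with one value per day."""
    lo, hi = np.quantile(regime_var, [quantile, 1.0 - quantile])
    small = regime_var <= lo
    large = regime_var >= hi
    return {
        "small_regime_bias": float(np.mean(model[small] - obs[small])),
        "large_regime_bias": float(np.mean(model[large] - obs[large])),
    }
```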

Open access
Stephanie S. Rushley, Matthew A. Janiga, William Crawford, Carolyn A. Reynolds, William Komaromi, and Justin McLay

Abstract

Accurately simulating the Madden–Julian oscillation (MJO), which dominates intraseasonal (30–90 day) variability in the tropics, is critical to predicting tropical cyclones (TCs) and other phenomena at extended-range (2–3 week) time scales. MJO biases in intensity and propagation speed are a common problem in global coupled models. For example, the MJO in the Navy Earth System Prediction Capability (ESPC), a global coupled model, has been shown to be too strong and too fast, which has implications for the MJO–TC relationship in that model. The biases and extended-range prediction skill in the operational version of the Navy ESPC are compared to experiments applying different versions of analysis correction-based additive inflation (ACAI) to reduce model biases. ACAI is a method in which time-mean and stochastic perturbations based on analysis increments are added to the model tendencies with the goals of reducing systematic error and accounting for model uncertainty. Over the extended boreal summer (May–November), ACAI reduces the root-mean-squared error (RMSE) and improves the spread–skill relationship of the total tropical and MJO-filtered OLR and low-level zonal winds. While ACAI improves skill in the environmental fields of low-level absolute vorticity, potential intensity, and vertical wind shear, it degrades the skill in the relative humidity, which increases the positive bias in the genesis potential index (GPI) in the operational Navy ESPC. Northern Hemisphere integrated TC genesis biases are reduced (increased number of TCs) in the ACAI experiments, which is consistent with the positive GPI bias in the ACAI simulations.
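
A loose sketch of the ACAI idea summarized above is given below: the time-mean analysis increment plus a stochastically drawn departure from that mean is converted to a tendency and added to the model tendency. The sampling, scaling, and window length are illustrative assumptions, not the Navy ESPC configuration.

```python
import numpy as np

def acai_perturbed_tendency(model_tendency, increment_archive, rng,
                            window_hours=6.0, stochastic_scale=1.0):
    """Sketch of analysis correction-based additive inflation (ACAI):
    add the time-mean analysis increment (converted to a tendency over an
    assumed assimilation window) plus a randomly drawn departure from that
    mean to the model tendency.
    increment_archive: array (n_samples, ...) of archived analysis increments.
    rng: a NumPy random Generator, e.g., np.random.default_rng()."""
    mean_inc = increment_archive.mean(axis=0)
    sample_inc = increment_archive[rng.integers(increment_archive.shape[0])]
    additive = mean_inc + stochastic_scale * (sample_inc - mean_inc)
    return model_tendency + additive / window_hours
```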

Open access
Metodija M. Shapkalijevski

Abstract

The increasing societal need for more precise and reliable weather forecasts, especially for extreme weather events, pushes research and development in meteorology toward novel numerical weather prediction (NWP) systems that can provide on-demand simulations resolving atmospheric processes on hectometric scales. Such high-resolution NWP systems require a more detailed representation of the nonresolved processes, i.e., scale-aware schemes for convection and three-dimensional turbulence (and radiation), which further increases the computational demands. Therefore, developing and applying comprehensive, reliable, and computationally affordable parameterizations in NWP systems is of urgent importance. All operationally used NWP systems are based on averaged Navier–Stokes equations and thus require an approximation for the small-scale turbulent fluxes of momentum, energy, and matter in the system. The availability of high-fidelity data from turbulence experiments and direct numerical simulations has helped scientists in the past to construct and calibrate a range of turbulence closure approximations, from the relatively simple to the more complex, some of which have been adopted and are in use in current operational NWP systems. The significant development of learned-by-data (LBD) algorithms (e.g., artificial intelligence) over the past decade motivates engineers and researchers in fluid dynamics to explore alternatives for modeling turbulence by directly using turbulence data to systematically quantify and reduce model uncertainties. This review elaborates on LBD approaches and their current use in NWP, and also searches for novel data-informed turbulence models that can potentially be applied in NWP. Based on this literature analysis, the associated challenges and perspectives are discussed.
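
As a deliberately minimal example of calibrating a closure directly from turbulence data (far simpler than the machine-learning closures the review surveys), the sketch below fits a single eddy diffusivity in a first-order flux–gradient closure by least squares; all names and the data source are hypothetical.

```python
import numpy as np

def fit_eddy_diffusivity(dudz, uw_flux):
    """Fit a single eddy diffusivity K in the first-order closure
    u'w' = -K * dU/dz by least squares against observed turbulent fluxes
    (e.g., from tower measurements or direct numerical simulation)."""
    A = -dudz.reshape(-1, 1)                      # design matrix for the linear fit
    K, *_ = np.linalg.lstsq(A, uw_flux, rcond=None)
    return float(K[0])

def modeled_flux(K, dudz):
    """Parameterized momentum flux from the fitted closure."""
    return -K * dudz
```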

Open access