Qin Xu, Kang Nai, Li Wei, Nathan Snook, Yunheng Wang, and Ming Xue

Abstract

A time–space shift method is developed for relocating model-predicted tornado vortices to radar-observed locations to improve the model initial conditions and subsequent predictions of tornadoes. The method consists of three steps. (i) Use the vortex center location estimated from radar observations to select the best ensemble member from tornado-resolving ensemble predictions. Here, the best member is defined in terms of the predicted vortex center track that has a closest point, say at time t = t*, to the estimated vortex center at the initial time t0 (when the tornado vortex signature is first detected in radar observations). (ii) Create a time-shifted field from the best ensemble member in which the field within a circular area of about 10-km radius around the vortex center is taken from t = t*, while the field outside this circular area is transformed smoothly via temporal interpolation to the best ensemble member at t0. (iii) Create a time–space-shifted field in which the above time-shifted circular area is further shifted horizontally to co-center with the estimated vortex center at t0, while the field outside this circular area is transformed smoothly via spatial interpolation to the nonshifted field at t0 from the best ensemble member. The method is applied to the 20 May 2013 Oklahoma Newcastle–Moore tornado case and is shown to be very effective in improving the tornado track and intensity predictions.
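The blending in steps (ii) and (iii) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid spacing, the inner/outer blend radii, and the integer-gridpoint patch shift are all assumptions made for the sketch.

```python
import numpy as np

def time_space_shift(field_t0, field_tstar, center_pred, center_obs,
                     dx=1.0, r_inner=10.0, r_outer=20.0):
    """Blend the best member's field at t* (inside a circle around the
    predicted vortex center, shifted to the radar-observed center) into
    the field at t0, with a smooth ramp between r_inner and r_outer (km)."""
    ny, nx = field_t0.shape
    y, x = np.mgrid[0:ny, 0:nx] * dx
    # Step (iii): shift the circular patch so it is co-centered on the
    # observed vortex location (integer-gridpoint shift for simplicity).
    shift = np.round((np.asarray(center_obs) - np.asarray(center_pred)) / dx).astype(int)
    shifted = np.roll(field_tstar, shift, axis=(0, 1))
    # Distance of each grid point from the observed vortex center.
    d = np.hypot(y - center_obs[0], x - center_obs[1])
    # Weight: 1 inside r_inner (pure t* patch), 0 outside r_outer (pure t0).
    w = np.clip((r_outer - d) / (r_outer - r_inner), 0.0, 1.0)
    return w * shifted + (1.0 - w) * field_t0
```

A single linear ramp stands in for the paper's separate temporal and spatial interpolations; the point is that the relocated vortex patch transitions smoothly into the unshifted environment.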

Significance Statement

The time–space shift method developed in this paper can smoothly relocate tornado vortices in model-predicted fields to match radar-observed locations. The method is found to be very effective in improving not only the model initial conditions but also the subsequent tornado track and intensity predictions. The method is also not sensitive to small errors in the radar-estimated vortex center location at the initial time. The method should be useful for future real-time or even operational applications, although further tests and improvements are needed (and are planned).

Rebecca D. Adams-Selin, Christina Kalb, Tara Jensen, John Henderson, Tim Supinie, Lucas Harris, Yunheng Wang, Burkely T. Gallo, and Adam J. Clark

Abstract

Hail forecasts produced by the CAM-HAILCAST pseudo-Lagrangian hail size forecasting model were evaluated during the 2019, 2020, and 2021 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments (SFEs). As part of this evaluation, HWT SFE participants were polled about their definition of a “good” hail forecast. Participants were presented with two different verification methods conducted over three different spatiotemporal scales, and were then asked to subjectively evaluate the hail forecast as well as the different verification methods themselves. The results recommended the use of multiple verification methods tailored to the type of forecast expected by the end-user interpreting and applying the forecast. The hail forecasts evaluated during this period included an implementation of CAM-HAILCAST in the Limited Area Model of the Unified Forecast System with the Finite Volume 3 (FV3) dynamical core. Evaluation of FV3-HAILCAST over both 1- and 24-h periods found continued improvement from 2019 to 2021. The improvement largely reflected the wide variability among FV3 ensemble members with different microphysics parameterizations in 2019, which lessened significantly during 2020 and 2021. Overprediction throughout the diurnal cycle also lessened by 2021. A combination of both upscaling neighborhood verification and an object-based technique that only retained matched convective objects was necessary to understand the improvement, agreeing with the HWT SFE participants’ recommendations for multiple verification methods.
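The upscaling neighborhood verification used here can be illustrated with a minimal sketch. This is not the study's verification code; the neighborhood radius and threshold below are placeholders, and real evaluations typically use a package such as MET.

```python
import numpy as np

def neighborhood_hits(forecast, observed, threshold, radius=1):
    """Upscale both fields with a neighborhood maximum, then count hits:
    a grid point is a hit if both the upscaled forecast and the upscaled
    observation exceed the threshold, so small placement errors within
    the neighborhood are not penalized."""
    def upscale_max(f):
        ny, nx = f.shape
        out = np.empty_like(f)
        for j in range(ny):
            for i in range(nx):
                out[j, i] = f[max(0, j - radius):j + radius + 1,
                              max(0, i - radius):i + radius + 1].max()
        return out
    fc = upscale_max(forecast) >= threshold
    ob = upscale_max(observed) >= threshold
    return int(np.sum(fc & ob))
```

From such neighborhood hit/miss/false-alarm counts, standard contingency-table scores can then be computed at each spatiotemporal scale.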

Significance Statement

“Good” forecasts of hail can be determined in multiple ways and must depend on both the performance of the guidance and the perspective of the end-user. This work looks at different verification strategies to capture the performance of the CAM-HAILCAST hail forecasting model across three years of the Spring Forecasting Experiment (SFE) in different parent models. Verification strategies were informed by SFE participant input via a survey. Skill variability among models decreased in SFE 2021 relative to prior SFEs. The FV3 model in 2021, compared to 2019, provided improved forecasts of both convective distribution and 38-mm (1.5 in.) hail size, as well as less overforecasting of convection from 1900 to 2300 UTC.

Matthew B. Switanek, Thomas M. Hamill, Lindsey N. Long, and Michael Scheuerer

Abstract

Tropical cyclones are extreme events with enormous and devastating consequences to life, property, and our economies. As a result, large-scale efforts have been devoted to improving tropical cyclone forecasts with lead times ranging from a few days to months. More recently, subseasonal forecasts (e.g., 2–6-week lead time) of tropical cyclones have received greater attention. Here, we study whether bias-corrected, subseasonal tropical cyclone reforecasts of the GEFS and ECMWF models are skillful in the Atlantic basin. We focus on the peak hurricane season, July–November, using the reforecast years 2000–19. Model reforecasts of accumulated cyclone energy (ACE) are produced, and validated, for lead times of 1–2 and 3–4 weeks. Week-1–2 forecasts are substantially more skillful than a 31-day moving-window climatology, while week-3–4 forecasts still exhibit positive skill throughout much of the hurricane season. Furthermore, the skill of the combination of the two models is found to be an improvement with respect to either individual model. In addition to the GEFS and ECMWF model reforecasts, we develop a statistical modeling framework that solely relies on daily sea surface temperatures. The reforecasts of ACE from this statistical model are capable of producing better skill than the GEFS or ECMWF model individually, and the statistical model can be leveraged to further enhance the model combination reforecast skill for the 3–4-week lead time.
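Skill relative to a 31-day moving-window climatology, the reference used above, can be sketched as follows. This is an illustrative reconstruction under assumptions (a 365-day year with wraparound, and an RMSE-based skill score); it is not the authors' verification code.

```python
import numpy as np

def moving_window_climatology(ace_daily, window=31):
    """Mean ACE for each day of year over a centered window, pooled
    across all training years; ace_daily has shape (n_years, 365)."""
    n_years, n_days = ace_daily.shape
    half = window // 2
    clim = np.empty(n_days)
    for d in range(n_days):
        idx = np.arange(d - half, d + half + 1) % n_days  # wrap around the year
        clim[d] = ace_daily[:, idx].mean()
    return clim

def rmse_skill_score(forecast, observed, climatology):
    """SS = 1 - RMSE_forecast / RMSE_climatology; positive values mean
    the forecast beats the moving-window climatological reference."""
    rmse_f = np.sqrt(np.mean((forecast - observed) ** 2))
    rmse_c = np.sqrt(np.mean((climatology - observed) ** 2))
    return 1.0 - rmse_f / rmse_c
```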

Patrick Murphy and Clifford Mass

Abstract

This paper examines the relationship between daily carbon emissions for California’s savanna and forest wildfires and regional meteorology over the past 18 years. For each fuel type, the associated weather [daily maximum wind, daily vapor pressure deficit (VPD), and 30-day-prior VPD] is determined for all fire days, the first day of each fire, and the day of maximum emissions of each fire at each fire location. Carbon emissions, used as a marker of wildfire existence and growth, for both savanna and forest wildfires are found to vary greatly with regional meteorology, with the relationship between emissions and meteorology varying with the amount of emissions, fire location, and fuel type. Weak emissions are associated with climatologically typical dryness and wind. For moderate emissions, increasing emissions are associated with higher VPD from increased warming and only display a weak relationship with wind speed. High emissions, which encompass ∼85% of the total emissions but only ∼4% of the fire days, are associated with strong winds and large VPDs. Using spatial meteorological composites for California subregions, we find that weak-to-moderate emissions are associated with modestly warmer-than-normal temperatures and light winds across the domain. In contrast, high emissions are associated with strong winds and substantial temperature anomalies, with colder-than-normal temperatures east of the Sierra Nevada and warmer-than-normal conditions over the coastal zone and the interior of California.

Significance Statement

The purpose of this work is to better understand the influence of spatially and temporally variable meteorology and spatially variable surface fuels on California’s fires. This is important because much research has focused on large climatic scales that may dilute the true influence of weather (here, high winds and dryness) on fire growth. We use a satellite-recorded fire emissions dataset to quantify daily wildfire existence and growth and to determine the relationship between regional meteorology and wildfires across varying emissions in varying fuels. The result is a novel view of the relationship between California wildfires and rapidly variable, regional meteorology.

Xiaomin Chen, Andrew Hazelton, Frank D. Marks, Ghassan J. Alaka Jr., and Chunxi Zhang

Abstract

Continuous development and evaluation of planetary boundary layer (PBL) parameterizations in hurricane conditions are crucial for improving tropical cyclone (TC) forecasts. A turbulence kinetic energy (TKE)-based eddy-diffusivity mass-flux (EDMF-TKE) PBL scheme, implemented in NOAA’s Hurricane Analysis and Forecast System (HAFS), was recently improved in hurricane conditions using large-eddy simulations. This study evaluates the performance of HAFS TC forecasts with the original (experiment HAFA) and modified EDMF-TKE (experiment HAFY) based on a large sample of cases during the 2021 North Atlantic hurricane season. Results indicate that intensity and structure forecast skill was better overall in HAFY than in HAFA, including during rapid intensification. Composite analyses demonstrate that HAFY produces shallower and stronger boundary layer inflow, especially within 1–3 times the radius of maximum wind (RMW). Stronger inflow and more moisture in the boundary layer contribute to stronger moisture convergence near the RMW. These boundary layer characteristics are consistent with stronger, deeper, and more compact TC vortices in HAFY than in HAFA. Nevertheless, track skill in HAFY is slightly reduced, which is in part attributable to the cross-track error from a few early cycles of Hurricane Henri that exhibited ∼400 n mi (1 n mi = 1.852 km) track error at longer lead times. Sensitivity experiments based on HAFY demonstrate that turning off cumulus schemes notably reduces the track errors of Henri while turning off the deep cumulus scheme reduces the intensity errors. This finding hints at the necessity of unifying the mass fluxes in PBL and cumulus schemes in future model physics development.

Christopher A. Kerr, Brian C. Matilla, Yaping Wang, Derek R. Stratman, Thomas A. Jones, and Nusrat Yussouf

Abstract

Since 2017, the Warn-on-Forecast System (WoFS) has been tested and evaluated during the Hazardous Weather Testbed Spring Forecasting Experiment (SFE) and summer convective seasons. The system has shown promise in predicting individual evolving thunderstorms with high temporal and spatial specificity. However, this baseline version of the WoFS has a 3-km horizontal grid spacing and cannot resolve some convective processes. Efforts are under way to develop a WoFS prototype at a 1-km grid spacing (WoFS-1km) with the hope to improve forecast accuracy. This requires extensive changes to data assimilation specifications and observation processing parameters. A preliminary version of WoFS-1km nested within WoFS at 3 km (WoFS-3km) was developed, tested, and run during the 2021 SFE in pseudo–real time. Ten case studies were successfully completed and provided simulations of a variety of convective modes. The reflectivity and rotation storm objects from WoFS-1km are verified against both WoFS-3km and 1-km forecasts initialized from downscaled WoFS-3km analyses using both neighborhood- and object-based techniques. Neighborhood-based verification suggests WoFS-1km improves reflectivity bias but not spatial placement. The WoFS-1km object-based reflectivity forecast accuracy is higher in most cases, leading to a net improvement. Both the WoFS-1km and downscaled forecasts have ideal reflectivity object frequency biases while the WoFS-3km overpredicts the number of reflectivity objects. The rotation object verification is ambiguous as many cases are negatively impacted by 1-km data assimilation. This initial evaluation of a WoFS-1km prototype is a solid foundation for further development and future testing.
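The object frequency bias discussed above (ideal at 1; overprediction above 1) can be sketched with a minimal object identification and counting routine. This is an assumption-laden illustration, not the study's verification pipeline: real object-based verification (e.g., MODE-style methods) also smooths fields and matches object attributes.

```python
import numpy as np

def label_objects(binary):
    """4-connected labeling of threshold-exceedance objects via flood fill.
    Returns the label array and the number of objects found."""
    ny, nx = binary.shape
    labels = np.zeros((ny, nx), dtype=int)
    count = 0
    for j in range(ny):
        for i in range(nx):
            if binary[j, i] and labels[j, i] == 0:
                count += 1
                stack = [(j, i)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < ny and 0 <= b < nx and binary[a, b] and labels[a, b] == 0:
                        labels[a, b] = count
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return labels, count

def frequency_bias(forecast, observed, threshold):
    """Ratio of forecast to observed object counts; 1 is unbiased,
    >1 indicates overprediction of the number of objects."""
    _, n_fcst = label_objects(forecast >= threshold)
    _, n_obs = label_objects(observed >= threshold)
    return n_fcst / n_obs
```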

Significance Statement

This study investigates the impacts of performing data assimilation directly on a 1-km WoFS model grid. Most previous studies have only initialized 1-km WoFS forecasts from coarser analyses. The results demonstrate some improvements to reflectivity forecasts through data assimilation on a 1-km model grid although finer resolution data assimilation did not improve rotation forecasts.

Yung-Yun Cheng, Chia-Tung Chang, Buo-Fu Chen, Hung-Chi Kuo, and Cheng-Shang Lee

Abstract

This paper proposes a new quantitative precipitation estimation (QPE) technique to provide accurate rainfall estimates in complex terrain, where conventional QPE has limitations. The operational radar QPE in Taiwan is mainly based on the simplified relationship between radar reflectivity Z and rain rate R [R(Z) relation] and only utilizes the single-point lowest available echo to estimate rain rates, leading to low accuracy in complex terrain. Here, we conduct QPE using deep learning that extracts features from 3D radar reflectivities to address the above issues. Convolutional neural networks (CNN) are used to analyze contoured frequency by altitude diagrams (CFADs) to generate the QPE. CNN models are trained on existing rain gauges in northern and eastern Taiwan with three years of data (2015–17) and validated and tested using 2018 data. The weights of heavy rain (≥10 mm h⁻¹) are increased in the model loss calculation to handle the imbalanced rainfall data and improve accuracy. Results show that the CNN outperforms the R(Z) relation based on the 2018 rain gauge data. Furthermore, this research proposes methods to conduct 2D gridded QPE at every pixel by blending estimates from various trained CNN models. Verification based on independent rain gauges shows that the CNN QPE solves the underestimation of the R(Z) relation in mountainous areas. Case studies are presented to visualize the results, showing that the CNN QPE generates better small-scale rainfall features and more accurate precipitation information. This deep learning QPE technique may be helpful for the disaster prevention of small-scale flash floods in complex terrain.
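The weighted loss idea for the imbalanced rainfall data can be sketched as a weighted mean-square error. This is an illustrative stand-in, not the paper's exact loss; the threshold matches the abstract (10 mm h⁻¹), but the weight value is an assumption.

```python
import numpy as np

def weighted_mse(pred, target, heavy_threshold=10.0, heavy_weight=5.0):
    """Mean-square error in which samples at or above heavy_threshold
    (mm/h) receive a larger weight, so the sparse heavy-rain cases
    contribute more to training than the abundant light-rain cases."""
    w = np.where(target >= heavy_threshold, heavy_weight, 1.0)
    return float(np.sum(w * (pred - target) ** 2) / np.sum(w))
```

The same weighting scheme translates directly into a custom loss function in any deep learning framework.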

Anirudhan Badrinath, Luca Delle Monache, Negin Hayatbini, Will Chapman, Forest Cannon, and Marty Ralph

Abstract

A machine learning method based on spatial convolution to capture complex spatial precipitation patterns is proposed to identify and reduce biases affecting predictions of a dynamical model. The method is based on a combination of a classification and dual-regression model approach using modified U-Net convolutional neural networks (CNN) to postprocess daily accumulated precipitation over the U.S. West Coast. In this study, we leverage 34 years of high-resolution deterministic Western Weather Research and Forecasting (West-WRF) precipitation reforecasts as training data for the U-Net CNN. The data are split such that the test set contains 4 water years of data that encompass characteristic West Coast precipitation regimes: El Niño, La Niña, and dry and wet El Niño–Southern Oscillation (ENSO neutral) water years. On the unseen 4-yr dataset, the trained CNN yields a 12.9%–15.9% reduction in root-mean-square error (RMSE) and 2.7%–3.4% improvement in Pearson correlation (PC) over West-WRF for lead times of 1–4 days. Compared to an adapted model output statistics (MOS) correction, the CNN reduces RMSE by 7.4%–8.9% and improves PC by 3.3%–4.2% across all events. Effectively, the CNN adds more than a day of predictive skill when compared to West-WRF. The CNN outperforms the other methods also for the prediction of extreme events, which we define as the top 10% of events with the greatest average daily accumulated precipitation. For these events, the improvement over West-WRF’s RMSE (PC) is 19.8%–21.0% (4.9%–5.5%), and the improvement over MOS’s RMSE (PC) is 8.8%–9.7% (4.2%–4.7%). Hence, the proposed U-Net CNN shows significantly improved forecast skill over existing methods, highlighting a promising path forward for improving precipitation forecasts.

Significance Statement

Extreme precipitation events and atmospheric rivers, which contain narrow bands of water vapor transport, can cause millions of dollars in damages. We demonstrate the utility of a computer-vision-based machine learning technique for improving precipitation forecasts. We show that there is a significant increase in predictive accuracy for daily accumulated precipitation using these machine learning methods, over a 4-yr period of unseen cases, including those corresponding to the extreme precipitation associated with atmospheric rivers.

Partha S. Bhattacharjee, Li Zhang, Barry Baker, Li Pan, Raffaele Montuoro, Georg A. Grell, and Jeffery T. McQueen

Abstract

The NWS/NCEP recently implemented a new global deterministic aerosol forecast model named the Global Ensemble Forecast Systems Aerosols (GEFS-Aerosols), which is based on the Finite Volume version 3 GFS (FV3GFS). It replaced the operational NOAA Environmental Modeling System (NEMS) GFS Aerosol Component version 2 (NGACv2), which was based on a global spectral model (GSM). GEFS-Aerosols uses aerosol modules from the GOCART model previously integrated in the WRF Model with Chemistry (WRF-Chem), the FENGSHA dust scheme, and several other updates. In this study, we have extensively evaluated aerosol optical depth (AOD) forecasts from GEFS-Aerosols against various observations over a timespan longer than one year (2019–20). The total AOD improvement (in terms of seasonal mean) in GEFS-Aerosols is about 40% compared to NGACv2 in the fall and winter seasons of 2019. In terms of aerosol species, the biggest improvement came from the enhanced representation of biomass burning aerosol species, as GEFS-Aerosols is able to capture more fire events in southern Africa, South America, and Asia than its predecessor. Dust AODs reproduce the seasonal variation over Africa and the Middle East. We have found that the correlation of total AOD over large regions of the globe remains consistent for forecast days 3–5. However, we have found that GEFS-Aerosols generates some systematic positive biases for organic carbon AOD near biomass burning regions and sulfate AOD overprediction over East Asia. The addition of a data assimilation capability to GEFS-Aerosols in the near future is expected to address these biases and provide a positive impact on aerosol forecasts by the model.

Significance Statement

The purpose of this study is to quantify improvements associated with the newly implemented global aerosol forecast model at NWS/NCEP. The monthly and seasonal variations of AOD forecasts of various aerosol regimes are overall consistent with the observations. Our results provide a guide to downstream regional air quality models like CMAQ that will use GEFS-Aerosols to provide lateral boundary conditions.

Aaron J. Hill, Russ S. Schumacher, and Israel L. Jirak

Abstract

Historical observations of severe weather and simulated severe weather environments (i.e., features) from the Global Ensemble Forecast System v12 (GEFSv12) Reforecast Dataset (GEFS/R) are used in conjunction to train and test random forest (RF) machine learning (ML) models to probabilistically forecast severe weather out to days 4–8. RFs are trained with ∼9 years of the GEFS/R and severe weather reports to establish statistical relationships. Feature engineering is briefly explored to examine alternative methods for gathering features around observed events, including simplifying features using spatial averaging and increasing the GEFS/R ensemble size with time lagging. Validated RF models are tested with ∼1.5 years of real-time forecast output from the operational GEFSv12 ensemble and are evaluated alongside expert human-generated outlooks from the Storm Prediction Center (SPC). Both RF-based forecasts and SPC outlooks are skillful with respect to climatology at days 4 and 5 with diminishing skill thereafter. The RF-based forecasts exhibit tendencies to slightly underforecast severe weather events, but they tend to be well-calibrated at lower probability thresholds. Spatially averaging predictors during RF training allows for prior-day thermodynamic and kinematic environments to generate skillful forecasts, while time lagging acts to expand the forecast areas, increasing resolution but decreasing overall skill. The results highlight the utility of ML-generated products to aid SPC forecast operations into the medium range.
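The spatial-averaging feature engineering mentioned above can be sketched with a simple neighborhood mean over each predictor field. This is an illustrative sketch under assumptions (a square box whose size is a placeholder, with edge points using only the available neighbors), not the authors' feature pipeline.

```python
import numpy as np

def spatially_averaged_features(field, box=3):
    """Replace each grid point of a predictor field (e.g., a GEFS/R
    thermodynamic or kinematic environment variable) with the mean over
    a box x box neighborhood, smoothing small-scale noise before the
    field is used as a random forest feature."""
    ny, nx = field.shape
    out = np.empty_like(field, dtype=float)
    h = box // 2
    for j in range(ny):
        for i in range(nx):
            out[j, i] = field[max(0, j - h):j + h + 1,
                              max(0, i - h):i + h + 1].mean()
    return out
```

The smoothed fields (one per predictor, per ensemble member, per lead time) would then be flattened into the feature vectors on which the random forest is trained; time lagging simply appends features from earlier initializations as extra ensemble members.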

Significance Statement

Medium-range severe weather forecasts generated from statistical models are explored here alongside operational forecasts from the Storm Prediction Center (SPC). Human forecasters at the SPC rely on traditional numerical weather prediction model output to make medium-range outlooks, and statistical products that mimic operational forecasts can be used as guidance tools for forecasters. The statistical models relate simulated severe weather environments from a global weather model to historical records of severe weather, perform noticeably better than human-generated outlooks at shorter lead times (e.g., days 4 and 5), and are capable of capturing the general location of severe weather events 8 days in advance. The results highlight the value of these data-driven methods in supporting operational forecasting.
