Weather and Forecasting
Vittorio A. Gensini, Cody Converse, Walker S. Ashley, and Mateusz Taszarek

Abstract

Previous studies have identified environmental characteristics that skillfully discriminate between severe and significant-severe weather events, but they have largely been limited by sample size and/or population of predictor variables. Given the heightened societal impacts of significant-severe weather, this topic was revisited using over 150 000 ERA5 reanalysis-derived vertical profiles extracted at the grid point nearest—and just prior to—tornado and hail reports during the period 1996–2019. Profiles were quality controlled and used to calculate 84 variables. Several machine learning classification algorithms were trained, tested, and cross-validated on these data to assess skill in predicting severe or significant-severe reports for tornadoes and hail. Random forest classification outperformed all tested methods as measured by cross-validated critical success index scores and area under the receiver operating characteristic curve values. In addition, random forest classification was found to be more reliable than other methods and exhibited negligible frequency bias. The top three most important random forest classification variables for tornadoes were wind speed at 500 hPa, wind speed at 850 hPa, and 0–500-m storm-relative helicity. For hail, storm-relative helicity in the 3–6 km and −10° to −30°C layers, along with 0–6-km bulk wind shear, were found to be most important. A game theoretic approach was used to help explain the output of the random forest classifiers and establish critical feature thresholds for operational nowcasting and forecasting. A use case of spatial applicability of the random forest model is also presented, demonstrating the potential utility for operational forecasting. Overall, this research supports a growing number of weather and climate studies finding admirable skill in random forest classification applications.
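The workflow above can be sketched as follows. This is an illustrative stand-in on synthetic data: the predictor values and label model are invented, and scikit-learn's impurity-based importances are shown in place of the study's Shapley-value (game theoretic) attribution.

```python
# Illustrative sketch only: a random forest severe vs. significant-severe
# classifier on synthetic data. The study used 84 ERA5-derived profile
# variables; the three below merely echo its top tornado predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
wspd_500 = rng.gamma(4.0, 5.0, n)   # hypothetical 500-hPa wind speed (m/s)
wspd_850 = rng.gamma(3.0, 4.0, n)   # hypothetical 850-hPa wind speed (m/s)
srh_0500 = rng.gamma(2.0, 50.0, n)  # hypothetical 0-500-m SRH (m^2/s^2)
X = np.column_stack([wspd_500, wspd_850, srh_0500])

# Synthetic label: significant-severe more likely with stronger winds/SRH
logit = 0.08 * wspd_500 + 0.05 * wspd_850 + 0.004 * srh_0500 - 3.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = clf.feature_importances_  # impurity-based, sums to 1
```

In practice the study's attribution would be computed with a Shapley-value explainer on the fitted forest rather than `feature_importances_`, which is known to be biased toward high-cardinality features.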

Open access
Marvin Kähnert, Harald Sodemann, Wim C. de Rooy, and Teresa M. Valkonen

Abstract

Forecasts of marine cold air outbreaks critically rely on the interplay of multiple parameterization schemes to represent subgrid-scale processes, including shallow convection, turbulence, and microphysics. Even though such an interplay has been recognized to contribute to forecast uncertainty, a quantification of this interplay is still missing. Here, we investigate the tendencies of temperature and specific humidity contributed by individual parameterization schemes in the operational weather prediction model AROME-Arctic. From a case study of an extensive marine cold air outbreak over the Nordic seas, we find that the type of planetary boundary layer assigned by the model algorithm modulates the contribution of individual schemes and affects the interactions between different schemes. In addition, we demonstrate the sensitivity of these interactions to an increase or decrease in the strength of the parameterized shallow convection. The individual tendencies from several parameterizations can thereby compensate each other, sometimes resulting in a small residual. In some instances this residual remains nearly unchanged between the sensitivity experiments, even though some individual tendencies differ by up to an order of magnitude. Using the individual tendency output, we can characterize the subgrid-scale as well as grid-scale responses of the model and trace them back to their underlying causes. We thereby highlight the utility of individual tendency output for understanding process-related differences between model runs with varying physical configurations and for the continued development of numerical weather prediction models.
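The budget idea above, large and opposing per-scheme tendencies that leave a small residual, can be illustrated with a toy column. All numbers are invented, not AROME-Arctic output.

```python
# Toy per-scheme temperature tendency budget (hypothetical values):
# individual parameterization tendencies can be large and opposing
# while their sum, the net physics tendency, stays small.
import numpy as np

# Temperature tendencies (K/h) on a toy vertical column of 5 levels
tend = {
    "turbulence":         np.array([ 1.2,  0.8,  0.3,  0.1,  0.0]),
    "shallow_convection": np.array([-1.0, -0.6, -0.2,  0.0,  0.0]),
    "microphysics":       np.array([-0.1, -0.1,  0.0,  0.1,  0.1]),
    "radiation":          np.array([-0.2, -0.2, -0.2, -0.2, -0.2]),
}

residual = sum(tend.values())  # net physics tendency per level
largest = max(np.abs(t).max() for t in tend.values())
# Here the turbulence and shallow-convection tendencies mostly cancel,
# so the residual is an order of magnitude smaller than the largest term.
```

This is exactly why the residual alone can look unchanged between sensitivity experiments even when individual tendencies differ substantially.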

Open access
Rochelle P. Worsnop, Michael Scheuerer, Francesca Di Giuseppe, Christopher Barnard, Thomas M. Hamill, and Claudia Vitolo

Abstract

Wildfire guidance two weeks ahead is needed for strategic planning of fire mitigation and suppression. However, fire forecasts driven by meteorological forecasts from numerical weather prediction models inherently suffer from systematic biases. This study uses several statistical-postprocessing methods to correct these biases and increase the skill of ensemble fire forecasts over the contiguous United States 8–14 days ahead. We train and validate the postprocessing models on 20 years of European Centre for Medium-Range Weather Forecasts (ECMWF) reforecasts and ERA5 reanalysis data for 11 meteorological variables related to fire, such as surface temperature, wind speed, relative humidity, cloud cover, and precipitation. The calibrated variables are then input to the Global ECMWF Fire Forecast (GEFF) system to produce probabilistic forecasts of daily fire indicators, which characterize the relationships between fuels, weather, and topography. Skill scores show that the postprocessed forecasts overall have greater positive skill at days 8–14 relative to raw and climatological forecasts. It is shown that the postprocessed forecasts are more reliable at predicting above- and below-normal probabilities of various fire indicators than the raw forecasts and that the greatest skill for days 8–14 is achieved by aggregating forecast days together.

Restricted access
Michael Maier-Gerber, Andreas H. Fink, Michael Riemer, Elmar Schoemer, Christoph Fischer, and Benedikt Schulz

Abstract

While previous research on subseasonal tropical cyclone (TC) occurrence has mostly focused on either the validation of numerical weather prediction (NWP) models, or the development of statistical models trained on past data, the present study combines both approaches into a statistical–dynamical (hybrid) model for probabilistic forecasts in the North Atlantic basin. Although state-of-the-art NWP models have been shown to lack predictive skill with respect to subseasonal weekly TC occurrence, they may predict the environmental conditions sufficiently well to generate predictors for a statistical model. Therefore, an extensive predictor set was generated, including predictor groups representing the climatological seasonal cycle (CSC), oceanic and tropical conditions, tropical wave modes, and extratropical influences. The developed hybrid forecast model is systematically validated for the Gulf of Mexico and central main development region (MDR) for lead times up to 5 weeks. Moreover, its performance is compared against a statistical approach trained on past data, as well as against different climatological and NWP benchmarks. For subseasonal lead times, the CSC models are found to outperform the NWP models, which quickly lose skill within the first two forecast weeks, even in the case of recalibration. The statistical models trained on past data increase skill over the CSC models, whereas even greater improvements in skill are gained by the hybrid approach out to week 5. The vast majority of the additional subseasonal skill in the hybrid model, relative to the CSC model, could be attributed to the tropical (oceanic) conditions in the Gulf of Mexico (central MDR).

Open access
Michael M. French and Darrel M. Kingfield

Abstract

A sample of 198 supercells is investigated to determine if a radar proxy for the area of the storm midlevel updraft may be a skillful predictor of imminent tornado formation and/or peak tornado intensity. A novel algorithm, a modified version of the Thunderstorm Risk Estimation from Nowcasting Development via Size Sorting (TRENDSS) algorithm, is used to estimate the area of the enhanced differential radar reflectivity factor (ZDR) column in Weather Surveillance Radar–1988 Doppler data; the ZDR column area is used as a proxy for the area of the midlevel updraft. The areas of ZDR columns are compared for 154 tornadic supercells and 44 nontornadic supercells, including 30+ supercells with tornadoes rated EF1, EF2, and EF3; 8 supercells with EF4+ tornadoes also are analyzed. It is found that (i) at the time of their peak 0–1-km azimuthal shear, nontornadic supercells have consistently small (<20 km2) ZDR column areas, while tornadic cases exhibit much greater variability in areas; and (ii) at the time of tornadogenesis, EF3+ tornadic cases have larger ZDR column areas than tornadic cases rated EF1/2. In addition, all eight violent tornadoes sampled have ZDR column areas > 30 km2 at the time of tornadogenesis. However, only weak positive correlation is found between ZDR column area and both radar-estimated peak tornado intensity and maximum tornado path width. Planned future work that focuses on mechanisms linking updraft size and tornado formation and intensity is summarized and the use of the modified TRENDSS algorithm, which is immune to ZDR bias and thus ideal for real-time operational use, is emphasized.
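The area proxy can be sketched as a simple threshold count over a horizontal slice aloft. The threshold, grid-cell size, and field below are hypothetical and are not the modified TRENDSS algorithm, which additionally handles calibration bias and continuity.

```python
# Sketch (invented field, not WSR-88D data): estimate a ZDR-column area
# as the count of grid cells where differential reflectivity above the
# 0 degC level exceeds a threshold, times the grid-cell area.
import numpy as np

def zdr_column_area(zdr_slice, threshold_db=1.0, cell_km2=0.25):
    """Area (km^2) where the ZDR slice exceeds the threshold (dB)."""
    return np.count_nonzero(zdr_slice > threshold_db) * cell_km2

rng = np.random.default_rng(1)
field = rng.normal(0.0, 0.5, (40, 40))  # background ZDR noise (dB)
field[15:25, 15:25] += 2.0              # embedded "ZDR column"
area = zdr_column_area(field)           # km^2, dominated by the column
```

A real implementation would also isolate the contiguous region associated with the storm of interest rather than counting every cell above the threshold.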

Restricted access
Jing Zhang, Jie Feng, Hong Li, Yuejian Zhu, Xiefei Zhi, and Feng Zhang

Abstract

Operational and research applications generally use the consensus approach for forecasting the track and intensity of tropical cyclones (TCs) due to the spatial displacement of the TC location and structure in ensemble member forecasts. This approach simply averages the location and intensity information for TCs in individual ensemble members, which is distinct from the traditional pointwise arithmetic mean (AM) method for ensemble forecast fields. The consensus approach, despite having improved skill relative to the AM in predicting the TC intensity, cannot provide forecasts of the TC spatial structure. We introduce a unified TC ensemble mean forecast based on the feature-oriented mean (FM) method to overcome the inconsistency between the AM and consensus forecasts. FM spatially aligns the TC-related features in each ensemble field to their geographical mean positions before the amplitude of their features is averaged. We select 219 TC forecast samples during the summer of 2017 for an overall evaluation of the FM performance. The results show that the TC track consensus forecasts can differ from AM track forecasts by hundreds of kilometers at long lead times. AM also gives a systematic and statistically significant underestimation of the TC intensity compared with the consensus forecast. By contrast, FM has a very similar TC track and intensity forecast skill to the consensus approach. FM can also provide the corresponding ensemble mean forecasts of the TC spatial structure that are significantly more accurate than AM for the low- and upper-level circulation in TCs. The FM method has the potential to serve as a valuable unified ensemble mean approach for TC prediction.
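A one-dimensional sketch of the FM idea, using synthetic Gaussian "features" rather than TC fields, shows why aligning features before averaging preserves their amplitude while the pointwise mean smears it.

```python
# Minimal 1D sketch of the feature-oriented mean (FM) idea (synthetic
# members, not the paper's implementation): shift each member so its
# feature peak sits at the ensemble-mean position, then average.
import numpy as np

x = np.arange(200)
centers = [80, 100, 120]  # displaced TC-like features in three members
members = [np.exp(-0.5 * ((x - c) / 8.0) ** 2) for c in centers]

am = np.mean(members, axis=0)  # pointwise arithmetic mean

# Align each member's peak to the mean peak position, then average
mean_pos = int(round(np.mean([np.argmax(m) for m in members])))
aligned = [np.roll(m, mean_pos - np.argmax(m)) for m in members]
fm = np.mean(aligned, axis=0)  # feature-oriented mean

# AM smears the displaced features and damps the peak; FM keeps it.
```

The real method works on 2D TC-related fields and a proper feature identification step, but the amplitude argument is the same.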

Open access
Qian Zhou, Lei Chen, Wansuo Duan, Xu Wang, Ziqing Zu, Xiang Li, Shouwen Zhang, and Yunfei Zhang

Abstract

Using the latest operational version of the ENSO forecast system from the National Marine Environmental Forecasting Center (NMEFC) of China, ensemble forecasting experiments are performed for El Niño–Southern Oscillation (ENSO) events that occurred from 1997 to 2017 by generating initial perturbations of the conditional nonlinear optimal perturbation (CNOP) and climatically relevant singular vector (CSV) structures. It is shown that when the initial perturbation of the leading CSV structure in the ensemble forecast of the CSVs scheme is replaced by those of the CNOP structure, the resulting ensemble ENSO forecasts of the CNOP+CSVs scheme tend to possess a larger spread than the forecasts obtained with the CSVs scheme alone, leading to a better match between the root-mean-square error and the ensemble spread, a more reasonable Talagrand diagram, and an improved Brier skill score (BSS). All these results indicate that the ensemble forecasts generated by the CNOP+CSVs scheme can improve both the accuracy of ENSO forecasting and the reliability of the ensemble forecasting system. Therefore, ENSO ensemble forecasting should consider the effect of nonlinearity on the ensemble initial perturbations to achieve higher skill. It is expected that fully nonlinear ensemble initial perturbations can be generated to produce ensemble forecasts for ENSO, ultimately improving the ENSO forecast skill to the greatest possible extent. The CNOP is thus a promising method for yielding fully nonlinear optimal initial perturbations for ensemble forecasting.
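The Brier skill score used in comparisons like the one above can be computed as follows. The probabilities and outcomes below are synthetic, not the NMEFC system's forecasts.

```python
# Sketch of the Brier score and Brier skill score (BSS) for probabilistic
# forecasts of a binary event, scored against a climatological reference.
import numpy as np

def brier_score(prob, outcome):
    """Mean squared error of probability forecasts of a binary event."""
    prob, outcome = np.asarray(prob, float), np.asarray(outcome, float)
    return np.mean((prob - outcome) ** 2)

rng = np.random.default_rng(2)
outcome = (rng.random(500) < 0.3).astype(int)  # e.g., event occurs 30% of the time
clim = np.full(500, 0.3)                       # climatological reference forecast
sharp = np.where(outcome == 1, 0.8, 0.1)       # a sharper, skillful forecast

bs_ref = brier_score(clim, outcome)
bs_fc = brier_score(sharp, outcome)
bss = 1.0 - bs_fc / bs_ref  # positive BSS means skill over climatology
```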

Open access
Kevin Bachmann and Ryan D. Torn

Abstract

Tropical cyclones are associated with a variety of significant social hazards, including wind, rain, and storm surge. Despite this, most of the model validation effort has been directed toward track and intensity forecasts. In contrast, few studies have investigated the skill of state-of-the-art, high-resolution ensemble prediction systems in predicting associated TC hazards, which is crucial since TC position and intensity do not always correlate with the TC-related hazards, and can result in impacts far from the actual TC center. Furthermore, dynamic models can provide flow-dependent uncertainty estimates, which in turn can provide more specific guidance to forecasters than statistical uncertainty estimates based on past errors. This study validates probabilistic forecasts of wind speed and precipitation hazards derived from the HWRF ensemble prediction system and compares its skill to forecasts by the stochastically based operational Monte Carlo Model (NHC), the IFS (ECMWF), and the GEFS (NOAA) in use in 2017–19. Wind and precipitation forecasts are validated against NHC best track wind radii information and the National Stage IV QPE Product. The HWRF 34-kt (1 kt ≈ 0.51 m s−1) wind forecasts have comparable skill to the global models up to 60-h lead time before HWRF skill decreases, possibly due to detrimental impacts of large track errors. In contrast, HWRF has comparable quality to its competitors for higher thresholds of 50 and 64 kt throughout 120-h lead time. In terms of precipitation hazards, HWRF performs similarly to or better than the global models but exhibits higher, although not perfect, reliability, especially for events over 5 in. (120 h)−1. Postprocessing, like quantile mapping, improves forecast skill for all models significantly and can alleviate reliability issues of the global models.
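Empirical quantile mapping, the postprocessing step named above, can be sketched like this. The climatologies and bias below are synthetic, not HWRF or Stage IV data.

```python
# Sketch of empirical quantile mapping: replace each forecast value with
# the observed-climatology value at the same rank (quantile) in the
# forecast climatology, removing systematic distributional bias.
import numpy as np

def quantile_map(forecast, fc_clim, obs_clim):
    """Map forecast values through forecast quantiles to obs quantiles."""
    fc_sorted = np.sort(fc_clim)
    obs_sorted = np.sort(obs_clim)
    # quantile of each forecast value within the forecast climatology
    q = np.searchsorted(fc_sorted, forecast, side="right") / len(fc_sorted)
    return np.interp(q, np.linspace(0.0, 1.0, len(obs_sorted)), obs_sorted)

rng = np.random.default_rng(3)
obs_clim = rng.gamma(2.0, 5.0, 5000)        # "observed" climatology
fc_clim = obs_clim * 0.5 + 8.0              # biased model climatology
raw = rng.gamma(2.0, 5.0, 200) * 0.5 + 8.0  # biased new forecasts
corrected = quantile_map(raw, fc_clim, obs_clim)
```

After mapping, the corrected forecasts follow the observed climatology far more closely than the raw, biased forecasts do.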

Restricted access
Makenzie J. Krocak, Jinan N. Allan, Joseph T. Ripberger, Carol L. Silva, and Hank C. Jenkins-Smith

Abstract

Nocturnal tornadoes are challenging to forecast and even more challenging to communicate. Numerous studies have evaluated the forecasting challenges, but fewer have investigated when and where these events pose the greatest communication challenges. This study seeks to evaluate variation in confidence among U.S. residents in receiving and responding to tornado warnings by hour of day. Survey experiment data come from the Severe Weather and Society Survey, an annual survey of U.S. adults. Results indicate that respondents are less confident about receiving warnings overnight, specifically in the early morning hours [from 12:00 AM to 4:00 AM local time (0000–0400 LT)]. We then use the survey results to inform an analysis of hourly tornado climatology data. We evaluate where nocturnal tornadoes are most likely to occur during the time frame when residents are least confident in their ability to receive tornado warnings. Results show that the Southeast experiences the highest number of nocturnal tornadoes during the time period of lowest confidence, as well as the largest proportion of tornadoes in that time frame. Finally, we estimate and assess two multiple linear regression models to identify individual characteristics that may influence a respondent’s confidence in receiving a tornado warning between 12:00 AM and 4:00 AM. These results indicate that age, race, weather awareness, weather sources, and the proportion of nocturnal tornadoes in the local area relate to warning reception confidence. The results of this study should help inform policymakers and practitioners about the populations at greatest risk for challenges associated with nocturnal tornadoes. Discussion focuses on developing more effective communication strategies, particularly for diverse and vulnerable populations.

Restricted access
Kosuke Ono

Abstract

This study extends Bayesian model averaging (BMA) to a form suitable for time series forecasts. BMA is applied to a three-member ensemble for temperature forecasts with a 1-h interval time series at specific stations. The results of such an application typically exhibit a problematic characteristic: BMA weights assigned to ensemble members fluctuate widely within a few hours because BMA optimizations are independent at each lead time, which is incompatible with the spatiotemporal continuity of meteorological phenomena. To ameliorate this issue, a degree of correlation among different lead times is introduced by extending the latent variables to lead times adjacent to the target lead time for the calculation of BMA weights and variances. This extension approach stabilizes the BMA weights, improving the performance of deterministic and probabilistic forecasts. Also, an investigation of the effects of this extension technique on the shapes of forecasted probability density functions showed that the extension approach offers advantages in bimodal cases. This extension technique may show promise in other applications to improve the performance of forecasts by BMA.
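A minimal Gaussian BMA sketch shows how the member weights are estimated at a single lead time via expectation-maximization. The data are synthetic, a common spread is shared by all members for brevity, and the paper's extension would additionally pool the latent-variable sums over adjacent lead times to stabilize these weights.

```python
# Minimal Gaussian BMA weight estimation via EM (synthetic data, common
# spread for all members; not the paper's exact formulation).
import numpy as np

def bma_weights(forecasts, obs, n_iter=200):
    """forecasts: (n_cases, n_members). Returns (weights, sigma)."""
    n, k = forecasts.shape
    w = np.full(k, 1.0 / k)
    sigma = np.std(obs - forecasts.mean(axis=1)) + 1e-6
    for _ in range(n_iter):
        # E-step: latent responsibility of member m for each case
        # (the Gaussian normalization constant cancels in the ratio)
        dens = w * np.exp(-0.5 * ((obs[:, None] - forecasts) / sigma) ** 2)
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights and the common spread
        w = z.mean(axis=0)
        sigma = np.sqrt((z * (obs[:, None] - forecasts) ** 2).sum() / n)
    return w, sigma

rng = np.random.default_rng(4)
truth = rng.normal(15.0, 5.0, 400)      # e.g., station temperature (degC)
m1 = truth + rng.normal(0.0, 1.0, 400)  # accurate member
m2 = truth + rng.normal(0.0, 3.0, 400)
m3 = truth + rng.normal(0.0, 6.0, 400)  # noisy member
w, sigma = bma_weights(np.column_stack([m1, m2, m3]), truth)
# The most accurate member receives the largest weight.
```

Re-running this independently at every hourly lead time is exactly what produces the fluctuating weights the paper describes; including adjacent lead times in the E-step sums is the proposed stabilization.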

Restricted access