Browse

You are looking at 101–110 of 18,114 items for: Monthly Weather Review (Access: All Content).
Rebecca D. Adams-Selin

Abstract

Recent advances in hail trajectory modeling regularly produce datasets containing millions of hail trajectories. Because hail growth within a storm cannot be entirely separated from the structure of the trajectories producing it, a method to condense the multidimensionality of the trajectory information into a discrete number of features analyzable by humans is necessary. This article presents a three-dimensional trajectory clustering technique that is designed to group trajectories that have similar updraft-relative structures and orientations. The new technique is an application of a two-dimensional method common in the data mining field. Hail trajectories (or “parent” trajectories) are partitioned into segments before they are clustered using a modified version of the density-based spatial clustering of applications with noise (DBSCAN) method. Parent trajectories with segments that are members of at least two common clusters are then grouped into parent trajectory clusters before output. This multistep method has several advantages. Hail trajectories with structural similarities along only portions of their length, e.g., those sourced from different locations around the updraft before converging to a common pathway, can still be grouped. However, the physical information inherent in the full length of the trajectory is retained, unlike methods that cluster trajectory segments alone. The conversion of trajectories to an updraft-relative space also allows trajectories separated in time to be clustered. Once the final output trajectory clusters are identified, a method for calculating a representative trajectory for each cluster is proposed. Cluster distributions of hailstone and environmental characteristics at each time step in the representative trajectory can also be calculated.
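The segment-then-group logic lends itself to a short illustration. Below is a minimal sketch under simplified assumptions: trajectories are already in updraft-relative coordinates, each segment is described by its midpoint and mean direction, standard DBSCAN stands in for the paper's modified version, and names such as `cluster_parents` are hypothetical.

```python
# Illustrative sketch of segment-based trajectory clustering (not the paper's code).
import numpy as np
from sklearn.cluster import DBSCAN
from itertools import combinations
from collections import defaultdict

def segment_features(trajectories, seg_len=5):
    """Split each (N, 3) updraft-relative trajectory into fixed-length segments,
    describing each by its midpoint and unit mean direction."""
    feats, owners = [], []
    for pid, traj in enumerate(trajectories):
        for s in range(0, len(traj) - seg_len + 1, seg_len):
            seg = traj[s:s + seg_len]
            midpoint = seg.mean(axis=0)
            direction = seg[-1] - seg[0]
            direction /= (np.linalg.norm(direction) + 1e-12)
            feats.append(np.concatenate([midpoint, direction]))
            owners.append(pid)
    return np.array(feats), np.array(owners)

def cluster_parents(trajectories, eps=1.0, min_samples=10, shared=2):
    """Cluster segments, then group parent trajectories whose segments share
    at least `shared` common segment clusters."""
    feats, owners = segment_features(trajectories)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)
    touched = defaultdict(set)               # parent id -> segment clusters touched
    for pid, lab in zip(owners, labels):
        if lab != -1:                         # skip DBSCAN noise points
            touched[pid].add(lab)
    parent = {pid: pid for pid in touched}    # union-find over parent trajectories
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for a, b in combinations(touched, 2):
        if len(touched[a] & touched[b]) >= shared:
            parent[find(a)] = find(b)
    groups = defaultdict(list)
    for pid in touched:
        groups[find(pid)].append(pid)
    return list(groups.values())              # lists of parent-trajectory indices
```

Because grouping requires two or more shared segment clusters, trajectories that merely cross once are not forced into the same parent cluster, which mirrors the motivation given in the abstract.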

Significance Statement

To understand how a storm produces large hail, we need to understand the paths that hailstones take within a storm as they grow. We can simulate these paths using computer models. However, the millions of hailstones in a simulated storm create millions of paths, which are hard to analyze. This article describes a machine learning method that groups hailstone paths together based on how similar their three-dimensional structures look. It will let hail scientists analyze hailstone pathways in storms more easily and therefore better understand how hail growth happens.

Restricted access
Karim Ali
,
David M. Schultz
,
Alistair Revell
,
Timothy Stallard
, and
Pablo Ouro

Abstract

To simulate the large-scale impacts of wind farms, wind turbines are parameterized within mesoscale models in which grid sizes are typically much larger than turbine scales. Five wind-farm parameterizations were implemented in the Weather Research and Forecasting (WRF) Model v4.3.3 to simulate multiple operational wind farms in the North Sea, which were verified against a satellite image, airborne measurements, and the FINO-1 meteorological mast data on 14 October 2017. The parameterization by Volker et al. underestimated the turbulence and wind speed deficit compared to measurements and to the parameterization of Fitch et al., which is the default in WRF. The Abkar and Porté-Agel parameterization gave wind speed predictions close to those of Fitch et al. with a lower magnitude of predicted turbulence, although the parameterization was sensitive to a tunable constant. The parameterization by Pan and Archer resulted in turbine-induced thrust and turbulence that were slightly less than those of Fitch et al., but resulted in a substantial drop in power generation due to the magnification of wind speed differences in the power calculation. The parameterization by Redfern et al. was not substantially different from Fitch et al. in the absence of conditions such as strong wind veer. The simulations indicated the need for a turbine-induced turbulence source within a wind-farm parameterization for improved prediction of near-surface wind speed, near-surface temperature, and turbulence. The induced turbulence was responsible for enhancing turbulent momentum flux near the surface, causing a local speed-up of near-surface wind speed inside a wind farm. Our findings highlighted that wakes from large offshore wind farms could extend 100 km downwind, reducing downwind power production, as in the case of the 400-MW Bard Offshore 1 wind farm, whose power output was reduced by the wakes of the 402-MW Veja Mate wind farm in this case study.
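The "magnification of wind speed differences in the power calculation" follows from the roughly cubic dependence of turbine power on wind speed below rated conditions: a modest velocity deficit becomes a much larger relative power loss. A minimal worked example, with illustrative numbers not taken from the study:

```python
# Below rated power, turbine power scales roughly with the cube of wind speed,
# so a 10% wake velocity deficit costs far more than 10% of the power.
# Rotor area and power coefficient here are illustrative values.
rho, area, cp = 1.225, 5027.0, 0.45        # air density (kg/m^3), rotor area (m^2), Cp

def power(u):
    return 0.5 * rho * area * cp * u**3    # ideal power curve below rated speed

u_free, u_waked = 10.0, 9.0                # 10% velocity deficit in a wake
loss = 1.0 - power(u_waked) / power(u_free)
print(f"10% wind speed deficit -> {loss:.0%} power loss")   # ~27%
```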

Significance Statement

Because wind farms are smaller than the common grid spacing of numerical weather prediction models, the impacts of wind farms on the weather have to be incorporated indirectly through parameterizations. Several approaches to parameterization are available, and the most appropriate scheme is not always clear. The absence of a turbulence source in a parameterization leads to substantial inaccuracies in predicting near-surface wind speed and turbulence over a wind farm. The impact of large clusters of offshore wind turbines on the wind field can exceed 100 km downwind, resulting in a substantial loss of power for downwind turbines. The prediction of this power loss can be sensitive to the chosen parameterization, contributing to uncertainty in wind-farm economic planning.

Open access
Anders A. Jensen
,
Gregory Thompson
,
Kyoko Ikeda
, and
Sarah A. Tessendorf

Abstract

Methods to improve the representation of hail in the Thompson–Eidhammer microphysics scheme are explored. A new two-moment, predicted-density graupel category is implemented into the Thompson–Eidhammer scheme. Additionally, the one-moment graupel category’s intercept parameter is modified, based on hail observations, to shift the category’s properties to be more hail-like, since the category is designed to represent both graupel and hail. Finally, methods to diagnose maximum expected hail size at the surface and aloft are implemented. The original Thompson–Eidhammer version, the newly implemented two-moment, predicted-density graupel version, and the modified (more hail-like) one-moment version are evaluated using a case that occurred during the Plains Elevated Convection at Night (PECAN) field campaign, during which hail-producing storms merged into a strong mesoscale convective system. The three versions of the scheme are evaluated for their ability to predict hail sizes compared to those observed in storm reports and estimated from radar, their ability to predict radar reflectivity signatures at various altitudes, and their ability to predict cold-pool features such as temperature and wind speed. One key benefit of using the two-moment, predicted-density graupel category is that the simulated reflectivity values in the upper levels of discrete storms are clearly improved. This improvement coincides with a significant reduction in the areal extent of graupel aloft, also seen when using the updated one-moment scheme. The two-moment, predicted-density graupel scheme is also better able to predict a wide variety of hail sizes at the surface, including large (>2-in. diameter) hail that was observed during this case.
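The abstract does not spell out how "maximum expected hail size" is diagnosed. One common approach, sketched here purely as an illustration, is to find the diameter above which the number concentration of an exponential size distribution falls below a small threshold; the distribution form, threshold, and parameter values below are assumptions, not the scheme's actual recipe.

```python
# Hedged sketch: diagnose a "maximum expected" particle size from an exponential
# size distribution N(D) = N0 * exp(-lam * D). Stones larger than D number
# (N0/lam) * exp(-lam * D) per unit volume; solve for the D where that count
# drops to a small threshold. All values illustrative.
import numpy as np

def max_expected_hail_size(n0, lam, n_threshold=1e-4):
    """D (m) such that integral_D^inf N0 exp(-lam D') dD' = n_threshold,
    i.e., D = ln(N0 / (lam * n_threshold)) / lam."""
    return np.log(n0 / (lam * n_threshold)) / lam

# Example: intercept 8e4 m^-4 and slope 1000 m^-1 give roughly 1.4 cm.
print(max_expected_hail_size(8e4, 1000.0))
```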

Restricted access
Andrew Walsworth
,
Jonathan Poterjoy
, and
Elizabeth Satterfield

Abstract

For data assimilation to provide faithful state estimates for dynamical models, specifications of observation uncertainty need to be as accurate as possible. Innovation-based methods, such as the Desroziers diagnostics, are commonly used to estimate observation uncertainty, but such methods can depend greatly on the prescribed background uncertainty. For ensemble data assimilation, this uncertainty comes from statistics calculated from ensemble forecasts, which require inflation and localization to address undersampling. In this work, we use an ensemble Kalman filter (EnKF) with a low-dimensional Lorenz model to investigate the interplay between the Desroziers method and inflation. Two inflation techniques are used for this purpose: 1) a rigorously tuned fixed multiplicative scheme and 2) an adaptive state-space scheme. We document how inaccuracies in observation uncertainty affect errors in EnKF posteriors and study the combined impacts of misspecified initial observation uncertainty, sampling error, and model error on Desroziers estimates. We find that whether observation uncertainty is over- or underestimated greatly affects the stability of data assimilation and the accuracy of Desroziers estimates, and that preference should be given to initial overestimates. Inline Desroziers estimates tend to remove the dependence between ensemble spread–skill and the initially prescribed observation error. In addition, we find that the inclusion of model error introduces spurious correlations in observation uncertainty estimates. Further, we note that the adaptive inflation scheme is less robust than fixed inflation at mitigating multiple sources of error. Last, sampling error strongly exacerbates existing sources of error and greatly degrades EnKF estimates, which translates into biased Desroziers estimates of observation error covariance.
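For readers unfamiliar with the Desroziers diagnostics: the core identity is that the expected product of analysis residuals and background innovations approximates the observation-error covariance when the assimilation's error statistics are correctly specified. A minimal scalar sketch with synthetic data (all values illustrative):

```python
# Desroziers diagnostic: with innovations d_b = y - H(x_b) and analysis
# residuals d_a = y - H(x_a), E[d_a d_b^T] ~ R when the background and
# observation errors used in the assimilation are correct. Scalar example
# with H = identity and an optimal analysis weight.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sigma_b2, sigma_o2 = 1.0, 0.5            # true background/observation variances

truth = rng.normal(0.0, 1.0, n)
x_b = truth + rng.normal(0.0, np.sqrt(sigma_b2), n)   # background estimates
y = truth + rng.normal(0.0, np.sqrt(sigma_o2), n)     # observations

k = sigma_b2 / (sigma_b2 + sigma_o2)     # optimal Kalman gain (scalar)
x_a = x_b + k * (y - x_b)                # analysis

d_b = y - x_b                            # background innovation
d_a = y - x_a                            # analysis residual
print(np.mean(d_a * d_b))                # ~0.5, recovering sigma_o2
```

Misspecifying the gain (e.g., via poorly tuned inflation) breaks the identity, which is exactly the interplay the abstract describes.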

Significance Statement

To generate accurate predictions of various components of the Earth system, numerical models require an accurate specification of state variables at the current time. This step probabilistically weighs our current state estimate against information provided by environmental measurements of the true state. Various strategies exist for estimating uncertainty in observations within this framework, but they are sensitive to a host of assumptions, which are investigated in this study.

Restricted access
Amanda M. Murphy
,
Cameron R. Homeyer
, and
Kiley Q. Allen

Abstract

Many studies have aimed to identify novel storm characteristics that are indicative of current or future severe weather potential using a combination of ground-based radar observations and severe reports. However, this is often done on a small scale, using limited case studies on the order of tens to hundreds of storms, because the process is so time-intensive. Herein, we introduce the GridRad-Severe dataset, a database including ∼100 severe weather days per year and upward of 1.3 million objectively tracked storms from 2010 to 2019. Composite radar volumes spanning objectively determined, report-centered domains are created for each selected day using the GridRad compositing technique, with dates objectively determined using report thresholds defined to capture the highest-end severe weather days from each year, evenly distributed across all severe report types (tornadoes, severe hail, and severe wind). Spatiotemporal domain bounds for each event are objectively determined to encompass both the majority of reports and the time of convection initiation. Severe weather reports are matched to storms that are objectively tracked using the radar data, so the evolution of the storm cells and their severe weather production can be evaluated. We then apply storm-mode (single-cell, multicell, or mesoscale convective system) and right-moving supercell classification techniques to the dataset and revisit various questions about severe storms and their bulk characteristics posed and evaluated in past work. Additional applications of this dataset are reviewed for possible future studies.
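As one illustration of the report-to-storm matching step, a simple space-time proximity rule could look like the sketch below; the function name, thresholds, and data layout are hypothetical, and the dataset's actual matching criteria may differ.

```python
# Hedged sketch: match each severe report to the nearest tracked storm centroid
# within a distance and time window (criteria illustrative only).
import numpy as np

def match_reports_to_storms(reports, storms, max_km=10.0, max_minutes=5.0):
    """reports: list of (time_min, x_km, y_km) tuples.
    storms: dict of storm id -> (k, 3) array of (time_min, x_km, y_km) centroids.
    Returns dict of report index -> matched storm id."""
    matches = {}
    for i, (t, x, y) in enumerate(reports):
        best, best_dist = None, max_km
        for sid, track in storms.items():
            near = track[np.abs(track[:, 0] - t) <= max_minutes]  # time window
            if near.size == 0:
                continue
            d = np.hypot(near[:, 1] - x, near[:, 2] - y).min()    # closest pass
            if d <= best_dist:
                best, best_dist = sid, d
        if best is not None:
            matches[i] = best
    return matches
```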

Restricted access
Yuhei Takaya
,
Kensuke K. Komatsu
,
Hideitsu Hino
, and
Frédéric Vitart

Abstract

Probabilistic forecasting is a common activity in many fields of the Earth sciences, and assessing the quality of probabilistic forecasts (probabilistic forecast verification) is therefore an essential task. Numerous methods and metrics have been proposed for this purpose; however, the probabilistic verification of vector variables of ensemble forecasts has received comparatively little attention. Here we introduce a new approach that is applicable to verifying ensemble forecasts of continuous scalar and two-dimensional vector data. The proposed method uses a fixed-radius near-neighbors search to compute two information-based scores: the ignorance score (the logarithmic score) and the information gain, which quantifies the skill gain over a reference forecast. Basic characteristics of the proposed scores were examined using idealized Monte Carlo simulations. The results indicated that both the continuous ranked probability score (CRPS) and the proposed score with a relatively small ensemble size (<25) are not proper in terms of the forecast dispersion. The proposed verification method was successfully used to verify the Madden–Julian oscillation index, which is a two-dimensional quantity. The proposed method is expected to advance probabilistic ensemble forecasts in various fields.
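The fixed-radius near-neighbors idea can be sketched compactly: the forecast density at the verifying two-dimensional observation is approximated by the fraction of ensemble members falling within a disk of radius r, and the ignorance score is the negative log of that density. The radius choice and add-one regularization below are assumptions for illustration, not the paper's exact recipe.

```python
# Illustrative ignorance (logarithmic) score for a 2D ensemble via a
# fixed-radius near-neighbor density estimate.
import numpy as np

def ignorance_score(ensemble, obs, radius):
    """ensemble: (m, 2) array; obs: (2,) verifying vector. Returns -log p_hat."""
    m = len(ensemble)
    k = np.sum(np.hypot(*(ensemble - obs).T) <= radius)   # members inside disk
    p_hat = (k + 1) / ((m + 1) * np.pi * radius**2)       # regularized density
    return -np.log(p_hat)

# Information gain of a forecast over a reference (e.g., climatology) ensemble:
rng = np.random.default_rng(1)
obs = np.array([0.3, -0.2])
fcst = rng.normal(0.0, 0.5, (50, 2))      # sharper forecast ensemble
clim = rng.normal(0.0, 1.5, (50, 2))      # broad reference ensemble
r = 0.5
gain = ignorance_score(clim, obs, r) - ignorance_score(fcst, obs, r)
print(f"information gain: {gain:.2f} nats")   # positive when forecast beats reference
```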

Significance Statement

In the Earth sciences, stochastic future states are estimated by running a large number of forecasts (called ensemble forecasts) based on physical equations with slightly different initial conditions and stochastic parameters. The verification of probabilistic forecasts is an essential part of forecasting and modeling activity in the Earth sciences. However, there has been no information-based probabilistic verification score applicable to vector variables of ensemble forecasts. The purpose of this study is to introduce a novel method for verifying scalar and two-dimensional vector variables of ensemble forecasts. The proposed method offers a new approach to probabilistic verification and is expected to advance probabilistic ensemble forecasts in various fields.

Restricted access
Benjamin W. Green
,
Eric Sinsky
,
Shan Sun
,
Vijay Tallapragada
, and
Georg A. Grell

Abstract

NOAA has been developing a fully coupled Earth system model under the Unified Forecast System framework that will be responsible for global (ensemble) predictions at lead times of 0–35 days. The development has involved several prototype runs consisting of twice-monthly initializations over a 7-yr period, for a total of 168 cases. This study leverages these existing (baseline) prototypes to isolate the impact of substituting, one at a time, the parameterizations for convection, microphysics, and the planetary boundary layer on 35-day forecasts. Through these physics sensitivity experiments, it is found that no particular configuration of the subseasonal-length coupled model is uniformly better or worse, based on several metrics including mean-state biases and skill scores for the Madden–Julian oscillation, precipitation, and 2-m temperature. Importantly, the spatial patterns of many “first-order” biases (e.g., the impact of convection on precipitation) are remarkably similar between the end of the first week and weeks 3–4, indicating that some subseasonal biases may be mitigated through tuning at shorter time scales. This result, while shown here for the first time in the context of subseasonal prediction with different physics schemes, is consistent with findings in climate models that some mean-state biases evident in multiyear averages can manifest in only a few days. An additional convective parameterization test using a different baseline shows that attempting to generalize results between or within modeling systems may be misguided. The limitations of generalizing results when testing physics schemes are most acute in modeling systems that undergo rapid, intense development from myriad contributors, as is the case in (quasi) operational environments.

Restricted access
Isaac Arseneau
and
Brian Ancell

Abstract

Ensemble sensitivity analysis (ESA) is a numerical method by which the potential value of a single additional observation can be estimated using an ensemble numerical weather forecast. By performing ESA observation targeting on runs of the Texas Tech University WRF Ensemble from the spring of 2016, a dataset of predicted variance reductions (hereinafter referred to as target values) was obtained over approximately 6 weeks of convective forecasts for the central United States. It was then ascertained from these cases that the geographic variation in target values is large for any one case, with local maxima often several standard deviations higher than the mean and surrounded by sharp gradients. Radiosondes launched from the surface, then, would need to take this variation into account to properly sample a specific target by launching upstream of where the target is located rather than directly underneath. In many cases, the difference between the maximum target value in the vertical and the actual target value observed along the balloon path was multiple standard deviations. This may help to explain the lower-than-expected forecast improvements observed in previous ESA targeting experiments, especially the Mesoscale Predictability Experiment (MPEX). If target values are a good predictor of observation value, then it is possible that taking the balloon path into account in targeting systems for radiosonde deployment may substantially improve on the value added to the forecast by individual observations.
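For context, the standard ensemble-sensitivity targeting estimate predicts the forecast-metric variance removed by a single observation as cov(J, h)^2 / (var(h) + r), where J is the forecast response metric, h is the ensemble estimate of the observed quantity, and r is the observation error variance. A minimal sketch with synthetic ensemble data (setup illustrative):

```python
# Predicted variance reduction ("target value") for one candidate observation,
# following the standard ensemble-sensitivity targeting formula.
import numpy as np

def target_value(j_members, h_members, obs_error_var):
    """Predicted reduction in var(J) from assimilating one observation of h."""
    cov_jh = np.cov(j_members, h_members)[0, 1]           # ensemble covariance
    return cov_jh**2 / (np.var(h_members, ddof=1) + obs_error_var)

# Example with a 50-member ensemble: J = response metric, h = candidate obs.
rng = np.random.default_rng(2)
h = rng.normal(10.0, 2.0, 50)
j = 0.8 * h + rng.normal(0.0, 1.0, 50)    # J partly controlled by h
print(target_value(j, h, obs_error_var=1.0))
```

Mapping this target value over a grid of candidate observation locations yields the sharply varying geographic fields the abstract describes, which is why the balloon's drift relative to the local maximum matters.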

Restricted access
Brice E. Coffer
,
Matthew D. Parker
,
John M. Peters
, and
Andrew R. Wade

Abstract

The development and intensification of low-level mesocyclones in supercell thunderstorms have often been attributed, at least in part, to augmented streamwise vorticity generated baroclinically in the forward flank of supercells. However, the ambient streamwise vorticity of the environment (often quantified via storm-relative helicity), especially near the ground, is particularly skillful at discriminating between nontornadic and tornadic supercells. This study investigates whether the origins of the inflow air into supercell low-level mesocyclones, both horizontally and vertically, can help explain the dynamical role of environmental versus storm-generated vorticity in the development of low-level mesocyclone rotation. Simulations of supercells, initialized with wind profiles common to supercell environments observed in nature, show that the air bound for the low-level mesocyclone primarily originates from the ambient environment (rather than from along the forward flank) and from very close to the ground, often in the lowest 200–400 m of the atmosphere. Given that the near-ground environmental air comprises the bulk of the inflow into low-level mesocyclones, this likely explains the forecast skill of environmental streamwise vorticity in the lowest few hundred meters of the atmosphere. The low-level mesocyclone does not appear to require much augmentation from the development of additional horizontal vorticity in the forward flank. Instead, the dominant contributor to vertical vorticity within the low-level mesocyclone is from the environmental horizontal vorticity. This study provides further context to the ongoing discussion regarding the development of rotation within supercell low-level mesocyclones.
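Since the abstract quantifies ambient streamwise vorticity via storm-relative helicity (SRH), a compact sketch of the usual discrete hodograph formula may help; the wind profile and storm motion below are illustrative.

```python
# Storm-relative helicity (SRH) via the standard discrete hodograph formula:
# SRH = sum over layers of (u_{k+1} - cx)(v_k - cy) - (u_k - cx)(v_{k+1} - cy).
import numpy as np

def srh(u, v, cx, cy):
    """u, v: wind profile arrays (m/s) ordered upward through the layer of
    interest; (cx, cy): storm motion (m/s). Returns SRH in m^2/s^2."""
    su, sv = u - cx, v - cy                    # storm-relative winds
    return np.sum(su[1:] * sv[:-1] - su[:-1] * sv[1:])

# Example: winds veer and strengthen with height over a shallow layer.
u = np.array([5.0, 8.0, 11.0])
v = np.array([5.0, 9.0, 11.0])
print(srh(u, v, cx=12.0, cy=4.0))              # 54.0; positive for cyclonic supercells
```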

Significance Statement

Supercell thunderstorms produce the majority of tornadoes, and a defining characteristic of supercells is their rotating updraft, known as the “mesocyclone.” When the mesocyclone is stronger at lower altitudes, the likelihood of tornadoes increases. The purpose of this study is to understand whether the rotation of the mesocyclone in supercells is due to horizontal spin present in the ambient environment or whether additional horizontal spin generated by the storm itself primarily drives this rotation. Our results suggest that the air flowing into supercell low-level mesocyclones, and hence their rotation, comes mainly from the ambient environment, especially near the ground. This hopefully provides further context to how our community views the development of low-level mesocyclones in supercells.

Open access
Matthew D. Flournoy
and
Erik N. Rasmussen

Abstract

Recent studies have shown how very small differences in the background environment of a supercell can yield different outcomes, particularly in terms of tornado production. In this study, we use a novel convection initiation technique to simulate six supercells, with a focus on their early development. Each experiment is identical except for the strength of the thermal forcing used for convection initiation. Each experiment yields a mature supercell, but differences in storm-scale characteristics like updraft speed, cold pool temperature deficit, and vertical vorticity development abound. Of these, the time when the midlevel updraft strengthens is most strongly related to initiation strength, with stronger thermal forcing favoring quicker updraft development. The same is true for the low-level updraft, with the additional relationship that stronger thermal forcing also tends to yield stronger low-level updrafts for around the first 2 h of the simulations. The experiments with faster updraft development tend to be associated with more rapid surface vortex intensification; however, cold pool evolution differs between simulations with weaker versus stronger thermal forcing. Stronger thermal forcing also yields deviant rightward storm motion earlier in the supercell’s life cycle that remains more consistent for the duration of the simulation. These results highlight the range of supercellular outcomes that are possible within the same background environment due to differences in storm-scale initiation strength. They are also of potential importance for predicting the paths and tornado potential of supercells in real time.

Significance Statement

Despite a better understanding of processes related to tornado production in supercell thunderstorms, forecasters still have difficulty discriminating between tornadic and nontornadic supercells in close proximity to each other within the same severe weather event. In this study, we use six simulations of supercells to examine how these different outcomes can occur. Our results show that, given the same background environment, a storm that is more strongly initiated will exhibit faster updraft development and, possibly, quicker tornado production. The opposite can be said for storms that are more weakly initiated. Differences in initiation strength are also associated with different storm motions. These findings inspire future work to better relate supercell evolution to characteristics of initiation and the environment.

Restricted access