Search Results

You are looking at 1–10 of 35 items for Author or Editor: Travis Smith.
Valliappa Lakshmanan and Travis Smith

Abstract

Although storm-tracking algorithms are a key ingredient of nowcasting systems, their evaluation has been indirect, labor intensive, or nonspecific. A set of easily computable bulk statistics that can be used to directly evaluate the performance of tracking algorithms on specific characteristics is introduced. These statistics are used to evaluate five widely used storm-tracking algorithms on a diverse set of radar reflectivity data cases. Based on this objective evaluation, a storm-tracking algorithm is devised that performs consistently and outperforms the previously suggested techniques.
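The abstract does not list the bulk statistics themselves; the sketch below illustrates, under assumed inputs, two statistics of the kind that could be computed for a set of tracks (mean track duration and root-mean-square deviation from a constant-velocity fit). The track format, the statistic choices, and the sample values are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def track_bulk_statistics(tracks):
    """Illustrative bulk statistics for a set of storm tracks.

    Each track is a list of (time_s, x_km, y_km) centroid observations.
    Returns the mean track duration and the mean RMS deviation of the
    centroids from a constant-velocity (straight-line) fit; longer and
    more linear tracks suggest better tracking.
    """
    durations, linearity_errors = [], []
    for track in tracks:
        t = np.array([p[0] for p in track], dtype=float)
        xy = np.array([[p[1], p[2]] for p in track], dtype=float)
        if t.size < 3:
            continue
        durations.append(t[-1] - t[0])
        # Least-squares constant-velocity fit: position = a + b * time.
        A = np.column_stack([np.ones_like(t), t])
        coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)
        residuals = xy - A @ coeffs
        linearity_errors.append(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))
    return float(np.mean(durations)), float(np.mean(linearity_errors))

# Two hypothetical tracks with centroids reported every 300 s.
tracks = [
    [(0, 10.0, 5.0), (300, 12.1, 6.0), (600, 14.0, 7.1), (900, 16.2, 8.0)],
    [(0, 40.0, 20.0), (300, 41.8, 21.1), (600, 44.1, 22.0)],
]
mean_duration, mean_error = track_bulk_statistics(tracks)
print(f"mean duration: {mean_duration:.0f} s, mean linearity error: {mean_error:.2f} km")
```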

Full access
Valliappa Lakshmanan and Travis Smith

Abstract

A technique to identify storms and capture scalar features within the geographic and temporal extent of the identified storms is described. The identification technique relies on clustering grid points in an observation field to find self-similar and spatially coherent clusters that meet the traditional understanding of what storms are. From these storms, geometric, spatial, and temporal features can be extracted. These scalar features can then be data mined to answer many types of research questions in an objective, data-driven manner. This is illustrated by using the technique to answer questions of forecaster skill and lightning predictability.
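As a rough illustration of the identification step, the sketch below thresholds a reflectivity grid, groups contiguous grid points into objects, and extracts a few scalar features per object. The threshold, minimum size, and feature set are assumptions for illustration; the paper's clustering is more sophisticated than simple connected components.

```python
import numpy as np
from scipy import ndimage

def identify_storms(reflectivity_dbz, threshold_dbz=35.0, min_pixels=10):
    """Group contiguous grid points above a reflectivity threshold into
    objects and extract simple scalar features from each object."""
    labels, n_clusters = ndimage.label(reflectivity_dbz >= threshold_dbz)
    storms = []
    for storm_id in range(1, n_clusters + 1):
        pixels = labels == storm_id
        if pixels.sum() < min_pixels:          # discard tiny, noisy clusters
            continue
        rows, cols = np.nonzero(pixels)
        storms.append({
            "area_pixels": int(pixels.sum()),
            "centroid_row": float(rows.mean()),
            "centroid_col": float(cols.mean()),
            "max_dbz": float(reflectivity_dbz[pixels].max()),
            "mean_dbz": float(reflectivity_dbz[pixels].mean()),
        })
    return storms

# Synthetic reflectivity grid with one embedded 50-dBZ object.
grid = np.random.default_rng(0).uniform(0.0, 30.0, size=(200, 200))
grid[50:80, 60:90] = 50.0
print(identify_storms(grid))
```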

Full access
Valliappa Lakshmanan, Madison Miller, and Travis Smith

Abstract

Accumulating gridded fields over time greatly magnifies the impact of impulse noise in the individual grids. A quality control method that takes advantage of spatial and temporal coherence can reduce the impact of such noise in accumulation grids. Such a method can be implemented using the image processing techniques of hysteresis and multiple hypothesis tracking (MHT). These steps are described in this paper, and the method is applied to simulated data to quantify the improvements and to explain the effect of various parameters. Finally, the quality control technique is applied to some illustrative real-world datasets.
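A minimal sketch of the hysteresis step, assuming a single 2D field and illustrative thresholds: regions exceeding a low threshold are kept only if they also contain values above a high threshold, which removes isolated impulse noise. The multiple-hypothesis-tracking stage is not shown.

```python
import numpy as np
from scipy import ndimage

def hysteresis_filter(field, low_thresh, high_thresh):
    """Keep regions exceeding the low threshold only if they also contain at
    least one value above the high threshold; isolated weak (impulse-noise)
    pixels are removed. The temporal MHT step is not shown here."""
    regions, n_regions = ndimage.label(field >= low_thresh)
    keep = np.zeros(field.shape, dtype=bool)
    for region_id in range(1, n_regions + 1):
        pixels = regions == region_id
        if field[pixels].max() >= high_thresh:
            keep |= pixels
    return np.where(keep, field, 0.0)

# A coherent echo with a strong core survives; an isolated weak pixel is removed.
grid = np.zeros((100, 100))
grid[40:60, 40:60] = 45.0     # coherent echo
grid[10, 10] = 20.0           # impulse noise above the low threshold only
cleaned = hysteresis_filter(grid, low_thresh=15.0, high_thresh=35.0)
print(cleaned[10, 10], cleaned[50, 50])   # 0.0 45.0
```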

Full access
Ryan Lagerquist, Amy McGovern, and Travis Smith

Abstract

Thunderstorms in the United States cause over 100 deaths and $10 billion (U.S. dollars) in damage per year, much of which is attributable to straight-line (nontornadic) wind. This paper describes a machine-learning system that forecasts the probability of damaging straight-line wind (≥50 kt or 25.7 m s⁻¹) for each storm cell in the continental United States, at distances up to 10 km outside the storm cell and lead times up to 90 min. Predictors are based on radar scans of the storm cell, storm motion, storm shape, and soundings of the near-storm environment. Verification data come from weather stations and quality-controlled storm reports. The system performs very well on independent testing data. The area under the receiver operating characteristic (ROC) curve ranges from 0.88 to 0.95, the critical success index (CSI) ranges from 0.27 to 0.91, and the Brier skill score (BSS) ranges from 0.19 to 0.65 (>0 is better than climatology). For all three scores, the best value occurs for the smallest distance (inside storm cell) and/or lead time (0–15 min), while the worst value occurs for the greatest distance (5–10 km outside storm cell) and/or lead time (60–90 min). The system was deployed during the 2017 Hazardous Weather Testbed.
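The sketch below shows how the three quoted verification scores (ROC area, CSI, BSS) can be computed from forecast probabilities and binary observations. The probability threshold used for CSI and the synthetic data are assumptions; this is the verification arithmetic only, not the forecasting system itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def verification_scores(obs, prob, prob_threshold=0.5):
    """Area under the ROC curve, critical success index, and Brier skill score
    for binary observations and forecast probabilities."""
    obs = np.asarray(obs, dtype=float)
    prob = np.asarray(prob, dtype=float)

    auc = roc_auc_score(obs, prob)

    # CSI = hits / (hits + misses + false alarms), after thresholding probabilities.
    forecast = prob >= prob_threshold
    hits = np.sum(forecast & (obs == 1))
    misses = np.sum(~forecast & (obs == 1))
    false_alarms = np.sum(forecast & (obs == 0))
    csi = hits / (hits + misses + false_alarms)

    # BSS compares the Brier score to a climatological (event-frequency) forecast.
    brier = np.mean((prob - obs) ** 2)
    brier_climatology = np.mean((obs.mean() - obs) ** 2)
    bss = 1.0 - brier / brier_climatology

    return auc, csi, bss

# Synthetic labels (1 = damaging wind observed) and forecast probabilities.
rng = np.random.default_rng(1)
obs = rng.integers(0, 2, size=1000)
prob = np.clip(0.7 * obs + 0.3 * rng.random(1000), 0.0, 1.0)
print(verification_scores(obs, prob))
```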

Full access
Amy McGovern, Christopher D. Karstens, Travis Smith, and Ryan Lagerquist

Abstract

Real-time prediction of storm longevity is a critical challenge for National Weather Service (NWS) forecasters. These predictions can guide forecasters when they issue warnings and implicitly inform them about the potential severity of a storm. This paper presents a machine-learning (ML) system that was used for real-time prediction of storm longevity in the Probabilistic Hazard Information (PHI) tool, making it a Research-to-Operations (R2O) project. Currently, PHI provides forecasters with real-time storm variables and severity predictions from the ProbSevere system, but these predictions do not include storm longevity. We specifically designed our system to be tested in PHI during the 2016 and 2017 Hazardous Weather Testbed (HWT) experiments, which provide a quasi-operational, naturalistic environment. We considered three ML methods that have proven in prior work to be strong predictors for many weather prediction tasks: elastic nets, random forests, and gradient-boosted regression trees. We present experiments comparing the three ML methods with different types of input data, discuss trade-offs between forecast quality and requirements for real-time deployment, and present both subjective (human-based) and objective evaluation of real-time deployment in the HWT. Results demonstrate that the ML system has lower error than human forecasters, which suggests that it could be used to guide future storm-based warnings, enabling forecasters to focus on other aspects of the warning system.
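A minimal sketch of the model comparison, using scikit-learn implementations of the three named methods on synthetic data; the hyperparameters, features, and error metric are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the storm-longevity problem: each row is one storm
# described by scalar predictors; the target is its remaining lifetime (min).
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

models = {
    "elastic net": ElasticNet(alpha=1.0, l1_ratio=0.5),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gradient-boosted trees": GradientBoostingRegressor(n_estimators=200, random_state=0),
}

for name, model in models.items():
    # Mean absolute error averaged over five cross-validation folds.
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.1f}")
```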

Open access
G. Eli Jergensen, Amy McGovern, Ryan Lagerquist, and Travis Smith

Abstract

We demonstrate that machine learning (ML) can skillfully classify thunderstorms into three categories: supercell, part of a quasi-linear convective system, or disorganized. These classifications are based on radar data and environmental information obtained through a proximity sounding. We compare the performance of five ML algorithms: logistic regression with the elastic-net penalty, random forests, gradient-boosted forests, and support-vector machines with both a linear and nonlinear kernel. The gradient-boosted forest performs best, with an accuracy of 0.77 ± 0.02 and a Peirce score of 0.58 ± 0.04. The linear support-vector machine performs second best, with values of 0.70 ± 0.02 and 0.55 ± 0.05, respectively. We use two interpretation methods, permutation importance and sequential forward selection, to determine the most important predictors for the ML models. We also use partial-dependence plots to determine how these predictors influence the outcome. A main conclusion is that shape predictors, based on the outline of the storm, appear to be highly important across ML models. The training data, a storm-centered radar scan and modeled proximity sounding, are similar to real-time data. Thus, the models could be used operationally to aid human decision-making by reducing the cognitive load involved in manual storm-mode identification. Also, they could be run on historical data to perform climatological analyses, which could be valuable to both the research and operational communities.
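The sketch below illustrates, on synthetic three-class data, a gradient-boosted classifier evaluated with accuracy and a multicategory Peirce (Hanssen-Kuipers) skill score, followed by permutation importance. The data, model settings, and feature count are assumptions, not the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

def peirce_skill_score(y_true, y_pred):
    """Multicategory Peirce (Hanssen-Kuipers) skill score from the contingency table."""
    table = confusion_matrix(y_true, y_pred).astype(float)
    table /= table.sum()
    observed = table.sum(axis=1)    # observed class frequencies (rows)
    forecast = table.sum(axis=0)    # forecast class frequencies (columns)
    return (np.trace(table) - np.sum(observed * forecast)) / (1.0 - np.sum(observed ** 2))

# Synthetic stand-in for the three storm modes (supercell, QLCS, disorganized).
X, y = make_classification(n_samples=1500, n_features=15, n_informative=8,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
y_pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("Peirce score:", peirce_skill_score(y_test, y_pred))

# Permutation importance: how much does shuffling each predictor hurt the score?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("most important predictor index:", int(np.argmax(result.importances_mean)))
```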

Open access
Travis M. Smith, Kimberly L. Elmore, and Shannon A. Dulin

Abstract

The problem of using the Weather Surveillance Radar-1988 Doppler (WSR-88D) to predict the onset of damaging downburst winds from high-reflectivity storm cells that develop in environments of weak vertical shear is examined. Ninety-one storm cells that produced damaging outflows are analyzed with data from the WSR-88D network, along with 1247 nonsevere storm cells that developed in the same environments. Twenty-six reflectivity- and radial velocity–based parameters are calculated for each cell, and a linear discriminant analysis is performed on 65% of the dataset in order to develop prediction equations that discriminate between severe downburst-producing cells and cells that did not produce a strong outflow. These prediction equations are evaluated on the remaining 35% of the dataset. The datasets are resampled 100 times to determine the range of possible results. The resulting automated algorithm has a median Heidke skill score (HSS) of 0.40 in the 20–45-km range with a median lead time of 5.5 min, and a median HSS of 0.17 in the 45–80-km range with a median lead time of 0 min. As these lead times are medians of the mean lead times calculated from a large, resampled dataset, many of the storm cells in the dataset had longer lead times than the reported median lead times.
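A sketch of the evaluation framework described here, assuming synthetic stand-in data: linear discriminant analysis trained on 65% of the cells, a Heidke skill score computed on the remaining 35%, and the split resampled 100 times. The data values, class balance, and split seeds are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def heidke_skill_score(y_true, y_pred):
    """Heidke skill score from a 2x2 contingency table."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    numerator = 2.0 * (tp * tn - fp * fn)
    denominator = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)
    return numerator / denominator

# Synthetic stand-in: 26 parameters per storm cell, a small fraction severe.
rng = np.random.default_rng(0)
X = rng.normal(size=(1338, 26))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=1338) > 2.5).astype(int)

scores = []
for seed in range(100):                         # resample the 65/35 split 100 times
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.65, stratify=y, random_state=seed)
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    scores.append(heidke_skill_score(y_test, lda.predict(X_test)))

print("median HSS:", np.median(scores))
```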

Full access
Valliappa Lakshmanan, Angela Fritz, Travis Smith, Kurt Hondl, and Gregory Stumpf

Abstract

Echoes in radar reflectivity data do not always correspond to precipitating particles. Echoes on radar may result from biological targets such as insects, birds, or wind-borne particles; from anomalous propagation or ground clutter; or from test and interference patterns that inadvertently seep into the final products. Although weather forecasters can usually identify and account for the presence of such contamination, automated weather-radar algorithms are drastically affected. Several horizontal and vertical features have been proposed to discriminate between precipitation echoes and echoes that do not correspond to precipitation. None of these features by themselves can discriminate between precipitating and nonprecipitating areas. In this paper, a neural network is used to combine the individual features, some of which have already been proposed in the literature and some of which are introduced in this paper, into a single discriminator that can distinguish between “good” and “bad” echoes (i.e., precipitation and nonprecipitation, respectively). The method of computing the horizontal features leads to statistical anomalies in their distributions near the edges of echoes. Also described is how to avoid presenting such range gates to the neural network. The gate-by-gate discrimination provided by the neural network is followed by more holistic postprocessing based on spatial contiguity constraints and object identification to yield quality-controlled radar reflectivity scans that have most of the bad echo removed while leaving most of the good echo untouched. A possible multisensor extension, utilizing satellite data and surface observations, to the radar-only technique is also demonstrated. It is demonstrated that the resulting technique is highly skilled and that its skill exceeds that of the currently operational algorithm.
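As a simplified illustration of the gate-by-gate step, the sketch below trains a small neural network to combine per-gate features into a single good/bad-echo discriminator. The features, labels, and network size are assumptions; the spatially aware postprocessing is only noted in a comment.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: each row is one range gate described by a few horizontal
# and vertical features; the label is 1 for precipitation ("good" echo) and
# 0 for non-precipitation ("bad" echo).
rng = np.random.default_rng(0)
n_gates = 5000
features = rng.normal(size=(n_gates, 6))
labels = (features[:, 0] + features[:, 3] + rng.normal(scale=0.8, size=n_gates) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

# One small hidden layer combines the individual features into a single discriminator.
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
clf.fit(X_train, y_train)
print("gate-by-gate accuracy:", clf.score(X_test, y_test))

# The paper follows this gate-by-gate step with spatially aware postprocessing
# (contiguity constraints and object identification), which is not shown here.
```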

Full access
Madison L. Miller, Valliappa Lakshmanan, and Travis M. Smith

Abstract

The location and intensity of mesocyclone circulations can be tracked in real time by accumulating azimuthal shear values over time at every location of a uniform spatial grid. Azimuthal shear at low levels (0–3 km AGL) and midlevels (3–6 km AGL) of the atmosphere is computed in a noise-tolerant manner by fitting the Doppler velocity observations in the neighborhood of a pulse volume to a plane and finding the slope of that plane. Rotation tracks created in this manner are contaminated by nonmeteorological signatures caused by poor velocity dealiasing, ground clutter, radar test patterns, and spurious shear values. To improve the quality of these fields for real-time use and for an accumulated multiyear climatology, new dealiasing strategies, data thresholding, and multiple hypothesis tracking (MHT) techniques have been implemented. These techniques remove nearly all nonmeteorological contaminants, resulting in much clearer rotation tracks that appear to match mesocyclone paths and intensities closely.
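A minimal sketch of the plane-fit idea, assuming a small Cartesian neighborhood of Doppler velocities with uniform spacing: a least-squares plane is fitted and its azimuthal-direction slope is taken as the shear estimate. The grid geometry, spacing, and variable names are illustrative; the operational computation works in radar (polar) coordinates with appropriate weighting.

```python
import numpy as np

def local_shear(velocity_patch, azimuthal_spacing_km=0.25, radial_spacing_km=0.25):
    """Fit a plane v = a + b*x + c*y to the Doppler velocities in a small
    neighborhood by linear least squares; x runs along the azimuthal direction
    (columns) and y along the radial direction (rows), so b approximates the
    azimuthal shear (here in m/s per km). NaNs are treated as missing gates."""
    n_rows, n_cols = velocity_patch.shape
    y, x = np.meshgrid(np.arange(n_rows) * radial_spacing_km,
                       np.arange(n_cols) * azimuthal_spacing_km, indexing="ij")
    valid = np.isfinite(velocity_patch)
    A = np.column_stack([np.ones(valid.sum()), x[valid], y[valid]])
    coeffs, *_ = np.linalg.lstsq(A, velocity_patch[valid], rcond=None)
    _, azimuthal_slope, radial_slope = coeffs
    return azimuthal_slope, radial_slope

# A 5x5 neighborhood whose velocity increases only along the azimuthal direction.
patch = np.tile(np.arange(5, dtype=float) * 0.5, (5, 1))   # m/s
print(local_shear(patch))   # ~ (2.0, 0.0): 0.5 m/s per 0.25-km azimuthal step
```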

Full access
Valliappa Lakshmanan, Travis Smith, Gregory Stumpf, and Kurt Hondl

Abstract

The Warning Decision Support System–Integrated Information (WDSS-II) is the second generation of a system of tools for the analysis, diagnosis, and visualization of remotely sensed weather data. WDSS-II provides a number of automated algorithms that operate on data from multiple radars to provide information with a greater temporal resolution and better spatial coverage than their currently operational counterparts. The individual automated algorithms that have been developed using the WDSS-II infrastructure together yield a forecasting and analysis system providing real-time products useful in severe weather nowcasting. The purposes of the individual algorithms and their relationships to each other are described, as is the method of dissemination of the created products.

Full access