Search Results

You are looking at 1 - 10 of 45 items for

  • Author or Editor: Corey K. Potvin
  • Refine by Access: All Content
Corey K. Potvin

Abstract

Vortex detection algorithms are required for both research and operational applications in which data volume precludes timely subjective examination of model or analysis fields. Unfortunately, objective detection of convective vortices is often hindered by the strength and complexity of the flow in which they are embedded. To address this problem, a variational vortex-fitting algorithm previously developed to detect and characterize vortices observed by Doppler radar has been modified to operate on gridded horizontal wind data. The latter are fit to a simple analytical model of a vortex and its proximate environment, allowing the retrieval of important vortex characteristics. This permits the development of detection criteria tied directly to vortex properties (e.g., maximum tangential wind), rather than to more general kinematical properties that may poorly represent the vortex itself (e.g., vertical vorticity) when the background flow is strongly sheared. Thus, the vortex characteristic estimates provided by the technique may permit more effective detection criteria while providing useful information about vortex size, intensity, and trends therein. In tests with two simulated supercells, the technique proficiently detects and characterizes vortices, even in the presence of complex flow. Sensitivity tests suggest the algorithm would work well for a variety of vortex sizes without additional tuning. Possible applications of the technique include investigating relationships between mesocyclone and tornado characteristics, and detecting tornadoes, mesocyclones, and mesovortices in real-time ensemble forecasts.
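To make the fitting step concrete, the sketch below least-squares fits gridded winds to an idealized vortex-plus-uniform-flow model, in the spirit of the technique. The modified-Rankine profile, parameter set, and synthetic data are illustrative assumptions, not the paper's actual low-order model:

```python
import numpy as np
from scipy.optimize import least_squares

def model_winds(params, x, y):
    """Idealized vortex (Rankine profile) plus a uniform background flow.

    params: xc, yc (vortex center, m), vmax (peak tangential wind, m/s),
            rmax (radius of maximum wind, m), ub, vb (background wind, m/s)
    """
    xc, yc, vmax, rmax, ub, vb = params
    dx, dy = x - xc, y - yc
    r = np.maximum(np.hypot(dx, dy), 1e-6)  # avoid division by zero at center
    # Solid-body core inside rmax, 1/r decay outside.
    vt = np.where(r <= rmax, vmax * r / rmax, vmax * rmax / r)
    # Cyclonic tangential wind in Cartesian components, plus background flow.
    return -vt * dy / r + ub, vt * dx / r + vb

def residuals(params, x, y, u_obs, v_obs):
    """Stacked misfit between modeled and 'observed' wind components."""
    u_mod, v_mod = model_winds(params, x, y)
    return np.concatenate([(u_mod - u_obs).ravel(), (v_mod - v_obs).ravel()])

# Synthetic gridded winds from a known vortex, then recover its parameters.
x, y = np.meshgrid(np.linspace(-5e3, 5e3, 41), np.linspace(-5e3, 5e3, 41))
truth = (500.0, -300.0, 25.0, 1500.0, 8.0, 3.0)
u_obs, v_obs = model_winds(truth, x, y)

guess = (0.0, 0.0, 10.0, 2500.0, 0.0, 0.0)
fit = least_squares(residuals, guess, args=(x, y, u_obs, v_obs))
print("retrieved (xc, yc, vmax, rmax, ub, vb):", np.round(fit.x, 1))
```

Detection criteria can then be applied directly to the retrieved parameters (e.g., flag a vortex when the fitted vmax exceeds a threshold), which is the advantage the abstract describes over raw vorticity thresholds.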

Full access
Corey K. Potvin and Montgomery L. Flora

Abstract

The Warn-on-Forecast (WoF) program aims to deploy real-time, convection-allowing, ensemble data assimilation and prediction systems to improve short-term forecasts of tornadoes, flooding, lightning, damaging wind, and large hail. Until convection-resolving (horizontal grid spacing Δx < 100 m) systems become available, however, resolution errors will limit the accuracy of ensemble model output. Improved understanding of grid spacing dependence of simulated convection is therefore needed to properly calibrate and interpret ensemble output, and to optimize trade-offs between model resolution and other computationally constrained parameters like ensemble size and forecast lead time.
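The resolution trade-off has a simple back-of-the-envelope form: for an explicit model with a fixed domain and a CFL-limited time step, each spatial refinement also shrinks the time step, so, all else equal,

\[
\text{cost} \;\propto\; \left( \frac{\Delta x_{\mathrm{ref}}}{\Delta x} \right)^{4},
\qquad
\frac{3\ \text{km}}{333\ \text{m}} \approx 9
\;\Rightarrow\;
\text{cost ratio} \approx 9^{4} \approx 6.6 \times 10^{3}
\]

per ensemble member, capacity that could instead buy a larger ensemble or longer lead times. (This scaling is a standard rough estimate, not a figure from the study.)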

Toward this end, the authors examine grid spacing sensitivities of simulated supercells over Δx of 333 m–4 km. Storm environment and physics parameterization are varied among the simulations. The results suggest that 4-km grid spacing is too coarse to reliably simulate supercells, occasionally leading to premature storm demise, whereas 3-km simulations more often capture operationally important features, including low-level rotation tracks. Further decreasing Δx to 1 km enables useful forecasts of rapid changes in low-level rotation intensity, though significant errors remain (e.g., in timing).

Grid spacing dependencies vary substantially among the experiments, suggesting that accurate calibration of ensemble output requires better understanding of how storm characteristics, environment, and parameterization schemes modulate grid spacing sensitivity. Much of the sensitivity arises from poorly resolving small-scale processes that impact larger (well resolved) scales. Repeating some of the 333-m simulations with coarsened initial conditions reveals that supercell forecasts can substantially benefit from reduced grid spacing even when limited observational density precludes finescale initialization.

Full access
Corey K. Potvin and Louis J. Wicker

Abstract

Kinematical analyses of mobile radar observations are critical to advancing the understanding of supercell thunderstorms. Maximizing the accuracy of these and subsequent dynamical analyses, and appropriately characterizing the uncertainty in ensuing conclusions about storm structure and processes, requires thorough knowledge of the typical errors obtained using different retrieval techniques. This study adopts an observing system simulation experiment (OSSE) framework to explore the errors obtained from ensemble Kalman filter (EnKF) assimilation versus dual-Doppler analysis (DDA) of storm-scale mobile radar data. The radar characteristics and EnKF model errors are varied to explore a range of plausible scenarios.
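For reference, every EnKF variant implements some form of the Kalman analysis update, with the forecast error covariance estimated from ensemble perturbations (the specific filter, localization, and inflation used in the study are not assumed here):

\[
\mathbf{x}^{a} = \mathbf{x}^{f} + \mathbf{K}\left( \mathbf{y} - H\mathbf{x}^{f} \right),
\qquad
\mathbf{K} = \mathbf{P}^{f} H^{\mathsf{T}} \left( H \mathbf{P}^{f} H^{\mathsf{T}} + \mathbf{R} \right)^{-1},
\]

where H maps model winds to radial velocities, y holds the radar observations, and R is the observation error covariance. The DDA, by contrast, solves a variational problem for the wind field alone, with no flow-dependent background statistics.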

When dual-radar data are assimilated, the EnKF produces substantially better wind retrievals at higher altitudes, where DDAs are more sensitive to unaccounted flow evolution, and in data-sparse regions such as the storm inflow sector. Near the ground, however, the EnKF analyses are comparable to the DDAs when the radar cross-beam angles (CBAs) are poor, and slightly worse than the DDAs when the CBAs are optimal. In the single-radar case, the wind analyses benefit substantially from using finer grid spacing than in the dual-radar case for the objective analysis of radar observations. The analyses generally degrade when only single-radar data are assimilated, particularly when microphysical parameterization or low-level environmental wind errors are introduced. In some instances, this leads to large errors in low-level vorticity stretching and Lagrangian circulation calculations. Nevertheless, the results show that while multiradar observations of supercells are always preferable, judicious use of single-radar EnKF assimilation can yield useful analyses.

Full access
Derek R. Stratman and Corey K. Potvin

Abstract

Storm displacement errors can arise from a number of potential sources of error within a data assimilation (DA) and forecast system. In turn, storm displacement errors can cause issues for storm-scale, ensemble-based systems using an ensemble Kalman filter (EnKF), such as NSSL’s Warn-on-Forecast System (WoFS). A previous study developed a fully grid-based feature alignment technique (FAT) to mitigate these phase errors and their impacts. However, that study developed and tested the FAT for single-storm cases. This study advances that work by implementing an object-based merging and matching technique in the FAT and testing the updated FAT in more complex, multiple-storm scenarios. Ensemble-based experiments are conducted with and without the FAT for each of the scenarios. The experiments’ analyses and forecasts of storm-related fields are then evaluated using subjective and objective methods. Results from these idealized multiple-storm experiments continue to reveal the potential benefits of correcting storm displacement errors. For example, running the FAT even once can mitigate the “spinup” period experienced by the no-FAT experiments. The new results also show that running the FAT prior to every DA cycling step generally leads to more skillful forecasts at the smaller scales, especially in earlier-initialized forecasts. However, repeatedly running the FAT prior to every DA step can eventually lead to deterioration in the analyses and forecasts. Potential solutions to this problem include using longer cycling intervals and running the FAT prior to DA less often. Additional ways to improve the FAT are presented and discussed along with other results.
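A minimal sketch of the object-based matching idea follows: identify storm objects as contiguous regions of high composite reflectivity and pair model objects with nearby observed objects. The threshold, distance cap, and greedy pairing are illustrative assumptions, not the paper's merging and matching algorithm:

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def storm_objects(refl, threshold=40.0):
    """Centroids of contiguous regions where composite reflectivity (dBZ)
    exceeds a threshold."""
    mask = refl > threshold
    labeled, nobj = label(mask)
    return [center_of_mass(mask, labeled, i) for i in range(1, nobj + 1)]

def match_objects(model_cents, obs_cents, max_dist=10.0):
    """Greedily pair each model storm with its nearest unused observed
    storm within max_dist grid points; unmatched storms stay unaligned."""
    pairs, used = [], set()
    for m in model_cents:
        if not obs_cents:
            break
        dists = [np.hypot(m[0] - o[0], m[1] - o[1]) for o in obs_cents]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist and j not in used:
            pairs.append((m, obs_cents[j]))
            used.add(j)
    return pairs
```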

Significance Statement

The purpose of this work is to explore the impact of correcting storm displacements on analyses and forecasts of storms using an ensemble-based data assimilation and forecast system in an idealized framework. Storm displacement errors are a common problem in current operational and experimental storm-scale forecast systems, so understanding their impact on these systems and providing a method to help mitigate them is important. Results from this study indicate that correcting storm displacement errors with the feature alignment technique can greatly improve analyses and forecasts in multiple-storm scenarios. Future work will focus on exploring the impact of correcting storm displacement errors in a real-data, storm-scale data assimilation and forecast system.

Full access
Corey K. Potvin and Louis J. Wicker

Abstract

Under the envisioned warn-on-forecast (WoF) paradigm, ensemble model guidance will play an increasingly critical role in the tornado warning process. While computational constraints will likely preclude explicit tornado prediction in initial WoF systems, real-time forecasts of low-level mesocyclone-scale rotation appear achievable within the next decade. Given that low-level mesocyclones are significantly more likely than higher-based mesocyclones to be tornadic, intensity and trajectory forecasts of low-level supercell rotation could provide valuable guidance to tornado warning and nowcasting operations. The efficacy of such forecasts is explored using three simulated supercells having weak, moderate, or strong low-level rotation. The results suggest early WoF systems may provide useful probabilistic 30–60-min forecasts of low-level supercell rotation, even in cases of large radar–storm distances and/or narrow cross-beam angles. Given the idealized nature of the experiments, however, they are best viewed as providing an upper-limit estimate of the accuracy of early WoF systems.
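The sort of probabilistic rotation guidance described here can be distilled from ensemble output with a neighborhood exceedance calculation; the vorticity threshold and neighborhood size below are illustrative choices, not values from the study:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def rotation_probability(vort, threshold=0.01, neighborhood=5):
    """Fraction of ensemble members with low-level vertical vorticity
    exceeding `threshold` (s^-1) within a `neighborhood`-point square.

    vort: array of shape (n_members, ny, nx)
    """
    # Per-member neighborhood maximum, then exceedance frequency across members.
    local_max = maximum_filter(vort, size=(1, neighborhood, neighborhood))
    return (local_max > threshold).mean(axis=0)
```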

Full access
Alan Shapiro, Corey K. Potvin, and Jidong Gao

Abstract

The utility of the anelastic vertical vorticity equation in a weak-constraint (least squares error) variational dual-Doppler wind analysis procedure is explored. The analysis winds are obtained by minimizing a cost function accounting for the discrepancies between observed and analyzed radial winds, errors in the mass conservation equation, errors in the anelastic vertical vorticity equation, and spatial smoothness constraints. By using Taylor’s frozen-turbulence hypothesis to shift analysis winds to observation points, discrepancies between radially projected analysis winds and radial wind observations can be calculated at the actual times and locations the data are acquired. The frozen-turbulence hypothesis is also used to evaluate the local derivative term in the vorticity equation. Tests of the analysis procedure are performed with analytical pseudo-observations of an array of translating and temporally decaying counterrotating updrafts and downdrafts generated from a Beltrami flow solution of the Navier–Stokes equations. The experiments explore the value added to the analysis by the vorticity equation constraint in the common scenario of substantial missing low-level data (radial wind observations at heights beneath 1.5 km are withheld from the analysis). Experiments focus on the sensitivity of the most sensitive analysis variable—the vertical velocity component—to values of the weighting coefficients, volume scan period, number of volume scans, and errors in the estimated frozen-turbulence pattern-translation components. Although the vorticity equation constraint is found to add value to many of these analyses, the analysis can become significantly degraded if estimates of the pattern-translation components are largely in error or if the frozen-turbulence hypothesis itself breaks down. However, tests also suggest that these negative impacts can be mitigated if data are available in a rapid-scan mode.
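Schematically, the cost function described above combines four weak constraints (the weights λ and discretization here are placeholders; the paper's anelastic formulation differs in detail):

\[
J = \lambda_{O} \sum_{\mathrm{obs}} \left( v_{r}^{\mathrm{an}} - v_{r}^{\mathrm{obs}} \right)^{2}
  + \lambda_{M} \sum_{\mathrm{grid}} \left[ \nabla \cdot \left( \bar{\rho}\,\mathbf{v} \right) \right]^{2}
  + \lambda_{V} \sum_{\mathrm{grid}} \mathcal{R}_{\zeta}^{2}
  + \lambda_{S} \sum_{\mathrm{grid}} \left\| \nabla^{2} \mathbf{v} \right\|^{2},
\]

where R_ζ is the residual of the anelastic vertical vorticity equation. Taylor's hypothesis enters twice: analysis winds are shifted to observation points via v(x, t) ≈ v(x − c(t − t₀), t₀), and the local tendency is approximated as ∂ζ/∂t ≈ −c·∇ζ, with c the pattern-translation velocity.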

Full access
Corey K. Potvin, Alan Shapiro, and Ming Xue

Abstract

One of the greatest challenges to dual-Doppler retrieval of the vertical wind is the lack of low-level divergence information available to the mass conservation constraint. This study examines the impact of a vertical vorticity equation constraint on vertical velocity retrievals when radar observations are lacking near the ground. The analysis proceeds in a three-dimensional variational data assimilation (3DVAR) framework with the anelastic form of the vertical vorticity equation imposed along with traditional data, mass conservation, and smoothness constraints. The technique is tested using emulated radial wind observations of a supercell storm simulated by the Advanced Regional Prediction System (ARPS), as well as real dual-Doppler observations of a supercell storm that occurred in Oklahoma on 8 May 2003. Special attention is given to procedures to evaluate the vorticity tendency term, including spatially variable advection correction and estimation of the intrinsic evolution. Volume scan times ranging from 5 min, typical of operational radar networks, down to 30 s, achievable by rapid-scan mobile radars, are considered. The vorticity constraint substantially improves the vertical velocity retrievals in our experiments, particularly for volume scan times smaller than 2 min.
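For reference, the frictionless vertical vorticity equation whose residual supplies the weak constraint can be written generically as

\[
\mathcal{R}_{\zeta} =
\frac{\partial \zeta}{\partial t}
+ u \frac{\partial \zeta}{\partial x}
+ v \frac{\partial \zeta}{\partial y}
+ w \frac{\partial \zeta}{\partial z}
+ (\zeta + f)\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right)
- \left( \frac{\partial w}{\partial y} \frac{\partial u}{\partial z}
       - \frac{\partial w}{\partial x} \frac{\partial v}{\partial z} \right),
\]

with stretching and tilting appearing in the last two groups; the paper's anelastic version and its advection-corrected tendency estimate differ in detail. The tendency term ∂ζ/∂t must be estimated from successive volume scans, consistent with the improvement being largest for scan times under 2 min.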

Full access
Corey K. Potvin, Kimberly L. Elmore, and Steven J. Weiss

Abstract

Proximity sounding studies typically seek to optimize several trade-offs that involve somewhat arbitrary definitions of how to define a “proximity sounding.” More restrictive proximity criteria, which presumably produce results that are more characteristic of the near-storm environment, typically result in smaller sample sizes that can reduce the statistical significance of the results. Conversely, the use of broad proximity criteria will typically increase the sample size and the apparent robustness of the statistical analysis, but the sounding data may not necessarily be representative of near-storm environments, given the presence of mesoscale variability in the atmosphere. Previous investigations have used a wide range of spatial and temporal proximity criteria to analyze severe storm environments. However, the sensitivity of storm environment climatologies to the proximity definition has not yet been rigorously examined.

In this study, a very large set (∼1200) of proximity soundings associated with significant tornado reports is used to generate distributions of several parameters typically used to characterize severe weather environments. Statistical tests are used to assess the sensitivity of the parameter distributions to the proximity criteria. The results indicate that while soundings collected too far in space and time from significant tornadoes tend to be more representative of the larger-scale environment than of the storm environment, soundings collected too close to the tornado also tend to be less representative due to the convective feedback process. The storm environment itself is thus optimally sampled at an intermediate spatiotemporal range referred to here as the Goldilocks zone. Implications of these results for future proximity sounding studies are discussed.
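The distribution comparisons in such a study can be illustrated with a two-sample Kolmogorov–Smirnov test; the synthetic CAPE samples and criteria labels below are hypothetical stand-ins, not the paper's data or its specific statistical tests:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical MLCAPE samples (J kg^-1) under two proximity criteria.
cape_tight = rng.gamma(shape=2.0, scale=900.0, size=400)   # e.g., 40 km / 30 min
cape_broad = rng.gamma(shape=2.0, scale=700.0, size=800)   # e.g., 185 km / 3 h

stat, pvalue = ks_2samp(cape_tight, cape_broad)
print(f"KS statistic = {stat:.3f}, p = {pvalue:.2g}")
# A small p-value indicates the two criteria sample significantly
# different parameter distributions.
```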

Full access
Derek R. Stratman, Corey K. Potvin, and Louis J. Wicker

Abstract

A goal of Warn-on-Forecast (WoF) is to develop forecasting systems that produce accurate analyses and forecasts of severe weather to be utilized in operational warning settings. Recent WoF-related studies have indicated the need to alleviate storm displacement errors in both analyses and forecasts. A potential solution to reduce these errors is the feature alignment technique (FAT), which mitigates displacement errors between observations and model fields while satisfying constraints. This study merges the FAT with a local ensemble transform Kalman filter (LETKF) and uses observing system simulation experiments (OSSEs) to vet the FAT as a potential alleviator of forecast errors arising from storm displacement errors. An idealized truth run of a supercell on a 250-m grid is used to generate pseudoradar observations, which are assimilated onto a 2-km grid using a 50-member ensemble to produce analyses and forecasts of the supercell. The FAT uses composite reflectivity to generate a 2D field of displacement vectors that is used to align the model variables with the observations prior to each analysis cycle. The FAT is tested by displacing the initial model background fields from the observations or modifying the environmental wind profile to create a storm motion bias in the forecast cycles. The FAT–LETKF performance is evaluated and compared to that of the LETKF alone. The FAT substantially reduces errors in storm intensity, location, and structure during data assimilation and subsequent forecasts. These supercell OSSEs provide the foundation for future experiments with real data and more complex events.
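The displacement-estimation step can be caricatured as a reflectivity cross-correlation; note the paper's FAT produces a spatially varying 2D vector field under constraints, whereas this sketch recovers only a single bulk offset:

```python
import numpy as np
from scipy.signal import correlate
from scipy.ndimage import shift as nd_shift

def bulk_displacement(model_refl, obs_refl):
    """Grid-point offset (dy, dx) of the observed reflectivity pattern
    relative to the model, from the peak of the full cross-correlation."""
    a = model_refl - model_refl.mean()
    b = obs_refl - obs_refl.mean()
    corr = correlate(b, a, mode="full", method="fft")
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    return peak - (np.array(a.shape) - 1)  # lag of b relative to a

def align_field(field, dydx):
    """Shift a model field toward the observations (bilinear interpolation),
    as would be done for each model variable before the analysis step."""
    return nd_shift(field, dydx, order=1, mode="nearest")
```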

Full access
Montgomery L. Flora, Corey K. Potvin, Amy McGovern, and Shawn Handler

Abstract

With increasing interest in explaining machine learning (ML) models, this paper synthesizes many topics related to ML explainability. We distinguish explainability from interpretability, local from global explainability, and feature importance from feature relevance. We demonstrate and visualize different explanation methods, explain how to interpret them, and provide a complete Python package (scikit-explain) to allow future researchers and model developers to explore these explainability methods. The explainability methods include Shapley additive explanations (SHAP), Shapley additive global explanation (SAGE), and accumulated local effects (ALE). Our focus is primarily on Shapley-based techniques, which serve as a unifying framework for various existing methods to enhance model explainability. For example, SHAP unifies methods like local interpretable model-agnostic explanations (LIME) and tree interpreter for local explainability, while SAGE unifies the different variations of permutation importance for global explainability. We provide a short tutorial for explaining ML models using three disparate datasets: a convection-allowing model dataset for severe weather prediction, a nowcasting dataset for subfreezing road surface prediction, and satellite-based data for lightning prediction. In addition, we showcase the adverse effects that correlated features can have on the explainability of a model. Finally, we demonstrate the notion of evaluating model impacts of feature groups instead of individual features. Evaluating feature groups mitigates the impacts of feature correlations and can provide a more holistic understanding of the model. All code, models, and data used in this study are freely available to accelerate the adoption of machine learning explainability in the atmospheric and other environmental sciences.
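For a flavor of the local-to-global workflow, the sketch below uses the underlying shap library directly rather than the paper's scikit-explain package, with a synthetic model and data:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                              # four synthetic predictors
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=500)   # only two matter

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles (local
# explanations: one additive attribution per feature per example).
shap_values = shap.TreeExplainer(model).shap_values(X)

# Mean |SHAP| per feature: a common global importance summary assembled
# from the local attributions.
print(np.abs(shap_values).mean(axis=0))
```

Grouped evaluation, as the abstract recommends, would sum attributions over correlated feature groups rather than reading each column separately.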

Open access