Search Results

You are looking at 1 - 10 of 35 items for Author or Editor: Corey K. Potvin
Corey K. Potvin

Abstract

Vortex detection algorithms are required for both research and operational applications in which data volume precludes timely subjective examination of model or analysis fields. Unfortunately, objective detection of convective vortices is often hindered by the strength and complexity of the flow in which they are embedded. To address this problem, a variational vortex-fitting algorithm previously developed to detect and characterize vortices observed by Doppler radar has been modified to operate on gridded horizontal wind data. The latter are fit to a simple analytical model of a vortex and its proximate environment, allowing the retrieval of important vortex characteristics. This permits the development of detection criteria tied directly to vortex properties (e.g., maximum tangential wind), rather than to more general kinematical properties that may poorly represent the vortex itself (e.g., vertical vorticity) when the background flow is strongly sheared. Thus, the vortex characteristic estimates provided by the technique may permit more effective detection criteria while providing useful information about vortex size, intensity, and trends therein. In tests with two simulated supercells, the technique proficiently detects and characterizes vortices, even in the presence of complex flow. Sensitivity tests suggest the algorithm would work well for a variety of vortex sizes without additional tuning. Possible applications of the technique include investigating relationships between mesocyclone and tornado characteristics, and detecting tornadoes, mesocyclones, and mesovortices in real-time ensemble forecasts.
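The fitting step described above can be illustrated with a minimal sketch. The abstract does not specify the analytical vortex model, so this example assumes a Rankine vortex plus a uniform background flow, and treats the vortex center and radius as already known (in the actual algorithm they would be retrieved as part of the fit); the remaining parameters (background wind `u0`, `v0` and maximum tangential wind `Vt`) then enter linearly and can be recovered by ordinary least squares:

```python
import numpy as np

def rankine_design_matrix(x, y, xc, yc, R):
    """Columns: [u0, v0, Vt] contributions to the stacked (u, v) winds.

    Rankine tangential profile: V(r) = Vt * min(r/R, R/r), with
    u = u0 - V(r) * (y - yc) / r  and  v = v0 + V(r) * (x - xc) / r.
    """
    dx, dy = x - xc, y - yc
    r = np.hypot(dx, dy) + 1e-12          # guard against division by zero at the center
    g = np.minimum(r / R, R / r)          # V(r) / Vt
    n = x.size
    A = np.zeros((2 * n, 3))
    A[:n, 0] = 1.0                        # u0 acts on u only
    A[n:, 1] = 1.0                        # v0 acts on v only
    A[:n, 2] = -g * dy / r                # Vt contribution to u
    A[n:, 2] = g * dx / r                 # Vt contribution to v
    return A

# Synthetic "gridded horizontal wind data": a vortex embedded in uniform flow.
xx, yy = np.meshgrid(np.linspace(-5e3, 5e3, 41), np.linspace(-5e3, 5e3, 41))
x, y = xx.ravel(), yy.ravel()
truth = np.array([5.0, -3.0, 20.0])       # u0, v0, Vt (m/s)
A = rankine_design_matrix(x, y, xc=100.0, yc=-200.0, R=1.5e3)
rng = np.random.default_rng(0)
obs = A @ truth + rng.normal(0.0, 0.5, size=2 * x.size)  # noisy wind observations

params, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(params)   # close to [5, -3, 20]
```

With the center and radius also unknown, the same residual would be handed to a nonlinear optimizer; the linear solve shown here is only the inner step of such a fit.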

Full access
Corey K. Potvin and Louis J. Wicker

Abstract

Under the envisioned warn-on-forecast (WoF) paradigm, ensemble model guidance will play an increasingly critical role in the tornado warning process. While computational constraints will likely preclude explicit tornado prediction in initial WoF systems, real-time forecasts of low-level mesocyclone-scale rotation appear achievable within the next decade. Given that low-level mesocyclones are significantly more likely than higher-based mesocyclones to be tornadic, intensity and trajectory forecasts of low-level supercell rotation could provide valuable guidance to tornado warning and nowcasting operations. The efficacy of such forecasts is explored using three simulated supercells having weak, moderate, or strong low-level rotation. The results suggest early WoF systems may provide useful probabilistic 30–60-min forecasts of low-level supercell rotation, even in cases of large radar–storm distances and/or narrow cross-beam angles. Given the idealized nature of the experiments, however, they are best viewed as providing an upper-limit estimate of the accuracy of early WoF systems.

Full access
Corey K. Potvin, Alan Shapiro, and Ming Xue

Abstract

One of the greatest challenges to dual-Doppler retrieval of the vertical wind is the lack of low-level divergence information available to the mass conservation constraint. This study examines the impact of a vertical vorticity equation constraint on vertical velocity retrievals when radar observations are lacking near the ground. The analysis proceeds in a three-dimensional variational data assimilation (3DVAR) framework with the anelastic form of the vertical vorticity equation imposed along with traditional data, mass conservation, and smoothness constraints. The technique is tested using emulated radial wind observations of a supercell storm simulated by the Advanced Regional Prediction System (ARPS), as well as real dual-Doppler observations of a supercell storm that occurred in Oklahoma on 8 May 2003. Special attention is given to procedures to evaluate the vorticity tendency term, including spatially variable advection correction and estimation of the intrinsic evolution. Volume scan times ranging from 5 min, typical of operational radar networks, down to 30 s, achievable by rapid-scan mobile radars, are considered. The vorticity constraint substantially improves the vertical velocity retrievals in our experiments, particularly for volume scan times shorter than 2 min.
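Why missing low-level divergence matters can be illustrated with a one-column sketch of the anelastic mass conservation constraint, d(ρw)/dz = −ρ∇h·vh, integrated upward from w(0) = 0. The density and divergence profiles below are hypothetical, not from the paper:

```python
import numpy as np

def w_from_divergence(div_h, rho, z):
    """Integrate anelastic continuity d(rho*w)/dz = -rho*div_h upward
    from w(0) = 0, using the trapezoidal rule."""
    f = -rho * div_h
    flux = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(z))))
    return flux / rho          # w(z) = (rho*w) / rho

z = np.linspace(0.0, 10e3, 101)                 # height (m)
rho = 1.2 * np.exp(-z / 8.5e3)                  # hypothetical base-state density
div_h = -2e-3 * np.exp(-z / 2e3)                # hypothetical low-level convergence (s^-1)

w_full = w_from_divergence(div_h, rho, z)
# "Missing" low-level data: no divergence information below 1.5 km
w_gap = w_from_divergence(np.where(z < 1.5e3, 0.0, div_h), rho, z)

print(w_full[-1], w_gap[-1])   # w aloft is strongly underestimated with the gap
```

Zeroing the (convergent) lowest 1.5 km removes much of the upward mass flux, which is exactly the low-level data gap that the vorticity-equation constraint is intended to offset.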

Full access
Alan Shapiro, Corey K. Potvin, and Jidong Gao

Abstract

The utility of the anelastic vertical vorticity equation in a weak-constraint (least squares error) variational dual-Doppler wind analysis procedure is explored. The analysis winds are obtained by minimizing a cost function accounting for the discrepancies between observed and analyzed radial winds, errors in the mass conservation equation, errors in the anelastic vertical vorticity equation, and spatial smoothness constraints. By using Taylor’s frozen-turbulence hypothesis to shift analysis winds to observation points, discrepancies between radially projected analysis winds and radial wind observations can be calculated at the actual times and locations the data are acquired. The frozen-turbulence hypothesis is also used to evaluate the local derivative term in the vorticity equation. Tests of the analysis procedure are performed with analytical pseudo-observations of an array of translating and temporally decaying counterrotating updrafts and downdrafts generated from a Beltrami flow solution of the Navier–Stokes equations. The experiments explore the value added to the analysis by the vorticity equation constraint in the common scenario of substantial missing low-level data (radial wind observations at heights beneath 1.5 km are withheld from the analysis). Experiments focus on the sensitivity of the vertical velocity component, the most sensitive analysis variable, to values of the weighting coefficients, volume scan period, number of volume scans, and errors in the estimated frozen-turbulence pattern-translation components. Although the vorticity equation constraint is found to add value to many of these analyses, the analysis can become significantly degraded if estimates of the pattern-translation components are largely in error or if the frozen-turbulence hypothesis itself breaks down. However, tests also suggest that these negative impacts can be mitigated if data are available in a rapid-scan mode.
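Analytic pseudo-observations of this kind can be generated from any exact Beltrami solution of the Navier–Stokes equations. The paper's specific translating, decaying counterrotating-cell solution is not reproduced below; instead this sketch uses the generic ABC flow, whose vorticity equals its velocity (curl v = v, i.e. λ = 1) and which decays viscously as e^(−νt), and verifies both defining properties by periodic finite differences:

```python
import numpy as np

N = 64
s = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
x, y, z = np.meshgrid(s, s, s, indexing="ij")
A, B, C = 1.0, 0.7, 0.4
nu, t = 0.1, 2.0
decay = np.exp(-nu * t)        # viscous decay of the lambda = 1 Beltrami mode

u = decay * (A * np.sin(z) + C * np.cos(y))
v = decay * (B * np.sin(x) + A * np.cos(z))
w = decay * (C * np.sin(y) + B * np.cos(x))

def ddx(f, axis, d):
    """Second-order central difference on a periodic grid."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * d)

d = s[1] - s[0]
# Beltrami property: vorticity = velocity (lambda = 1 for the ABC flow)
xi   = ddx(w, 1, d) - ddx(v, 2, d)
eta  = ddx(u, 2, d) - ddx(w, 0, d)
zeta = ddx(v, 0, d) - ddx(u, 1, d)
div  = ddx(u, 0, d) + ddx(v, 1, d) + ddx(w, 2, d)

print(np.max(np.abs(xi - u)), np.max(np.abs(div)))  # both small (FD truncation error)
```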

Full access
Derek R. Stratman and Corey K. Potvin

Abstract

Storm displacement errors can arise from a number of potential sources of error within a data assimilation (DA) and forecast system. In turn, storm displacement errors can degrade storm-scale, ensemble-based systems that use an ensemble Kalman filter (EnKF), such as NSSL’s Warn-on-Forecast System (WoFS). A previous study developed a fully grid-based feature alignment technique (FAT) to mitigate these phase errors and their impacts. However, that study developed and tested the FAT for single-storm cases. This study advances that work by implementing an object-based merging and matching technique into the FAT and by testing the updated FAT in more complex, multiple-storm scenarios. Ensemble-based experiments are conducted with and without the FAT for each of the scenarios. The experiments’ analyses and forecasts of storm-related fields are then evaluated using subjective and objective methods. Results from these idealized multiple-storm experiments continue to reveal the potential benefits of correcting storm displacement errors. For example, running the FAT even once can mitigate the “spin up” period experienced by the no-FAT experiments. The new results also show that running the FAT prior to every DA cycling step generally leads to more skillful forecasts at the smaller scales, especially in earlier-initialized forecasts. However, repeatedly running the FAT prior to every DA step can eventually lead to deterioration in analyses and forecasts. Potential solutions to this problem include using longer cycling intervals and running the FAT prior to DA less often. Additional ways to improve the FAT, along with other results, are presented and discussed.

Restricted access
Corey K. Potvin and Louis J. Wicker

Abstract

Kinematical analyses of mobile radar observations are critical to advancing the understanding of supercell thunderstorms. Maximizing the accuracy of these and subsequent dynamical analyses, and appropriately characterizing the uncertainty in ensuing conclusions about storm structure and processes, requires thorough knowledge of the typical errors obtained using different retrieval techniques. This study adopts an observing system simulation experiment (OSSE) framework to explore the errors obtained from ensemble Kalman filter (EnKF) assimilation versus dual-Doppler analysis (DDA) of storm-scale mobile radar data. The radar characteristics and EnKF model errors are varied to explore a range of plausible scenarios.

When dual-radar data are assimilated, the EnKF produces substantially better wind retrievals at higher altitudes, where DDAs are more sensitive to unaccounted flow evolution, and in data-sparse regions such as the storm inflow sector. Near the ground, however, the EnKF analyses are comparable to the DDAs when the radar cross-beam angles (CBAs) are poor, and slightly worse than the DDAs when the CBAs are optimal. In the single-radar case, the wind analyses benefit substantially from using finer grid spacing for the objective analysis of radar observations than is needed in the dual-radar case. The analyses generally degrade when only single-radar data are assimilated, particularly when microphysical parameterization or low-level environmental wind errors are introduced. In some instances, this leads to large errors in low-level vorticity stretching and Lagrangian circulation calculations. Nevertheless, the results show that while multiradar observations of supercells are always preferable, judicious use of single-radar EnKF assimilation can yield useful analyses.
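The EnKF update at the heart of these experiments can be reduced to a textbook sketch. This is not the study's assimilation system; it is a perturbed-observation EnKF analysis step for a two-variable state with one variable observed, showing how the ensemble covariance spreads the observation increment to the unobserved variable:

```python
import numpy as np

rng = np.random.default_rng(42)

# Background ensemble: 2 state variables (e.g., u and v at a point), 50 members.
n_ens = 50
truth = np.array([10.0, -4.0])
Xb = truth[:, None] + rng.multivariate_normal(
    [0.0, 0.0], [[4.0, 2.0], [2.0, 3.0]], size=n_ens).T

H = np.array([[1.0, 0.0]])         # observe the first variable only
r_var = 1.0                         # observation-error variance
y = truth[0] + rng.normal(0.0, np.sqrt(r_var))

# Ensemble-estimated background covariance and Kalman gain
Pb = np.cov(Xb)
K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + r_var)

# Perturbed-observation update of each member
y_pert = y + rng.normal(0.0, np.sqrt(r_var), size=n_ens)
Xa = Xb + K @ (y_pert[None, :] - H @ Xb)

print(Xb.mean(axis=1), Xa.mean(axis=1))  # analysis mean pulled toward the observation
```

Because the gain `K` carries the background cross covariance between the two variables, observing only the first variable still updates the second; this is the mechanism by which single-radar radial winds constrain unobserved wind components.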

Full access
Corey K. Potvin and Montgomery L. Flora

Abstract

The Warn-on-Forecast (WoF) program aims to deploy real-time, convection-allowing, ensemble data assimilation and prediction systems to improve short-term forecasts of tornadoes, flooding, lightning, damaging wind, and large hail. Until convection-resolving (horizontal grid spacing Δx < 100 m) systems become available, however, resolution errors will limit the accuracy of ensemble model output. Improved understanding of grid spacing dependence of simulated convection is therefore needed to properly calibrate and interpret ensemble output, and to optimize trade-offs between model resolution and other computationally constrained parameters like ensemble size and forecast lead time.

Toward this end, the authors examine grid spacing sensitivities of simulated supercells over Δx of 333 m–4 km. Storm environment and physics parameterization are varied among the simulations. The results suggest that 4-km grid spacing is too coarse to reliably simulate supercells, occasionally leading to premature storm demise, whereas 3-km simulations more often capture operationally important features, including low-level rotation tracks. Further decreasing Δx to 1 km enables useful forecasts of rapid changes in low-level rotation intensity, though significant errors remain (e.g., in timing).

Grid spacing dependencies vary substantially among the experiments, suggesting that accurate calibration of ensemble output requires better understanding of how storm characteristics, environment, and parameterization schemes modulate grid spacing sensitivity. Much of the sensitivity arises from poorly resolving small-scale processes that impact larger (well resolved) scales. Repeating some of the 333-m simulations with coarsened initial conditions reveals that supercell forecasts can substantially benefit from reduced grid spacing even when limited observational density precludes finescale initialization.
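The coarsened-initial-condition experiments imply a coarsening operator; the paper's choice is not specified here, so this sketch uses simple block averaging from a fine grid to one three times coarser (roughly 333 m to 1 km):

```python
import numpy as np

def block_average(field, factor):
    """Coarsen a 2D field by averaging factor x factor blocks."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

# Fine-grid field with a small-scale bump; coarsen by a factor of 3.
n = 99
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
fine = np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2.0 * 4.0 ** 2))
coarse = block_average(fine, 3)     # 33 x 33 grid

print(fine.mean(), coarse.mean())   # block averaging conserves the domain mean
```

Block averaging preserves the domain mean but damps the peak of the small-scale feature, mimicking the loss of finescale initial-condition detail when observational density is limited.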

Full access
Corey K. Potvin, Louis J. Wicker, and Alan Shapiro

Abstract

Dual-Doppler wind retrieval is an invaluable tool in the study of convective storms. However, the nature of the errors in the retrieved three-dimensional wind estimates and subsequent dynamical analyses is not precisely known, making it difficult to assign confidence to inferred storm behavior. Using an Observing System Simulation Experiment (OSSE) framework, this study characterizes these errors for a supercell thunderstorm observed at close range by two Doppler radars. Synthetic radar observations generated from a high-resolution numerical supercell simulation are input to a three-dimensional variational data assimilation (3DVAR) dual-Doppler wind retrieval technique. The sensitivity of the analyzed kinematics and dynamics to the dual-Doppler retrieval settings, hydrometeor fall speed parameterization errors, and radar cross-beam angle and scanning strategy is examined.

Imposing the commonly adopted assumptions of spatially constant storm motion and intrinsically steady flow produces large errors at higher altitudes. On the other hand, reasonably accurate analyses are obtained at lower and middle levels, even when the majority of the storm lies outside the 30° dual-Doppler lobe. Low-level parcel trajectories initiated around the main updraft and rear-flank downdraft are generally qualitatively accurate, as are time series of circulation computed around material circuits. Omitting upper-level radar observations to reduce volume scan times does not substantially degrade the lower- and middle-level analyses, which implies that shallower scanning strategies should enable an improved retrieval of supercell dynamics. The results suggest that inferences about supercell behavior based on qualitative features in 3DVAR dual-Doppler and subsequent dynamical retrievals may generally be reliable.
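The trajectory and circulation diagnostics can be sanity-checked against an analytic flow. This sketch (solid-body rotation, not the retrieved supercell winds) integrates a backward parcel trajectory with RK4 and evaluates circulation around a material circle, which by Stokes' theorem should equal the vertical vorticity (2Ω) times the enclosed area:

```python
import numpy as np

OMEGA = 1.0e-3   # angular velocity (s^-1); vertical vorticity = 2 * OMEGA

def wind(p):
    """Solid-body rotation about the origin: (u, v) = Omega cross r."""
    x, y = p
    return np.array([-OMEGA * y, OMEGA * x])

def rk4_step(p, dt):
    k1 = wind(p)
    k2 = wind(p + 0.5 * dt * k1)
    k3 = wind(p + 0.5 * dt * k2)
    k4 = wind(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Backward trajectory: negative time step, starting from (a, 0).
a, dt, nsteps = 2.0e3, -5.0, 120    # 10 min backward in 5-s steps
p = np.array([a, 0.0])
for _ in range(nsteps):
    p = rk4_step(p, dt)

# Circulation around a material circle of radius a: sum of v . dl over the circuit
theta = np.linspace(0.0, 2.0 * np.pi, 721)
circuit = np.c_[a * np.cos(theta), a * np.sin(theta)]
dl = np.diff(circuit, axis=0)
mid = 0.5 * (circuit[1:] + circuit[:-1])
circ = sum(np.dot(wind(q), d) for q, d in zip(mid, dl))

print(np.hypot(*p), circ)   # radius stays near a; circ near 2 * Omega * pi * a**2
```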

Full access
Alan Shapiro, Stefan Rahimi, Corey K. Potvin, and Leigh Orf

Abstract

An advection correction procedure is used to mitigate temporal interpolation errors in trajectory analyses constructed from gridded (in space and time) velocity data. The procedure is based on a technique introduced by Gal-Chen to reduce radar data analysis errors arising from the nonsimultaneity of the data collection. Experiments are conducted using data from a high-resolution Cloud Model 1 (CM1) numerical model simulation of a supercell storm initialized within an environment representative of the 24 May 2011 El Reno, Oklahoma, tornadic supercell storm. Trajectory analyses using advection correction are compared to traditional trajectory analyses using linear time interpolation. Backward trajectories are integrated over a 5-min period for a range of data input time intervals and for velocity-pattern-translation estimates obtained from different analysis subdomain sizes (box widths) and first-guess options. The use of advection correction reduces trajectory end-point position errors for a large majority of the trajectories in the analysis domain, with substantial improvements for trajectories launched in the vicinity of the model storm’s gust front and in bands within the rear-flank downdraft. However, the pattern-translation components retrieved by this procedure may be nonunique if the data input time intervals are too large.
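In its simplest form, advection correction shifts each input field along the pattern translation before interpolating in time. This toy 1D periodic example (integer-cell shifts via np.roll, not the paper's scheme) shows linear time interpolation smearing a translating feature while the corrected interpolation recovers it exactly:

```python
import numpy as np

def gauss(x, x0, sigma=3.0):
    return np.exp(-0.5 * ((x - x0) / sigma) ** 2)

n = 120
x = np.arange(n)
shift = 4                         # cells of pattern translation per scan interval

f1 = gauss(x, 30.0)               # field at scan time t1
f2 = np.roll(f1, shift)           # field at scan time t2 (pure translation)
truth = np.roll(f1, shift // 2)   # true field at the midpoint time

# Linear time interpolation vs. advection-corrected interpolation at the midpoint:
linear = 0.5 * (f1 + f2)
corrected = 0.5 * (np.roll(f1, shift // 2) + np.roll(f2, -(shift // 2)))

rmse = lambda a: np.sqrt(np.mean((a - truth) ** 2))
print(rmse(linear), rmse(corrected))   # corrected error vanishes for pure translation
```

Linear interpolation superposes two displaced copies of the feature at reduced amplitude, whereas the corrected estimate first advects each scan to the target time; real flows evolve intrinsically as well, which is why retrieved pattern-translation components can become nonunique for long input intervals.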

Full access
Montgomery L. Flora, Corey K. Potvin, and Louis J. Wicker

Abstract

As convection-allowing ensembles are routinely used to forecast the evolution of severe thunderstorms, developing an understanding of storm-scale predictability is critical. Using a full-physics numerical weather prediction (NWP) framework, the sensitivity of ensemble forecasts of supercells to initial condition (IC) uncertainty is investigated using a perfect model assumption. Three cases are used from the real-time NSSL Experimental Warn-on-Forecast System for Ensembles (NEWS-e) from the 2016 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. The forecast sensitivity to IC uncertainty is assessed by repeating the simulations with the initial ensemble perturbations reduced to 50% and 25% of their original magnitudes. The object-oriented analysis focuses on significant supercell features, including the mid- and low-level mesocyclone, and rainfall. For a comprehensive analysis, supercell location and amplitude predictability of the aforementioned features are evaluated separately.

For all examined features and cases, forecast spread is greatly reduced by halving the IC spread. By reducing the IC spread from 50% to 25% of the original magnitude, forecast spread is still substantially reduced in two of the three cases. The practical predictability limit (PPL), or the lead time beyond which the forecast spread exceeds some prechosen threshold, is case and feature dependent. Comparison with past studies reveals that the practical predictability of supercells is substantially improved by initializing once storms are well established in the ensemble analysis.
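The practical predictability limit (PPL) as defined above is straightforward to compute from an ensemble spread time series; the spread curve below is hypothetical, not from the study:

```python
import numpy as np

def practical_predictability_limit(lead_times, spread, threshold):
    """First lead time at which ensemble spread exceeds the chosen threshold;
    returns None if it never does."""
    exceed = np.asarray(spread) > threshold
    if not exceed.any():
        return None
    return lead_times[int(np.argmax(exceed))]

lead_times = np.arange(0, 95, 5)                      # minutes
spread = 8.0 * (1.0 - np.exp(-lead_times / 30.0))     # hypothetical spread growth (m/s)

print(practical_predictability_limit(lead_times, spread, threshold=4.0))  # -> 25
```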

Full access