Search Results

You are looking at 1–10 of 30 items for Author or Editor: Corey K. Potvin
Corey K. Potvin

Abstract

Vortex detection algorithms are required for both research and operational applications in which data volume precludes timely subjective examination of model or analysis fields. Unfortunately, objective detection of convective vortices is often hindered by the strength and complexity of the flow in which they are embedded. To address this problem, a variational vortex-fitting algorithm previously developed to detect and characterize vortices observed by Doppler radar has been modified to operate on gridded horizontal wind data. The latter are fit to a simple analytical model of a vortex and its proximate environment, allowing the retrieval of important vortex characteristics. This permits the development of detection criteria tied directly to vortex properties (e.g., maximum tangential wind), rather than to more general kinematical properties that may poorly represent the vortex itself (e.g., vertical vorticity) when the background flow is strongly sheared. Thus, the vortex characteristic estimates provided by the technique may permit more effective detection criteria while providing useful information about vortex size, intensity, and trends therein. In tests with two simulated supercells, the technique proficiently detects and characterizes vortices, even in the presence of complex flow. Sensitivity tests suggest the algorithm would work well for a variety of vortex sizes without additional tuning. Possible applications of the technique include investigating relationships between mesocyclone and tornado characteristics, and detecting tornadoes, mesocyclones, and mesovortices in real-time ensemble forecasts.
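
The analytical vortex model is not given in the abstract; a minimal sketch of the general idea, assuming a Rankine-like vortex superposed on a uniform background flow and fit to gridded winds by least squares (all names and first-guess values are illustrative, not the study's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def vortex_model(params, x, y):
    """Illustrative low-order model: Rankine-like vortex plus uniform background flow."""
    x0, y0, R, vmax, u0, v0 = params
    dx, dy = x - x0, y - y0
    r = np.hypot(dx, dy) + 1e-10                        # avoid division by zero at the center
    vt = np.where(r <= R, vmax * r / R, vmax * R / r)   # tangential wind profile
    return u0 - vt * dy / r, v0 + vt * dx / r           # (u, v) for counterclockwise rotation

def fit_vortex(x, y, u_obs, v_obs, first_guess):
    """Least-squares fit of the analytical model to gridded horizontal winds."""
    def residuals(p):
        u_m, v_m = vortex_model(p, x, y)
        return np.concatenate([(u_m - u_obs).ravel(), (v_m - v_obs).ravel()])
    return least_squares(residuals, first_guess)

# Detection criteria can then be applied directly to retrieved vortex properties
# such as vmax and R, e.g.:
# fit = fit_vortex(xg, yg, u, v, first_guess=[0.0, 0.0, 2e3, 20.0, 5.0, 5.0])
# x0, y0, R, vmax, u0, v0 = fit.x
```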

Full access
Corey K. Potvin and Louis J. Wicker

Abstract

Kinematical analyses of mobile radar observations are critical to advancing the understanding of supercell thunderstorms. Maximizing the accuracy of these and subsequent dynamical analyses, and appropriately characterizing the uncertainty in ensuing conclusions about storm structure and processes, requires thorough knowledge of the typical errors obtained using different retrieval techniques. This study adopts an observing system simulation experiment (OSSE) framework to explore the errors obtained from ensemble Kalman filter (EnKF) assimilation versus dual-Doppler analysis (DDA) of storm-scale mobile radar data. The radar characteristics and EnKF model errors are varied to explore a range of plausible scenarios.

When dual-radar data are assimilated, the EnKF produces substantially better wind retrievals at higher altitudes, where DDAs are more sensitive to unaccounted flow evolution, and in data-sparse regions such as the storm inflow sector. Near the ground, however, the EnKF analyses are comparable to the DDAs when the radar cross-beam angles (CBAs) are poor, and slightly worse than the DDAs when the CBAs are optimal. In the single-radar case, the wind analyses benefit substantially from using finer grid spacing than in the dual-radar case for the objective analysis of radar observations. The analyses generally degrade when only single-radar data are assimilated, particularly when microphysical parameterization or low-level environmental wind errors are introduced. In some instances, this leads to large errors in low-level vorticity stretching and Lagrangian circulation calculations. Nevertheless, the results show that while multiradar observations of supercells are always preferable, judicious use of single-radar EnKF assimilation can yield useful analyses.
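
The radar emulation step at the heart of such an OSSE is the standard projection of the model winds onto the radar beam; a minimal sketch, with the radar location, fall-speed handling, and all names being illustrative assumptions rather than the study's configuration:

```python
import numpy as np

def radial_velocity(u, v, w, x, y, z, radar_xyz, vt=0.0):
    """Emulate Doppler radial velocity by projecting model winds onto the beam.

    u, v, w   : gridded wind components (m/s)
    x, y, z   : gridpoint coordinates (m), broadcastable with the wind arrays
    radar_xyz : (x, y, z) location of the radar (m)
    vt        : hydrometeor fall speed (m/s, positive downward)
    """
    dx, dy, dz = x - radar_xyz[0], y - radar_xyz[1], z - radar_xyz[2]
    rng = np.sqrt(dx**2 + dy**2 + dz**2) + 1e-10        # slant range to each gridpoint
    return (u * dx + v * dy + (w - vt) * dz) / rng

# In the OSSE, these pseudo-observations (with added noise and the chosen scanning
# strategy) stand in for real radar data in both the EnKF and the dual-Doppler analysis.
```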

Full access
Alan Shapiro, Corey K. Potvin, and Jidong Gao

Abstract

The utility of the anelastic vertical vorticity equation in a weak-constraint (least squares error) variational dual-Doppler wind analysis procedure is explored. The analysis winds are obtained by minimizing a cost function accounting for the discrepancies between observed and analyzed radial winds, errors in the mass conservation equation, errors in the anelastic vertical vorticity equation, and spatial smoothness constraints. By using Taylor’s frozen-turbulence hypothesis to shift analysis winds to observation points, discrepancies between radially projected analysis winds and radial wind observations can be calculated at the actual times and locations the data are acquired. The frozen-turbulence hypothesis is also used to evaluate the local derivative term in the vorticity equation. Tests of the analysis procedure are performed with analytical pseudo-observations of an array of translating and temporally decaying counterrotating updrafts and downdrafts generated from a Beltrami flow solution of the Navier–Stokes equations. The experiments explore the value added to the analysis by the vorticity equation constraint in the common scenario of substantial missing low-level data (radial wind observations at heights beneath 1.5 km are withheld from the analysis). Experiments focus on the sensitivity of the most sensitive analysis variable—the vertical velocity component—to values of the weighting coefficients, volume scan period, number of volume scans, and errors in the estimated frozen-turbulence pattern-translation components. Although the vorticity equation constraint is found to add value to many of these analyses, the analysis can become significantly degraded if estimates of the pattern-translation components are largely in error or if the frozen-turbulence hypothesis itself breaks down. However, tests also suggest that these negative impacts can be mitigated if data are available in a rapid-scan mode.
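
Schematically, the weak-constraint cost function described above can be written as a weighted sum of the individual penalty terms (the weights, discretization, and exact form of each term are not specified here, so this is only a sketch):

```latex
\[
J = \lambda_O \sum_{\text{obs}} \big( v_r^{\text{an}} - v_r^{\text{obs}} \big)^2
  + \lambda_M \sum_{\text{grid}} \big[ \nabla \cdot (\bar{\rho}\,\mathbf{v}) \big]^2
  + \lambda_V \sum_{\text{grid}} R_\zeta^2
  + \lambda_S J_{\text{smooth}},
\]
```

where the first term penalizes misfits between radially projected analysis winds and the radial wind observations (shifted to the observation times and locations via Taylor's frozen-turbulence hypothesis), the second penalizes violations of anelastic mass conservation, R_ζ is the residual of the anelastic vertical vorticity equation, and J_smooth imposes spatial smoothness.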

Full access
Corey K. Potvin, Alan Shapiro, and Ming Xue

Abstract

One of the greatest challenges to dual-Doppler retrieval of the vertical wind is the lack of low-level divergence information available to the mass conservation constraint. This study examines the impact of a vertical vorticity equation constraint on vertical velocity retrievals when radar observations are lacking near the ground. The analysis proceeds in a three-dimensional variational data assimilation (3DVAR) framework with the anelastic form of the vertical vorticity equation imposed along with traditional data, mass conservation, and smoothness constraints. The technique is tested using emulated radial wind observations of a supercell storm simulated by the Advanced Regional Prediction System (ARPS), as well as real dual-Doppler observations of a supercell storm that occurred in Oklahoma on 8 May 2003. Special attention is given to procedures to evaluate the vorticity tendency term, including spatially variable advection correction and estimation of the intrinsic evolution. Volume scan times ranging from 5 min, typical of operational radar networks, down to 30 s, achievable by rapid-scan mobile radars, are considered. The vorticity constraint substantially improves the vertical velocity retrievals in our experiments, particularly for volume scan times smaller than 2 min.
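
For reference, a simplified statement of the vertical vorticity equation imposed as the constraint, neglecting Coriolis, baroclinic, and frictional effects (the study's anelastic formulation and tendency-evaluation procedure may differ in detail):

```latex
\[
\frac{\partial \zeta}{\partial t}
 + u \frac{\partial \zeta}{\partial x}
 + v \frac{\partial \zeta}{\partial y}
 + w \frac{\partial \zeta}{\partial z}
 = -\,\zeta \left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} \right)
 + \left( \frac{\partial u}{\partial z} \frac{\partial w}{\partial y}
        - \frac{\partial v}{\partial z} \frac{\partial w}{\partial x} \right),
\]
```

with the right-hand side comprising the stretching and tilting terms. The local tendency ∂ζ/∂t must be estimated from consecutive volume scans, which is where the spatially variable advection correction and the estimate of intrinsic evolution enter.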

Full access
Corey K. Potvin and Louis J. Wicker

Abstract

Under the envisioned warn-on-forecast (WoF) paradigm, ensemble model guidance will play an increasingly critical role in the tornado warning process. While computational constraints will likely preclude explicit tornado prediction in initial WoF systems, real-time forecasts of low-level mesocyclone-scale rotation appear achievable within the next decade. Given that low-level mesocyclones are significantly more likely than higher-based mesocyclones to be tornadic, intensity and trajectory forecasts of low-level supercell rotation could provide valuable guidance to tornado warning and nowcasting operations. The efficacy of such forecasts is explored using three simulated supercells having weak, moderate, or strong low-level rotation. The results suggest early WoF systems may provide useful probabilistic 30–60-min forecasts of low-level supercell rotation, even in cases of large radar–storm distances and/or narrow cross-beam angles. Given the idealized nature of the experiments, however, they are best viewed as providing an upper-limit estimate of the accuracy of early WoF systems.
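
One simple way such probabilistic rotation guidance could be constructed is as the gridpoint fraction of ensemble members exceeding a low-level rotation threshold; the sketch below illustrates that general idea only (the field, threshold, and names are assumptions, not the study's verification method):

```python
import numpy as np

def rotation_probability(zeta_members, threshold=0.01):
    """Gridpoint probability that low-level vertical vorticity exceeds a threshold.

    zeta_members : array of shape (n_members, ny, nx), low-level vorticity (s^-1)
    threshold    : exceedance threshold (s^-1); 0.01 s^-1 is an arbitrary example
    """
    return (zeta_members > threshold).mean(axis=0)

# Taking the maximum of this probability over a 30-60-min forecast window yields a
# swath-style product that could be overlaid on observed storm positions.
```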

Full access
Corey K. Potvin and Montgomery L. Flora

Abstract

The Warn-on-Forecast (WoF) program aims to deploy real-time, convection-allowing, ensemble data assimilation and prediction systems to improve short-term forecasts of tornadoes, flooding, lightning, damaging wind, and large hail. Until convection-resolving (horizontal grid spacing Δx < 100 m) systems become available, however, resolution errors will limit the accuracy of ensemble model output. Improved understanding of grid spacing dependence of simulated convection is therefore needed to properly calibrate and interpret ensemble output, and to optimize trade-offs between model resolution and other computationally constrained parameters like ensemble size and forecast lead time.

Toward this end, the authors examine grid spacing sensitivities of simulated supercells over Δx of 333 m–4 km. Storm environment and physics parameterization are varied among the simulations. The results suggest that 4-km grid spacing is too coarse to reliably simulate supercells, occasionally leading to premature storm demise, whereas 3-km simulations more often capture operationally important features, including low-level rotation tracks. Further decreasing Δx to 1 km enables useful forecasts of rapid changes in low-level rotation intensity, though significant errors remain (e.g., in timing).

Grid spacing dependencies vary substantially among the experiments, suggesting that accurate calibration of ensemble output requires better understanding of how storm characteristics, environment, and parameterization schemes modulate grid spacing sensitivity. Much of the sensitivity arises from poorly resolving small-scale processes that impact larger (well resolved) scales. Repeating some of the 333-m simulations with coarsened initial conditions reveals that supercell forecasts can substantially benefit from reduced grid spacing even when limited observational density precludes finescale initialization.

Full access
Alan Shapiro, Katherine M. Willingham, and Corey K. Potvin

Abstract

The spatially variable advection-correction/analysis procedure introduced in Part I is tested using analytical reflectivity blobs embedded in a solid-body vortex, as well as Terminal Doppler Weather Radar (TDWR) and Weather Surveillance Radar-1988 Doppler (WSR-88D) data of a tornadic supercell thunderstorm that passed over central Oklahoma on 8 May 2003. In the TDWR tests, plan position indicator (PPI) data at two volume scan times are input to the advection-correction procedure, and PPI data from a third scan time, intermediate between the two input times, are used to validate the results. The procedure yields analyzed reflectivity fields with lower root-mean-square errors and higher correlation coefficients than reflectivity fields advection corrected with any constant advection speed.
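
The constant-speed baseline against which the spatially variable procedure is compared amounts to translating the two bounding scans to the intermediate time with a single advection velocity; a minimal sketch, assuming a Cartesian grid and linear interpolation (all names are illustrative):

```python
import numpy as np
from scipy.ndimage import shift as ndshift

def constant_advection_estimate(f1, f2, t1, t2, t, u, v, dx):
    """Estimate a reflectivity field at intermediate time t by translating the scans
    at t1 and t2 with a single constant advection velocity (u, v) in m/s.

    f1, f2 : 2D reflectivity fields on a grid with spacing dx (m), axes ordered (y, x)
    """
    a = (t - t1) / (t2 - t1)                                             # temporal weight
    g1 = ndshift(f1, ( v * (t - t1) / dx,  u * (t - t1) / dx), order=1)  # shift forward to t
    g2 = ndshift(f2, (-v * (t2 - t) / dx, -u * (t2 - t) / dx), order=1)  # shift backward to t
    return (1 - a) * g1 + a * g2

def rmse(analysis, validation):
    return np.sqrt(np.nanmean((analysis - validation) ** 2))

# Scoring constant_advection_estimate against the withheld intermediate scan, over a
# range of (u, v), gives the constant-speed benchmark that the spatially variable
# procedure is reported to beat in both RMSE and correlation.
```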

Full access
Alan Shapiro, Stefan Rahimi, Corey K. Potvin, and Leigh Orf

Abstract

An advection correction procedure is used to mitigate temporal interpolation errors in trajectory analyses constructed from gridded (in space and time) velocity data. The procedure is based on a technique introduced by Gal-Chen to reduce radar data analysis errors arising from the nonsimultaneity of the data collection. Experiments are conducted using data from a high-resolution Cloud Model 1 (CM1) numerical model simulation of a supercell storm initialized within an environment representative of the 24 May 2011 El Reno, Oklahoma, tornadic supercell storm. Trajectory analyses using advection correction are compared to traditional trajectory analyses using linear time interpolation. Backward trajectories are integrated over a 5-min period for a range of data input time intervals and for velocity-pattern-translation estimates obtained from different analysis subdomain sizes (box widths) and first-guess options. The use of advection correction reduces trajectory end-point position errors for a large majority of the trajectories in the analysis domain, with substantial improvements for trajectories launched in the vicinity of the model storm’s gust front and in bands within the rear-flank downdraft. However, the pattern-translation components retrieved by this procedure may be nonunique if the data input time intervals are too large.
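
The two temporal-interpolation approaches being compared can be stated compactly; with bounding data times t1 < t < t2, interpolation weight α = (t − t1)/(t2 − t1), and pattern-translation vector U = (U, V) retrieved over an analysis subdomain (a simplified sketch of the idea, not the paper's exact formulation):

```latex
\[
\mathbf{v}_{\text{linear}}(\mathbf{x},t) = (1-\alpha)\,\mathbf{v}(\mathbf{x},t_1)
 + \alpha\,\mathbf{v}(\mathbf{x},t_2),
\]
\[
\mathbf{v}_{\text{AC}}(\mathbf{x},t) = (1-\alpha)\,\mathbf{v}\big(\mathbf{x} - \mathbf{U}(t-t_1),\, t_1\big)
 + \alpha\,\mathbf{v}\big(\mathbf{x} + \mathbf{U}(t_2-t),\, t_2\big).
\]
```

Backward trajectories are then integrated through whichever time-interpolated velocity field is chosen, which is why the quality of the retrieved (U, V) directly controls the end-point position errors.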

Full access
Derek R. Stratman, Corey K. Potvin, and Louis J. Wicker

Abstract

A goal of Warn-on-Forecast (WoF) is to develop forecasting systems that produce accurate analyses and forecasts of severe weather for use in operational warning settings. Recent WoF-related studies have indicated the need to alleviate storm displacement errors in both analyses and forecasts. A potential solution to reduce these errors is the feature alignment technique (FAT), which mitigates displacement errors between observations and model fields while satisfying constraints. This study merges the FAT with a local ensemble transform Kalman filter (LETKF) and uses observing system simulation experiments (OSSEs) to assess the FAT's potential to reduce forecast errors arising from storm displacement errors. An idealized truth run of a supercell on a 250-m grid is used to generate pseudoradar observations, which are assimilated onto a 2-km grid using a 50-member ensemble to produce analyses and forecasts of the supercell. The FAT uses composite reflectivity to generate a 2D field of displacement vectors that is used to align the model variables with the observations prior to each analysis cycle. The FAT is tested by displacing the initial model background fields from the observations or by modifying the environmental wind profile to create a storm motion bias in the forecast cycles. The FAT–LETKF performance is evaluated and compared to that of the LETKF alone. The FAT substantially reduces errors in storm intensity, location, and structure during data assimilation and subsequent forecasts. These supercell OSSEs provide the foundation for future experiments with real data and more complex events.
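
The abstract does not describe how the displacement vectors are computed; one common way to obtain a local displacement is by correlating the background composite reflectivity against the observations over a search window, as in the hypothetical sketch below (the block-matching scheme, the periodic wrap in np.roll, and all names are illustrative assumptions):

```python
import numpy as np

def local_displacement(obs_patch, model_patch, max_shift=10):
    """Brute-force search for the integer (dy, dx) shift of the model composite
    reflectivity patch that best matches the observed patch."""
    best_corr, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(model_patch, dy, axis=0), dx, axis=1)
            c = np.corrcoef(obs_patch.ravel(), shifted.ravel())[0, 1]
            if np.isfinite(c) and c > best_corr:
                best_corr, best_shift = c, (dy, dx)
    return best_shift

# Repeating this over overlapping subdomains (and smoothing the result) yields a 2D
# field of displacement vectors; the model state variables are then remapped along
# those vectors before each LETKF analysis cycle.
```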

Full access
Corey K. Potvin, Louis J. Wicker, and Alan Shapiro

Abstract

Dual-Doppler wind retrieval is an invaluable tool in the study of convective storms. However, the nature of the errors in the retrieved three-dimensional wind estimates and subsequent dynamical analyses is not precisely known, making it difficult to assign confidence to inferred storm behavior. Using an Observing System Simulation Experiment (OSSE) framework, this study characterizes these errors for a supercell thunderstorm observed at close range by two Doppler radars. Synthetic radar observations generated from a high-resolution numerical supercell simulation are input to a three-dimensional variational data assimilation (3DVAR) dual-Doppler wind retrieval technique. The sensitivity of the analyzed kinematics and dynamics to the dual-Doppler retrieval settings, hydrometeor fall speed parameterization errors, and radar cross-beam angle and scanning strategy is examined.

Imposing the commonly adopted assumptions of spatially constant storm motion and intrinsically steady flow produces large errors at higher altitudes. On the other hand, reasonably accurate analyses are obtained at lower and middle levels, even when the majority of the storm lies outside the 30° dual-Doppler lobe. Low-level parcel trajectories initiated around the main updraft and rear-flank downdraft are generally qualitatively accurate, as are time series of circulation computed around material circuits. Omitting upper-level radar observations to reduce volume scan times does not substantially degrade the lower- and middle-level analyses, which implies that shallower scanning strategies should enable an improved retrieval of supercell dynamics. The results suggest that inferences about supercell behavior based on qualitative features in 3DVAR dual-Doppler and subsequent dynamical retrievals may generally be reliable.
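
The circulation diagnostics mentioned above follow the standard definition for a material circuit C(t) whose points are traced through the retrieved winds; in discrete form, for N circuit points with cyclic indexing (a schematic statement only):

```latex
\[
\Gamma(t) = \oint_{C(t)} \mathbf{v} \cdot d\mathbf{l}
 \;\approx\; \sum_{i=1}^{N} \mathbf{v}\big(\mathbf{x}_i(t), t\big) \cdot
 \frac{\mathbf{x}_{i+1}(t) - \mathbf{x}_{i-1}(t)}{2}.
\]
```

Errors in the retrieved winds therefore enter both through the circuit positions (via the trajectories) and through the velocities sampled along the circuit, which is why the qualitative reliability of these time series is a useful benchmark.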

Full access