Search Results

You are looking at 1 - 10 of 33 items for Author or Editor: Kimberly L. Elmore

Kimberly L. Elmore

Abstract

Rank histograms are a commonly used tool for evaluating an ensemble forecasting system's performance. Because the sample size is finite, the rank histogram is subject to statistical fluctuations, so a goodness-of-fit (GOF) test is employed to determine if the rank histogram is uniform to within some statistical certainty. Most often, the χ² test is used to test whether the rank histogram is indistinguishable from a discrete uniform distribution. However, the χ² test is insensitive to order and so suffers from troubling deficiencies that may render it unsuitable for rank histogram evaluation. As shown by examples in this paper, more powerful tests, suitable for small sample sizes and very sensitive to the particular deficiencies that appear in rank histograms, are available from the order-dependent Cramér–von Mises family of statistics, in particular the Watson and Anderson–Darling statistics.
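
As a hedged illustration of the mechanics involved, the sketch below builds a rank histogram and applies the conventional χ² GOF test against a discrete uniform distribution. The array names and the tie-free ranking are assumptions; the order-sensitive Watson and Anderson–Darling statistics the paper advocates would replace the χ² statistic here.

```python
import numpy as np
from scipy.stats import chi2

def rank_histogram(ensemble, obs):
    """Counts of the observation's rank within each sorted ensemble.

    ensemble: (n_cases, n_members) array; obs: (n_cases,) array.
    Ties are ignored for simplicity.
    """
    # Rank = number of members falling below the observation (0..n_members)
    ranks = (ensemble < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

def chi2_uniformity(counts):
    """Chi-square GOF statistic and p value against discrete uniformity."""
    expected = counts.sum() / counts.size
    stat = ((counts - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=counts.size - 1)
```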

Full access
Kimberly L. Elmore

Abstract

The National Severe Storms Laboratory (NSSL) has developed a hydrometeor classification algorithm (HCA) for use with the polarimetric upgrade of the current Weather Surveillance Radar-1988 Doppler (WSR-88D) network. The algorithm was developed specifically for warm-season convection, but it will run regardless of season, and so its performance on surface precipitation type during winter events is examined here. The HCA output is compared with collocated (in time and space) observations of precipitation type provided by the public. The Peirce skill score (PSS) shows that the NSSL HCA applied to winter surface precipitation displays little skill, with a PSS of only 0.115. Further analysis indicates that HCA failures are strongly linked to the inability of HCA to accommodate refreezing below the first freezing level and to errors in the melting-level detection algorithm. Entrants in the 2009 American Meteorological Society second annual artificial intelligence competition developed classification methods that yield a PSS of 0.35 using a subset of available radar data merged with limited environmental data. Thus, when polarimetric radar data and environmental data are appropriately combined, more information about winter surface precipitation type is available than from either data source alone.
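
The Peirce skill score used for verification here can be computed from a multicategory contingency table. The sketch below is a minimal version; the orientation (rows index the forecast category, columns the observed category) is my assumption, not stated in the abstract.

```python
import numpy as np

def peirce_skill_score(table):
    """Multicategory Peirce (Hanssen-Kuipers) skill score.

    table[i, j] = number of cases forecast as category i, observed as j.
    """
    t = np.asarray(table, dtype=float)
    n = t.sum()
    accuracy = np.trace(t) / n        # proportion of correct classifications
    f = t.sum(axis=1) / n             # forecast marginal distribution
    o = t.sum(axis=0) / n             # observed marginal distribution
    return (accuracy - (f * o).sum()) / (1.0 - (o ** 2).sum())
```

A perfect forecast scores 1, while random or constant forecasts score 0, which puts the 0.115 reported for the unmodified HCA in context.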

Full access
William P. Mahoney III and Kimberly L. Elmore

Abstract

The structure and evolution of a microburst-producing cell were studied using dual-Doppler data collected in eastern Colorado during the summer of 1987. Eight volumes of multiple-Doppler data with a temporal resolution of 2.5 min were analyzed. The radar data were interpolated onto a Cartesian grid with horizontal and vertical spacing of 250 m and 200 m, respectively. The analysis of this dataset revealed that the 56-dBZ storm produced two adjacent microbursts with different kinematic structures. The first microburst, which developed a maximum velocity differential of 16 m s−1 over 2.5 km, was associated with a strong horizontal vortex (rotor) that developed near the surface at the precipitation edge. The second, stronger microburst attained a velocity differential of 22 m s−1 over 3.2 km and was associated with a strengthening downdraft and collapse of the cell. Both microbursts developed ∼14 min after precipitation reached the surface.

Trajectory and equivalent potential temperature (θe) analyses were used to determine the history of the microburst-producing cell. These analyses indicate that the source region of air for the rotor-associated microburst was below cloud base and upwind of the precipitation shaft. Air entered the cell from the west at low levels, ascended over the horizontal rotor, and descended rapidly to the ground on the east side of the rotor. The source height of the air within the second microburst was well above cloud base. As the cell collapsed and the microburst developed, air accelerated into the downdraft at midlevels and descended to the surface. Features associated with this microburst included a descending reflectivity echo, convergence above cloud base, and the development and descent of strong vertical vorticity.
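
Since θe is central to the source-region argument, the snippet below shows one way to compute it with the modern MetPy library; this is purely illustrative of the quantity and the example values, not the analysis software used in the study.

```python
from metpy.calc import equivalent_potential_temperature
from metpy.units import units

# theta-e is approximately conserved in moist-adiabatic ascent and descent,
# which is what lets it tag the source region of microburst air.
theta_e = equivalent_potential_temperature(
    850 * units.hPa, 20 * units.degC, 15 * units.degC)  # p, T, Td (example values)
print(theta_e)
```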

Full access
Kimberly L. Elmore and Michael B. Richman

Abstract

Eigentechniques, in particular principal component analysis (PCA), have been widely used in meteorological analyses since the early 1950s. Traditionally, choices for the parent similarity matrix, which is diagonalized, have been limited to correlation, covariance, or, rarely, cross products. Whereas each matrix has unique characteristic benefits, all essentially identify parameters that vary together. Depending on what underlying structure the analyst wishes to reveal, similarity matrices other than the aforementioned can be employed to yield different results. In this work, a similarity matrix based upon Euclidean distance, commonly used in cluster analysis, is developed as a viable alternative. For PCA, Euclidean distance is converted into Euclidean similarity. Unlike the variance-based similarity matrices, a PCA performed using Euclidean similarity identifies parameters that are close to each other in a Euclidean distance sense. Rather than identifying parameters that change together, the resulting Euclidean similarity–based PCA identifies parameters that are close to each other, thereby providing a new similarity matrix choice. The concept used to create Euclidean similarity extends the utility of PCA by opening to investigators a wide range of similarity measures, to be chosen based on what characteristic they wish to identify.
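
A minimal sketch of the idea follows. The linear distance-to-similarity rescaling is one plausible choice and an assumption on my part, since the abstract does not give the exact transform.

```python
import numpy as np

def euclidean_similarity_pca(X, n_modes=3):
    """PCA on a Euclidean-similarity matrix instead of correlation/covariance.

    X: (n_samples, n_parameters). Returns the leading eigenvalues and
    loadings of the parameter-by-parameter similarity matrix.
    """
    cols = X.T                                # one row per parameter
    diff = cols[:, None, :] - cols[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))     # pairwise Euclidean distances
    S = 1.0 - D / D.max()                     # distance -> similarity (assumed form)
    evals, evecs = np.linalg.eigh(S)          # S is symmetric
    order = np.argsort(evals)[::-1]
    return evals[order][:n_modes], evecs[:, order][:, :n_modes]
```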

Full access
Kimberly L. Elmore, Pamela L. Heinselman, and David J. Stensrud

Abstract

Prior work shows that Weather Surveillance Radar-1988 Doppler (WSR-88D) clear-air reflectivity can be used to determine convective boundary layer (CBL) depth. Based on that work, two simple linear regressions are developed that provide CBL depth. One requires only clear-air radar reflectivity from a single 4.5° elevation scan, whereas the other additionally requires the total, clear-sky insolation at the radar site, derived from the radar location and local time. Because only the most recent radar scan is used, the CBL depth can, in principle, be computed for every scan. The “true” CBL depth used to develop the models is based on human interpretation of the 915-MHz profiler data. The regressions presented in this work are developed using 17 summer days near Norman, Oklahoma, that have been previously investigated. The resulting equations and algorithms are applied to a testing dataset consisting of 7 days not previously analyzed. Though the regression using insolation estimates performs best, errors from both models are on the order of the expected error of the profiler-estimated CBL depth values. Of the two regressions, the one that uses insolation yields CBL depth estimates with an RMSE of 208 m, while the regression with only clear-air radar reflectivity yields CBL depth estimates with an RMSE of 330 m.
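
As a rough sketch of how such regressions could be fit, consider the code below. The data are synthetic placeholders; the actual predictors, coefficients, and units come from the paper's training set, not from this example.

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept term."""
    A = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
refl = rng.uniform(-10, 10, 120)      # clear-air reflectivity feature (dBZ), hypothetical
insol = rng.uniform(200, 1000, 120)   # clear-sky insolation (W m-2), hypothetical
cbl = 900 + 45 * refl + 0.6 * insol + rng.normal(0, 200, 120)  # synthetic "truth" (m)

b_radar = fit_ols(refl[:, None], cbl)                  # radar-only model
b_both = fit_ols(np.column_stack([refl, insol]), cbl)  # radar + insolation model
```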

Full access
Valliappa Lakshmanan, Kimberly L. Elmore, and Michael B. Richman

No Abstract available.

Full access
Kimberly L. Elmore, David M. Schultz, and Michael E. Baldwin

Abstract

A previous study of the mean spatial bias errors associated with operational forecast models motivated an examination of the mechanisms responsible for these biases. One hypothesis for the cause of these errors is that mobile synoptic-scale phenomena are partially responsible. This paper explores this hypothesis using 24-h forecasts from the operational Eta Model and an experimental version of the Eta run with Kain–Fritsch convection (EtaKF).

For a sample of 44 well-defined upper-level short-wave troughs arriving on the west coast of the United States, 70% were underforecast (as measured by the 500-hPa geopotential height), a likely result of being undersampled by the observational network. For a different sample of 45 troughs that could be tracked easily across the country, consecutive model runs showed that the height errors associated with 44% of the troughs generally decreased in time, 11% increased in time, 18% had relatively steady errors, 2% were not initialized upon entering the West Coast, and 24% exhibited some other kind of behavior. Thus, landfalling short-wave troughs were typically underforecast (positive errors, heights too high), but these errors tended to decrease as they moved across the United States, likely a result of being better initialized as the troughs became influenced by more upper-air data. Nevertheless, some errors in short-wave troughs were not corrected as they fell under the influence of supposedly increased data amount and quality. These results indirectly show the effect that the amount and quality of observational data have on the synoptic-scale errors in the models. On the other hand, long-wave ridges tended to be underforecast (negative errors, heights too low) over a much larger horizontal extent.

These results are confirmed in a more systematic manner over the entire dataset by segregating the model output at each grid point by the sign of the 500-hPa relative vorticity. Although errors at grid points with positive relative vorticity are small but positive in the western United States, the errors become large and negative farther east. Errors at grid points with negative relative vorticity, on the other hand, are generally negative across the United States. A large negative bias observed in the Eta and EtaKF over the southeast United States is believed to be due to an error in the longwave radiation scheme interacting with water vapor and clouds. This study shows that model errors may be related to the synoptic-scale flow, and even large-scale features such as long-wave troughs can be associated with significant large-scale height errors.
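
The segregation diagnostic described above is straightforward to express in code. A minimal sketch follows, assuming collocated arrays of 500-hPa height error (forecast minus verifying analysis) and relative vorticity; the function and key names are mine.

```python
import numpy as np

def bias_by_vorticity_sign(height_error, rel_vort):
    """Mean 500-hPa height error segregated by the sign of relative vorticity."""
    err = np.asarray(height_error, dtype=float).ravel()
    vort = np.asarray(rel_vort, dtype=float).ravel()
    return {
        "cyclonic (vort > 0)": err[vort > 0].mean(),
        "anticyclonic (vort < 0)": err[vort < 0].mean(),
    }
```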

Full access
Travis M. Smith, Kimberly L. Elmore, and Shannon A. Dulin

Abstract

The problem of predicting the onset of damaging downburst winds from high-reflectivity storm cells that develop in an environment of weak vertical shear with the Weather Surveillance Radar-1988 Doppler (WSR-88D) is examined. Ninety-one storm cells that produced damaging outflows are analyzed with data from the WSR-88D network, along with 1247 nonsevere storm cells that developed in the same environments. Twenty-six reflectivity and radial velocity–based parameters are calculated for each cell, and a linear discriminant analysis is performed on 65% of the dataset in order to develop prediction equations that discriminate between severe downburst-producing cells and cells that did not produce a strong outflow. These prediction equations are evaluated on the remaining 35% of the dataset, and the dataset is resampled 100 times to determine the range of possible results. The resulting automated algorithm has a median Heidke skill score (HSS) of 0.40 in the 20–45-km range with a median lead time of 5.5 min, and a median HSS of 0.17 in the 45–80-km range with a median lead time of 0 min. Because these lead times are medians of the mean lead times calculated from a large, resampled dataset, many of the storm cells in the dataset had longer lead times than the reported medians.
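
A hedged sketch of this verification pipeline follows, using scikit-learn's LDA and a hand-rolled Heidke skill score. The 65/35 split and 100 resamples match the abstract, while the array names and the stratified splitting are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def heidke_skill_score(y_true, y_pred):
    """HSS from a 2 x 2 contingency table (1 = severe downburst outflow)."""
    a = np.sum((y_pred == 1) & (y_true == 1))  # hits
    b = np.sum((y_pred == 1) & (y_true == 0))  # false alarms
    c = np.sum((y_pred == 0) & (y_true == 1))  # misses
    d = np.sum((y_pred == 0) & (y_true == 0))  # correct nulls
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

def one_resample(X, y, seed):
    """One of the 100 resampling trials: fit LDA on 65%, score on 35%."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.65, random_state=seed, stratify=y)
    model = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    return heidke_skill_score(y_te, model.predict(X_te))
```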

Full access
Kimberly L. Elmore, David J. Stensrud, and Kenneth C. Crawford

Abstract

A cloud model ensemble forecasting approach is developed to create forecasts that describe the range and distribution of thunderstorm lifetimes that may be expected to occur on a particular day. Such forecasts are crucial for anticipating severe weather, because long-lasting storms tend to produce more significant weather and have a greater impact on public safety than do storms with brief lifetimes. Eighteen days distributed over two warm seasons with 1481 observed thunderstorms are used to assess the ensemble approach. Forecast soundings valid at 1800, 2100, and 0000 UTC provided by the 0300 UTC run of the operational Meso Eta Model from the National Centers for Environmental Prediction are used to provide horizontally homogeneous initial conditions for a cloud model ensemble made up from separate runs of the fully three-dimensional Collaborative Model for Mesoscale Atmospheric Simulation. These soundings are acquired from a 160 km × 160 km square centered over the location of interest; they are shown to represent a likely, albeit biased, range of atmospheric states. A minimum threshold value for maximum vertical velocity of 8 m s−1 within the cloud model domain is used to estimate storm lifetime. Forecast storm lifetimes are verified against observed storm lifetimes, as derived from the Storm Cell Identification and Tracking algorithm applied to Weather Surveillance Radar-1988 Doppler (WSR-88D) data from the National Weather Service (reflectivity exceeding 40 dBZe). Probability density functions (pdfs) are estimated from the storm lifetimes that result from the ensemble. When results from all 18 days are pooled, a vertical velocity threshold of 8 m s−1 is found to generate a forecast pdf of storm lifetime that most closely resembles the pdf that describes the collection of observed storm lifetimes. Standard 2 × 2 contingency statistics reveal that, on identifiable occasions, the ensemble model displays skill in comparison with the climatologic mean in locating where convection is most likely to occur. Contingency statistics also show that when storm lifetimes of at least 60 min are used as a proxy for severe weather, the ensemble shows considerable skill at identifying days that are likely to produce severe weather. Because the ensemble model has skill in predicting the range and distribution of storm lifetimes on a daily basis, the forecast pdf of storm lifetime is used directly to create probabilistic forecasts of storm lifetime, given the current age of a storm.
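
The final step, turning the forecast pdf of lifetime into a probability conditioned on a storm's current age, amounts to a survival-function ratio. A minimal sketch with the empirical distribution follows; the function and variable names are mine.

```python
import numpy as np

def survival_probability(lifetimes, current_age, horizon):
    """P(lifetime >= current_age + horizon | lifetime >= current_age),
    estimated from ensemble-member storm lifetimes (minutes)."""
    L = np.asarray(lifetimes, dtype=float)
    alive_now = (L >= current_age).sum()
    if alive_now == 0:
        return np.nan  # no ensemble support beyond this age
    return (L >= current_age + horizon).sum() / alive_now

# e.g., lifetimes diagnosed with the 8 m/s vertical-velocity threshold:
p = survival_probability([22, 35, 41, 48, 60, 75, 90], current_age=30, horizon=30)
```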

Full access
Kimberly L. Elmore, David J. Stensrud, and Kenneth C. Crawford

Abstract

As computational capacity has increased, cloud-scale numerical models are slowly being modified from pure research tools to forecast tools. Previous studies that used cloud-scale models as explicit forecast tools, in much the same way as a mesoscale model might be used, have met with limited success. Results presented in this paper suggest that this is due, at least in part, to the nature of cloud-scale models themselves. Results from over 700 cloud-scale model runs indicate that, in some cases, differences in the initial soundings that are smaller than can be measured by the current observing system result in unexpected differences in storm longevity. In other cases, easily measurable differences in the initial soundings do not result in significant differences in storm longevity. There unfortunately appears to be no set of parameters that can be used to determine whether the initial sounding is near some part of the cloud-model parameter space that displays this sensitivity. Because different cloud models share similar philosophies, if not similar design, this sensitivity to initial soundings places a fundamental limit on how well the current slate of cloud-scale models can be expected to perform as explicit forecast tools. Given these results, it is not clear that using state-of-the-art cloud-scale models as explicit forecasting tools is appropriate. However, cloud-model ensembles may help to address some inescapable problems with explicit forecasts from cloud models. The most useful application of cloud-scale models in operational forecasts may be a probabilistic one in which the models are used as members of ensembles, a process that has been demonstrated for models of larger-scale processes.

Full access