Search Results

Showing 1–6 of 6 items for:

  • Author or Editor: Michael Bell
  • Journal of Applied Meteorology and Climatology
  • All content
Michael M. Bell and Wen-Chau Lee

Abstract

This study presents an extension of the ground-based velocity track display (GBVTD)-simplex tropical cyclone (TC) circulation center–finding algorithm to further improve the accuracy and consistency of TC center estimates from single-Doppler radar data. The improved center-finding method determines a TC track that ensures spatial and temporal continuity of four primary characteristics: the radius of maximum wind, the maximum axisymmetric tangential wind, and the latitude and longitude of the TC circulation center. A statistical analysis improves the consistency of the TC centers over time and makes it possible to automate the GBVTD-simplex algorithm for tracking landfalling TCs. The characteristics and performance of this objective statistical center-finding method are evaluated using datasets from Hurricanes Danny (1997) and Bret (1999) over 5-h periods during which both storms were simultaneously observed by two coastal Weather Surveillance Radar-1988 Doppler (WSR-88D) units. Independent single-Doppler and dual-Doppler centers are determined and used to assess the absolute accuracy of the algorithm. Reductions of 50% and 10% in the average distance between independent center estimates are found for Danny and Bret, respectively, over the original GBVTD-simplex method. The average center uncertainties are estimated to be less than 2 km, yielding estimated errors of less than 5% in the retrieved radius of maximum wind and wavenumber-0 axisymmetric tangential wind, and ~30% error in the wavenumber-1 asymmetric tangential wind. The objective statistical center-finding method can be run on a time scale comparable to that of a WSR-88D volume scan, thus making it a viable tool for both research and operational use.
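The temporal-continuity idea behind the method can be illustrated with a toy sketch. Everything here is invented for illustration: the straight-line track model, the 5 km rejection threshold, and the sample centers are stand-ins for the statistical analysis the abstract describes, not the actual GBVTD-simplex algorithm.

```python
import numpy as np

def filter_centers(times, lats, lons, max_dev_km=5.0, km_per_deg=111.0):
    """Enforce temporal continuity on a series of TC center estimates.

    Iteratively fits a straight-line track to the candidate centers
    (least squares in each coordinate) and discards the worst estimate
    until every remaining center lies within ``max_dev_km`` of the fit.
    """
    t = np.asarray(times, dtype=float)
    lat = np.asarray(lats, dtype=float)
    lon = np.asarray(lons, dtype=float)
    keep = np.ones(t.size, dtype=bool)
    while keep.sum() > 2:
        dev_sq = np.zeros(t.size)
        for coord in (lat, lon):
            p = np.polyfit(t[keep], coord[keep], 1)
            dev_sq += ((coord - np.polyval(p, t)) * km_per_deg) ** 2
        dev = np.sqrt(dev_sq)
        worst = int(np.argmax(np.where(keep, dev, -np.inf)))
        if dev[worst] <= max_dev_km:
            break
        keep[worst] = False  # censor the least continuous center
    return keep

# Five hypothetical volume scans; the fourth center is a ~20 km outlier.
times = [0, 6, 12, 18, 24]                    # minutes
lats = [29.00, 29.02, 29.04, 29.30, 29.08]    # degrees N
lons = [-90.00, -90.01, -90.02, -90.03, -90.04]
mask = filter_centers(times, lats, lons)
```

A real implementation would also carry the radius of maximum wind and maximum tangential wind as continuity constraints, per the abstract.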

Ronald D. Leeper, Jesse E. Bell, and Michael A. Palecki

Abstract

The interpretation of in situ or remotely sensed soil moisture data for drought monitoring is challenged by the sensitivity of these observations to local soil characteristics and seasonal precipitation patterns. These challenges can be overcome by standardizing soil moisture observations. Traditional approaches require a lengthy record (usually 30 years) that most soil monitoring networks lack. Sampling techniques that combine hourly measurements over a temporal window have been used in the literature to generate historical references (i.e., climatology) from shorter-term datasets. This sampling approach was validated on select U.S. Department of Agriculture Soil Climate Analysis Network (SCAN) stations using a Monte Carlo analysis, which revealed that shorter-term (5+ years) hourly climatologies were similar to longer-term (10+ years) hourly means. The sampling approach was then applied to soil moisture observations from the U.S. Climate Reference Network (USCRN). The sampling method was used to generate multiple measures of soil moisture (mean and median anomalies, the median anomaly standardized by the interquartile range, and volumetric) that were converted to percentiles using empirical cumulative distribution functions. Overall, time series of soil moisture percentiles were very similar among the differing measures; however, there were times of year at individual stations when soil moisture percentiles could have substantial deviations. The use of soil moisture percentiles and counts of threshold exceedance provided more consistent measures of hydrological conditions than observed soil moisture. These results suggest that hourly soil moisture observations can be reasonably standardized and can provide consistent measures of hydrological conditions across spatial and temporal scales.
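The two standardizations named in the abstract, the empirical-CDF percentile and the median anomaly scaled by the interquartile range, are simple to state in code. The climatology values below are invented; real inputs would be hourly USCRN or SCAN observations.

```python
import numpy as np

def empirical_percentile(history, value):
    """Percentile of ``value`` via the empirical CDF of ``history``."""
    hist = np.sort(np.asarray(history, dtype=float))
    return 100.0 * np.searchsorted(hist, value, side="right") / hist.size

def standardized_median_anomaly(history, value):
    """Anomaly from the historical median, scaled by the interquartile range."""
    q25, q50, q75 = np.percentile(np.asarray(history, dtype=float), [25, 50, 75])
    return (value - q50) / (q75 - q25)

# Hypothetical hourly volumetric soil moisture climatology (m^3 m^-3)
history = [0.10, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.28, 0.30, 0.35]
pct = empirical_percentile(history, 0.12)       # a dry observation ranks low
z = standardized_median_anomaly(history, 0.12)  # negative: below the median
```

Percentiles computed this way are comparable across stations with very different soil types, which is the point of the standardization.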

Xiao-Wei Quan, Martin P. Hoerling, Bradfield Lyon, Arun Kumar, Michael A. Bell, Michael K. Tippett, and Hui Wang

Abstract

The prospects for U.S. seasonal drought prediction are assessed by diagnosing simulation and hindcast skill of drought indicators during 1982–2008. The 6-month standardized precipitation index is used as the primary drought indicator. The skill of unconditioned, persistence forecasts serves as the baseline against which the performance of dynamical methods is evaluated. Predictions conditioned on the state of global sea surface temperatures (SST) are assessed using atmospheric climate simulations conducted in which observed SSTs are specified. Predictions conditioned on the initial states of atmosphere, land surfaces, and oceans are next analyzed using coupled climate-model experiments. The persistence of the drought indicator yields considerable seasonal skill, with a region’s annual cycle of precipitation driving a strong seasonality in baseline skill. The unconditioned forecast skill for drought is greatest during a region’s climatological dry season and is least during a wet season. Dynamical models forced by observed global SSTs yield increased skill relative to this baseline, with improvements realized during the cold season over regions where precipitation is sensitive to El Niño–Southern Oscillation. Fully coupled initialized model hindcasts yield little additional skill relative to the uninitialized SST-forced simulations. In particular, neither of these dynamical seasonal forecasts materially increases summer skill for the drought indicator over the Great Plains, a consequence of small SST sensitivity of that region’s summer rainfall and the small impact of antecedent soil moisture conditions, on average, upon the summer rainfall. The fully initialized predictions for monthly forecasts appreciably improve on the seasonal skill, however, especially during winter and spring over the northern Great Plains.
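The persistence baseline used in the study can be sketched in a few lines: forecast the indicator's current value and score the forecast with correlation. The AR(1) series, its persistence of 0.8, and its length are invented here; they stand in for an observed drought-indicator record.

```python
import numpy as np

def persistence_skill(series, lead):
    """Correlation skill of a persistence forecast at the given lead.

    The forecast for time t + lead is simply the value at time t; skill
    is the Pearson correlation between forecasts and verifying values.
    """
    x = np.asarray(series, dtype=float)
    return float(np.corrcoef(x[:-lead], x[lead:])[0, 1])

# Synthetic AR(1) "drought indicator" with month-to-month persistence 0.8
rng = np.random.default_rng(0)
rho = 0.8
x = np.zeros(240)  # 20 years of monthly values
for t in range(1, x.size):
    x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

skill_1 = persistence_skill(x, 1)  # near rho
skill_6 = persistence_skill(x, 6)  # decays toward zero with lead
```

A dynamical forecast system adds value only where its skill exceeds this unconditioned baseline, which is the comparison the abstract reports.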

Bradfield Lyon, Michael A. Bell, Michael K. Tippett, Arun Kumar, Martin P. Hoerling, Xiao-Wei Quan, and Hui Wang

Abstract

The inherent persistence characteristics of various drought indicators are quantified to extract predictive information that can improve drought early warning. Predictive skill is evaluated as a function of the seasonal cycle for regions within North America. The study serves to establish a set of baseline probabilities for drought across multiple indicators amenable to direct comparison with drought indicator forecast probabilities obtained when incorporating dynamical climate model forecasts. The emphasis is on the standardized precipitation index (SPI), but the method can easily be applied to any other meteorological drought indicator, and some additional examples are provided. Monte Carlo resampling of observational data generates two sets of synthetic time series of monthly precipitation that include, and exclude, the annual cycle while removing serial correlation. For the case of no seasonality, the autocorrelation (AC) of the SPI (and seasonal precipitation percentiles, moving monthly averages of precipitation) decays linearly with increasing lag. It is shown that seasonality in the variance of accumulated precipitation serves to enhance or diminish the persistence characteristics (AC) of the SPI and related drought indicators, and the seasonal cycle can thereby provide an appreciable source of drought predictability at regional scales. The AC is used to obtain a parametric probability density function of the future state of the SPI that is based solely on its inherent persistence characteristics. In addition, a method is presented for determining the optimal persistence of the SPI for the case of no serial correlation in precipitation (again, the baseline case). The optimized, baseline probabilities are being incorporated into Internet-based tools for the display of current and forecast drought conditions in near–real time.
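A parametric forecast PDF built solely from the indicator's autocorrelation can be sketched as follows. The joint-Gaussian assumption, the autocorrelation of 0.85, and the example SPI values are illustrative choices made here, not taken from the paper.

```python
from math import erf, sqrt

def spi_persistence_prob(spi_now, autocorr, threshold):
    """P(future SPI <= threshold), from persistence alone.

    Treats the current and future SPI as jointly standard normal with
    correlation ``autocorr``, so the conditional forecast distribution
    is N(autocorr * spi_now, 1 - autocorr**2).
    """
    mean = autocorr * spi_now
    sd = sqrt(1.0 - autocorr**2)
    z = (threshold - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

# Probability of staying at or below moderate drought (SPI <= -1) at the
# next time step, given SPI = -1.5 now and an assumed autocorrelation 0.85.
p_stay = spi_persistence_prob(-1.5, 0.85, -1.0)
# With zero autocorrelation the probability falls to the climatological value.
p_clim = spi_persistence_prob(-1.5, 0.0, -1.0)
```

Seasonality in the variance of accumulated precipitation would modulate the autocorrelation fed into such a function, which is the predictability source the abstract highlights.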

Michael M. Bell, Wen-Chau Lee, Cory A. Wolff, and Huaqing Cai

Abstract

An automated quality control preprocessing algorithm for removing nonweather radar echoes in airborne Doppler radar data has been developed. This algorithm can significantly reduce the time and experience level required for interactive radar data editing prior to dual-Doppler wind synthesis or data assimilation. The algorithm uses the editing functions in the Solo software package developed by the National Center for Atmospheric Research to remove noise, Earth-surface, sidelobe, second-trip, and other artifacts. The characteristics of these nonweather radar returns, the algorithm to identify and remove them, and the impacts of applying different threshold levels on wind retrievals are presented. Verification was performed by comparison with published Electra Doppler Radar (ELDORA) datasets that were interactively edited by different experienced radar meteorologists. Four cases consisting primarily of convective echoes from the Verification of the Origins of Rotation in Tornadoes Experiment (VORTEX), Bow Echo and Mesoscale Convective Vortex Experiment (BAMEX), Hurricane Rainband and Intensity Change Experiment (RAINEX), and The Observing System Research and Predictability Experiment (THORPEX) Pacific Asian Regional Campaign (T-PARC)/Tropical Cyclone Structure-2008 (TCS08) field experiments were used to test the algorithm using three threshold levels for data removal. The algorithm removes 80%, 90%, or 95% of the nonweather returns and retains 95%, 90%, or 85% of the weather returns on average at the low-, medium-, and high-threshold levels. Increasing the threshold level removes more nonweather echoes at the expense of also removing more weather echoes. The low threshold is recommended when weather retention is the highest priority, and the high threshold is recommended when nonweather removal is the highest priority. The medium threshold is a good compromise between these two priorities and is recommended for general use. 
Dual-Doppler wind retrievals using the automatically edited data compare well to retrievals from interactively edited data.
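The removal/retention tradeoff across threshold levels can be illustrated with a toy censoring sweep. The per-gate scores and their distributions are synthetic stand-ins invented here; the actual algorithm builds its decisions from Solo editing functions applied to real radar fields.

```python
import numpy as np

def removal_retention(scores, is_weather, threshold):
    """Fractions of nonweather removed and weather retained at a threshold.

    Gates whose nonweather score exceeds the threshold are censored.
    """
    scores = np.asarray(scores, dtype=float)
    weather = np.asarray(is_weather, dtype=bool)
    removed = scores > threshold
    nonweather_removed = float(removed[~weather].mean())
    weather_retained = float((~removed)[weather].mean())
    return nonweather_removed, weather_retained

rng = np.random.default_rng(1)
# Synthetic per-gate scores: weather gates tend to score low, artifacts high.
scores = np.concatenate([
    rng.normal(0.3, 0.15, 1000).clip(0.0, 1.0),   # weather gates
    rng.normal(0.7, 0.15, 1000).clip(0.0, 1.0),   # nonweather gates
])
labels = np.array([True] * 1000 + [False] * 1000)

lenient = removal_retention(scores, labels, 0.60)     # keeps more weather
aggressive = removal_retention(scores, labels, 0.45)  # removes more artifacts
```

Because the two score populations overlap, any threshold trades weather retention against artifact removal, mirroring the low/medium/high levels reported in the abstract.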

Evan A. Kalina, Sergey Y. Matrosov, Joseph J. Cione, Frank D. Marks, Jothiram Vivekanandan, Robert A. Black, John C. Hubbert, Michael M. Bell, David E. Kingsmill, and Allen B. White

Abstract

Dual-polarization scanning radar measurements, air temperature soundings, and a polarimetric radar-based particle identification scheme are used to generate maps and probability density functions (PDFs) of the ice water path (IWP) in Hurricanes Arthur (2014) and Irene (2011) at landfall. The IWP is separated into the contribution from small ice (i.e., ice crystals), termed small-particle IWP, and large ice (i.e., graupel and snow), termed large-particle IWP. Vertically profiling radar data from Hurricane Arthur suggest that the small ice particles detected by the scanning radar have fall velocities mostly greater than 0.25 m s−1 and that the particle identification scheme is capable of distinguishing between small and large ice particles in a mean sense. The IWP maps and PDFs reveal that the total and large-particle IWPs range up to 10 kg m−2, with the largest values confined to intense convective precipitation within the rainbands and eyewall. Small-particle IWP remains mostly <4 kg m−2, with the largest small-particle IWP values collocated with maxima in the total IWP. PDFs of the small-to-total IWP ratio have shapes that depend on the precipitation type (i.e., intense convective, stratiform, or weak-echo precipitation). The IWP ratio distribution is narrowest (broadest) in intense convective (weak echo) precipitation and peaks at a ratio of about 0.1 (0.3).
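The ice water path itself is a column integral of ice water content, and the small-to-total ratio follows directly. The layer depths and IWC profiles below are invented round numbers, not retrievals from Arthur or Irene.

```python
import numpy as np

def ice_water_path(iwc_profile, dz):
    """Column-integrate ice water content (kg m^-3) over layers of depth dz (m)."""
    return float(np.sum(np.asarray(iwc_profile, dtype=float)) * dz)

# Hypothetical 500-m layers: small ice aloft, graupel/snow lower down.
dz = 500.0
small_iwc = [0.0, 0.0002, 0.0004, 0.0004, 0.0002]  # kg m^-3, ice crystals
large_iwc = [0.0010, 0.0008, 0.0004, 0.0002, 0.0]  # kg m^-3, graupel/snow

small_iwp = ice_water_path(small_iwc, dz)    # kg m^-2
large_iwp = ice_water_path(large_iwc, dz)    # kg m^-2
ratio = small_iwp / (small_iwp + large_iwp)  # small-to-total IWP ratio
```

Binning such ratios by precipitation type (intense convective, stratiform, weak echo) yields the PDFs the abstract compares.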
