Search Results

You are looking at 1–4 of 4 items for:

  • Author or Editor: Rebecca E. Morss
  • Monthly Weather Review
  • All content
Thomas M. Hamill, Chris Snyder, and Rebecca E. Morss

Abstract

A perfect model Monte Carlo experiment was conducted to explore the characteristics of analysis error in a quasigeostrophic model. An ensemble of cycled analyses was created, with each member of the ensemble receiving different observations and starting from different forecast states. Observations were created by adding random error (consistent with observational error statistics) to vertical profiles extracted from truth run data. Assimilation of new observations was performed every 12 h using a three-dimensional variational analysis scheme. Three observation densities were examined: a low-density network (one observation ∼ every 20² grid points), a moderate-density network (one observation ∼ every 10² grid points), and a high-density network (∼ every 5² grid points). Error characteristics were diagnosed primarily from a subset of 16 analysis times taken every 10 days from a long time series, with the first sample taken after a 50-day spinup. The goal of this paper is to understand the spatial, temporal, and some dynamical characteristics of analysis errors.
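As a rough illustration of the observation-generation step described above, the sketch below extracts vertical profiles from a truth state and perturbs them with random error drawn from an assumed observational-error covariance. All names, shapes, and magnitudes are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_profiles(truth, rows, cols, R):
    """Extract vertical profiles at (row, col) sites and perturb them.

    truth : (nz, ny, nx) state from the truth run (hypothetical shape)
    rows, cols : 1D arrays of observation-site grid indices
    R : (nz, nz) observational-error covariance for one profile
    """
    L = np.linalg.cholesky(R)            # so the added noise is ~ N(0, R)
    profiles = truth[:, rows, cols].T    # (nobs, nz)
    noise = rng.standard_normal(profiles.shape) @ L.T
    return profiles + noise

# A high-density network: one profile every 5 points in each direction,
# i.e., one observation per ~5^2 grid points.
nz, ny, nx = 20, 100, 100
truth = rng.standard_normal((nz, ny, nx))
jj, ii = np.meshgrid(np.arange(0, ny, 5), np.arange(0, nx, 5), indexing="ij")
obs = synthetic_profiles(truth, jj.ravel(), ii.ravel(), 0.25 * np.eye(nz))
```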

Results suggest a nonlinear relationship between observational data density and analysis error; there was a much greater reduction in error from the low- to moderate-density networks than from the moderate- to high-density networks. Errors in the analysis reflected both structured errors created by the chaotic dynamics and random observational errors. The correction of the background toward the observations reduced the error but also randomized the prior dynamical structure of the errors, though the error structure depended on observational data density. Generally, the more observations, the more homogeneous the errors were in time and space and the less the analysis errors projected onto the leading backward Lyapunov vectors. Analyses provided more information at higher wavenumbers as data density increased. Errors were largest in the upper troposphere and smallest in the mid- to lower troposphere. Relatively small ensembles were effective in capturing a large percentage of the analysis-error variance, though more members were needed to capture a specified fraction of the variance as observation density increased.
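The "percentage of the analysis-error variance captured" diagnostic can be read as a subspace projection: project the true analysis error onto the space spanned by the ensemble perturbations and compare the projected variance with the total. A minimal sketch under that interpretation (my assumption, not the paper's stated algorithm):

```python
import numpy as np

def variance_captured(error, perts):
    """Fraction of analysis-error variance lying in the ensemble subspace.

    error : (n,) analysis minus truth, flattened
    perts : (n, m) ensemble perturbations about the ensemble mean
    """
    q, _ = np.linalg.qr(perts)    # orthonormal basis for the m-member subspace
    proj = q @ (q.T @ error)      # projection of the error onto that subspace
    return (proj @ proj) / (error @ error)

rng = np.random.default_rng(1)
err = rng.standard_normal(500)
members = rng.standard_normal((500, 25))
print(variance_captured(err, members - members.mean(axis=1, keepdims=True)))
```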

Thomas M. Hamill, Chris Snyder, and Rebecca E. Morss

Abstract

The statistical properties of analysis and forecast errors from commonly used ensemble perturbation methodologies are explored. A quasigeostrophic channel model is used, coupled with a 3D-variational data assimilation scheme. A perfect model is assumed.

Three perturbation methodologies are considered. The breeding and singular-vector (SV) methods approximate the strategies currently used at operational centers in the United States and Europe, respectively. The perturbed observation (PO) methodology approximates a random sample from the analysis probability density function (pdf) and is similar to the method used at the Canadian Meteorological Centre. Initial conditions for the PO ensemble are analyses from independent, parallel data assimilation cycles. Each assimilation cycle utilizes observations perturbed by random noise whose statistics are consistent with observational error covariances. Each member’s assimilation/forecast cycle is also started from a distinct initial condition.
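A schematic of one perturbed-observation cycle as described, in which each member assimilates its own perturbed copy of the observations. Here `assimilate` and `forecast` are placeholders for the 3D-Var update and the quasigeostrophic model step, which are not reproduced; the sketch only shows the cycling logic.

```python
import numpy as np

rng = np.random.default_rng(2)

def po_cycle(backgrounds, obs, L_obs, assimilate, forecast):
    """Advance every ensemble member through one assimilation/forecast cycle.

    backgrounds : list of member background states
    obs : (p,) unperturbed observation vector
    L_obs : (p, p) Cholesky factor of the observational-error covariance
    assimilate, forecast : stand-ins for the 3D-Var scheme and model
    """
    next_backgrounds = []
    for xb in backgrounds:
        # Perturb the observations with noise statistically consistent
        # with the observational-error covariance, then analyze and forecast.
        y = obs + L_obs @ rng.standard_normal(obs.shape)
        next_backgrounds.append(forecast(assimilate(xb, y)))
    return next_backgrounds
```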

Relative to breeding and SV, the PO method here produced analyses and forecasts with desirable statistical characteristics. These include consistent rank histogram uniformity for all variables at all lead times, high spread/skill correlations, and calibrated, reduced-error probabilistic forecasts. It achieved these improvements primarily because 1) the ensemble mean of the PO initial conditions was more accurate than the mean of the bred or singular-vector ensembles, which were centered on a less skillful control initial condition (much of the improvement was lost when PO initial conditions were recentered on the control analysis); and 2) by construction, the perturbed observation ensemble initial conditions permitted realistic variations in spread from day to day, while bred and singular-vector perturbations did not. These results suggest that in the absence of model error, an ensemble of initial conditions performs better when the initialization method is designed to produce random samples from the analysis pdf. The perturbed observation method did this much more satisfactorily than either the breeding or singular-vector methods.
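Rank histogram uniformity, one of the diagnostics cited here, can be checked in a few lines. A sketch (ties between members and the verification are ignored for brevity):

```python
import numpy as np

def rank_histogram(truth, ensemble):
    """Count where the verifying value ranks within each sorted ensemble.

    truth : (n,) verifying values
    ensemble : (n, m) member forecasts at the same points
    A flat histogram over the m + 1 bins indicates well-calibrated spread.
    """
    ranks = np.sum(ensemble < truth[:, None], axis=1)   # rank in 0..m
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

rng = np.random.default_rng(3)
print(rank_histogram(rng.standard_normal(10000),
                     rng.standard_normal((10000, 10))))
```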

The ability of the perturbed observation ensemble to sample randomly from the analysis pdf also suggests that such an ensemble can provide useful information on forecast covariances and hence improve future data assimilation techniques.
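The forecast covariances such an ensemble could supply are simply the sample covariances of the member states. A minimal version, assuming each state has been flattened to a vector (equivalent to `np.cov(states, rowvar=False)`):

```python
import numpy as np

def ensemble_covariance(states):
    """Flow-dependent sample covariance from an (m members, n variables) array."""
    perts = states - states.mean(axis=0)      # perturbations about the mean
    return perts.T @ perts / (states.shape[0] - 1)
```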

Rebecca E. Morss, Kathleen A. Miller, and Maxine S. Vasil

Abstract

Observations of the current state of the atmosphere are a major input to the production of modern weather forecasts. As a result, investments in observations are a major component of public expenditures related to weather forecasting. Consequently, from both a meteorological and a societal perspective, it is desirable to select an appropriate level of public investment in observations. Although the meteorological community has discussed optimal investment in observations for more than three decades, it still lacks a practical, systematic framework for analyzing this issue. This paper presents the basic elements of such a framework, using an economic approach. The framework is then demonstrated using an example for radiosonde observations and numerical weather forecasts. In presenting and demonstrating the framework, the paper also identifies gaps in existing knowledge that must be addressed before a more complete economic evaluation of investment in observations can be implemented.
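The underlying economic logic (expand the observing network only while the marginal benefit of an added observation exceeds its marginal cost) can be made concrete with invented numbers. Everything below is hypothetical and is not a result from the paper:

```python
# Pick the observation investment level that maximizes net societal benefit.
levels = [0, 25, 50, 75, 100]            # e.g., number of radiosonde sites
benefit = [0.0, 40.0, 65.0, 80.0, 88.0]  # forecast value; diminishing returns
cost = [0.0, 10.0, 20.0, 30.0, 40.0]     # roughly linear operating cost

net = [b - c for b, c in zip(benefit, cost)]
best = max(range(len(levels)), key=net.__getitem__)
print(f"optimal network size: {levels[best]} sites (net benefit {net[best]})")
```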

Kathryn R. Fossell, David Ahijevych, Rebecca E. Morss, Chris Snyder, and Chris Davis

Abstract

The potential for storm surge to cause extensive property damage and loss of life has increased the urgency of accurately predicting coastal flooding associated with landfalling tropical cyclones. This work investigates the sensitivity of coastal inundation from storm tide (surge + tide) to four hurricane parameters (track, intensity, size, and translation speed), and the sensitivity of inundation forecasts to errors in forecasts of those parameters. An ensemble of storm tide simulations is generated for three storms in the Gulf of Mexico by driving a storm surge model with best track data and with systematically generated perturbations of storm parameters from the best track. The spread of the storm perturbations is compared to average errors in recent operational hurricane forecasts, allowing the sensitivity results to be interpreted in terms of the practical predictability of coastal inundation at different lead times. Two types of inundation metrics are evaluated: point-based statistics and spatially integrated volumes. The practical predictability of surge inundation is found to be limited foremost by current errors in hurricane track forecasts, followed by intensity errors and then speed errors. Errors in storm size can also play an important role in limiting surge predictability at short lead times, due to observational uncertainty. Results show that, given current mean errors in hurricane forecasts, location-specific surge inundation is predictable for as little as 12–24 h prior to landfall, and for even less time for small storms. The results also indicate the potential for increased surge predictability beyond 24 h for large storms when a storm-following, volume-integrated metric of inundation is considered.
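A sketch of the two ingredients of this experimental design: random perturbations of the four storm parameters around the best track, and a volume-integrated inundation metric. All parameter names, units, and magnitudes are illustrative assumptions; the actual simulations require driving a full storm surge model, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def perturb_storm(best_track, sigma, n_members):
    """Generate ensemble members by perturbing the four storm parameters.

    best_track : dict of parameter values (hypothetical keys below)
    sigma : dict of perturbation magnitudes, chosen to match mean
            operational forecast errors at a given lead time
    """
    return [
        {k: best_track[k] + sigma[k] * rng.standard_normal()
         for k in best_track}
        for _ in range(n_members)
    ]

def inundation_volume(depth, cell_area_m2):
    """Volume-integrated inundation: sum of water depth times cell area."""
    return np.sum(np.maximum(depth, 0.0)) * cell_area_m2

members = perturb_storm(
    {"track_offset_km": 0.0, "intensity_hpa": 950.0,
     "size_km": 300.0, "speed_ms": 5.0},
    {"track_offset_km": 80.0, "intensity_hpa": 10.0,
     "size_km": 30.0, "speed_ms": 1.0},
    n_members=10,
)
```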
