Search Results

You are looking at 1 - 6 of 6 items for

  • Author or Editor: Brian R. Hunt
Aleksey V. Zimin, Istvan Szunyogh, Brian R. Hunt, and Edward Ott

Abstract

Previously developed techniques for extracting the envelopes of Rossby wave packets are based on the assumption of zonally propagating waves. In this note, a method that does not require this assumption is proposed. The advantages of the new technique are demonstrated on both analytical and real-world examples.

Full access
Seung-Jong Baek, Istvan Szunyogh, Brian R. Hunt, and Edward Ott

Abstract

Model error is the component of the forecast error that is due to the difference between the dynamics of the atmosphere and the dynamics of the numerical prediction model. The systematic, slowly varying part of the model error is called model bias. This paper evaluates three different ensemble-based strategies to account for the surface pressure model bias in the analysis scheme. These strategies are based on modifying the observation operator for the surface pressure observations by the addition of a bias-correction term. One strategy estimates the correction term adaptively, while another uses the hydrostatic balance equation to obtain the correction term. The third strategy combines an adaptively estimated correction term with the hydrostatic-balance-based correction term. Numerical experiments are carried out in an idealized setting, where the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) model is integrated at resolution T62L28 to simulate the evolution of the atmosphere and the T30L7 resolution Simplified Parameterization Primitive Equation Dynamics (SPEEDY) model is used for data assimilation. The results suggest that the adaptive bias-correction term is effective in correcting the bias in data-rich regions, while the hydrostatic-balance-based approach is effective in data-sparse regions. The adaptive bias-correction approach also significantly improves the temperature and wind analyses at the higher model levels. The best results are obtained when the two bias-correction approaches are combined.
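The adaptive strategy can be caricatured in a few lines: the bias-correction term added to the observation operator is updated from the observation-minus-background departures. The scalar sketch below is illustrative only (the bias value, noise level, and smoothing factor are invented; the paper uses an ensemble formulation with the SPEEDY model, not this recursion):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar analogue: observations of a state carry an unknown constant
# bias b_true relative to the model background h(x).
b_true = 1.5
background = np.cumsum(0.1 * rng.standard_normal(500))   # stand-in for h(x)
obs = background + b_true + 0.2 * rng.standard_normal(500)

# Adaptive bias correction: exponentially weighted average of the
# observation-minus-background departures converges to the bias.
alpha = 0.05
b_hat = 0.0
for y, hx in zip(obs, background):
    b_hat += alpha * ((y - hx) - b_hat)
```

With enough departures, `b_hat` settles near the true bias, which is then subtracted when the observations are assimilated.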

Full access
D. A. S. Patil, Brian R. Hunt, and James A. Carton

Abstract

Computational modeling is playing an increasingly vital role in the study of atmospheric–oceanic systems. Given the complexity of the models, a fundamental question to ask is, How well does the output of one model agree with the evolution of another model or with the true system that is represented by observational data? Since observational data contain measurement noise, the question is placed in the framework of time series analysis from a dynamical systems perspective. That is, it is desired to know whether the two, possibly noisy, time series were produced by similar physical processes.

In this paper, simple graphical representations of the time series and of the errors made by a simple predictive model of the time series (known as residual delay maps) are used to extract information about the nature of the time evolution of the system (referred to here as the dynamics). Two different uses for these graphical representations are presented. First, a test for the comparison of two competing models, or of a model and observational data, is proposed. The utility of this test is that it is based on comparing the underlying dynamical processes rather than looking directly at differences between two datasets. An example of this test is provided by comparing station data and NCEP–NCAR reanalysis data on the Australian continent.

Second, the technique is applied to the global NCEP–NCAR reanalysis data. From this a composite image is created that effectively identifies regions of the atmosphere where the dynamics are strongly dependent on low-dimensional nonlinear processes. It is also shown how the transition between such regions can be depicted using residual delay maps. This allows for the investigation of the conjecture of Sugihara et al. that sites in the midlatitudes are significantly more nonlinear than sites in the Tropics.
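The idea behind a residual delay map can be illustrated with invented data (this is not the paper's predictor or dataset): pair a simple forecast's errors with the state they were made from. Below, a persistence forecast is applied to a chaotic logistic map, standing in for low-dimensional nonlinear dynamics, and to uniform noise for comparison; for the nonlinear series the residuals collapse onto a curve, so their spread within bins of the past state is small:

```python
import numpy as np

def residual_delay_map(series):
    """Residuals of a one-step persistence forecast, paired with the
    delayed value they were made from: points (s_t, s_{t+1} - s_t)."""
    return series[:-1], np.diff(series)

def mean_binned_spread(x, r, nbins=20):
    """Average residual standard deviation within bins of the delayed value.
    Small spread means the residuals are nearly a deterministic function
    of the past state, i.e. low-dimensional dynamics."""
    edges = np.linspace(x.min(), x.max(), nbins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    return np.mean([r[idx == b].std() for b in range(nbins) if np.any(idx == b)])

rng = np.random.default_rng(1)

# Low-dimensional nonlinear dynamics: a chaotic logistic map.
s = np.empty(2000)
s[0] = 0.3
for t in range(1999):
    s[t + 1] = 3.9 * s[t] * (1.0 - s[t])

x_det, r_det = residual_delay_map(s)
x_rnd, r_rnd = residual_delay_map(rng.uniform(0.0, 1.0, 2000))
```

`mean_binned_spread` is a crude numerical stand-in for what the paper does graphically: the nonlinear series shows far less within-bin scatter than the noise series.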

Full access
Aleksey V. Zimin, Istvan Szunyogh, D. J. Patil, Brian R. Hunt, and Edward Ott

Abstract

Packets of Rossby waves play an important role in the transfer of kinetic energy in the extratropics. The ability to locate, track, and detect changes in the envelope of these wave packets is vital to detecting baroclinic downstream development, tracking the impact of the analysis errors in numerical weather forecasts, and analyzing the forecast effects of targeted weather observations. In this note, it is argued that a well-known technique of digital signal processing, which is based on the Hilbert transform, should be used for extracting the envelope of atmospheric wave packets. This technique is robust, simple, and computationally inexpensive. The superiority of the proposed algorithm over the complex demodulation technique (the only technique previously used for this purpose in atmospheric studies) is demonstrated by examples. The skill of the proposed algorithm is also demonstrated by tracking wave packets in operational weather analyses from the National Centers for Environmental Prediction (NCEP) and analyzing the effects of targeted observations from the 2000 Winter Storm Reconnaissance (WSR00) field program.
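Along a one-dimensional transect, the Hilbert-transform technique advocated here amounts to taking the magnitude of the analytic signal. A minimal numpy sketch on a synthetic packet (the carrier frequency and Gaussian envelope are illustrative, not from the paper):

```python
import numpy as np

def envelope_via_hilbert(signal):
    """Envelope as the magnitude of the analytic signal: zero the negative
    FFT frequencies, double the positive ones, inverse transform."""
    n = len(signal)
    spec = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0  # Nyquist bin is kept unscaled for even n
    return np.abs(np.fft.ifft(spec * h))

# Synthetic wave packet: a carrier wave under a Gaussian envelope.
x = np.linspace(-10.0, 10.0, 512)
true_envelope = np.exp(-x**2 / 8.0)
signal = true_envelope * np.cos(2.0 * np.pi * 2.0 * x)

envelope = envelope_via_hilbert(signal)
```

The recovered `envelope` tracks the Gaussian modulation closely, with no assumption about the propagation direction of the carrier waves; `scipy.signal.hilbert` implements the same construction.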

Full access
Steven J. Greybush, Eugenia Kalnay, Takemasa Miyoshi, Kayo Ide, and Brian R. Hunt

Abstract

In ensemble Kalman filter (EnKF) data assimilation, localization modifies the error covariance matrices to suppress the influence of distant observations, removing spurious long-distance correlations. In addition to allowing efficient parallel implementation, this takes advantage of the atmosphere’s lower dimensionality in local regions. There are two primary methods for localization. In B localization, the background error covariance matrix elements are reduced by a Schur product so that correlations between grid points that are far apart are removed. In R localization, the observation error covariance matrix is multiplied by a distance-dependent function, so that distant observations are effectively assigned infinite error. Successful numerical weather prediction depends upon well-balanced initial conditions to avoid spurious propagation of inertial-gravity waves. Previous studies note that B localization can disrupt the relationship between the height gradient and the wind speed of the analysis increments, resulting in an analysis that can be significantly ageostrophic.

This study begins with a comparison of the accuracy and geostrophic balance of EnKF analyses using no localization, B localization, and R localization with simple one-dimensional balanced waves derived from the shallow-water equations, indicating that the optimal length scale for R localization is shorter than for B localization, and that for the same length scale R localization is more balanced. The comparison of localization techniques is then expanded to the Simplified Parameterizations, Primitive Equation Dynamics (SPEEDY) global atmospheric model. Here, natural imbalance of the slow manifold must be contrasted with undesired imbalance introduced by data assimilation. Performance of the two techniques is comparable, also with a shorter optimal localization distance for R localization than for B localization.
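The distinction between the two schemes can be made concrete in a toy Kalman-gain computation (grid size, covariances, and taper length scales below are invented for illustration; the paper's experiments use shallow-water waves and the SPEEDY model):

```python
import numpy as np

ngrid = 40
dist = np.abs(np.subtract.outer(np.arange(ngrid), np.arange(ngrid)))

# Illustrative background covariance with long-range correlations.
B = np.exp(-dist / 15.0)
H = np.eye(ngrid)            # observe every grid point, for simplicity
R = 0.5 * np.eye(ngrid)      # observation error covariance

# B localization: Schur (element-wise) product with a distance-dependent
# taper removes correlations between far-apart grid points.
taper = np.exp(-(dist / 5.0) ** 2)
B_loc = B * taper

# R localization: for the analysis at grid point i, each observation's
# error variance is divided by the taper, so distant observations are
# treated as having effectively infinite error.
i = ngrid // 2
R_loc = R / np.maximum(np.exp(-(dist[i] / 5.0) ** 2), 1e-12)

# Kalman gains under the two localizations.
K_B = B_loc @ H.T @ np.linalg.inv(H @ B_loc @ H.T + R)
K_R = B @ H.T @ np.linalg.inv(H @ B @ H.T + R_loc)
```

Both gains give near-zero weight to observations far from grid point `i`, but they get there differently: `K_B` by deleting background correlations, `K_R` by inflating distant observation errors (and `K_R` is valid only for the analysis at point `i`, which is why R localization pairs naturally with local, parallel EnKF solvers).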

Full access
David Kuhl, Istvan Szunyogh, Eric J. Kostelich, Gyorgyi Gyarmati, D. J. Patil, Michael Oczkowski, Brian R. Hunt, Eugenia Kalnay, Edward Ott, and James A. Yorke

Abstract

In this paper, the spatiotemporally changing nature of predictability is studied in a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), a state-of-the-art numerical weather prediction model. Atmospheric predictability is assessed in the perfect model scenario for which forecast uncertainties are entirely due to uncertainties in the estimates of the initial states. Uncertain initial conditions (analyses) are obtained by assimilating simulated noisy vertical soundings of the “true” atmospheric states with the local ensemble Kalman filter (LEKF) data assimilation scheme. This data assimilation scheme provides an ensemble of initial conditions. The ensemble mean defines the initial condition of 5-day deterministic model forecasts, while the time-evolved members of the ensemble provide an estimate of the evolving forecast uncertainties. The observations are randomly distributed in space to ensure that the geographical distribution of the analysis and forecast errors reflects predictability limits due to the model dynamics and is not affected by inhomogeneities of the observational coverage.

Analysis and forecast error statistics are calculated for the deterministic forecasts. It is found that short-term forecast errors tend to grow exponentially in the extratropics and linearly in the Tropics. The behavior of the ensemble is explained by using the ensemble dimension (E dimension), a spatiotemporally evolving measure of the evenness of the distribution of the variance between the principal components of the ensemble-based forecast error covariance matrix.

It is shown that in the extratropics the largest forecast errors occur for the smallest E dimensions. Since a low value of the E dimension guarantees that the ensemble can capture a large portion of the forecast error, the larger the forecast error, the more certain it is that the ensemble can fully capture it. In particular, in regions of low E dimension, ensemble averaging is an efficient error filter and the ensemble spread provides an accurate prediction of the upper bound of the error in the ensemble-mean forecast.
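The E dimension has a compact definition: with σ_i the singular values of the ensemble-perturbation matrix, E = (Σσ_i)² / Σσ_i², ranging from 1 (all variance in one direction) to the ensemble size (variance spread evenly). A sketch with invented ensemble dimensions and random perturbations:

```python
import numpy as np

def e_dimension(perturbations):
    """E dimension: (sum of singular values)^2 / (sum of squared singular
    values) of the matrix of ensemble perturbations from the mean."""
    sigma = np.linalg.svd(perturbations, compute_uv=False)
    return sigma.sum() ** 2 / (sigma**2).sum()

rng = np.random.default_rng(0)

# Eight ensemble perturbations in a 20-variable local state (illustrative).
spread = rng.standard_normal((20, 8))                                # variance spread out
aligned = np.outer(rng.standard_normal(20), rng.standard_normal(8))  # rank one
```

The rank-one ensemble gives E ≈ 1, the low-E-dimension regime in which ensemble averaging filters error efficiently; the random ensemble gives E close to the ensemble size of 8.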

Full access