Search Results
Items 21-25 of 25 for Author or Editor: Istvan Szunyogh
Abstract
The standard statistical model of data assimilation assumes that the background and observation errors are normally distributed, and the first- and second-order statistical moments of the two distributions are known or can be accurately estimated. Because these assumptions are never satisfied completely in practice, data assimilation schemes must be robust to errors in the underlying statistical model. This paper tests simple approaches to improving the robustness of data assimilation in tropical cyclone (TC) regions.
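For reference, the framework the abstract refers to can be written compactly. With background state $\mathbf{x}^{b}$, background error covariance $\mathbf{P}^{b}$, observations $\mathbf{y}$ with error covariance $\mathbf{R}$, and (linearized) observation operator $\mathbf{H}$, the analysis under the stated Gaussian assumptions is

    \mathbf{x}^{a} = \mathbf{x}^{b} + \mathbf{K}\,(\mathbf{y} - \mathbf{H}\mathbf{x}^{b}), \qquad \mathbf{K} = \mathbf{P}^{b}\mathbf{H}^{\mathsf{T}}\,(\mathbf{H}\mathbf{P}^{b}\mathbf{H}^{\mathsf{T}} + \mathbf{R})^{-1}

This update is optimal only when the assumed $\mathbf{P}^{b}$ and $\mathbf{R}$ match the true error statistics and the errors really are Gaussian; robustness, in the sense used here, is the ability of the scheme to degrade gracefully when they do not.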
Analysis–forecast experiments are carried out with three types of data—Tropical Cyclone Vitals (TCVitals), DOTSTAR, and QuikSCAT—that are particularly relevant for TCs and with an ensemble-based data assimilation scheme that prepares a global analysis and a limited-area analysis in a TC basin simultaneously. The results of the experiments demonstrate that significant analysis and forecast improvements can be achieved for TCs that are category 1 and higher by improving the robustness of the data assimilation scheme.
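The abstract does not spell out which robustness measures were tested. A common, simple measure in ensemble-based schemes is multiplicative covariance inflation, which compensates for an underestimated background error covariance; the sketch below is illustrative only and is not necessarily one of the approaches evaluated in the paper.

    import numpy as np

    def inflate_ensemble(X, rho=1.1):
        # Multiplicative covariance inflation: scaling the perturbations about
        # the ensemble mean by rho inflates the sample covariance by rho**2.
        # Generic robustness measure, not the paper's specific method.
        x_mean = X.mean(axis=1, keepdims=True)
        return x_mean + rho * (X - x_mean)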
Abstract
Data assimilation approaches that use ensembles to approximate a Kalman filter have many potential advantages for oceanographic applications. To explore the extent to which this holds, the Estuarine and Coastal Ocean Model (ECOM) is coupled with a modern data assimilation method based on the local ensemble transform Kalman filter (LETKF), and a series of simulation experiments is conducted. In these experiments, a long ECOM “nature” run is taken to be the “truth.” Observations are generated at analysis times by perturbing the nature run at randomly chosen model grid points with errors of known statistics. A diverse collection of model states is used for the initial ensemble. All experiments use the same lateral boundary conditions and external forcing fields as in the nature run. In the data assimilation, the analysis step combines the observations and the ECOM forecasts using the Kalman filter equations. As a control, a free-running forecast (FRF) is made from the initial ensemble mean to check the relative importance of external forcing versus data assimilation on the analysis skill. Results of the assimilation cycle and the FRF are compared to truth to quantify the skill of each.
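The twin-experiment design described above can be summarized in a short sketch. The snippet below (NumPy; function names and the scalar observation-error variance are assumptions) generates synthetic observations by perturbing the nature run at randomly chosen grid points and then applies an unlocalized ensemble transform analysis step of the kind the LETKF applies locally around each grid point.

    import numpy as np

    def make_observations(truth, frac, obs_err_std, rng):
        # Perturb the nature run at randomly chosen grid points with Gaussian
        # errors of known standard deviation (OSSE-style observations).
        n = truth.size
        obs_idx = rng.choice(n, size=int(frac * n), replace=False)
        y = truth[obs_idx] + rng.normal(0.0, obs_err_std, size=obs_idx.size)
        return obs_idx, y

    def etkf_analysis(Xb, obs_idx, y, obs_err_std):
        # Unlocalized ensemble transform Kalman filter update; the LETKF applies
        # the same equations in a local region around every grid point.
        n, k = Xb.shape                                 # state dimension, ensemble size
        xb_mean = Xb.mean(axis=1, keepdims=True)
        Xp = Xb - xb_mean                               # background perturbations
        Yp = Xp[obs_idx, :]                             # perturbations in observation space (H = point selection)
        d = y - xb_mean[obs_idx, 0]                     # innovations
        r_inv = 1.0 / obs_err_std**2                    # R assumed diagonal with equal variances
        Pa = np.linalg.inv((k - 1) * np.eye(k) + r_inv * (Yp.T @ Yp))
        w_mean = Pa @ (r_inv * (Yp.T @ d))              # weights for the mean update
        evals, evecs = np.linalg.eigh((k - 1) * Pa)
        Wa = evecs @ np.diag(np.sqrt(evals)) @ evecs.T  # symmetric square root
        xa_mean = xb_mean + Xp @ w_mean[:, None]        # analysis mean
        return xa_mean + Xp @ Wa                        # analysis ensemble

The free-running control forecast described in the abstract corresponds to running the model from the initial ensemble mean and skipping this analysis step between forecast segments.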
The LETKF performs well for the cases studied here. After just a few assimilation cycles, the analysis errors are smaller than the observation errors and are much smaller than the errors in the FRF. The assimilation quickly eliminates the domain-averaged bias of the initial ensemble. The filter accurately tracks the truth at all data densities examined, from observations at 50% of the model grid points down to 2% of the model grid points. As the data density increases, the ensemble spread, bias, and error standard deviation decrease. As the ensemble size increases, the ensemble spread increases and the error standard deviation decreases. Increases in the size of the observation error lead to a larger ensemble spread but have a small impact on the analysis accuracy.
Abstract
In this paper, the spatiotemporally changing nature of predictability is studied in a reduced-resolution version of the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), a state-of-the-art numerical weather prediction model. Atmospheric predictability is assessed in the perfect model scenario for which forecast uncertainties are entirely due to uncertainties in the estimates of the initial states. Uncertain initial conditions (analyses) are obtained by assimilating simulated noisy vertical soundings of the “true” atmospheric states with the local ensemble Kalman filter (LEKF) data assimilation scheme. This data assimilation scheme provides an ensemble of initial conditions. The ensemble mean defines the initial condition of 5-day deterministic model forecasts, while the time-evolved members of the ensemble provide an estimate of the evolving forecast uncertainties. The observations are randomly distributed in space to ensure that the geographical distribution of the analysis and forecast errors reflect predictability limits due to the model dynamics and are not affected by inhomogeneities of the observational coverage.
Analysis and forecast error statistics are calculated for the deterministic forecasts. It is found that short-term forecast errors tend to grow exponentially in the extratropics and linearly in the Tropics. The behavior of the ensemble is explained by using the ensemble dimension (E dimension), a spatiotemporally evolving measure of the evenness of the distribution of the variance between the principal components of the ensemble-based forecast error covariance matrix.
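The E dimension is not defined in the abstract itself. One standard definition, used in closely related studies (it is sometimes called the bred vector dimension), computes it from the eigenvalues $\lambda_{1}, \dots, \lambda_{k}$ of the $k$-member ensemble-based error covariance matrix:

    E = \frac{\left(\sum_{i=1}^{k}\sqrt{\lambda_{i}}\right)^{2}}{\sum_{i=1}^{k}\lambda_{i}}

E ranges from 1, when all of the ensemble variance lies in a single direction, to k, when the variance is spread evenly among the k principal components, so low values indicate that the uncertainty is locally confined to a few directions.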
It is shown that in the extratropics the largest forecast errors occur for the smallest E dimensions. Since a low value of the E dimension guarantees that the ensemble can capture a large portion of the forecast error, the larger the forecast error, the more certain it is that the ensemble can capture it. In particular, in regions of low E dimension, ensemble averaging is an efficient error filter, and the ensemble spread provides an accurate prediction of the upper bound of the error in the ensemble-mean forecast.
Abstract
A regionally enhanced global (REG) data assimilation (DA) method is proposed. The technique blends high-resolution model information from one or more limited-area model domains with global model and observational information to create a regionally enhanced analysis of the global atmospheric state. This single analysis provides initial conditions for both the global and limited-area model forecasts. The potential benefits of the approach for operational data assimilation are (i) reduced development cost, (ii) reduced overall computational cost, (iii) improved limited-area forecast performance from the use of global information about the atmospheric flow, and (iv) improved global forecast performance from the use of more accurate model information in the limited-area domains. The method is tested by an implementation on the U.S. Navy’s four-dimensional variational global data assimilation system and global and limited-area numerical weather prediction models. The results of the monthlong forecast experiments suggest that the REG DA approach has the potential to deliver the desired benefits.
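The abstract does not describe the blending step in detail. Purely as an illustration of the general idea, the sketch below (hypothetical names; not the Navy system's algorithm) replaces a global background field with a high-resolution limited-area field inside the nested domain, with weights that relax linearly back to the global field across a transition zone, before the blended background would be passed to the data assimilation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def blend_fields(global_field, regional_field, region_mask, transition_width=5):
        # Illustrative blending only. regional_field is assumed to be already
        # regridded to the global grid; region_mask is True inside the
        # limited-area domain. The weight w is 1 deep inside the domain,
        # 0 outside it, and ramps linearly across the transition zone so the
        # blended field has no discontinuity at the domain boundary.
        dist_inside = distance_transform_edt(region_mask)   # grid-point distance from the domain edge
        w = np.clip(dist_inside / transition_width, 0.0, 1.0)
        return w * regional_field + (1.0 - w) * global_field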