Abstract
Atmospheric numerical models using the spectral element method with cubed-sphere grids (CSGs) are highly scalable in terms of parallelization. However, there are no data assimilation systems for spectral element numerical models. The authors devised a spectral transformation method applicable to the model data on a CSG (STCS) for a three-dimensional variational data assimilation system (3DVAR). To evaluate the 3DVAR system based on the STCS, the authors conducted observing system simulation experiments (OSSEs) using the Community Atmosphere Model with Spectral Element dynamical core (CAM-SE). They observed root-mean-squared error reductions: 24% and 34% for zonal and meridional winds (U and V), respectively; 20% for temperature (T); 4% for specific humidity (Q); and 57% for surface pressure (Ps) in the analysis fields, and 28% and 27% for U and V, respectively; 25% for T; 21% for Q; and 31% for Ps in the 72-h forecast fields. In this paper, under the premise that the same number of grid points is set, the authors show that the use of a greater polynomial degree, np, produces better performance than the use of a greater element count, ne, on equiangular coordinates in terms of the wave representation.
Abstract
Recent studies have shown that assimilating enhanced satellite-derived atmospheric motion vectors (AMVs) has improved mesoscale forecasts of tropical cyclone (TC) track and intensity. The authors conduct data-denial experiments to understand where the TC analyses and forecasts benefit the most from the enhanced AMV information using an ensemble Kalman filter and the Weather Research and Forecasting Model. The Cooperative Institute for Meteorological Satellite Studies at the University of Wisconsin provides enhanced AMV datasets with higher density and temporal resolution using shorter-interval image triplets for the duration of Typhoon Sinlaku and Hurricane Ike (both 2008). These AMV datasets are then spatially and vertically subsetted to create six parallel cycled assimilation-forecast experiments for each TC: all AMVs; AMVs withheld between 100 and 350 hPa (upper layer), between 350 and 700 hPa (middle layer), and between 700 and 999 hPa (lower layer); and only AMVs within (interior) and outside (exterior) a 1000-km radius of the TC center. All AMV subsets are found to be useful in some capacity. The interior and upper-layer AMVs are particularly crucial for improving initial TC position, intensity, and the three-dimensional wind structure along with their forecasts. Compared with denying interior or exterior AMVs, withholding AMVs in different tropospheric layers had less impact on TC intensity and size forecasts. The ensemble forecast is less certain (larger spread) in providing accurate TC track, intensity, and size when upper-layer AMVs or interior AMVs are withheld. This information could be useful to potential targeting scenarios, such as activating and focusing satellite rapid-scan operations, and decisions regarding observing system assessments and deployments.
Abstract
The goal of this study is to improve an ensemble-based estimation for forecast sensitivity to observations that is straightforward to apply using existing products of any ensemble data assimilation system. Because of limited ensemble sizes compared to the large degrees of freedom in typical models, it is necessary to apply localization techniques to obtain accurate estimates. Fixed localization techniques do not guarantee accurate impact estimates, because as forecast time increases the error correlation structures evolve with the flow. Here a dynamical localization method is applied to improve the observation impact estimate. The authors employ a Monte Carlo “group filter” technique to limit the effects of sampling error via a regression confidence factor (RCF). Experiments make use of the local ensemble transform Kalman filter (LETKF) with a simple two-layer primitive equation model and simulated observations. Results show that the shape, location, time dependency, and variable dependency of RCF localization functions are consistent with the underlying dynamical processes of the model. Application of RCF localization to the ensemble-estimated impact showed marked improvement, especially for longer forecasts and at midlatitudes, when systematically verified against the actual impact in RMSE and skill scores. The impact estimates near the equator were not as effective because of large discrepancies between the RCF function and the localization used at assimilation time. These latter results indicate that there exists an inherent relationship between the localization applied during the assimilation time and the proper localization choice for observation impact estimates. Application of the RCF for automatically tuned localization is introduced and tested in a single-observation experiment.
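The group-filter idea can be sketched as follows: split the ensemble into subgroups, fit the observation-to-state regression in each subgroup, and damp the regression by how consistent the group coefficients are. This is a minimal sketch in the spirit of hierarchical/group-filter approaches; the function name, the group count, and the least squares damping formula are illustrative assumptions, not this study's exact implementation.

```python
import numpy as np

def regression_confidence_factor(obs_ens, state_ens, n_groups=4, rng=None):
    """Monte Carlo 'group filter' sketch of a regression confidence
    factor (RCF): partition the ensemble into n_groups subgroups,
    compute the regression coefficient of the state on the observed
    variable in each subgroup, and return the damping factor alpha
    that minimizes sum_{i != j} (alpha*b_i - b_j)^2 over the group
    coefficients b_i (clipped to [0, 1])."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(obs_ens.size)
    betas = []
    for g in np.array_split(idx, n_groups):
        dy = obs_ens[g] - obs_ens[g].mean()   # obs-space anomalies
        dx = state_ens[g] - state_ens[g].mean()  # state anomalies
        betas.append(np.dot(dy, dx) / np.dot(dy, dy))
    b = np.asarray(betas)
    # Consistent groups (similar b_i) give alpha near 1;
    # noisy, inconsistent groups give alpha near 0.
    alpha = (b.sum() ** 2 - np.dot(b, b)) / ((n_groups - 1) * np.dot(b, b))
    return float(np.clip(alpha, 0.0, 1.0))
```

For a perfectly linear obs-state relationship every subgroup recovers the same coefficient and the factor is 1; sampling noise across subgroups pulls it toward 0, which is what makes it usable as a flow-dependent localization weight.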
Abstract
Multimodel ensemble data assimilation may account for uncertainties of numerical models due to different dynamical cores and physics parameterizations. In previous studies, the ensemble sizes for each model are prescribed subjectively, for example, distributed uniformly across the models. In this study, a Bayesian filter approach to a multimodel ensemble Kalman filter is adopted to objectively estimate the optimal combination of ensemble sizes for each model. An effective inflation method to make the discrete Bayesian filter work without converging to a single imperfect model was developed.
As a first step, the proposed approach was tested with the 40-variable Lorenz-96 model. Different values of the model parameter F are used to mimic the multimodel ensemble. The true F is first chosen to be
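The weight-allocation machinery can be sketched as a discrete Bayes update over per-model weights plus a flattening step playing the role of the anti-collapse "inflation." The Gaussian likelihood, the flattening exponent `kappa`, and the largest-remainder rounding below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def update_model_weights(weights, innovations, obs_var, kappa=0.7):
    """Discrete Bayesian update of per-model weights from each model's
    mean innovation (observation minus forecast), followed by a simple
    'inflation' that flattens the posterior (weights**kappa, then
    renormalize) so that no single imperfect model absorbs all of the
    weight over repeated cycles."""
    w = np.asarray(weights, dtype=float)
    d = np.asarray(innovations, dtype=float)
    like = np.exp(-0.5 * d**2 / obs_var)  # Gaussian likelihood per model
    w = w * like
    w /= w.sum()
    w = w**kappa                          # flatten to prevent collapse
    return w / w.sum()

def allocate_ensemble_sizes(weights, n_total):
    """Largest-remainder rounding of n_total members to the weights,
    so the integer ensemble sizes sum exactly to n_total."""
    raw = np.asarray(weights) * n_total
    base = np.floor(raw).astype(int)
    order = np.argsort(raw - base)[::-1]   # largest fractional parts first
    base[order[: n_total - base.sum()]] += 1
    return base
```

Models whose forecasts sit closer to the observations gain weight (and members) each cycle, while the flattening keeps the worse models alive as a hedge against the better model being wrong later.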
Abstract
Ensemble square root filters can either assimilate all observations that are available at a given time at once, or assimilate the observations in batches or one at a time. For large-scale models, the filters are typically applied with a localized analysis step. This study demonstrates that the interaction of serial observation processing and localization can destabilize the analysis process, and it examines under which conditions the instability becomes significant. The instability results from a repeated inconsistent update of the state error covariance matrix that is caused by the localization. The inconsistency is present in all ensemble Kalman filters, except for the classical ensemble Kalman filter with perturbed observations. With serial observation processing, its effect is small in cases when the assimilation changes the ensemble of model states only slightly. However, when the assimilation has a strong effect on the state estimates, the interaction of localization and serial observation processing can significantly deteriorate the filter performance. In realistic large-scale applications, when the assimilation changes the states only slightly and when the distribution of the observations is irregular and changing over time, the instability is likely not significant.
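The serial update analyzed above can be sketched as follows. This is the standard one-observation-at-a-time square root form (Whitaker–Hamill perturbation update) with a Gaspari–Cohn taper applied to the gain; the function names, the direct point-observation operator, and the argument layout are illustrative choices, not the study's code.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order compact correlation function
    (support 2c): 1 at zero distance, 0 beyond 2c."""
    r = np.abs(dist) / c
    f = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    f[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
             - (5.0 / 3.0) * r[m1]**2 + 1.0)
    f[m2] = ((1.0 / 12.0) * r[m2]**5 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
             + (5.0 / 3.0) * r[m2]**2 - 5.0 * r[m2] + 4.0 - (2.0 / 3.0) / r[m2])
    return f

def serial_ensrf_update(ens, obs, obs_idx, obs_var, dists, c):
    """Serial EnSRF: assimilate scalar point observations one at a time.
    ens is (nstate, nmem). The localized gain is used in both the mean
    and the perturbation update; the inconsistency the abstract discusses
    arises because the ensemble covariance implied after each such update
    is not exactly the localized covariance used for the next one."""
    ens = np.array(ens, dtype=float)
    nmem = ens.shape[1]
    for y, j, r, d in zip(obs, obs_idx, obs_var, dists):
        xm = ens.mean(axis=1)
        xp = ens - xm[:, None]
        hx = ens[j]                           # obs prior (direct point obs)
        hxp = hx - hx.mean()
        s = hxp @ hxp / (nmem - 1)            # prior obs-space variance
        cov = xp @ hxp / (nmem - 1)           # cov(state, obs prior)
        k = gaspari_cohn(d, c) * cov / (s + r)   # localized Kalman gain
        alpha = 1.0 / (1.0 + np.sqrt(r / (s + r)))  # square root factor
        xm = xm + k * (y - hx.mean())
        xp = xp - alpha * np.outer(k, hxp)
        ens = xm[:, None] + xp
    return ens
```

Each pass shrinks the ensemble spread at and near the observed point and moves the mean toward the observation; with strong assimilation increments, repeating this tapered update many times per cycle is exactly the regime where the abstract finds the instability matters.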
Abstract
An observing system simulation experiment (OSSE) has been carried out to evaluate the impact of a hybrid ensemble–variational data assimilation algorithm for use with the National Centers for Environmental Prediction (NCEP) global data assimilation system. An OSSE provides a controlled framework for evaluating analysis and forecast errors since a truth is known. In this case, the nature run was generated and provided by the European Centre for Medium-Range Weather Forecasts as part of the international Joint OSSE project. The assimilation and forecast impact studies are carried out using a model that is different from the nature run model, thereby accounting for model error and avoiding issues with the so-called identical-twin experiments.
It is found that the quality of analysis is improved substantially when going from three-dimensional variational data assimilation (3DVar) to a hybrid 3D ensemble–variational (EnVar)-based algorithm. This is especially true in terms of the analysis error reduction for wind and moisture, most notably in the tropics. Forecast impact experiments show that the hybrid-initialized forecasts improve upon the 3DVar-based forecasts for most metrics, lead times, variables, and levels. An additional experiment that utilizes 3DEnVar (100% ensemble) demonstrates that the use of a 25% static error covariance contribution does not alter the quality of hybrid analysis when utilizing the tangent-linear normal mode constraint on the total hybrid increment.
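The static/ensemble weighting discussed above can be written in the usual extended-control-variable form; the notation here is generic (not necessarily the operational system's), with $\beta_s$ and $\beta_e$ controlling the relative static and ensemble contributions:

```latex
J(\mathbf{x}'_s,\boldsymbol{\alpha}) =
\frac{\beta_s}{2}\,\mathbf{x}_s'^{\mathsf T}\mathbf{B}^{-1}\mathbf{x}'_s
+\frac{\beta_e}{2}\sum_{k=1}^{K}\boldsymbol{\alpha}_k^{\mathsf T}\mathbf{L}^{-1}\boldsymbol{\alpha}_k
+\frac{1}{2}\left(\mathbf{d}-\mathbf{H}\mathbf{x}'\right)^{\mathsf T}\mathbf{R}^{-1}\left(\mathbf{d}-\mathbf{H}\mathbf{x}'\right),
\qquad
\mathbf{x}'=\mathbf{x}'_s+\sum_{k=1}^{K}\boldsymbol{\alpha}_k\circ\mathbf{x}^{e}_k .
```

Here $\mathbf{x}'_s$ is the static-covariance part of the increment, $\boldsymbol{\alpha}_k$ are the localized weight fields applied to the $K$ ensemble perturbations $\mathbf{x}^e_k$, $\mathbf{L}$ is the localization matrix, and $\mathbf{d}$ is the innovation. A "25% static contribution" corresponds to the choice of the $\beta$ weights; the 3DEnVar (100% ensemble) experiment corresponds to removing the static term entirely.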
Abstract
This work describes the formulation of a hybrid four-dimensional ensemble–variational (4DEnVar) algorithm and initialization options utilized within the National Centers for Environmental Prediction global data assimilation system. Initialization schemes that are proposed for use are the tangent-linear normal mode constraint, weak constraint digital filter, and a combination thereof.
An observing system simulation experiment is carried out to evaluate the impact of utilizing hybrid 4DEnVar with various initialization techniques. The experiments utilize a dual-resolution configuration, where the ensemble is run at roughly half the resolution of the deterministic component. It is found that by going from 3D to 4D, analysis error is reduced for most variables and levels. The inclusion of a time-invariant static covariance when used without a normal mode–based strong constraint is found to have a small, positive impact on the analysis. The experiments show that the weak constraint digital filter degrades the quality of analysis, due to the use of hourly states to prescribe high-frequency noise. It is found that going from 3D to 4D ensemble covariances has a relatively larger impact in the extratropics, whereas the original inclusion of ensemble-based covariances was found to have the largest impact in the tropics. The improvements found in going from 3D to 4D covariances in the hybrid EnVar formulation are not as large as was found in Part I from the original introduction of the hybrid algorithm. The analyses generated by the 4D hybrid scheme are found to yield slightly improved extratropical height and wind forecasts, with smaller impacts on other variables and in general in the tropics.
Abstract
The Met Office has developed an ensemble-variational data assimilation method (hybrid-4DEnVar) as a potential replacement for the hybrid four-dimensional variational data assimilation (hybrid-4DVar), which is the current operational method for global NWP. Both are four-dimensional variational methods, using a hybrid combination of a fixed climatological model of background error covariances with localized covariances from an ensemble of current forecasts designed to describe the structure of “errors of the day.” The fundamental difference between the methods is their modeling of the time evolution of errors within each data assimilation window: 4DVar uses a linear model and its adjoint and 4DEnVar uses a localized linear combination of nonlinear forecasts. Both hybrid-4DVar and hybrid-4DEnVar beat their three-dimensional versions, which are equivalent, in NWP trials. With settings based on the current operational system, hybrid-4DVar performs better than hybrid-4DEnVar. Idealized experiments designed to compare the time evolution of covariances in the methods are described: the basic 4DEnVar represents the evolution of ensemble errors as well as 4DVar. However, 4DVar also represents the evolution of errors from the climatological covariances, whereas 4DEnVar does not. This difference is the main cause of the superiority of hybrid-4DVar. Another difference is that the authors’ 4DVar explicitly penalizes rapid variations in the analysis increment trajectory, while the authors’ 4DEnVar contains no dynamical constraints on imbalance. The authors describe a four-dimensional incremental analysis update (4DIAU) method that filters out the high-frequency oscillations introduced by the poorly balanced 4DEnVar increments. Possible methods for improving hybrid-4DEnVar are discussed.
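The fundamental difference identified above can be written schematically (generic notation): the increment at each time $t$ in the window is obtained either by tangent-linear propagation or by reusing fixed weight fields on time-shifted nonlinear ensemble perturbations,

```latex
\delta\mathbf{x}_t^{\text{4DVar}} = \mathbf{M}_{0\to t}\,\delta\mathbf{x}_0,
\qquad
\delta\mathbf{x}_t^{\text{4DEnVar}} = \sum_{k=1}^{K}\boldsymbol{\alpha}_k\circ\mathbf{x}'_{k,t},
```

where $\mathbf{M}_{0\to t}$ is the tangent-linear model, $\boldsymbol{\alpha}_k$ are localized weight fields held fixed across the window, and $\mathbf{x}'_{k,t}$ is the $k$th nonlinear ensemble forecast perturbation valid at time $t$. The left form evolves any increment, including the climatological part; the right form can only evolve what lies in the span of the ensemble perturbations, which is why the climatological covariance stays effectively static in 4DEnVar.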
Abstract
The authors evaluated the effects of assimilating three-dimensional Doppler wind lidar (DWL) data on the forecast of the heavy rainfall event of 5 July 2010 in Japan, produced by an isolated mesoscale convective system (MCS) at a meso-gamma scale in a system consisting of only warm rain clouds. Several impact experiments using the nonhydrostatic four-dimensional variational data assimilation system (NHM-4DVAR) and the Japan Meteorological Agency nonhydrostatic model with a 2-km horizontal grid spacing were conducted: 1) no observations were assimilated (NODA); 2) radar reflectivity and radial velocity determined by Doppler radar and precipitable water vapor determined by GPS satellite observations were assimilated (CTL); and 3) radial velocities determined by DWL were added to the CTL experiment (LDR). Five data-denial and two observational-error sensitivity experiments were also conducted. Although both NODA and CTL simulated an MCS, only LDR captured the intensity, location, and horizontal scale of the observed MCS. Assimilating DWL data improved the wind direction and speed of low-level airflows, thus improving the accuracy of the simulated water vapor flux. The examination of the impacts of specific assimilations and assigned observation errors showed that assimilation of all data types is important for forecasting intense MCSs. The investigation of the MCS structure showed that large amounts of water vapor were supplied to the rainfall event by southerly flow. A midlevel inversion layer led to the production of exclusively liquid water particles in the MCS, and in combination with the humid airflow into the MCS, this inversion layer may be another important factor in its development.
Abstract
This study proposes a variational approach to adaptively determine the optimum radius of influence for ensemble covariance localization when uncorrelated observations are assimilated sequentially. The covariance localization is commonly used by various ensemble Kalman filters to limit the impact of covariance sampling errors when the ensemble size is small relative to the dimension of the state. The probabilistic approach is based on the premise of finding an optimum localization radius that minimizes the distance between the Kalman update using the localized sampling covariance versus using the true covariance, when the sequential ensemble Kalman square root filter method is used. The authors first examine the effectiveness of the proposed method for the cases when the true covariance is known or can be approximated by a sufficiently large ensemble size. Not surprisingly, it is found that the smaller the true covariance distance or the smaller the ensemble, the smaller the localization radius that is needed. The authors further generalize the method to the more usual scenario that the true covariance is unknown but can be represented or estimated probabilistically based on the ensemble sampling covariance. The mathematical formula for this probabilistic and adaptive approach with the use of the Jeffreys prior is derived. Promising results and limitations of this new method are discussed through experiments using the Lorenz-96 system.
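The known-truth case above can be illustrated by brute force: among candidate radii, pick the taper whose localized sample covariance is closest to the true covariance. A Gaussian taper on a cyclic domain (as in Lorenz-96) stands in for whatever localization function is used; the Frobenius-norm criterion and the function name are illustrative, and the paper's adaptive/Jeffreys-prior formula is not reproduced here.

```python
import numpy as np

def best_localization_radius(sample_cov, true_cov, radii):
    """Pick, from candidate taper radii, the one whose elementwise-
    localized sample covariance is closest (Frobenius norm) to the
    true covariance, on a cyclic 1D domain."""
    n = sample_cov.shape[0]
    i, j = np.indices((n, n))
    d = np.minimum(np.abs(i - j), n - np.abs(i - j))  # cyclic distance
    best_c, best_err = None, np.inf
    for c in radii:
        taper = np.exp(-0.5 * (d / c) ** 2)           # Gaussian taper
        err = np.linalg.norm(taper * sample_cov - true_cov)
        if err < best_err:
            best_c, best_err = c, err
    return best_c
```

With no sampling error the broadest taper wins (localization only damps true correlations), while spurious long-range covariance entries push the optimum toward shorter radii; this mirrors the abstract's finding that smaller ensembles need smaller radii.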