Abstract
Despite dramatic improvements over the last decades, operational NWP forecasts still occasionally suffer from abrupt drops in their forecast skill. Such forecast skill “dropouts” may occur even in a perfect NWP system because of the stochastic nature of NWP but can also result from flaws in the NWP system. Recent studies have shown that dropouts occur not because of a model’s deficiencies but because of misspecified initial conditions, suggesting that they could be mitigated by improving the quality control (QC) system so that the observation-minus-background (O-B) innovations that would degrade a forecast can be detected and rejected. The ensemble forecast sensitivity to observations (EFSO) technique enables the quantification of how much each observation has improved or degraded the forecast. A recent study has shown that 24-h EFSO can detect detrimental O-B innovations that caused regional forecast skill dropouts and that the forecast can be improved by not assimilating them. Inspired by that success, a new QC method is proposed, termed proactive QC (PQC), that detects detrimental innovations 6 h after the analysis using EFSO and then repeats the analysis and forecast without using them. PQC is implemented and tested on a lower-resolution version of NCEP’s operational global NWP system. It is shown that EFSO is insensitive to the choice of verification and lead time (24 or 6 h) and that PQC likely improves the analysis, as attested to by forecast improvements of up to 5 days and beyond. Strategies for reducing the computational costs and further optimizing the observation rejection criteria are also discussed.
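As a rough illustration of the quantity EFSO attributes to each observation, the sketch below evaluates a per-observation forecast error impact from ensemble perturbations. This is a schematic, unlocalized form with random stand-in matrices and a plain Euclidean error norm; it is not the operational NCEP implementation, which includes covariance localization and a moist total energy norm.

```python
import numpy as np

# Illustrative EFSO-style impact estimate (all arrays are random stand-ins).
rng = np.random.default_rng(0)
K, p, n = 10, 5, 20                  # ensemble size, number of obs, state dimension

Y_a = rng.standard_normal((p, K))    # analysis ensemble perturbations in obs space
X_f = rng.standard_normal((n, K))    # forecast ensemble perturbations
R_inv = np.eye(p)                    # inverse observation-error covariance (assumed)
d_ob = rng.standard_normal(p)        # O-B innovations
e_f = rng.standard_normal(n)         # forecast error of the run from the analysis
e_g = rng.standard_normal(n)         # forecast error of the run from the background

# Sensitivity of forecast error to each observation, then the elementwise
# product with the innovations; a negative impact marks a beneficial observation.
sens = R_inv @ (Y_a @ (X_f.T @ (e_f + e_g))) / (2 * (K - 1))
impacts = d_ob * sens
```

PQC would then flag the observations whose summed impact within a region is most positive (detrimental) and repeat the analysis without them.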
Abstract
In ensemble-based assimilation schemes for cloud-resolving models (CRMs), the precipitation-related variables have serious sampling errors. The purpose of the present study is to examine the sampling error properties and the forecast error characteristics of the operational CRM of the Japan Meteorological Agency (JMANHM) and to develop a sampling error damping method based on the CRM forecast error characteristics.
The CRM forecast error was analyzed for meteorological disturbance cases using 100-member ensemble forecasts of the JMANHM. The ensemble forecast perturbation correlations exhibited significant noise in the precipitation-related variables because of sampling errors, and these variables suffered from such sampling error in most precipitating areas. An examination of the forecast error characteristics revealed that the CRM forecast error satisfied the assumption of spectral localization, whereas spatial localization with constant scales and variable localization were not applicable to the CRM.
A neighboring ensemble (NE) method was developed, based on spectral localization, which estimates the forecast error correlation at a target grid point using ensemble members at neighboring grid points. To introduce this method into an ensemble-based variational assimilation scheme, the present study horizontally divided the NE forecast error into large-scale portions and deviations. As single-observation assimilation experiments showed, this “dual-scale NE” method was more successful in damping the sampling error and generating plausible, deep vertical profiles of precipitation analysis increments than a simple spatial localization method or a variable localization method.
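A minimal sketch of the sampling-error problem and of the simple spatial localization used here as a baseline: small-ensemble sample correlations contain spurious long-range values, which a distance-dependent taper damps. The grid size, ensemble size, and localization radius below are arbitrary choices, and the taper is the standard Gaspari-Cohn function rather than anything specific to the JMANHM.

```python
import numpy as np

def gaspari_cohn(z):
    """Fifth-order piecewise-rational localization taper (Gaspari and Cohn 1999);
    z is distance divided by the localization half-radius."""
    z = np.abs(np.asarray(z, dtype=float))
    t = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z < 2.0)
    z1, z2 = z[m1], z[m2]
    t[m1] = -0.25 * z1**5 + 0.5 * z1**4 + 0.625 * z1**3 - (5 / 3) * z1**2 + 1.0
    t[m2] = ((1 / 12) * z2**5 - 0.5 * z2**4 + 0.625 * z2**3
             + (5 / 3) * z2**2 - 5 * z2 + 4 - 2 / (3 * z2))
    return t

# Small ensemble -> noisy long-range sample correlations; the taper damps them.
rng = np.random.default_rng(1)
ens = rng.standard_normal((20, 50))           # 20 members, 50 grid points
corr = np.corrcoef(ens.T)                     # raw sample correlation matrix
dist = np.abs(np.subtract.outer(np.arange(50), np.arange(50)))
localized = corr * gaspari_cohn(dist / 10.0)  # half-radius of 10 grid points
```

The NE method instead pools perturbations from neighboring grid points, which preserves vertically deep precipitation correlations that a fixed-scale spatial taper of this kind would cut off.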
Abstract
Ensemble–variational data assimilation algorithms that can incorporate the time dimension (four-dimensional or 4D) and combine static and ensemble-derived background error covariances (hybrid) are formulated in general forms based on the extended control variable and the observation-space-perturbation approaches. The properties and relationships of these algorithms and their approximated formulations are discussed. The main algorithms discussed include the following: 1) the standard ensemble 4DVar (En4DVar) algorithm incorporating ensemble-derived background error covariance through the extended control variable approach, 2) the 4DEnVar neglecting the time propagation of the extended control variable (4DEnVar-NPC), 3) the 4D ensemble–variational algorithm based on observation space perturbation (4DEnVar), and 4) the 4DEnVar with no propagation of covariance localization (4DEnVar-NPL). Without the static background error covariance term, none of the algorithms requires the adjoint model except for En4DVar. Costly applications of the tangent linear model to localized ensemble perturbations can be avoided by making the NPC and NPL approximations. It is proven that En4DVar and 4DEnVar are mathematically equivalent, while 4DEnVar-NPC and 4DEnVar-NPL are mathematically equivalent. Such equivalences are also demonstrated by single-observation assimilation experiments with a 1D linear advection model. The effects of the non-flow-following or stationary localization approximations are also examined through the experiments.
All of the above algorithms can include the static background error covariance term to establish a hybrid formulation. When the static term is included, all algorithms will require a tangent linear model and an adjoint model. The first guess at appropriate time (FGAT) approximation is proposed to avoid the tangent linear and adjoint models. Computational costs of the algorithms are also discussed.
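The adjoint-free property of the observation-space formulations can be illustrated with a toy single-observation update on a 1D periodic linear advection model, in the spirit of the experiments described above. An observation at a later time corrects the initial state purely through ensemble cross covariances; no adjoint integration appears. The grid, ensemble, and error values are arbitrary, and no localization or static covariance term is included.

```python
import numpy as np

n, K, c = 100, 30, 1                 # grid size, ensemble size, advection speed (cells/step)
rng = np.random.default_rng(2)

# Background ensemble at the start of the window: sine waves with random phases.
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
ens0 = np.array([np.sin(x + ph) for ph in rng.uniform(0, 2 * np.pi, K)])

def advect(state, steps):
    """Periodic linear advection: a pure index shift along the spatial axis."""
    return np.roll(state, c * steps, axis=-1)

# One observation of grid point j at time t, assimilated without any adjoint:
j, t, yo, r = 40, 5, 1.2, 0.1**2
ens_t = advect(ens0, t)                      # forecast ensemble at observation time
xb0, xbt = ens0.mean(0), ens_t.mean(0)
X0 = (ens0 - xb0).T                          # initial-time perturbations, shape (n, K)
Yt = ens_t[:, j] - xbt[j]                    # observation-space perturbations, shape (K,)
gain = X0 @ Yt / (Yt @ Yt + (K - 1) * r)     # ensemble-estimated Kalman gain
xa0 = xb0 + gain * (yo - xbt[j])             # analysis at the start of the window
```

Advecting the analysis forward shows the fit to the observation has tightened, which is the 4D aspect: the time propagation is carried entirely by the nonlinear ensemble forecasts.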
Abstract
The momentum variables of streamfunction and velocity potential are used as control variables in a number of operational variational data assimilation systems. However, in this study it is shown that, for limited-area high-resolution data assimilation, the momentum control variables ψ and χ (ψχ) pose potential difficulties in background error modeling and, hence, may result in degraded analysis and forecast when compared with the direct use of x and y components of wind (UV). In this study, the characteristics of the modeled background error statistics, derived from an ensemble generated from Weather Research and Forecasting (WRF) Model real-time forecasts of two summer months, are first compared between the two control variable options. Assimilation and forecast experiments are then conducted with both options for seven convective events in a domain that encompasses the Rocky Mountain Front Range using the three-dimensional variational data assimilation (3DVar) system of the WRF Model. The impacts of the two control variable options are compared in terms of their skill in short-term quantitative precipitation forecasts. Further analysis is performed for one case to examine the impacts when radar observations are included in the 3DVar assimilation. The main findings are as follows: 1) the background error modeling used in WRF 3DVar with the control variables ψχ increases the length scale and decreases the variance for u and υ, which has a negative impact on the analysis of the velocity field and on precipitation prediction; 2) the UV-based 3DVar allows closer fits to radar wind observations; and 3) the use of UV control variables improves the 0–12-h precipitation prediction.
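The relationship between the two control-variable choices is the Helmholtz decomposition, u = −∂ψ/∂y + ∂χ/∂x and v = ∂ψ/∂x + ∂χ/∂y. A finite-difference sketch of this conversion (grid spacing and test fields chosen arbitrarily; not the WRF 3DVar code) is:

```python
import numpy as np

def winds_from_psi_chi(psi, chi, d):
    """Recover (u, v) from streamfunction psi and velocity potential chi on a
    [y, x]-indexed grid with spacing d, using centered differences."""
    dpsi_dy, dpsi_dx = np.gradient(psi, d, d)
    dchi_dy, dchi_dx = np.gradient(chi, d, d)
    return -dpsi_dy + dchi_dx, dpsi_dx + dchi_dy

# Purely rotational test: psi = x*y (chi = 0) gives u = -x, v = y exactly,
# since the finite differences are exact for fields linear in each coordinate.
yy, xx = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
u, v = winds_from_psi_chi(xx * yy, np.zeros_like(xx), 1.0)
```

The derivatives are the crux of the issue the abstract raises: modeling background errors in ψχ and differentiating to get winds changes the effective length scales and variances of u and v, whereas UV control variables constrain the winds directly.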
Abstract
A new coupled data assimilation (DA) system developed with the aim of improving the initialization of coupled forecasts for various time ranges from short range out to seasonal is introduced. The implementation here is based on a “weakly” coupled data assimilation approach whereby the coupled model is used to provide background information for separate ocean–sea ice and atmosphere–land analyses. The increments generated from these separate analyses are then added back into the coupled model. This is different from the existing Met Office system for initializing coupled forecasts, which uses ocean and atmosphere analyses that have been generated independently using the FOAM ocean data assimilation system and NWP atmosphere assimilation systems, respectively. A set of trials has been run to investigate the impact of the weakly coupled data assimilation on the analysis, and on the coupled forecast skill out to 5–10 days. The analyses and forecasts have been assessed by comparing them to observations and by examining differences in the model fields. Encouragingly for this new system, both ocean and atmospheric assessments show the analyses and coupled forecasts produced using coupled DA to be very similar to those produced using separate ocean–atmosphere data assimilation. This work has the benefit of highlighting some aspects on which to focus to improve the coupled DA results. In particular, improving the modeling and data assimilation of the diurnal SST variation and the river runoff should be examined.
Abstract
Seasonal-to-decadal predictions are initialized using observations of the present climatic state in full field initialization (FFI). Such model integrations undergo a drift toward the model attractor due to model deficiencies that incur a bias in the model. The anomaly initialization (AI) approach reduces the drift by adding an estimate of the bias onto the observations at the expense of a larger initial error.
In this study FFI is associated with the fidelity paradigm, and AI is associated with an instance of the mapping paradigm, in which the initial conditions are mapped onto the imperfect model attractor by adding a fixed error term; the mapped state on the model attractor should correspond to the nature state. Two diagnostic tools assess how well AI conforms to its own paradigm under various circumstances of model error: the degree of approximation of the model attractor is measured by calculating the overlap of the AI initial conditions PDF with the model PDF; and the sensitivity to random error in the initial conditions reveals how well the selected initial conditions on the model attractor correspond to the nature states. As a useful reference, the initial conditions of FFI are subjected to the same analysis.
Conducting hindcast experiments using a hierarchy of low-order coupled climate models, it is shown that the initial conditions generated using AI approximate the model attractor only under certain conditions: differences in higher-than-first-order moments between the model and nature PDFs must be negligible. Where such conditions fail, FFI is likely to perform better.
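The FFI/AI distinction can be caricatured with a one-variable toy model whose climatology is biased relative to nature: FFI starts from the observation and drifts toward the model climatology, while AI adds the bias to the observation so the forecast starts near the model attractor. All numbers below (bias, relaxation rate, observed state) are invented for illustration.

```python
MODEL_CLIM, NATURE_CLIM = 2.0, 0.0   # biased model climatology vs nature climatology
OBS_STATE = 0.7                      # observed state at initialization time

ffi_ic = OBS_STATE                               # full field init: use the observation
ai_ic = MODEL_CLIM + (OBS_STATE - NATURE_CLIM)   # anomaly init: add the bias estimate

def model_step(x, phi=0.9):
    """Toy model: deterministic relaxation toward its own (biased) climatology."""
    return MODEL_CLIM + phi * (x - MODEL_CLIM)

x_ffi, x_ai = ffi_ic, ai_ic
for _ in range(20):
    x_ffi, x_ai = model_step(x_ffi), model_step(x_ai)

drift_ffi = abs(x_ffi - ffi_ic)  # large: the FFI forecast drifts toward MODEL_CLIM
drift_ai = abs(x_ai - ai_ic)     # smaller: AI starts near the model attractor
```

The trade-off the abstract describes is visible here: AI reduces drift but its initial condition sits farther from the true state, which is where the larger initial error comes from.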
Abstract
The authors have developed an assimilation system for coastal data assimilation around Japan, which consists of a four-dimensional variational (4DVAR) assimilation scheme with an eddy-resolving model in the western North Pacific (MOVE-4DVAR-WNP) and a fine-resolution coastal model covering the western part of the Japanese coastal region around the Seto Inland Sea (MOVE-Seto). The 4DVAR scheme is developed as a natural extension of the 3DVAR scheme used in the Meteorological Research Institute Multivariate Ocean Variational Estimation (MOVE) system. An initialization scheme of incremental analysis update (IAU) is incorporated into MOVE-4DVAR-WNP to filter out high-frequency noise. During the backward integration of the adjoint model, it acts as an incremental digital filter. MOVE-Seto, which is nested within MOVE-4DVAR-WNP, also employs IAU to initialize the interior of the coastal model using MOVE-4DVAR-WNP analysis fields. The authors conducted an assimilation experiment using MOVE-4DVAR-WNP, and results were compared with an additional experiment using the 3DVAR scheme. The comparison reveals that MOVE-4DVAR-WNP improves mesoscale variability. In particular, short-term variability such as small-scale Kuroshio fluctuations is much enhanced. Using MOVE-Seto and MOVE-4DVAR-WNP, the authors also performed a case study focused on an unusual tide event that occurred at the south coast of Japan in September 2011. MOVE-Seto succeeds in reproducing a significant sea level rise associated with this event, indicating the effectiveness of the newly developed system for coastal sea level variability.
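The IAU idea itself is simple to sketch: rather than adding the full analysis increment at the initial time, a fraction of it is added at every step of the forward integration, which filters the high-frequency shock response. The toy integration below uses identity dynamics (an assumption for illustration, not the MOVE ocean model) just to show that the full increment is recovered by the end of the window.

```python
import numpy as np

def iau_integrate(x0, increment, n_steps, step):
    """Integrate `step` forward while adding increment/n_steps at each step
    (incremental analysis update), instead of inserting it all at once."""
    x = np.asarray(x0, dtype=float).copy()
    frac = np.asarray(increment, dtype=float) / n_steps
    for _ in range(n_steps):
        x = step(x) + frac
    return x

x0 = np.zeros(4)
inc = np.array([1.0, -2.0, 0.5, 0.0])
x_end = iau_integrate(x0, inc, 10, lambda x: x)  # identity dynamics for the sketch
```

With real dynamics in `step`, the gradual insertion is what suppresses the spurious gravity-wave response that a one-shot increment would excite.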
Abstract
A square root approach is considered for the problem of accounting for model noise in the forecast step of the ensemble Kalman filter (EnKF) and related algorithms. The primary aim is to replace the method of simulated, pseudo-random additive noise so as to eliminate the associated sampling errors. The core method is based on the analysis step of ensemble square root filters, and consists in the deterministic computation of a transform matrix. The theoretical advantages regarding dynamical consistency are surveyed, applying equally well to the square root method in the analysis step. A fundamental problem due to the limited size of the ensemble subspace is discussed, and novel solutions that complement the core method are suggested and studied. Benchmarks from twin experiments with simple, low-order dynamics indicate improved performance over standard approaches such as additive, simulated noise, and multiplicative inflation.
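A minimal sketch of the core idea, under two stated simplifications (the noise covariance is handled only within the ensemble subspace, and mean preservation is ignored): compute a deterministic transform T with T² = I + (N−1)A⁺QA⁺ᵀ, so that the transformed anomalies carry the forecast covariance plus the projected model-noise covariance without any random draws. Names and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import pinv, sqrtm

rng = np.random.default_rng(4)
n, N = 8, 40                               # state dimension, ensemble size
E = rng.standard_normal((n, N))            # forecast ensemble (random stand-in)
A = E - E.mean(axis=1, keepdims=True)      # ensemble anomalies (mean removed)
Q = 0.1 * np.eye(n)                        # model-noise covariance (assumed known)

# Deterministic transform: (A T)(A T)^T / (N-1) = A A^T / (N-1) + Pi Q Pi,
# where Pi = A A^+ projects Q onto the ensemble subspace.
Ap = pinv(A)
T = sqrtm(np.eye(N) + (N - 1) * Ap @ Q @ Ap.T).real
A_new = A @ T                              # noise accounted for, no random sampling
```

The projection Pi is exactly the "limited ensemble subspace" problem the abstract raises: any component of Q outside the span of the anomalies is lost, which is what the complementary solutions address.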
Abstract
This study aims to illustrate a general procedure based on well-known information theory concepts to select the channels from advanced satellite sounders that are most advantageous to assimilate both in clear-sky and overcast conditions using an ensemble-based estimate of forecast uncertainty. To this end, the standard iterative channel selection method, which is used to select the most informative channels from advanced infrared sounders for operational assimilation, was revisited so as to allow its use with measurements that have correlated errors. The method was here applied to determine a 24-humidity-sensitive-channel set that is small in size relative to a total of 8461 channels that are available on the Infrared Atmospheric Sounding Interferometer (IASI) on board the EUMETSAT Polar System MetOp satellites. The selected channels can be used to perform all-sky data assimilation experiments, in addition to those currently used for operational data assimilation of IASI data at ECMWF. Care was taken to include in the observation uncertainty used for channel selection the contributions arising from imperfect knowledge of the concentration of contaminants (except for cloud) in a given spectral channel. Also, (cumulative) weighting functions that provide a vertically resolved picture of the (total) number of degrees of freedom for signal expressed by a given set of measurements were introduced, which allows for the definition of a novel channel selection merit function that can be used to select measurements that are most sensitive to variations of a given parameter over a given atmospheric region (e.g., in the troposphere).
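The iterative selection referred to above is, in skeleton form, the standard greedy procedure: at each step pick the channel with the largest entropy reduction given everything already selected, then update the background covariance accordingly. The Jacobians, error variances, and sizes below are random stand-ins, and the interchannel error correlations that are a key point of this study are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(5)
n_state, n_chan, n_select = 12, 40, 5
H = rng.standard_normal((n_chan, n_state))  # channel Jacobians (stand-ins)
B = np.eye(n_state)                         # background error covariance
r = np.full(n_chan, 0.5)                    # channel error variances (uncorrelated here)

selected = []
for _ in range(n_select):
    hb = H @ B
    # Entropy reduction of each candidate: 0.5 * ln(1 + h B h^T / r).
    er = 0.5 * np.log1p(np.einsum("ij,ij->i", hb, H) / r)
    er[selected] = -np.inf                  # exclude already-selected channels
    k = int(np.argmax(er))
    selected.append(k)
    bh = B @ H[k]
    B = B - np.outer(bh, bh) / (H[k] @ bh + r[k])  # posterior after assimilating k
```

Each update shrinks B, so later picks are scored on what information remains; a merit function restricted to a vertical region, as proposed in the study, would simply replace the scalar `er` score.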
Abstract
The modifications to the data assimilation component of the Regional Deterministic Prediction System (RDPS) implemented at Environment Canada operations during the fall of 2014 are described. The main change is the replacement of the limited-area four-dimensional variational data assimilation (4DVar) algorithm for the limited-area analysis and the associated three-dimensional variational data assimilation (3DVar) scheme for the synchronous global driver analysis by the four-dimensional ensemble–variational data assimilation (4DEnVar) scheme presented in the first part of this study. It is shown that a 4DEnVar scheme using global background-error covariances can provide RDPS forecasts that are slightly improved compared to the previous operational approach, particularly during the first 24 h of the forecasts and in the summertime convective regime. Further forecast improvements were also made possible by upgrades in the assimilated observational data and by introducing the improved global analysis presented in the first part of this study in the RDPS intermittent cycling strategy. The computational savings brought by the 4DEnVar approach are also discussed.