Search Results
Showing items 21–27 of 27 for Author or Editor: Jacques Derome
Abstract
An algorithm based on empirical normal mode analysis is used in a comparative study of the climatology and variability in dynamical-core experiments of the Global Environmental Multiscale model. The algorithm is proposed as a means to assess properties of the model's dynamical core and to establish objective criteria for model intercomparison studies. In this paper, the analysis is restricted to the upper troposphere and lower stratosphere. Two dynamical-core experiments are considered: one with the forcing proposed by Held and Suarez, later modified by Williamson et al. (the HSW experiment), and the other with a forcing inspired by the prescriptions of Boer and Denis (the BD experiment). Results are also compared with those of an earlier diagnosis of NCEP reanalyses. Normal modes and wave-activity spectra are similar to those found in the reanalysis data, although details depend on the forcing. For instance, wave-energy amplitudes are higher with the BD forcing, and an approximate energy equipartition is observed in the spectrum of wavenumber-1 modes in the NCEP data and the BD experiment but not in the HSW experiment. The HSW forcing applies a relatively strong relaxation to the complete temperature field, whereas the BD forcing acts only on the zonal-mean temperature, letting the internal dynamics alone drive the wave-activity spectral cascade. This difference may explain why the BD forcing is more successful in reproducing the observed wave activity in the upper troposphere and lower stratosphere.
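The key difference between the two forcings lends itself to a short illustration. Below is a minimal sketch, in Python with NumPy, of Newtonian temperature relaxation applied either to the complete field (HSW-style) or to the zonal mean only (BD-style); the function name, array shapes, and the single relaxation time tau are illustrative assumptions, not the actual configuration of either experiment.

```python
import numpy as np

def newtonian_relaxation(T, T_eq, tau, zonal_mean_only=False):
    """Temperature tendency from Newtonian relaxation (illustrative).

    T, T_eq : arrays of shape (nlat, nlon); tau : relaxation time (s).
    If zonal_mean_only is True, only the zonal-mean temperature is
    relaxed (BD-style); otherwise the full field is relaxed (HSW-style).
    """
    if zonal_mean_only:
        # Relax only the zonal mean; eddies (deviations from the zonal
        # mean) are untouched by the forcing, so the wave spectrum is
        # shaped by the internal dynamics alone.
        T_bar = T.mean(axis=-1, keepdims=True)
        T_eq_bar = T_eq.mean(axis=-1, keepdims=True)
        return -(T_bar - T_eq_bar) / tau * np.ones_like(T)
    # Relax the complete temperature field, damping the eddies as well.
    return -(T - T_eq) / tau
```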
Abstract
This study investigates the North Atlantic Oscillation (NAO) on an intraseasonal time scale. The authors examine how the characteristics of NAO events are influenced by the choice of index definition, using daily NCEP–NCAR reanalysis data spanning 51 boreal winters. Four different NAO indexes are used: one station/gridpoint-based index and three pattern-based indexes.
It is found that the NAO events obtained using the pattern-based indexes are quite similar to each other, while some notable differences are observed when the NAO is defined using the station/gridpoint-based index (NAO1). The characteristics of the pattern-based NAO are found to be more antisymmetric between its two phases, including its time-averaged spatial structures, its lifetime distributions, and its time-evolving spatial structures. The NAO1, on the other hand, reveals some asymmetric characteristics between the two phases. Emphasis is placed on comparing the characteristics of the NAO events obtained using the NAO1 index and one of the pattern-based indexes, NAO2. The time-averaged spatial structures for the NAO2 extend across more of the polar region than those for the NAO1. The positive NAO1 shows a wave train signal over the Pacific–North American region during the setup phase, while the negative NAO1 is found to develop more locally over northern Europe and the North Atlantic. The wave-activity flux for the NAO2 is primarily in the zonal direction, while for the NAO1 it is mostly concentrated over the North Atlantic with a pronounced southward component. The barotropic vorticity equation is used to examine the physical mechanisms that drive the life cycle of the NAO.
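For readers unfamiliar with the two families of definitions, the following sketch contrasts a station/gridpoint-based index with a pattern-based one. It is a simplified Python/NumPy illustration: the choice of nodes, the omission of area weighting, and the function names are assumptions, and the four indexes used in the paper differ in details not captured here.

```python
import numpy as np

def station_nao(slp_south, slp_north):
    """Station/gridpoint-based NAO index: normalized pressure difference
    between a southern node (e.g., near the Azores) and a northern node
    (e.g., near Iceland). Inputs are 1D time series of sea level pressure."""
    zs = (slp_south - slp_south.mean()) / slp_south.std()
    zn = (slp_north - slp_north.mean()) / slp_north.std()
    return zs - zn

def pattern_nao(slp_anom):
    """Pattern-based NAO index: projection of daily SLP anomalies onto
    the leading EOF. slp_anom has shape (ntime, npoints), with the time
    mean already removed (area weighting omitted for brevity)."""
    # SVD of the anomaly matrix; rows of vt are the EOF patterns.
    u, s, vt = np.linalg.svd(slp_anom, full_matrices=False)
    pc1 = slp_anom @ vt[0]          # principal component time series
    return pc1 / pc1.std()          # standardized index
```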
Abstract
A simple GCM (SGCM) is constructed by adding empirically derived time-independent forcing terms to a dry primitive equation model. This yields a model with realistic time-mean jets and storm tracks. The SGCM is then used to study the equilibrium response to an imposed heating anomaly in the midlatitude Pacific, meant to represent an anomaly in the sea surface temperature. Using the SGCM’s own climatology as a basic state, the same model is then used to find the time-independent linear response to the same heating anomaly. The difference between the two responses is clearly attributed to the forcing due to anomalous transient eddies.
The sensitivity of the response to the strength and vertical profile of the heating, and to the presence of the wind speed in the surface flux parameterization, is explored. It is found that for a reasonable range of heating amplitude the transient eddy forcing is proportional to the heating and the responses to heating and cooling are almost antisymmetric. The antisymmetry breaks down at large amplitude. The vertical profile of heating has a small but systematic effect on the response: deeper heating leads to stronger equivalent barotropic features. The inclusion of wind speed in the surface flux parameterization alters the response mainly by virtue of altering the basic model climatology, rather than by any local effect on the heating.
The position of the heating anomaly is varied in both latitude and longitude to gain insight into the possible effects of systematic errors in GCMs. The time-independent linear response tends to move with the heating, but the eddy-driven nonlinear part remains relatively fixed and varies only in amplitude. The heating perturbation slightly modifies the first empirical orthogonal function of the model's internal low-frequency variability. The response projects strongly onto this pattern, and the probability distribution function of the projection is significantly skewed.
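The phrase "empirically derived time-independent forcing" suggests a construction along the following lines: compute the model's own tendency for a sequence of analyzed states, compare it with the tendency observed in the data, and take the negated time-mean residual as a constant forcing. The sketch below is an assumption-laden illustration; `model_tendency` is a hypothetical interface, and the actual SGCM construction may differ.

```python
import numpy as np

def empirical_forcing(states, model_tendency, dt):
    """Estimate a time-independent forcing for a simple GCM (illustrative).

    states : array (ntime, ndof) of analyzed model states
    model_tendency : callable mapping a state to dX/dt under the model's
        unforced dynamics (hypothetical interface)
    dt : time step between consecutive states (s)

    The constant forcing is the time-mean mismatch between the tendency
    seen in the data and the tendency the model produces on its own.
    """
    observed = (states[1:] - states[:-1]) / dt            # finite-difference tendency
    modeled = np.array([model_tendency(x) for x in states[:-1]])
    return (observed - modeled).mean(axis=0)              # time-mean residual
```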
Abstract
In this study, ensemble seasonal predictions of the Arctic Oscillation (AO) were conducted for 51 winters (1948–98) using a simple global atmospheric general circulation model. A means of estimating a priori the predictive skill of the AO ensemble predictions was developed based on the relative entropy (R) of information theory, which is a measure of the difference between the forecast and climatological probability density functions (PDFs). Several important issues related to AO predictability, such as the dominant precursors of forecast skill and the degree of confidence that can be placed in an individual forecast, were addressed. It was found that R is a useful measure of the confidence that can be placed in dynamical predictions of the AO. When R is large, the prediction is likely to have a high confidence level, whereas when R is small, the prediction skill is more variable. A small R is often accompanied by a relatively weak AO index. The value of R is dominated by the predicted ensemble mean. The relationship identified here, between model skill and the R of an ensemble prediction, offers a practical means of estimating the confidence level of a seasonal forecast of the AO made with the dynamical model.
Through an analysis of the global sea surface temperature (SST) forcing, it was found that the winter AO-related R is correlated significantly with the amplitude of the SST anomalies over the tropical central Pacific and the North Pacific during the previous October. A large value of R is usually associated with strong SST anomalies in these two regions, whereas a poor prediction with a small R indicates that the SST anomalies there are likely weak and that the observed AO anomaly in that winter is likely caused by internal atmospheric dynamics.
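Under a Gaussian approximation to the forecast and climatological PDFs (an assumption adopted here purely for illustration), the relative entropy has a closed form that splits into a dispersion term and a signal term; the signal term depends only on the predicted ensemble mean, consistent with the finding above that R is dominated by it. A minimal Python sketch:

```python
import numpy as np

def relative_entropy_gaussian(mu_f, var_f, mu_c, var_c):
    """Relative entropy R between a Gaussian forecast PDF N(mu_f, var_f)
    and a Gaussian climatological PDF N(mu_c, var_c):

        R = 0.5 * [ln(var_c/var_f) + var_f/var_c - 1]   (dispersion)
          + 0.5 * (mu_f - mu_c)**2 / var_c              (signal)
    """
    dispersion = 0.5 * (np.log(var_c / var_f) + var_f / var_c - 1.0)
    signal = 0.5 * (mu_f - mu_c) ** 2 / var_c
    return dispersion + signal

# Example: a (hypothetical) ensemble of predicted AO indices, verified
# against a standardized climatology N(0, 1).
ens = np.array([0.8, 1.1, 0.6, 0.9, 1.3])
R = relative_entropy_gaussian(ens.mean(), ens.var(ddof=1), 0.0, 1.0)
```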
Abstract
The prediction skill of the North Atlantic Oscillation (NAO) in boreal winter is assessed in the operational models of the WCRP/WWRP Subseasonal-to-Seasonal (S2S) prediction project. Model performance in representing the contribution of different processes to the NAO forecast skill is evaluated. The S2S models with relatively high stratospheric vertical resolution (high-top models) are in general more skillful in predicting the NAO than the models with relatively low stratospheric resolution (low-top models). Skill is compared between groups of forecasts defined by their initial conditions: the phase and amplitude of the NAO, easterly and westerly phases of the quasi-biennial oscillation (QBO), warm and cold phases of ENSO, and the phase and amplitude of the Madden–Julian oscillation (MJO). Forecasts initialized with a strong NAO are more skillful than those initialized with a weak NAO, and those initialized with a negative NAO tend to be more skillful than those with a positive NAO. Comparisons of NAO skill between easterly and westerly QBO forecasts and between warm and cold ENSO forecasts show no consistent difference across the S2S models. Forecasts with a strong initial MJO tend to predict the NAO more skillfully than those with a weak MJO; among the eight MJO phases in the initial condition, phases 3–4 and phase 7 yield better NAO forecast skill than the other phases. The results of this study have implications for improving our understanding of the sources of predictability of the NAO, and the situation dependence of the NAO prediction skill may be useful in identifying “windows of opportunity” for subseasonal to seasonal predictions.
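The abstract does not specify the skill metric, but the group comparisons it describes can be illustrated with a simple anomaly correlation stratified by initial-condition category. The sketch below is a hypothetical illustration: the function names, the choice of metric, and the boolean-mask grouping are assumptions, not the study's actual methodology.

```python
import numpy as np

def correlation_skill(forecast_nao, observed_nao):
    """Anomaly correlation between predicted and verifying NAO indices
    (1D arrays, one value per forecast)."""
    f = forecast_nao - forecast_nao.mean()
    o = observed_nao - observed_nao.mean()
    return (f * o).sum() / np.sqrt((f ** 2).sum() * (o ** 2).sum())

def stratified_skill(forecasts, observations, groups):
    """Compare skill between groups of forecasts defined by their initial
    conditions (e.g., strong vs. weak NAO, QBO phase, MJO phase).
    `groups` maps a label to a boolean mask over the forecasts."""
    return {label: correlation_skill(forecasts[mask], observations[mask])
            for label, mask in groups.items()}
```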
Abstract
A simple GCM based on a primitive equation model with empirically derived time-independent forcing is used to make forecasts in the extended to seasonal range. The results are analyzed in terms of the response to a midlatitude Pacific sea surface temperature anomaly (SSTA), represented here by a heating perturbation. A set of 90-day, 30-member ensemble forecasts is made with 54 widely differing initial conditions, both with and without the SSTA. The development of the response, defined as the difference between ensemble means, is split into three 30-day averages: month 1, month 2, and month 3.
During month 1, ensemble members separate, and the local response and remote teleconnections are established. The local response is not very sensitive to the initial condition.
In month 2, the extended range, the responses are relatively strong and vary greatly from one initial condition to another. However, a linear analysis reveals that large variations in the response do not correlate strongly with large variations in the initial condition. The initial perturbations required to generate the observed variations in the response are relatively small, and may be difficult to isolate in a real forecasting situation.
In month 3, the seasonal range, variations between responses are much smaller. The initial condition loses its influence and the responses all start to resemble the equilibrium response discussed in Part I.
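The response diagnostic described above (the difference between ensemble means, split into three 30-day averages) is straightforward to express. The following sketch assumes arrays of daily ensemble output; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def monthly_response(perturbed, control):
    """Response to the SSTA heating: difference between ensemble means,
    averaged over three 30-day windows.

    perturbed, control : arrays (nmember, nday, ...) with nday >= 90.
    Returns the month-1, month-2, and month-3 responses in a dict."""
    diff = perturbed.mean(axis=0) - control.mean(axis=0)  # ensemble-mean difference
    return {f"month {m + 1}": diff[30 * m:30 * (m + 1)].mean(axis=0)
            for m in range(3)}
```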
Abstract
For many aspects of numerical weather prediction it is important to have good error statistics. Here one can think of applications as diverse as data assimilation, model improvement, and medium-range forecasting. In this paper, a method for producing these statistics from a representative ensemble of forecast states at the appropriate forecast time is proposed and examined. To generate the ensemble, an attempt is made to simulate the process of error growth in a forecast model. For different ensemble members the uncertain elements of the forecasts are perturbed in different ways.
First, the authors attempt to obtain representative initial perturbations. For each perturbation, an independent 6-h assimilation cycle is performed, in which the available observations are randomly perturbed. The perturbed observations are input to the statistical interpolation assimilation scheme, giving a perturbed analysis. This analysis is integrated for 6 h with a perturbed version of the T63 forecast model, using perturbed surface fields, to obtain a perturbed first guess for the next assimilation. After cycling for 4 days, the ensemble statistics were found to have become stable.
To obtain perturbations to the model, different model options for the parameterization of horizontal diffusion, deep convection, radiation, gravity wave drag, and orography were selected. Because part of the forecast error is due to model deficiencies, perturbing the model should lead to an improved ensemble forecast. This also creates the opportunity to use the ensemble forecast for model sensitivity experiments.
It is observed that the response to the applied perturbations, after several assimilation cycles, is strongly nonlinear. This makes it difficult to justify the use of opposite initial perturbations. The spread in the ensemble of first-guess fields is validated against statistics available from the operational data assimilation scheme; the ensemble spread is found to be too small. Apparently, the simulation of the error sources is incomplete; in particular, less conventional perturbations to the model may have to be generated.
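The observation-perturbation step lends itself to a short illustration. The sketch below draws independent Gaussian noise per ensemble member, scaled by assumed observation-error standard deviations; uncorrelated observation errors and the function name are simplifying assumptions, not the scheme's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_observations(obs, obs_error_std, nmember):
    """Draw one randomly perturbed copy of the observation set per
    ensemble member, with noise scaled by the assumed observation-error
    standard deviations (errors taken as uncorrelated for simplicity).

    obs : 1D array of observed values; obs_error_std : scalar or 1D
    array of per-observation error standard deviations."""
    noise = rng.standard_normal((nmember, obs.size)) * obs_error_std
    return obs + noise  # shape (nmember, nobs)

# Each member's perturbed observations would then feed an independent
# 6-h assimilation cycle with a perturbed model, as described above.
```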