Search Results
Showing 1–10 of 54 items for Author or Editor: T. N. Palmer
Abstract
Properties of the quasi-geostrophic Eliassen-Palm (EP) flux for planetary scale motions are discussed, in order to clarify how these properties generalize from their beta-plane counterparts when no restriction on the variation of the Coriolis parameter is imposed. These properties include the relationships between the divergence of the EP flux and the meridional flux of potential vorticity, and between the EP flux, group velocity and refractive index.
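For reference, the beta-plane forms of the relationships being generalized can be written as follows (standard quasi-geostrophic expressions in our notation, not quoted from the paper):

```latex
% Beta-plane quasi-geostrophic EP flux and the two relationships named
% above: q is quasi-geostrophic potential vorticity, c_g the group
% velocity, and A the wave-activity density of a slowly varying packet.
\[
  \mathbf{F} \;=\; \Bigl(\,-\rho_0\,\overline{u'v'}\,,\;
      \rho_0\,\frac{f_0\,\overline{v'\theta'}}{\partial\bar\theta/\partial z}\Bigr),
  \qquad
  \nabla\cdot\mathbf{F} \;=\; \rho_0\,\overline{v'q'},
  \qquad
  \mathbf{F} \;=\; \mathbf{c}_g\,A .
\]
```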
Abstract
The intense wavenumber-2 stratospheric warming of February 1979 is analyzed in a transformed Eulerian-mean formalism and compared with diagnostics generated by the model warming of Dunkerton et al. (1981). Significant differences in the evolution of the zonal-mean flow are found. The corresponding differences in wave–mean flow interaction are examined by studying planetary wave activity in the troposphere and stratosphere, as measured by the Eliassen-Palm flux and its divergence. It is found that in the stratosphere, the direction of this flux changes several times during the warming. Zonal-flow deceleration is most intense when the midlatitude stratospheric flux has positive poleward and upward components. Conversely, deceleration is smallest when the flux is directed equatorward. Some mechanisms that may account for this switching are discussed. However, unlike in the model, the high-latitude zonal-flow reversal does not arise from nonlinear critical-layer interaction with the waves.
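The link between the EP flux divergence and the zonal-mean deceleration invoked here is the transformed Eulerian-mean zonal momentum balance (standard simplified form, our notation):

```latex
% Transformed Eulerian-mean zonal momentum balance (simplified
% quasi-geostrophic form): v-bar-star is the residual meridional
% circulation and X-bar any unresolved forcing.
\[
  \frac{\partial \bar u}{\partial t} - f_0\,\bar v^{\,*}
    \;=\; \frac{1}{\rho_0}\,\nabla\cdot\mathbf{F} \;+\; \bar X .
\]
```

EP-flux convergence thus acts as a westward force on the zonal-mean wind, consistent with the deceleration episodes described above.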
The physical basis for extended-range prediction is explored using the famous three-component Lorenz convection model, taken as a conceptual representation of the chaotic extratropical circulation, and extended by coupling to a linear oscillator to represent large-scale tropical–extratropical interactions. The model is used to analyze the roles of time averaging and ensemble forecasting, and, in extended form, the impact of both anomalous tropical sea surface temperature and anomalous extratropical sea surface temperature. The conceptual paradigms and analytic calculations presented are used to interpret results from numerical weather prediction and general circulation model experiments. Some remarks on the relevance of predictability studies for the climate change problem are given.
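A minimal sketch of the kind of coupled system described here, assuming the standard Lorenz (1963) parameters; the linear-oscillator coupling form and the constants OMEGA and EPS are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard Lorenz-63 parameters; OMEGA and EPS (oscillator frequency
# and coupling strength) are illustrative assumptions, not the paper's.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
OMEGA, EPS = 0.5, 0.1

def coupled_lorenz(t, state):
    """Chaotic 'extratropics' (x, y, z) weakly coupled to a linear
    'tropical' oscillator (p, q)."""
    x, y, z, p, q = state
    dx = SIGMA * (y - x) + EPS * p        # oscillator forces the chaos
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    dp = OMEGA * q                        # linear oscillator...
    dq = -OMEGA * p + EPS * x             # ...with weak feedback
    return [dx, dy, dz, dp, dq]

# A small ensemble forecast: perturb the initial state, integrate, and
# look at the spread of the 'weather' variable x.
rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 20.0, 1.0, 0.0])
t_eval = np.linspace(0.0, 20.0, 2001)
members = np.array([
    solve_ivp(coupled_lorenz, (0.0, 20.0),
              base + rng.normal(scale=1e-3, size=5), t_eval=t_eval).y[0]
    for _ in range(20)
])
print("ensemble spread of x at final time:", members[:, -1].std())
print("time-mean ensemble-mean x:", members.mean())
```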
Abstract
A nonlinear dynamical perspective on climate prediction is outlined, based on a treatment of climate as the attractor of a nonlinear dynamical system D with distinct quasi-stationary regimes. The main application is toward anthropogenic climate change, considered as the response of D to a small-amplitude imposed forcing f.
The primary features of this perspective can be summarized as follows. First, the response to f will be manifest primarily in changes to the residence frequencies associated with the quasi-stationary regimes. Second, the geographical structures of these regimes will be relatively insensitive to f. Third, the large-scale signal will be most strongly influenced by f in rather localized regions of space and time. In this perspective, the signal arising from f will be strongly dependent on D's natural variability.
A theoretical framework for the perspective is developed, based on a singular vector decomposition of D's tangent propagator. Evidence for the dynamical perspective is drawn from a number of observational and modeling studies of intraseasonal, interannual, and interdecadal variability, and from climate change integrations. It is claimed that the dynamical perspective might resolve the apparent discrepancy in global warming trends deduced from surface and free-troposphere temperature measurements.
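The decomposition invoked here is the standard singular value decomposition of the tangent propagator (our notation, not the paper's):

```latex
% Singular value decomposition of the tangent propagator M(t0, t),
% which maps small perturbations at time t0 to perturbations at time t.
\[
  M \;=\; U\,\Sigma\,V^{\mathrm{T}},
  \qquad
  M\,\mathbf{v}_i \;=\; \sigma_i\,\mathbf{u}_i ,
\]
% The columns v_i of V are the initial-time singular vectors, the
% columns u_i of U their evolved counterparts, and the singular values
% sigma_i are the perturbation growth factors over (t0, t).
```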
A number of specific recommendations for the evaluation of climate models are put forward, based on the ideas developed in this paper.
Meteorology is a wonderfully interdisciplinary subject. But can nonlinear thinking about predictability of weather and climate contribute usefully to issues in fundamental physics? Although this might seem extremely unlikely at first sight, an attempt is made to answer the question positively. The long-standing conceptual problems of quantum theory are outlined, focusing on indeterminacy and nonlocal causality, problems that led Einstein to reject quantum mechanics as a fundamental theory of physics (a glossary of some of the key terms used in this paper is given in the sidebar). These conceptual problems are considered in the light of both low-order chaos and the more radical (and less well known) paradigm of the finite-time predictability horizon associated with the self-similar upscale cascade of uncertainty in a turbulent fluid. The analysis of these dynamical systems calls into doubt one of the key pieces of logic used in quantum nonlocality theorems: that of counterfactual reasoning. By considering an idealization of the upscale cascade (which provides a novel representation of complex numbers and quaternions), a case is made for reinterpreting the quantum wave function as a set of intricately encoded binary sequences. In this reinterpretation, it is argued that the quantum world has no need for dice-playing deities, undead cats, multiple universes, or “spooky action at a distance.”
Carl-Gustaf Rossby's work leading to the dispersion equation for his eponymous atmospheric wave form was motivated by his quest to understand interrelationships between fluctuations in the zonal mean wind and the quasi-stationary waves. Rossby believed that climate variability on almost all timescales could be understood in terms of changes in the frequency of occurrence of states of high and low zonal index. Using modern-day terminology and ideas, Rossby's perception of climate variability can be viewed in terms of low-frequency changes to the probability distribution of the nonlinear weather regimes that characterize our chaotic climate attractor.
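The dispersion relation in question, in its simplest barotropic beta-plane form (a textbook statement, not a quotation from the essay):

```latex
% Barotropic Rossby wave dispersion relation in a uniform zonal flow U,
% with zonal and meridional wavenumbers k and l:
\[
  \omega \;=\; Uk \;-\; \frac{\beta\,k}{k^{2} + l^{2}},
  \qquad
  c \;=\; \frac{\omega}{k} \;=\; U \;-\; \frac{\beta}{k^{2} + l^{2}},
\]
% so the quasi-stationary waves (c = 0) central to Rossby's picture
% satisfy k^2 + l^2 = beta / U.
```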
A perspective on possible future climate change is outlined, based on these ideas. One of the most basic notions to emerge is that even if such change is predominantly anthropogenically induced, its manifestation may be predominantly onto the natural “modes” of variability of the climate system.
Abstract
A reduction of computational cost would allow higher resolution in numerical weather predictions within the same computing budget. This paper investigates two approaches that promise significant savings in computational cost: reduced-precision hardware, which lowers floating-point precision beyond standard double- and single-precision arithmetic, and stochastic processors, which tolerate hardware faults in a trade-off between reduced precision and savings in power consumption and computing time. Reduced precision is emulated within simulations of a spectral dynamical core of a global atmosphere model, and a detailed study of the sensitivity of different parts of the model to inexact hardware is performed. Afterward, benchmark simulations are performed in which as many parts of the model as possible are put onto inexact hardware. Results show that large parts of the model can be integrated with inexact hardware at error rates that are surprisingly high, or with precision reduced to only a couple of bits in the significand of floating-point numbers. However, the sensitivities of different parts of the model to inexact hardware need to be respected, for example, via scale separation. In the last part of the paper, simulations with a full operational weather forecast model in single precision are presented. Differences in accuracy between the single- and double-precision forecasts are shown to be smaller than differences between members of the ensemble forecast at the resolution of the standard ensemble forecasting system. The simulations demonstrate that trading precision for performance is worthwhile, even on existing hardware.
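A minimal sketch of how reduced floating-point precision can be emulated in software; the significand-rounding scheme below is a simple stand-in, not necessarily the emulator used in the paper:

```python
import numpy as np

def reduce_precision(x, sbits):
    """Emulate a low-precision float format by rounding the significand
    of each float64 value to roughly `sbits` bits. A sketch, not the
    emulator used in the paper."""
    x = np.asarray(x, dtype=np.float64)
    m, e = np.frexp(x)                  # x = m * 2**e, 0.5 <= |m| < 1
    m = np.round(m * 2.0**sbits) / 2.0**sbits
    return np.ldexp(m, e)

# Example: degrade a field to an 8-bit significand (float64 carries 52
# explicit significand bits).
field = np.random.default_rng(1).standard_normal(5)
print(field)
print(reduce_precision(field, 8))
```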
Abstract
Results from a set of nine-member ensemble seasonal integrations with a T63L19 version of the European Centre for Medium-Range Weather Forecasts (ECMWF) model are presented. The integrations use specified, observed sea surface temperatures (SSTs) from the 5-year period 1986–90, which included both warm and cold El Niño–Southern Oscillation (ENSO) events. The distributions of ensemble skill scores and internal ensemble consistency are studied. For years in which ENSO was strong, the model generally exhibits relatively high skill and high consistency in the Tropics. In the northern extratropics, the highest skill and consistency are found for the northern Pacific–North American region in winter, whereas for the northern Atlantic–European region the spring season appears to be both skillful and consistent. For years in which ENSO was weak, the distributions of ensemble skill and consistency are relatively broad, and no clear distinction between Tropics and extratropics can be made.
By applying a t test to interannual fluctuations over various tropical and extratropical regions, estimates of a minimum useful ensemble size are made. Explicit calculations are done with ensemble sizes between three and nine members; estimates for larger sizes are made by extrapolating the t values. Based on an analysis of 2-m temperature and precipitation, relatively large ensembles (approximately 20 members) are likely to be required for extratropical predictions; in the Tropics, smaller ensembles may be adequate during years in which ENSO is strong, particularly for regions such as the Sahel.
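A sketch of the kind of t-test estimate described above, using synthetic seasonal means: a fixed SST-forced signal on top of internal variability, with detection rates computed by Monte Carlo (all numbers are illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative seasonal-mean anomalies for two years: a modest forced
# signal (shift of 0.5) on top of internal variability (std of 1.0).
signal, noise = 0.5, 1.0

for n in (3, 6, 9, 20):                   # candidate ensemble sizes
    rejections = 0
    for _ in range(1000):                 # Monte Carlo estimate of power
        a = rng.normal(0.0, noise, n)
        b = rng.normal(signal, noise, n)
        t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t test
        rejections += p < 0.05
    print(f"n={n:2d}: detection rate {rejections / 1000:.2f}")
```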
The role of the SST forcing in a seasonal timescale ensemble is to bias the probability distribution function (PDF) of atmospheric states. Such PDFs can, in addition, be a convenient way of condensing a vast amount of data usually obtained from ensemble predictions. Interannual variability in PDFs of monsoon rainfall and regional geopotential height probabilities is discussed.
Abstract
The impact of ensemble size on the performance of the European Centre for Medium-Range Weather Forecasts ensemble prediction system (EPS) is analyzed. The skill of ensembles generated with 2, 4, 8, 16, and 32 perturbed members is compared over a 45-day period, from 1 October to 15 November 1996. For each ensemble configuration, the skill is compared with the potential skill, measured by randomly choosing one of the 32 ensemble members as verification (an idealized ensemble). Results are based on predictions of the 500-hPa geopotential height field. Various measures of performance are applied: skill of the ensemble mean, the spread–skill relationship, skill of the most accurate ensemble member, the Brier score, the ranked probability score, the relative operating characteristic, and the outlier statistic.
The relation between ensemble spread and control-forecast error is studied using the L2, L8, and L∞ norms to measure distances between ensemble members and the control forecast or the verification. It is argued that the supremum norm is a more suitable measure of distance, given the strategy of constructing ensemble perturbations from rapidly growing singular vectors. Results indicate that, for the supremum norm, any increase of ensemble size within the range considered in this paper is strongly beneficial. With the smaller ensemble sizes, ensemble spread does not provide a reliable bound on control error in many cases. By contrast, with 32 members, spread provides a bound on control error in nearly all cases; further improvement might be expected from still larger ensembles. The spread–skill relationship, on the other hand, was not consistently improved by larger ensemble size under the L2 norm.
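A sketch of the spread-versus-control-error comparison under the two norms, with synthetic fields standing in for the 500-hPa height data; the field size, error levels, and the definition of spread as the largest member-to-control distance are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
npoints = 500                              # grid points (illustrative)
truth = rng.standard_normal(npoints)
control = truth + 0.3 * rng.standard_normal(npoints)

for n_members in (2, 4, 8, 16, 32):
    members = control + 0.35 * rng.standard_normal((n_members, npoints))
    err_l2 = np.linalg.norm(control - truth)
    err_sup = np.abs(control - truth).max()
    # Spread: largest distance of any member from the control forecast.
    spread_l2 = max(np.linalg.norm(m - control) for m in members)
    spread_sup = max(np.abs(m - control).max() for m in members)
    # A ratio above 1 means the spread bounds the control error; the
    # supremum-norm ratio grows steadily with ensemble size.
    print(f"{n_members:2d} members | L2 spread/error "
          f"{spread_l2 / err_l2:.2f} | sup spread/error "
          f"{spread_sup / err_sup:.2f}")
```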
The overall conclusion is that the extent to which an increase of ensemble size (particularly from 8 to 16, and from 16 to 32 members) improves EPS performance is strongly dependent on the measure used to assess performance. In addition to the spread–skill relationship, the measures most sensitive to ensemble size are shown to be the skill of the best ensemble member (particularly when evaluated pointwise) and the outlier statistic.
Abstract
The full set of kinetic energy singular values and singular vectors for the forward tangent propagator of a quasigeostrophic potential vorticity model is examined. In contrast to the fastest growing singular vectors, the fastest decaying vectors exhibit a downward and downscale transfer of energy and an eastward tilt with height. The near-neutral singular vectors resemble small-scale noise with no localized structure or coherence between levels.
Post-time forecast and analysis correction techniques are examined as a function of the number of singular vectors included in the representation of the inverse of the forward tangent propagator. It is found that for the case when the forecast error is known exactly, the best corrections are obtained when using the full inverse, which includes all of the singular vectors. It is also found that the erroneous projection of the analysis uncertainty onto the fastest decaying singular vectors has a significant detrimental effect on the estimation of analysis error. Therefore, for the more realistic case where the forecast error is known imperfectly, use of the full inverse will result in an inaccurate estimate of analysis errors, and the best corrections are obtained when using an inverse composed only of the growing singular vectors. Running the tangent equations with a negative time step is a very good approximation to using the full inverse of the forward tangent propagator.
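A toy-matrix illustration of these findings; the "propagator" below has a prescribed singular spectrum and is not the quasigeostrophic model's, but the truncation of decaying singular vectors follows the procedure described above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50

# Stand-in "tangent propagator" with growth factors from 10 (fastest
# growing) down to 1e-3 (fastest decaying); illustrative only.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(1, -3, n)
M = U @ np.diag(s) @ V.T

x0 = rng.standard_normal(n)                       # "true" analysis error
fcst_err = M @ x0                                 # exact forecast error
noisy = fcst_err + 0.05 * rng.standard_normal(n)  # imperfectly known

def truncated_inverse(err, k):
    """Back-project a forecast error using only the k leading
    (fastest-growing) singular vectors of M."""
    return V[:, :k] @ ((U[:, :k].T @ err) / s[:k])

# Exact error: the full inverse recovers x0 almost perfectly.
print("full inverse, exact error: ",
      np.linalg.norm(truncated_inverse(fcst_err, n) - x0))
# Noisy error: the full inverse amplifies noise along decaying vectors,
# so keeping only the growing vectors gives the better correction.
print("full inverse, noisy error: ",
      np.linalg.norm(truncated_inverse(noisy, n) - x0))
print("growing vectors only:      ",
      np.linalg.norm(truncated_inverse(noisy, n // 2) - x0))
```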