Search Results

You are looking at 1 - 10 of 12 items for

  • Author or Editor: Tim Palmer
Stephan Juricke, Tim N. Palmer, and Laure Zanna

Abstract

In global ocean models, the representation of small-scale, high-frequency processes considerably influences the large-scale oceanic circulation and its low-frequency variability. This study investigates the impact of stochastic perturbation schemes based on three different subgrid-scale parameterizations in multidecadal ocean-only simulations with the ocean model NEMO at 1° resolution. The three parameterizations are an enhanced vertical diffusion scheme for unstable stratification, the Gent–McWilliams (GM) scheme, and a turbulent kinetic energy mixing scheme, all commonly used in state-of-the-art ocean models. The focus here is on changes in interannual variability caused by the comparatively high-frequency stochastic perturbations with subseasonal decorrelation time scales. These perturbations lead to significant improvements in the representation of low-frequency variability in the ocean, with the stochastic GM scheme showing the strongest impact. Interannual variability of the Southern Ocean eddy and Eulerian streamfunctions is increased by an order of magnitude and by 20%, respectively. Interannual sea surface height variability is increased by about 20%–25% as well, especially in the Southern Ocean and in the Kuroshio region, consistent with a strong underestimation of interannual variability in the model when compared to reanalysis and altimetry observations. These results suggest that enhancing subgrid-scale variability in ocean models can improve model variability and potentially its response to forcing on much longer time scales, while also providing an estimate of model uncertainty.
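
To illustrate the kind of scheme described above, here is a minimal sketch of a multiplicative stochastic perturbation of a parameterized tendency, driven by AR(1) noise with a subseasonal decorrelation time. It is not the NEMO implementation; the amplitude, decorrelation time, and the example tendency value are purely illustrative.

```python
import numpy as np

def ar1_noise(n_steps, dt_days, tau_days=30.0, sigma=0.3, rng=None):
    """First-order autoregressive noise with decorrelation time tau_days.

    Returns a zero-mean series with standard deviation sigma, suitable for
    perturbing a parameterized tendency multiplicatively.
    """
    rng = np.random.default_rng() if rng is None else rng
    phi = np.exp(-dt_days / tau_days)  # lag-one autocorrelation per time step
    r = np.zeros(n_steps)
    for k in range(1, n_steps):
        r[k] = phi * r[k - 1] + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
    return r

def perturbed_tendency(deterministic_tendency, r_k):
    """Multiplicative perturbation: scale the subgrid tendency by (1 + r)."""
    return (1.0 + r_k) * deterministic_tendency

# One year of daily perturbations applied to an illustrative subgrid tracer tendency
noise = ar1_noise(n_steps=365, dt_days=1.0, tau_days=30.0, sigma=0.3)
tendency = -1.0e-6  # placeholder value, not taken from any model
perturbed = [perturbed_tendency(tendency, r) for r in noise]
```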

Full access
Sam Hatfield, Aneesh Subramanian, Tim Palmer, and Peter Düben

Abstract

A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain because of the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, computational resources can be redistributed toward, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz ’96 toy atmospheric model and the ensemble square root filter. The system is run at double, single, and half precision (the latter using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the observation window is addressed. Then, by reinvesting the computational resources saved by reducing precision into the ensemble size, assimilation error can be reduced for (hypothetically) no extra cost. This results in increased forecasting skill with respect to double-precision assimilation.
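
As a rough illustration of the precision/ensemble-size trade-off, the sketch below integrates a small Lorenz ’96 ensemble while casting every intermediate result to a chosen floating-point type, emulating half precision with NumPy's float16 rather than the emulation tool used in the paper. Ensemble size, time step, and forcing value are illustrative.

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    """Lorenz '96 tendency: dX_i/dt = (X_{i+1} - X_{i-2}) X_{i-1} - X_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt, dtype=np.float64):
    """One RK4 step, with every intermediate result cast to `dtype`
    to emulate running the model at that precision."""
    f = lambda y: lorenz96_tendency(y).astype(dtype)
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return (x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)).astype(dtype)

# Integrate an ensemble at (emulated) half precision; the resources saved relative
# to double precision could instead be spent on additional members.
rng = np.random.default_rng(0)
ensemble = (8.0 + rng.standard_normal((20, 40))).astype(np.float16)
for _ in range(500):
    ensemble = np.stack([step_rk4(m, dt=0.05, dtype=np.float16) for m in ensemble])
print("mean ensemble spread:", ensemble.std(axis=0).mean())
```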

Full access
Matthew Chantry, Tobias Thornes, Tim Palmer, and Peter Düben

Abstract

Attempts to include the vast range of length scales and physical processes at play in Earth’s atmosphere push weather and climate forecasters to build, and to utilize more efficiently, some of the most powerful computers in the world. One possible avenue for increased efficiency is the use of less precise numerical representations of numbers. If the computing resources saved can be reinvested in other ways (e.g., increased resolution or ensemble size), a reduction in precision can lead to an increase in forecast accuracy. Here we examine reduced numerical precision in the context of ECMWF’s Open Integrated Forecast System (OpenIFS) model. We posit that less numerical precision is required when solving the dynamical equations for shorter length scales, while retaining the accuracy of the simulation. Transformations into spectral space, as found in spectral models such as OpenIFS, enact a length-scale decomposition of the prognostic fields. Utilizing this, we introduce a reduced-precision emulator into the spectral-space calculations and optimize the precision necessary to achieve forecasts comparable with double and single precision. On weather forecasting time scales, larger length scales require higher numerical precision than smaller length scales. On decadal time scales, half precision is still sufficient for everything except global-mean quantities.
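
A toy version of the idea is sketched below under simplifying assumptions: a one-dimensional periodic field is transformed to spectral space with an FFT (rather than the spherical-harmonic transforms of OpenIFS), and coefficients above an illustrative wavenumber cutoff are stored with fewer significand bits. The cutoff and bit counts are placeholders, not the optimized values from the paper.

```python
import numpy as np

def round_significand(x, sbits):
    """Emulate reduced precision by keeping `sbits` significand bits (round-to-nearest)."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2.0**sbits) / 2.0**sbits, e)

def truncate_spectral(field, sbits_large_scale=23, sbits_small_scale=10, k_split=10):
    """Transform a periodic 1D field to spectral space and store the large-scale
    (low wavenumber) coefficients at higher precision than the small-scale ones."""
    coeffs = np.fft.rfft(field)
    k = np.arange(coeffs.size)
    sbits = np.where(k < k_split, sbits_large_scale, sbits_small_scale)
    real = np.array([round_significand(c.real, s) for c, s in zip(coeffs, sbits)])
    imag = np.array([round_significand(c.imag, s) for c, s in zip(coeffs, sbits)])
    return np.fft.irfft(real + 1j * imag, n=field.size)

# Round-trip a field containing one large-scale and one small-scale wave and
# report the maximum pointwise rounding error of the reduced-precision version.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
field = np.sin(x) + 0.1 * np.sin(20 * x)
print(np.max(np.abs(field - truncate_spectral(field))))
```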

Full access
Sam Hatfield, Andrew McRae, Tim Palmer, and Peter Düben

Abstract

The use of single-precision arithmetic in ECMWF’s forecasting model gave a 40% reduction in wall-clock time relative to double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and therefore could allow the use of single-precision arithmetic. This work is carried out within ECMWF’s data assimilation framework, the Object Oriented Prediction System.
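
The sketch below shows the reorthogonalization idea on a generic conjugate gradient solver for a symmetric positive definite system run in single precision; it is not the ECMWF/OOPS code, and the test matrix, iteration count, and tolerance are illustrative. Each new residual is explicitly re-orthogonalized against the stored, normalized earlier residuals, restoring the orthogonality that rounding errors erode.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, reorthogonalize=True, dtype=np.float32):
    """Conjugate gradient for an SPD matrix A, run at reduced precision `dtype`.

    With reorthogonalize=True, each new residual is re-orthogonalized against
    all stored residuals (modified Gram-Schmidt)."""
    A = A.astype(dtype)
    b = b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    residuals = [r / np.linalg.norm(r)]
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if reorthogonalize:
            for q in residuals:                 # remove components along earlier residuals
                r_new = r_new - (r_new @ q) * q
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        norm = np.linalg.norm(r)
        if norm > 0:
            residuals.append(r / norm)
        if norm < 1e-6:
            break
    return x

# An ill-conditioned SPD test system (eigenvalues spanning six orders of magnitude)
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q @ np.diag(np.logspace(0, 6, 100)) @ Q.T
b = rng.standard_normal(100)
x = conjugate_gradient(A, b)
print("relative residual:", np.linalg.norm(b - A @ x.astype(np.float64)) / np.linalg.norm(b))
```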

Free access
Aneesh Subramanian, Stephan Juricke, Peter Dueben, and Tim Palmer

Abstract

Numerical weather prediction and climate models comprise a) a dynamical core describing the resolved parts of the climate system and b) parameterizations describing the unresolved components. The development of new subgrid-scale parameterizations is particularly uncertain compared to representing resolved scales in the dynamical core. This uncertainty is currently represented by stochastic approaches in several operational weather models, and it will inevitably percolate into the dynamical core. Hence, implementing dynamical cores with excessive numerical accuracy will not bring forecast gains and may even hinder them, since valuable computer resources will be tied up doing insignificant computation and therefore cannot be deployed for more useful gains, such as increasing model resolution or ensemble size. Here we describe a low-cost stochastic scheme that can be implemented in any existing deterministic dynamical core as an additive noise term. This scheme could be used to adjust accuracy in future dynamical core development work. We propose that such an additive stochastic noise test case become part of the routine testing and development of dynamical cores in a stochastic framework. The overall key point of the study is that we should not develop dynamical cores that are more precise than the level of uncertainty provided by our stochastic scheme. In this way, we present a new paradigm for dynamical core development work, ensuring that weather and climate models become more computationally efficient. We show some results based on tests done with the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) dynamical core.
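
A minimal sketch of an additive-noise tendency perturbation follows, using a one-dimensional upwind advection step as a stand-in for a dynamical core; the noise amplitude and the choice to scale it to the local tendency are illustrative assumptions, not the scheme implemented in the IFS. Differences between the deterministic and stochastic runs give a baseline level of error below which further numerical precision is arguably wasted.

```python
import numpy as np

def advect_step(u, c, dx, dt, noise_amplitude=0.0, rng=None):
    """One first-order upwind step of du/dt + c du/dx = 0 (c > 0), optionally
    with an additive stochastic term scaled to the local deterministic tendency."""
    rng = np.random.default_rng() if rng is None else rng
    tendency = -c * (u - np.roll(u, 1)) / dx
    if noise_amplitude > 0.0:
        tendency = tendency + noise_amplitude * np.abs(tendency) * rng.standard_normal(u.size)
    return u + dt * tendency

n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
u_det = np.exp(-200 * (x - 0.3) ** 2)   # Gaussian initial condition
u_sto = u_det.copy()
rng = np.random.default_rng(2)
for _ in range(200):
    u_det = advect_step(u_det, c=1.0, dx=1.0 / n, dt=0.5 / n)
    u_sto = advect_step(u_sto, c=1.0, dx=1.0 / n, dt=0.5 / n, noise_amplitude=0.1, rng=rng)
print("RMS deterministic-stochastic difference:", np.sqrt(np.mean((u_det - u_sto) ** 2)))
```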

Open access
E. Adam Paxton, Matthew Chantry, Milan Klöwer, Leo Saffin, and Tim Palmer

Abstract

Motivated by recent advances in operational weather forecasting, we study the efficacy of low-precision arithmetic for climate simulations. We develop a framework to measure rounding error in a climate model, which provides a stress test for a low-precision version of the model, and we apply our method to a variety of models including the Lorenz system; a shallow-water approximation for flow over a ridge; and a coarse-resolution spectral global atmospheric model with simplified parameterisations (SPEEDY). Although double precision (52 significant bits) is standard across operational climate models, in our experiments we find that single precision (23 sbits) is more than enough and that precision as low as half precision (10 sbits) is often sufficient. For example, SPEEDY can be run with 12 sbits across the code with negligible rounding error, and with 10 sbits if minor errors are accepted, amounting to less than 0.1 mm/6 hr for average grid-point precipitation. Our test is based on the Wasserstein metric, which provides stringent non-parametric bounds on rounding error, accounting for annual means as well as extreme weather events. In addition, by testing models using both round-to-nearest (RN) and stochastic rounding (SR), we find that SR can mitigate rounding error across a range of applications, and thus our results also provide some evidence that SR could be relevant to next-generation climate models. Further research is needed to test whether our results generalise to higher resolutions and alternative numerical schemes. However, the results open a promising avenue towards the use of low-precision hardware for improved climate modelling.
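
The two ingredients of such a test, rounding to a reduced number of significand bits (with round-to-nearest or stochastic rounding) and comparing distributions with the one-dimensional Wasserstein distance, can be sketched as follows. The gamma-distributed samples are a stand-in for a model field and are not taken from SPEEDY or the other models in the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def round_nearest(x, sbits):
    """Round-to-nearest to `sbits` significand bits."""
    m, e = np.frexp(np.asarray(x, dtype=np.float64))
    return np.ldexp(np.round(m * 2.0**sbits) / 2.0**sbits, e)

def stochastic_round(x, sbits, rng=None):
    """Round to `sbits` significand bits, rounding up or down with probability
    proportional to the distance to each neighbour (stochastic rounding)."""
    rng = np.random.default_rng() if rng is None else rng
    m, e = np.frexp(np.asarray(x, dtype=np.float64))
    scaled = m * 2.0**sbits
    frac = scaled - np.floor(scaled)
    rounded = np.floor(scaled) + (rng.random(np.shape(scaled)) < frac)
    return np.ldexp(rounded / 2.0**sbits, e)

# Compare each low-precision version against the full-precision reference with the
# (1D) Wasserstein distance, a non-parametric measure of distributional error.
rng = np.random.default_rng(3)
reference = rng.gamma(shape=2.0, scale=1.5, size=100_000)  # illustrative positive samples
for sbits in (10, 5, 3):
    w_rn = wasserstein_distance(reference, round_nearest(reference, sbits))
    w_sr = wasserstein_distance(reference, stochastic_round(reference, sbits, rng))
    print(f"sbits={sbits}: RN Wasserstein={w_rn:.2e}, SR Wasserstein={w_sr:.2e}")
```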

Restricted access
Antje Weisheimer, Daniel J. Befort, Dave MacLeod, Tim Palmer, Chris O’Reilly, and Kristian Strømmen

Abstract

Forecasts of seasonal climate anomalies using physically based global circulation models are routinely made at operational meteorological centers around the world. A crucial component of any seasonal forecast system is the set of retrospective forecasts, or hindcasts, from past years that are used to estimate skill and to calibrate the forecasts. Hindcasts are usually produced over a period of around 20–30 years. However, recent studies have demonstrated that seasonal forecast skill can undergo pronounced multidecadal variations. These results imply that relatively short hindcasts are not adequate for reliably testing seasonal forecasts and that small hindcast sample sizes can potentially lead to skill estimates that are not robust. Here we present new and unprecedented 110-year-long coupled hindcasts of the next season over the period 1901–2010. Their performance for the recent period is in good agreement with that of operational forecast models. While skill for ENSO is very high during recent decades, it is markedly reduced during the 1930s–1950s. Skill at the beginning of the twentieth century is, however, as high as for recent high-skill periods. Consistent with findings in atmosphere-only hindcasts, a midcentury drop in forecast skill is found for a range of atmospheric fields, including large-scale indices such as the NAO and the PNA patterns. As with ENSO, skill scores for these indices recover in the early twentieth century, suggesting that the midcentury drop in skill is not due to a lack of good observational data. A public dissemination platform for our hindcast data is available, and we invite the scientific community to explore them.
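
To make the sample-size point concrete, here is a small synthetic illustration (not the paper's hindcast data): correlation skill is computed in sliding 30-year windows of a 110-year series whose forecast noise is deliberately larger mid-century, showing how the choice of hindcast period changes the skill estimate.

```python
import numpy as np

def sliding_window_skill(forecast, verification, window=30):
    """Correlation skill in sliding windows of `window` years, illustrating how
    a skill estimate depends on the hindcast period chosen."""
    n = len(forecast)
    skills = []
    for start in range(0, n - window + 1):
        f = forecast[start:start + window]
        v = verification[start:start + window]
        skills.append(np.corrcoef(f, v)[0, 1])
    return np.array(skills)

# Synthetic series over 1901-2010 with a mid-century increase in forecast noise
years = np.arange(1901, 2011)
rng = np.random.default_rng(4)
truth = rng.standard_normal(years.size)
noise_level = np.where((years >= 1930) & (years <= 1960), 2.0, 0.5)
forecast = truth + noise_level * rng.standard_normal(years.size)
print(sliding_window_skill(forecast, truth).round(2))
```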

Free access
Matthew D. Palmer, Tim Boyer, Rebecca Cowley, Shoichi Kizu, Franco Reseghetti, Toru Suzuki, and Ann Thresher

Abstract

Time-varying biases in expendable bathythermograph (XBT) instruments have emerged as a key uncertainty in estimates of historical ocean heat content variability and change. One of the challenges in the development of XBT bias corrections is the lack of metadata in ocean profile databases. Approximately 50% of XBT profiles in the World Ocean Database (WOD) have no information about manufacturer or probe type. Building on previous research efforts, this paper presents a deterministic algorithm for assigning missing XBT manufacturer and probe type for individual temperature profiles based on 1) the reporting country, 2) the maximum reported depth, and 3) the record date. The criteria used are based on bulk analysis of known XBT profiles in the WOD for the period 1966–2015. A basic skill assessment demonstrates a 77% success rate at correctly assigning manufacturer and probe type for profiles where this information is available. The success rate is lowest during the early 1990s, which is also a period when metadata information is particularly poor. The results suggest that substantive improvements could be made through further data analysis and that future algorithms may benefit from including a larger number of predictor variables.
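
A sketch of what such a deterministic assignment rule might look like is shown below. The probe-type labels are real XBT probe families, but the depth thresholds, country rule, and fallback class are purely illustrative placeholders; the actual criteria in the paper were derived from bulk analysis of known profiles in the WOD for 1966–2015.

```python
from datetime import date

def assign_probe_type(country: str, max_depth_m: float, record_date: date) -> str:
    """Hypothetical decision rules in the spirit of the paper's algorithm,
    combining reporting country, maximum reported depth, and record date.
    All thresholds below are illustrative, not the published criteria."""
    if max_depth_m > 1000.0:
        return "Sippican T-5"              # deep-profiling probe (illustrative cut-off)
    if country == "JAPAN" and record_date.year >= 1995:
        return "TSK T-7"                   # illustrative country/date rule
    if max_depth_m > 550.0:
        return "Sippican T-7 / Deep Blue"
    return "Sippican T-4 / T-6"            # shallow probes as the fallback class

# Example usage
print(assign_probe_type("JAPAN", 760.0, date(1998, 6, 1)))
```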

Open access
Filip Váňa, Peter Düben, Simon Lang, Tim Palmer, Martin Leutbecher, Deborah Salmond, and Glenn Carver

Abstract

Earth’s climate is a nonlinear dynamical system with scale-dependent Lyapunov exponents. As such, an important theoretical question for modeling weather and climate is how much real information is carried in a model’s physical variables as a function of scale and variable type. Answering this question is of crucial practical importance, given that the development of weather and climate models is strongly constrained by available supercomputer power. As a starting point for answering this question, the impact of limiting almost all real-number variables in the forecasting mode of the ECMWF Integrated Forecast System (IFS) from 64 to 32 bits is investigated. Results for annual integrations and medium-range ensemble forecasts indicate no noticeable reduction in accuracy and an average gain in computational efficiency of approximately 40%. This study provides the motivation for more scale-selective reductions in numerical precision.
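
The basic trade-off can be illustrated with a crude single-node benchmark (nothing like the full IFS): time a dense matrix product, a stand-in for model arithmetic, at double and single precision and compare both the runtime and the size of the rounding differences. Matrix size and repeat count are arbitrary.

```python
import time
import numpy as np

def benchmark(dtype, n=2000, repeats=3):
    """Time a dense matrix product at the given precision and return
    (seconds per product, result), as a crude proxy for model arithmetic cost."""
    rng = np.random.default_rng(5)          # same seed so both precisions see the same data
    a = rng.standard_normal((n, n)).astype(dtype)
    b = rng.standard_normal((n, n)).astype(dtype)
    t0 = time.perf_counter()
    for _ in range(repeats):
        c = a @ b
    return (time.perf_counter() - t0) / repeats, c

t64, c64 = benchmark(np.float64)
t32, c32 = benchmark(np.float32)
rel_err = np.max(np.abs(c64 - c32) / (np.abs(c64) + 1e-12))
print(f"float64: {t64:.3f}s  float32: {t32:.3f}s  speedup: {t64 / t32:.2f}x")
print(f"max relative difference: {rel_err:.2e}")
```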

Full access
Susanna Corti, Tim Palmer, Magdalena Balmaseda, Antje Weisheimer, Sybren Drijfhout, Nick Dunstone, Wilco Hazeleger, Jürgen Kröger, Holger Pohlmann, Doug Smith, Jin-Song von Storch, and Bert Wouters

Abstract

The impact of initial conditions relative to external forcings in decadal integrations from an ensemble of state-of-the-art prediction models has been assessed using specifically designed sensitivity experiments (SWAP experiments). They consist of two sets of 10-yr-long ensemble hindcasts for two initial dates in 1965 and 1995, using either the external forcings from the “correct” decades or swapping the forcings between the two decades. By comparing the two sets of integrations, the impact of external forcing versus initial conditions on predictability over multiannual time scales was estimated as a function of hindcast lead time. It was found that over time scales longer than about 1 yr, the predictability of sea surface temperatures (SSTs) on a global scale arises mainly from the external forcing. However, correct initialization has a longer impact on SST predictability over specific regions such as the North Atlantic, the northwestern Pacific, and the Southern Ocean. The impact of initialization is even longer and extends to wider regions when below-surface ocean variables are considered. For the western and eastern tropical Atlantic, the impact of initialization on the 700-m heat content (HTC700) extends to as much as 9 years for some of the models considered. In all models the impact of initial conditions on the predictability of the Atlantic meridional overturning circulation (AMOC) is dominant for the first 5 years.
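
The comparison logic of the swap design can be sketched with synthetic stand-ins for the four ensemble-mean hindcast series; the numbers produced are meaningless, and only the structure of the comparison (same initial date with correct vs swapped forcing, same forcing with different initial dates) mirrors the experiments described above.

```python
import numpy as np

# Synthetic ensemble-mean SST anomaly series, one per (initialization, forcing) pair,
# at lead times of 1-10 years. Purely illustrative placeholders.
rng = np.random.default_rng(6)
lead = np.arange(1, 11)
hindcast = {
    ("init1965", "forcing1965"): rng.standard_normal(10),
    ("init1965", "forcing1995"): rng.standard_normal(10),
    ("init1995", "forcing1995"): rng.standard_normal(10),
    ("init1995", "forcing1965"): rng.standard_normal(10),
}

# Impact of the external forcing: same initial date, correct vs swapped forcing.
forcing_impact = np.abs(hindcast[("init1965", "forcing1965")]
                        - hindcast[("init1965", "forcing1995")])

# Impact of the initial conditions: same forcing, different initial dates.
init_impact = np.abs(hindcast[("init1965", "forcing1965")]
                     - hindcast[("init1995", "forcing1965")])

for t, f, i in zip(lead, forcing_impact, init_impact):
    print(f"lead {t:2d} yr: forcing impact {f:.2f}, initialization impact {i:.2f}")
```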

Full access