Search Results

You are looking at 1–4 of 4 items for:

  • Author or Editor: Tim Palmer
  • Monthly Weather Review
Matthew Chantry, Tobias Thornes, Tim Palmer, and Peter Düben

Abstract

Attempts to include the vast range of length scales and physical processes at play in Earth’s atmosphere push weather and climate forecasters to build and more efficiently utilize some of the most powerful computers in the world. One possible avenue for increased efficiency is the use of less precise numerical representations of numbers. If the computing resources saved can be reinvested in other ways (e.g., increased resolution or ensemble size), a reduction in precision can lead to an increase in forecast accuracy. Here we examine reduced numerical precision in the context of ECMWF’s Open Integrated Forecast System (OpenIFS) model. We posit that less numerical precision is required to solve the dynamical equations at shorter length scales while retaining the accuracy of the simulation. Transformations into spectral space, as found in spectral models such as OpenIFS, enact a length-scale decomposition of the prognostic fields. Exploiting this, we introduce a reduced-precision emulator into the spectral-space calculations and optimize the precision necessary to achieve forecasts comparable with those at double and single precision. On weather forecasting time scales, larger length scales require higher numerical precision than smaller length scales. On decadal time scales, half precision is still sufficient for everything except the global-mean quantities.
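
The scale-dependent precision described above can be mimicked with a simple software emulator. Below is a minimal sketch, not the emulator or OpenIFS code used in the paper: it rounds spectral coefficients to a chosen number of mantissa bits, keeping fewer bits at higher total wavenumbers. The function names (`truncate_mantissa`, `bits_for_wavenumber`) and the bit schedule are illustrative assumptions.

```python
# Illustrative sketch of wavenumber-dependent precision reduction.
# Not the reduced-precision emulator used in the paper.
import numpy as np

def truncate_mantissa(x, bits):
    """Round x to roughly `bits` significant binary digits (round-to-nearest)."""
    x = np.asarray(x, dtype=np.float64)
    exp = np.floor(np.log2(np.abs(x), where=x != 0, out=np.zeros_like(x)))
    scale = 2.0 ** (bits - exp)
    return np.where(x == 0, 0.0, np.round(x * scale) / scale)

def bits_for_wavenumber(n, n_max, high=23, low=10):
    """Hypothetical schedule: more mantissa bits at large scales (small n)."""
    return int(round(high - (high - low) * n / n_max))

# Example: synthetic spectral coefficients indexed by total wavenumber n = 0..63
rng = np.random.default_rng(0)
coeffs = rng.standard_normal(64) * np.exp(-np.arange(64) / 20.0)
reduced = np.array([truncate_mantissa(c, bits_for_wavenumber(n, 63))
                    for n, c in enumerate(coeffs)])
print("max rounding error:", np.max(np.abs(coeffs - reduced)))
```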

Full access
Sam Hatfield, Aneesh Subramanian, Tim Palmer, and Peter Düben

Abstract

A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain because of the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred by reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, so by reducing precision in ensemble data assimilation, computational resources can be redistributed toward, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz ’96 toy atmospheric model and the ensemble square root filter. The system is run at double, single, and half precision (the last using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the observation window is addressed. Then, by reinvesting the computational resources saved by reducing precision into the ensemble size, assimilation error can be reduced for (hypothetically) no extra cost. This results in increased forecasting skill with respect to double-precision assimilation.
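
A minimal sketch of the forecast-model side of such an experiment follows; it is not the authors’ assimilation system. It integrates the standard Lorenz ’96 equations with a fourth-order Runge–Kutta step at a chosen floating-point type, using plain `np.float16` in place of the emulation tool mentioned in the abstract; the step size, forcing, and comparison are illustrative.

```python
# Illustrative sketch: Lorenz '96 integrated at double vs half precision.
import numpy as np

def l96_tendency(x, F=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step_rk4(x, dt, dtype):
    """One fourth-order Runge-Kutta step, all arithmetic held at `dtype`."""
    x = x.astype(dtype)
    k1 = l96_tendency(x)
    k2 = l96_tendency(x + dtype(0.5 * dt) * k1)
    k3 = l96_tendency(x + dtype(0.5 * dt) * k2)
    k4 = l96_tendency(x + dtype(dt) * k3)
    return (x + dtype(dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)).astype(dtype)

rng = np.random.default_rng(1)
x64 = 8.0 + rng.standard_normal(40)   # 40-variable Lorenz '96 state
x16 = x64.copy()
for _ in range(200):                   # ~1 model time unit at dt = 0.005
    x64 = step_rk4(x64, 0.005, np.float64)
    x16 = step_rk4(x16, 0.005, np.float16)
rms = np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))
print("RMS difference, double vs half precision:", rms)
```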

Full access
Sam Hatfield, Andrew McRae, Tim Palmer, and Peter Düben

Abstract

The use of single-precision arithmetic in ECMWF’s forecasting model gave a 40% reduction in wall-clock time relative to double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and therefore could allow the use of single-precision arithmetic. This work is carried out within ECMWF’s data assimilation framework, the Object Oriented Prediction System.
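
The effect of reorthogonalization can be seen on a small linear system. The sketch below, which assumes a generic ill-conditioned symmetric positive definite matrix rather than the quasigeostrophic 4D-Var problem from the paper, runs conjugate gradients in single precision with and without full reorthogonalization of each new residual against all previous residual directions.

```python
# Illustrative sketch: single-precision conjugate gradients with optional
# residual reorthogonalization. Not ECMWF/OOPS code.
import numpy as np

def cg(A, b, n_iter, dtype=np.float32, reorthogonalize=False):
    A, b = A.astype(dtype), b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    basis = [r / np.linalg.norm(r)]
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if reorthogonalize:
            # Project the new residual off every stored residual direction;
            # a no-op in exact arithmetic, but it restores the orthogonality
            # that CG relies on when rounding errors accumulate.
            for q in basis:
                r_new = r_new - (r_new @ q) * q
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        nrm = np.linalg.norm(r)
        if nrm > 0:
            basis.append(r / nrm)
    return x

# Ill-conditioned SPD test matrix (condition number ~1e6), purely synthetic.
n = 50
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * np.logspace(0, 6, n)) @ Q.T
b = rng.standard_normal(n)
x_ref = np.linalg.solve(A.astype(np.float64), b)
for reo in (False, True):
    x = cg(A, b, 100, reorthogonalize=reo)
    print("reorthogonalize =", reo,
          "relative error:", np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```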

Free access
Filip Váňa, Peter Düben, Simon Lang, Tim Palmer, Martin Leutbecher, Deborah Salmond, and Glenn Carver

Abstract

Earth’s climate is a nonlinear dynamical system with scale-dependent Lyapunov exponents. As such, an important theoretical question for modeling weather and climate is how much real information is carried in a model’s physical variables as a function of scale and variable type. Answering this question is of crucial practical importance, given that the development of weather and climate models is strongly constrained by the available supercomputer power. As a starting point for answering this question, the impact of limiting almost all real-number variables in the forecasting mode of the ECMWF Integrated Forecast System (IFS) from 64 to 32 bits is investigated. Results for annual integrations and medium-range ensemble forecasts indicate no noticeable reduction in accuracy and an average gain in computational efficiency of approximately 40%. This study provides the motivation for more scale-selective reductions in numerical precision.
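
For orientation, the rounding error introduced by a 64-to-32-bit change is easy to quantify. The short sketch below uses a synthetic temperature field, an assumption purely for illustration, to show that the float32 rounding error is many orders of magnitude below typical model and observation uncertainty.

```python
# Illustrative sketch: rounding error from storing a field in 32 rather than
# 64 bits. The field is synthetic and stands in for any model variable.
import numpy as np

rng = np.random.default_rng(3)
temperature = 250.0 + 30.0 * rng.standard_normal((181, 360))  # synthetic field in K
t32 = temperature.astype(np.float32).astype(np.float64)
print("max abs rounding error (K):", np.max(np.abs(temperature - t32)))
print("machine epsilon, float32 vs float64:",
      np.finfo(np.float32).eps, np.finfo(np.float64).eps)
```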

Full access