Search Results
You are looking at 1–9 of 9 items for
- Author or Editor: Peter Düben
Abstract
Grids of variable resolution are of great interest in atmosphere and ocean modeling as they offer a route to higher local resolution and improved solutions. On the other hand, changes in grid resolution are considered problematic because of the errors they create between coarse and fine parts of a grid due to reflection and scattering of waves. On complex multidimensional domains these errors resist theoretical investigation and demand numerical experiments. With a low-order hybrid continuous/discontinuous finite-element model of the inviscid and viscous shallow-water equations, a numerical study is carried out that investigates the influence of grid refinement on critical features such as wave propagation, turbulent cascades, and the representation of geostrophic balance. The refinement technique the authors use is static h refinement, where additional grid cells are inserted in regions of interest known a priori. The numerical tests include planar and spherical geometry as well as flows with boundaries and are chosen to address the impact of abrupt changes in resolution and the influence of the shape of the transition zone. For the specific finite-element model under investigation, the simulations suggest that grid refinement does not deteriorate geostrophic balance or turbulent cascades, and the shape of mesh transition zones appears to be less important than expected. However, the results also show that static local refinement reduces the local error, but not necessarily the global error, and that convergence properties with resolution are changed. These relatively simple tests already illustrate that grid refinement has to be accompanied by a simultaneous adjustment of the parameterization schemes.
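As an illustration of the static h-refinement idea described above, the following one-dimensional sketch inserts additional cells in an a priori region of interest. It is a minimal toy example with hypothetical function names; the study itself uses a two-dimensional finite-element model, and no transition-zone smoothing is attempted here.

```python
# Minimal 1D sketch of static h refinement (illustrative only; names are hypothetical).
import numpy as np

def refine_static_h(edges, region, levels=1):
    """Subdivide every cell whose centre lies inside `region`, known a priori."""
    for _ in range(levels):
        new_edges = [edges[0]]
        for left, right in zip(edges[:-1], edges[1:]):
            centre = 0.5 * (left + right)
            if region[0] <= centre <= region[1]:
                new_edges.append(centre)   # insert an extra edge -> two finer cells
            new_edges.append(right)
        edges = np.array(new_edges)
    return edges

coarse = np.linspace(0.0, 1.0, 11)                             # uniform grid, dx = 0.1
refined = refine_static_h(coarse, region=(0.4, 0.6), levels=2)
print(refined)                                                 # dx = 0.025 inside the region
```

Note that the coarse-to-fine transition is abrupt here; studying the shape of the transition zone, as the abstract describes, would require grading the refinement toward the region boundary.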
Abstract
A reduction of computational cost would allow higher resolution in numerical weather predictions within the same computing budget. This paper investigates two approaches that promise significant savings in computational cost: the use of reduced-precision hardware, which reduces floating-point precision below the standard double- and single-precision arithmetic, and the use of stochastic processors, which tolerate hardware faults in a trade-off between reduced precision and savings in power consumption and computing time. Reduced precision is emulated within simulations of a spectral dynamical core of a global atmosphere model, and a detailed study of the sensitivity of different parts of the model to inexact hardware is performed. Afterward, benchmark simulations are performed in which as many parts of the model as possible are put onto inexact hardware. Results show that large parts of the model could be integrated with inexact hardware at surprisingly high error rates, or with precision reduced to only a couple of bits in the significand of floating-point numbers. However, the sensitivities of different parts of the model to inexact hardware need to be respected, for example, via scale separation. In the last part of the paper, simulations with a full operational weather forecast model in single precision are presented. It is shown that differences in accuracy between the single- and double-precision forecasts are smaller than differences between ensemble members of the ensemble forecast at the resolution of the standard ensemble forecasting system. The simulations demonstrate that trading precision for performance is a worthwhile effort, already on existing hardware.
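The reduced-precision ingredient can be emulated in software by truncating significand bits of standard double-precision numbers. The sketch below is a generic illustration of that idea, not the emulator used in the study; it keeps a configurable number of the 52 explicit significand bits of IEEE float64 values and truncates the rest toward zero.

```python
# Generic sketch of reduced-precision emulation by significand truncation (round toward zero).
import numpy as np

def truncate_significand(x, keep_bits):
    """Keep `keep_bits` of the 52 explicit significand bits of float64 values."""
    assert 0 <= keep_bits <= 52
    bits = np.asarray(x, dtype=np.float64).view(np.uint64)
    mask = np.uint64(0xFFFFFFFFFFFFFFFF) << np.uint64(52 - keep_bits)
    return (bits & mask).view(np.float64)

x = np.array([3.141592653589793, 2.718281828459045])
print(truncate_significand(x, 8))   # only ~2-3 significant decimal digits survive
```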
Abstract
Data storage and data processing generate significant costs for weather and climate modeling centers. The volume of data that needs to be stored and disseminated to end users increases with increasing model resolution and the use of larger forecast ensembles. If the precision of the data is reduced, costs can be reduced accordingly. In this paper, three new methods that allow a reduction in precision with minimal loss of information are suggested and tested. Two of these methods rely on the similarities between ensemble members in ensemble forecasts. Precision is therefore kept high at the beginning of forecasts, when ensemble members are more similar and sufficient distinction between them is needed, and decreases with increasing ensemble spread. Keeping precision high for predictable situations and low elsewhere appears to be a useful approach to optimize data storage in weather forecasts. All methods are tested with data from operational weather forecasts of the European Centre for Medium-Range Weather Forecasts.
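A schematic sketch of the underlying idea, not of the paper's specific methods: the number of significand bits kept for each grid point is chosen from the local ensemble spread, so that predictable (tight) points are stored with more bits than points where the ensemble has diverged. All thresholds below are arbitrary.

```python
# Schematic: spread-dependent precision for storing ensemble forecast data.
import numpy as np

def keep_bits_from_spread(ens_mean, ens_std, safety=0.01, max_bits=23):
    """More bits where the ensemble is tight (predictable), fewer where spread is large."""
    rel_tol = safety * ens_std / np.maximum(np.abs(ens_mean), 1e-30)
    bits = np.ceil(-np.log2(np.maximum(rel_tol, 2.0 ** -max_bits))).astype(int)
    return np.clip(bits, 1, max_bits)

def round_keep_bits(x, keep_bits):
    """Round values to roughly `keep_bits` significand bits (round to nearest)."""
    x = np.asarray(x, dtype=np.float64)
    mag = np.where(x == 0.0, 1.0, np.abs(x))            # avoid log2(0)
    scale = 2.0 ** (np.floor(np.log2(mag)) - keep_bits)
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
ens = 288.0 + rng.normal(scale=[[0.1], [2.0]], size=(2, 50))   # tight vs. wide ensemble at two points
bits = keep_bits_from_spread(ens.mean(axis=1), ens.std(axis=1))
packed = round_keep_bits(ens, bits[:, None])                   # fewer stored bits where spread is large
print(bits, np.abs(packed - ens).max(axis=1))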
Abstract
The use of single-precision arithmetic in ECMWF’s forecasting model gave a 40% reduction in wall-clock time over double precision, with no decrease in forecast quality. However, the use of reduced precision in 4D-Var data assimilation is relatively unexplored, and there are potential issues with using single precision in the tangent-linear and adjoint models. Here, we present the results of reducing numerical precision in an incremental 4D-Var data assimilation scheme with an underlying two-layer quasigeostrophic model. The minimizer used is the conjugate gradient method. We show how reducing precision increases the asymmetry between the tangent-linear and adjoint models. For ill-conditioned problems, this leads to a loss of orthogonality among the residuals of the conjugate gradient algorithm, which slows the convergence of the minimization procedure. However, we also show that a standard technique, reorthogonalization, eliminates these issues and therefore could allow the use of single-precision arithmetic. This work is carried out within ECMWF’s data assimilation framework, the Object Oriented Prediction System.
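The interplay between the conjugate gradient minimizer and reorthogonalization can be sketched on a toy problem. The example below is not taken from the Object Oriented Prediction System or the quasigeostrophic configuration; it solves an arbitrary ill-conditioned symmetric positive-definite system in single precision, with optional full reorthogonalization of the new residual against all previous residuals.

```python
# Toy sketch: conjugate gradients in single precision, with optional reorthogonalization.
import numpy as np

def cg(A, b, maxiter=300, tol=1e-4, reorth=False, dtype=np.float32):
    A, b = A.astype(dtype), b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    basis = [r / np.linalg.norm(r)]              # stored residual directions
    for k in range(maxiter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if reorth:                               # Gram-Schmidt against previous residuals
            for q in basis:
                r_new = r_new - (q @ r_new) * q
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, k + 1
        if reorth:
            basis.append(r_new / np.linalg.norm(r_new))
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, maxiter

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
A = Q @ np.diag(np.logspace(0, 3, 200)) @ Q.T    # ill-conditioned SPD test matrix
b = rng.standard_normal(200)
for flag in (False, True):
    x, iters = cg(A, b, reorth=flag)
    res = np.linalg.norm(b - A.astype(np.float32) @ x) / np.linalg.norm(b)
    print(f"reorth={flag}: {iters} iterations, relative residual {res:.1e}")
```

The printed iteration counts and residuals allow the two variants to be compared directly for a given conditioning and precision.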
Abstract
A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain because of the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred by reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, computational resources can be redistributed toward, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz ’96 toy atmospheric model and the ensemble square root filter. The system is run at double, single, and half precision (the latter using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the observation window is addressed. Then, by reinvesting the computational resources saved by reducing precision into the ensemble size, assimilation error can be reduced for (hypothetically) no extra cost. This results in increased forecasting skill relative to double-precision assimilation.
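For context, the Lorenz ’96 model used as the toy forecast model is only a few lines of code. The sketch below integrates it at double, single, and half precision using NumPy dtypes (NumPy's float16 stands in for the emulation tool mentioned above); the ensemble square root filter itself is not reproduced.

```python
# Minimal Lorenz '96 integration at double, single, and half precision.
import numpy as np

def l96_tendency(x, forcing=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, steps, dt=0.01, dtype=np.float64):
    x = x0.astype(dtype)
    dt = dtype(dt)
    for _ in range(steps):                       # fourth-order Runge-Kutta
        k1 = l96_tendency(x)
        k2 = l96_tendency(x + dtype(0.5) * dt * k1)
        k3 = l96_tendency(x + dtype(0.5) * dt * k2)
        k4 = l96_tendency(x + dt * k3)
        x = x + dt / dtype(6.0) * (k1 + dtype(2.0) * (k2 + k3) + k4)
    return x

x0 = 8.0 + 0.01 * np.random.default_rng(2).standard_normal(40)   # 40 variables, F = 8
for dtype in (np.float64, np.float32, np.float16):
    print(dtype.__name__, integrate(x0, steps=200, dtype=dtype)[:3])
```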
Abstract
Attempts to include the vast range of length scales and physical processes at play in Earth’s atmosphere push weather and climate forecasters to build, and to use more efficiently, some of the most powerful computers in the world. One possible avenue for increased efficiency is to use less precise numerical representations of numbers. If the computing resources saved can be reinvested in other ways (e.g., increased resolution or ensemble size), a reduction in precision can lead to an increase in forecast accuracy. Here we examine reduced numerical precision in the context of ECMWF’s Open Integrated Forecast System (OpenIFS) model. We posit that less numerical precision is required when solving the dynamical equations for shorter length scales, while the accuracy of the simulation is retained. Transformations into spectral space, as found in spectral models such as OpenIFS, enact a length-scale decomposition of the prognostic fields. Utilizing this, we introduce a reduced-precision emulator into the spectral-space calculations and optimize the precision necessary to achieve forecasts comparable with double and single precision. On weather forecasting time scales, larger length scales require higher numerical precision than smaller length scales. On decadal time scales, half precision is still sufficient for everything except global-mean quantities.
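A one-dimensional Fourier analogue of the scale-selective idea (OpenIFS itself uses spherical-harmonic transforms): spectral coefficients at higher wavenumbers are rounded to fewer significand bits before transforming back to grid space. The bit schedule below is arbitrary and purely illustrative.

```python
# 1D Fourier analogue of scale-dependent precision in spectral space.
import numpy as np

def round_keep_bits(x, keep_bits):
    """Round values to roughly `keep_bits` significand bits."""
    mag = np.where(x == 0.0, 1.0, np.abs(x))
    scale = 2.0 ** (np.floor(np.log2(mag)) - keep_bits)
    return np.round(x / scale) * scale

n = 256
grid = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
field = np.sin(grid) + 0.3 * np.sin(8.0 * grid) + 0.05 * np.sin(40.0 * grid)

coeffs = np.fft.rfft(field)
wavenumber = np.arange(coeffs.size)
keep_bits = np.clip(23 - wavenumber // 4, 4, 23)        # fewer bits at smaller scales
packed = round_keep_bits(coeffs.real, keep_bits) + 1j * round_keep_bits(coeffs.imag, keep_bits)
recovered = np.fft.irfft(packed, n)
print("max grid-point error:", np.abs(recovered - field).max())
```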
Abstract
Based on the principle “learn from past errors to correct current forecasts,” statistical postprocessing consists of optimizing forecasts generated by numerical weather prediction (NWP) models. In this context, machine learning (ML) offers state-of-the-art tools for training statistical models and making predictions based on large datasets. In our study, ML-based solutions are developed to reduce forecast errors of 2-m temperature and 10-m wind speed in ECMWF’s operational medium-range, high-resolution forecasts produced with the Integrated Forecasting System (IFS). IFS forecasts and other spatiotemporal indicators are used as predictors after careful selection with the help of ML interpretability tools. Different ML approaches are tested: linear regression, random forest decision trees, and neural networks. Statistical models of the systematic and random errors are derived sequentially, where the random error is defined as the residual error after bias correction. In terms of output, bias corrections and forecast uncertainty predictions are made available at any location around the world. All three ML methods show a similar ability to capture situation-dependent biases, leading to noteworthy performance improvements (between 10% and 15% reduction in root-mean-square error for all lead times and variables), and a similar ability to provide reliable uncertainty predictions.
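A schematic example of the bias-correction step with synthetic data and the simplest of the three approaches, linear regression (scikit-learn); the predictors, error structure, and coefficients are invented for illustration and are not those of the study.

```python
# Schematic postprocessing sketch with synthetic data: learn the systematic forecast error
# from a few predictors and subtract the predicted bias from the raw forecasts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 5000
forecast_t2m = 285.0 + 10.0 * rng.standard_normal(n)          # raw model 2-m temperature (K)
lead_time = rng.integers(12, 240, n).astype(float)            # hours
elevation = 1000.0 * rng.random(n)                            # metres, illustrative predictor
true_bias = 0.5 + 0.002 * elevation - 0.003 * lead_time       # hypothetical error structure
observed = forecast_t2m - true_bias + 0.5 * rng.standard_normal(n)

X = np.column_stack([forecast_t2m, lead_time, elevation])
model = LinearRegression().fit(X, forecast_t2m - observed)    # statistical model of the error
corrected = forecast_t2m - model.predict(X)
print("RMSE raw:      ", np.sqrt(np.mean((forecast_t2m - observed) ** 2)))
print("RMSE corrected:", np.sqrt(np.mean((corrected - observed) ** 2)))
```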
Abstract
We test the impact of changing numerical precision upon forecasts using the chaotic Lorenz’95 system. We find that, in comparison with discretization and numerical rounding errors, the dominant source of error is the initial-condition error. Initial-condition errors introduced into the Lorenz’95 system grow exponentially at a rate given by the leading Lyapunov exponent. Given this information, we show that the number of bits necessary to represent the system state can be reduced linearly in time without significantly affecting forecast skill. This is in addition to any initial reduction in precision to that of the initial conditions and also implies the potential to reduce some storage costs. An approach to vary precision locally within simulations, guided by the eigenvectors of the growth and decay of forecast error (the “singular vectors”), did not show a satisfying impact on forecast skill relative to the cost savings that could be achieved with a uniform reduction of precision. The error in a selection of ECMWF forecasts as a function of the number of bits used to store them indicates that precision might also be reduced in operational systems.
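The bit-reduction schedule follows from the exponential error growth: if initial-condition errors grow like exp(λt), the error doubles every ln(2)/λ time units, so roughly one significand bit can be dropped per doubling time without adding to the error. The sketch below uses an illustrative value of λ; it is not the exact exponent or schedule of the study.

```python
# Sketch of a linear-in-time bit-reduction schedule (all numbers illustrative).
import numpy as np

def keep_bits(lead_time, lyapunov=1.7, initial_bits=23, min_bits=4):
    """Number of significand bits worth keeping at a given lead time."""
    dropped = np.floor(lyapunov * np.asarray(lead_time) / np.log(2.0)).astype(int)
    return np.clip(initial_bits - dropped, min_bits, initial_bits)

lead_times = np.arange(0.0, 6.0, 1.0)       # model time units
print(dict(zip(lead_times, keep_bits(lead_times))))
```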
Abstract
Earth’s climate is a nonlinear dynamical system with scale-dependent Lyapunov exponents. As such, an important theoretical question for modeling weather and climate is how much real information is carried in a model’s physical variables as a function of scale and variable type. Answering this question is of crucial practical importance given that the development of weather and climate models is strongly constrained by available supercomputer power. As a starting point for answering this question, the impact of limiting almost all real-number variables in the forecasting mode of the ECMWF Integrated Forecast System (IFS) from 64 to 32 bits is investigated. Results for annual integrations and medium-range ensemble forecasts indicate no noticeable reduction in accuracy and an average gain in computational efficiency of approximately 40%. This study provides the motivation for more scale-selective reductions in numerical precision.
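As a toy illustration of the precision switch (independent of the IFS itself), casting a double-precision field to single precision halves its memory footprint while introducing relative rounding errors of order 1e-7, far below typical initial-condition and model uncertainties.

```python
# Toy illustration: memory and rounding-error impact of a 64-bit to 32-bit switch.
import numpy as np

field64 = np.random.default_rng(4).standard_normal((360, 180)) * 10.0 + 280.0
field32 = field64.astype(np.float32)
rel_err = np.abs(field32.astype(np.float64) - field64) / np.abs(field64)
print("memory (MB):", field64.nbytes / 1e6, "->", field32.nbytes / 1e6)
print("max relative rounding error:", rel_err.max())
```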