A new approach for improving the accuracy of data assimilation, by trading numerical precision for ensemble size, is introduced. Data assimilation is inherently uncertain because of the use of noisy observations and imperfect models. Thus, the larger rounding errors incurred by reducing precision may be within the tolerance of the system. Lower-precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, computational resources can be redistributed toward, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localization, lowering precision could actually permit an improvement in the accuracy of weather forecasts. Here, this idea is tested on an ensemble data assimilation system comprising the Lorenz ’96 toy atmospheric model and the ensemble square root filter. The system is run at double-, single-, and half-precision (the latter using an emulation tool), and the performance of each precision is measured through mean error statistics and rank histograms. The sensitivity of these results to the observation error and the length of the assimilation window is addressed. Then, by reinvesting the computational resources saved by reducing precision into the ensemble size, the assimilation error can be reduced for (hypothetically) no extra cost. This results in increased forecasting skill with respect to double-precision assimilation.
The skill of weather forecasts has improved dramatically over the past 30 years thanks to, among other things, improved model initialization through data assimilation. Modern, ensemble-based data assimilation techniques, such as the ensemble Kalman filter (EnKF) (Houtekamer and Zhang 2016), are able to estimate the flow dependence of covariances, in spite of nonlinearities in the model, so that observations and backgrounds can be more optimally combined. Further, sophisticated inflation techniques (Whitaker and Hamill 2012; Miyoshi 2011), efficient deterministic variations on the EnKF, such as the local ensemble transform Kalman filter (LETKF) (Hunt et al. 2007), and other developments have made the EnKF feasible for operational numerical weather prediction.
In the near future, increasingly accurate and widespread observational networks of satellites and surface instruments will allow a more accurate characterization of the atmospheric state (Bauer et al. 2015). If computing hardware keeps pace, the trends of increasing ensemble size, grid resolution, and model complexity will also continue to improve background estimates, leading to reduced initial error and further extensions of the forecast horizon (the lead time within which a forecast is still useful). However, the historical trend of increasing computing power cannot be relied on indefinitely. Moore’s Law, the prediction that transistor density doubles every two years, is beginning to break down, and high-performance computers are relying increasingly on parallelization, with commensurate increases in power consumption (Sutter 2005). This predicament prompts a reevaluation of current computing paradigms, and motivates this study into numerical precision.
When simulating geophysical systems, it is common practice to use “double-precision” variables, which use 64 bits of storage to accurately represent real numbers. Lowering precision will introduce larger rounding errors, which can contaminate the simulation. However, lower-precision arithmetic is computationally cheaper. If the simulation is tolerant to the rounding errors introduced, then by reducing precision we can free up computational resources that can be invested into other areas of the system. Palmer (2014) suggested applying such a trade-off within climate modeling, a field of considerable uncertainty because of simplified parameterizations of unresolved phenomena. If a reduction in precision can be achieved without an unacceptable deterioration of model performance, then computing resources liberated by this procedure could be reinvested toward increasing the grid resolution, thereby reducing the dependence on parameterizations.
The subject of numerical precision within climate and weather models is one of increasing publicity. Previous studies have considered both idealized models (like the one presented below) and intermediate complexity models (Düben et al. 2014; Düben and Palmer 2014). Through emulation of hypothetical inexact hardware, it was found that large portions of the code can be run at a precision significantly less than the standard, double-precision, without strongly impacting the quality of either forecasts or the climatology. The European Centre for Medium-Range Weather Forecasts (ECMWF) has also considered a reduction in precision in their forecasting model, the Integrated Forecast System (IFS) (Váňa et al. 2017), albeit without data assimilation and at a lower resolution than the operational setup. Reducing the precision of almost all floating-point variables throughout the model (physics and dynamics) from double to single precision reduced the runtime of IFS by around 40% without noticeably degrading probabilistic skill scores. Two ensemble forecasts of an equal computational cost were then performed: double-precision with 34 members and single-precision with 50 members. By reinvesting the cost savings from reducing precision into the ensemble size, skill scores could be improved.
Data assimilation is also an inherently uncertain process, because of the presence of model and observational error. The length of time between subsequent observations for the ensemble Kalman filter is an additional source of uncertainty. We might therefore expect rounding errors introduced by reducing precision to be within the bounds of uncertainty already present. If this is the case, then using reduced precision arithmetic could free up computing resources and permit an improvement in other aspects of the data assimilation system. Houtekamer et al. (2014) listed a number of parameters that can be increased in an operational ensemble Kalman filter to improve analysis quality with a commensurate increase in computational cost, including ensemble size, observation count, and model resolution. Miyoshi et al. (2014) demonstrated the benefits of increasing ensemble size, in particular, through the use of a 10 240 member ensemble Kalman filter. With such a large ensemble the localization length scale can be extended (or localization can be removed entirely) (Kondo and Miyoshi 2016), allowing more distant observations to affect the analysis at a grid point. Reducing precision could therefore improve the analysis quality, by providing the computational resources necessary to increase the ensemble size. This trade-off, illustrated in Fig. 1 as an error/cost diagram similar to that presented in Houtekamer et al. (2014), is the focus of our study.
Existing literature on numerical precision within the context of data assimilation is minimal. Miyoshi et al. (2016) compared the performance of a regional data assimilation system with double- and single-precision forecasting models. The single-precision simulation allowed a reduction in compute time of around 32%, with respect to the double-precision simulation, a figure produced by averaging the compute time reduction of the dynamics (45%) and the physics (21%).
In this paper, we assess the performance of an ensemble data assimilation system, consisting of the Lorenz ’96 system and the ensemble square root filter, as precision is reduced from double-precision through to single- and half-precision. We investigate the sensitivity of these results to both the level of observation error and the length of the assimilation window. Additionally, we consider a trade-off between numerical precision and ensemble size. By reallocating the computational savings delivered by half- and single-precision arithmetic into the ensemble size, we can reduce the analysis error for, hypothetically, no extra cost, with respect to double-precision assimilation. This in turn gives higher-quality weather forecasts. The speed-up from reducing precision is difficult to predict, so we present results for a variety of speed-up assumptions. The precision of the underlying code represents another trade-off between analysis quality and computational speed, similar to grid resolution or ensemble size. We hope that the results presented here will encourage operational weather forecasting centers to consider numerical precision more closely when designing operational data assimilation systems. Section 2 describes our experimental setup, section 3 contains the results, and section 4 gives some conclusions.
2. Data assimilation setup
Observing system simulation experiments (OSSEs) allow different assimilation algorithms to be compared and tested with complete freedom over the quantity and quality of observations. A “nature run” is first generated through a single integration of the best possible model available (the “truth model”), and observations of this nature run are then generated and fed into the assimilation system. The output from the assimilation system can then be verified against the (perfectly characterized) nature run. We employ such a system here in order to characterize the effects of a reduction of numerical precision from the double-precision standard.
a. The Lorenz ’96 system
We use the Lorenz ’96 system for both the truth and forecast models (Lorenz and Emanuel 1998; Lorenz 2006). This system provides a simple and efficient method to generate spatiotemporally chaotic time series analogous to those displayed by prognostic variables in the atmosphere. Its simplicity allows good statistics to be obtained in a short period of time, even on a desktop computer. For these reasons, it is commonly employed as a test bed in data assimilation studies (Whitaker and Hamill 2002; Ott et al. 2004; Nerger 2015).
For the truth model, we use the three-tier variant of the Lorenz ’96 system, introduced in Thornes et al. (2017). The tiers are labeled X, Y, and Z and are analogous to large-, medium- and small-scale dynamics, respectively, in the atmosphere. Each tier comprises a line of grid points with periodic boundary conditions. Each X grid point is “connected” to a number of Y grid points and each Y grid point is connected to a number of Z grid points. The forecast model used in the ensemble square root filter is a two-tier variant of the Lorenz ’96 system with a parameterization of the tendency of the missing Z tier. In this way, the difference between the forecasting and truth models is intended to mimic the difference between a real forecasting model and the atmosphere itself. The truth and forecast models are illustrated in Fig. 2.
The three-tier system is described by the following equations:
The indices i, j, and k range from 1 up to I, J, and K, respectively. The number of X variables is given by I, the number of Y variables by IJ, and the number of Z variables by IJK. In this case, I, J, and K are all equal to 8. There are therefore 584 grid points in total, as shown in Fig. 2a. The parameter F is a large-scale forcing term which determines the chaoticity of the model and is set to 20 in this case, placing the model firmly within the chaotic regime. The remaining parameters (h, c, b, e, d, and g_Z) allow the frequency and amplitude of oscillation of the three tiers, as well as the coupling between tiers, to be tuned. These parameters are tuned such that the X tier oscillates slowly with a large amplitude, while the two tiers below vary more quickly at a lower amplitude.
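Written out, the three tendencies take the following form. This is a sketch following the standard Lorenz ’96 coupling pattern, with X indexed by i, Y by (i, j), and Z by (i, j, k); the definitive coefficients and signs are those of Thornes et al. (2017):

```latex
\begin{align}
\frac{dX_i}{dt} &= X_{i-1}\left(X_{i+1} - X_{i-2}\right) - X_i + F
                 - \frac{hc}{b} \sum_{j=1}^{J} Y_{i,j}, \\
\frac{dY_{i,j}}{dt} &= -cb\, Y_{i,j+1}\left(Y_{i,j+2} - Y_{i,j-1}\right) - c\, Y_{i,j}
                 + \frac{hc}{b} X_i - \frac{he}{d} \sum_{k=1}^{K} Z_{i,j,k}, \\
\frac{dZ_{i,j,k}}{dt} &= ed\, Z_{i,j,k-1}\left(Z_{i,j,k+1} - Z_{i,j,k-2}\right)
                 - g_Z e\, Z_{i,j,k} + \frac{he}{d} Y_{i,j}.
\end{align}
```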
If Eqs. (1), (2), and (3) are defined as the “truth” then a complementary simplified forecast model can be constructed. We use a two-tier variant of the Lorenz ’96 system similar to Eqs. (1) and (2), only the Z tendency is replaced with a parameterization term, U.
The term U is tuned to match the average tendency of the missing Z tier, for which we use “additive noise,” given by

U = U_det + e,

where U_det is a deterministic polynomial function of the resolved variable and e is a random number generated from an autoregressive first-order stochastic process. For U_det we use a fourth-order polynomial:

U_det(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4,

where x denotes the resolved variable.
The first-order stochastic process is generated by

e_k = ϕ e_{k−1} + z_k,

where ϕ is the lag-1 autocorrelation (characterizing the persistence of the random process), e_{k−1} is the previous value of the process, and z_k is a random number normally distributed with zero mean and variance σ_z^2 (characterizing the amplitude of the noise). The parameters ϕ and σ_z are tuned by matching the average behavior of the two-tier system to the three-tier system, in a similar process to that described in Arnold et al. (2013). The values of the parameters used are shown in Table 1.
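As a minimal sketch, the stochastic part of the parameterization can be generated as follows; the parameter values here are illustrative, not those of Table 1:

```python
import numpy as np

def ar1_step(e_prev, phi, sigma_z, rng):
    """Advance the first-order autoregressive process one step:
    e_k = phi * e_{k-1} + z_k, with z_k ~ N(0, sigma_z**2)."""
    return phi * e_prev + rng.normal(0.0, sigma_z)

# Generate a sample trajectory with illustrative parameters.
rng = np.random.default_rng(42)
phi, sigma_z = 0.5, 0.1
e = np.empty(100_000)
e[0] = 0.0
for k in range(1, e.size):
    e[k] = ar1_step(e[k - 1], phi, sigma_z, rng)

# The stationary standard deviation of an AR(1) process is
# sigma_z / sqrt(1 - phi**2), i.e. about 0.115 here.
print(e.std())
```

Tuning ϕ and σ_z, as described above, amounts to choosing these two numbers so that the statistics of the parameterized tendency match those of the missing tier.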
Both the truth and forecast models are integrated with a fourth-order Runge–Kutta scheme of time step 0.005 nondimensional model time units (MTUs).
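A fourth-order Runge–Kutta step can be sketched as follows; for brevity the classic single-tier Lorenz ’96 tendency is used here (the two- and three-tier tendencies slot into `rk4_step` in the same way):

```python
import numpy as np

def lorenz96_tendency(x, F=20.0):
    """Single-tier Lorenz '96 tendency with periodic boundaries:
    dx_i/dt = x_{i-1} (x_{i+1} - x_{i-2}) - x_i + F."""
    return np.roll(x, 1) * (np.roll(x, -1) - np.roll(x, 2)) - x + F

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step of dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Integrate 8 variables for 100 steps of 0.005 MTUs.
x = 20.0 * np.ones(8) + 0.01 * np.arange(8)  # perturbed rest state
for _ in range(100):
    x = rk4_step(lorenz96_tendency, x, 0.005)
print(np.all(np.isfinite(x)))  # True: the trajectory remains bounded
```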
b. The serial ensemble square root filter
To perform data assimilation, we use the serial ensemble square root filter (Serial-EnSRF) [see Whitaker and Hamill (2002) and Tippett et al. (2003) for a detailed description of the update equations]. The Serial-EnSRF updates the ensemble mean and the ensemble perturbations from the mean separately, in analogy with the original Kalman filter equations, in which the expectation value and the covariance matrix are updated separately. As in atmospheric data assimilation, the state is only partially observed: only observations of the Y variables are assimilated. This leaves the X tier for verification.
The observation error covariance matrix is defined to be diagonal, and so observation errors for differing Y grid points are uncorrelated. Therefore, observations of each Y grid point can be processed one at a time. After the first observation is assimilated, the resulting analysis ensemble is treated as the background for the assimilation of the next observation. This process is repeated until all observations have been processed. The update equations simplify greatly when processing scalar observations, and the need for matrix inversions and square roots is eliminated (Gelb 1974; Anderson 2003).
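A sketch of the scalar update for a single observation, following the square root form of Whitaker and Hamill (2002); variable names are ours:

```python
import numpy as np

def ensrf_scalar_update(ens, h, y, r):
    """Assimilate one scalar observation with the serial EnSRF.

    ens : (N, n) array of N ensemble states of dimension n
    h   : (n,) observation operator for this observation, so Hx = h @ x
    y   : the scalar observation
    r   : the scalar observation error variance
    """
    N = ens.shape[0]
    xb = ens.mean(axis=0)
    Xb = ens - xb                                  # perturbations
    hx = Xb @ h                                    # (N,) observed perturbations
    hph = hx @ hx / (N - 1)                        # scalar H P H^T
    pht = Xb.T @ hx / (N - 1)                      # (n,) vector P H^T
    K = pht / (hph + r)                            # Kalman gain
    alpha = 1.0 / (1.0 + np.sqrt(r / (hph + r)))   # square root factor
    xa = xb + K * (y - h @ xb)                     # mean update
    Xa = Xb - alpha * np.outer(hx, K)              # perturbation update
    return xa + Xa

# One-variable example: background variance 2, observation variance 1.
ens = np.array([[-1.0], [1.0]])
ana = ensrf_scalar_update(ens, np.array([1.0]), 0.0, 1.0)
print(np.var(ana, ddof=1))  # 2/3, the exact Kalman analysis variance
```

Processing the whole observation vector is then a loop over such scalar updates, with each analysis ensemble becoming the background for the next observation.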
1) Covariance preprocessing
At the analysis step of the Serial-EnSRF, the background covariance matrix P_b is sampled from the background ensemble. The ensemble size used is typically suboptimal, and so this sample estimate suffers from sampling errors. To counter these errors, we use two covariance preprocessing techniques. The first technique is multiplicative covariance inflation, actually applied to the “scaled square root” X'_b of the background covariance matrix, where P_b = X'_b (X'_b)^T, X'_b holds the ensemble perturbations divided by sqrt(N − 1), and N is the number of ensemble members, before the assimilation of observations:

X'_b ← r X'_b,

where r is a number slightly larger than 1 (e.g., 1.01). Inflation is used to counter the “inbreeding effect” in ensemble data assimilation that is present when the same ensemble is used to estimate the Kalman gain and the analysis ensemble (Houtekamer and Mitchell 1998; Anderson and Anderson 1999; Hamill et al. 2001).
The second technique is covariance localization, which is used in order to filter out spurious long-range correlations in P_b that tend to contaminate the analysis (Hamill et al. 2001; Houtekamer and Mitchell 2001). The spurious correlations are a result of sampling error from an inadequate ensemble size, and so the required localization reduces as the ensemble size increases. Before observations are assimilated, a “localization vector” ρ is applied to the Kalman gain K through an element-wise multiplication (a Schur product):

K ← ρ ∘ K.
The localization vector has a value of 1 at the grid point where the observation currently being assimilated is located. Moving away from this grid point, the localization vector reduces in magnitude and reaches 0 after a finite distance, known as the “localization scale.” We use the “Gaspari–Cohn” function to achieve this, which is a polynomial approximation to a Gaussian with compact support [Gaspari and Cohn (1999), their Eq. (4.19)]. This localization technique is often referred to as “B localization,” where “B” is another notation for the background covariance matrix (Greybush et al. 2011). Note that B localization is usually applied to the full covariance matrix instead of to the Kalman gain. In the case of sequential observation processing, the two techniques are equivalent.
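The Gaspari–Cohn taper can be sketched as follows: a direct transcription of the compactly supported, fifth-order piecewise rational function, where c is the half-width, so the taper reaches zero at distance 2c:

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari and Cohn (1999) compactly supported correlation function."""
    r = np.abs(np.asarray(dist, dtype=float)) / c
    out = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri = r[inner]
    out[inner] = (-0.25 * ri**5 + 0.5 * ri**4 + 0.625 * ri**3
                  - 5.0 / 3.0 * ri**2 + 1.0)
    ro = r[outer]
    out[outer] = (ro**5 / 12.0 - 0.5 * ro**4 + 0.625 * ro**3
                  + 5.0 / 3.0 * ro**2 - 5.0 * ro + 4.0
                  - 2.0 / (3.0 * ro))
    return out

# Taper values at distances 0, c, and 2c (half-width c = 2 grid points).
vals = gaspari_cohn(np.array([0.0, 2.0, 4.0]), 2.0)
print(vals)  # [1.0, ~0.208, 0.0]
```

Evaluating this taper at every grid point, centered on the observation location, gives the localization vector that multiplies the Kalman gain element-wise.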
Localization is applied not only to the covariances between Y grid points, but also to the covariances between Y grid points and X grid points. The localization between Y grid points depends simply on the distance (the number of Y grid points) between them. When localizing between a Y grid point and an X grid point, first the X grid point associated with the Y grid point is determined. Then the localization is computed based on the distance separating the two X grid points. The localization scale is assumed to be the same function of ensemble size in both cases, and so both are tuned simultaneously through a single parameter l, which sets the Y–Y and X–Y localization scales. Figure 3 shows several examples of the localization vector when assimilating the first Y variable.
Both the optimal inflation and the optimal localization are assumed to be functions of ensemble size, assimilation frequency, and observation error, and so the inflation and localization are retuned with every change of one of these parameters. An example of the tuning process is illustrated in Fig. 4. The gray region denotes parameter combinations that resulted in filter divergence. In this example, the optimal choice of inflation was 1.13 and the optimal localization length scale was 2.0, as the minimum error is located at this point, indicated by a star. On all occasions, we found that the optimal parameters for double-, single-, and half-precision assimilation were roughly the same. Therefore, we only tune the system at double-precision, and use the same parameters for single- and half-precision assimilation.
2) Choice of assimilation parameters
To make a fair comparison between full- and reduced-precision results, it is important to choose a reference scenario that accurately represents an operational setup. For the operational EnKF at the Canadian Meteorological Centre, for example, assimilation is typically performed every 6 h (Houtekamer et al. 2014). Equivalently, we assimilate every 0.05 MTUs. This time scale was determined by computing the error-doubling time of the large-scale (X) tier, and comparing with the error-doubling time of the real atmosphere [approximately 1.5 days (Lorenz 2006)]. It should be noted, however, that this time scale is only approximate, and the true value could be anywhere between 0.025 and 0.075 MTUs. We use 40 ensemble members for our reference scenario and double-precision for all calculations. Additionally, observation errors for the Y variables are drawn from a zero-mean normal distribution with a standard deviation σ of 0.1, which is approximately 30% of the natural root-mean-square (RMS) variability of these variables. Figure 5 shows an excerpt of the evolution of the first Y variable from a nature run and the observations derived from this run. Limited results will be given for two other levels of observation error, which are approximately 1% and 5% of the natural RMS variability.
c. Reducing precision
We consider three levels of numerical precision in our investigation: double (64 bits), single (32 bits), and half (16 bits). These are described in the appendix. Lacking the hardware to implement true half-precision variables, we emulate them using the reduced-precision emulator described in Dawson and Düben (2017). Half-precision values are stored in a double-precision container, but significand bits beyond the 10th bit are zeroed manually after every operation. In some calculations, an IEEE 754 compliant rounding is performed on the resulting number to avoid biases. Furthermore, the 5-bit half-precision exponent is implicitly emulated through range checking on the results of floating-point operations: variables are forced to overflow or underflow if they are too large or small, respectively, to be stored in a half-precision floating-point variable. We also perform some limited experiments with a fixed double-precision exponent size and flexible significand width, using the same emulator.
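The significand-zeroing part of this emulation can be sketched with bit masking on the double-precision container. This sketch performs truncation only; the actual emulator additionally applies IEEE 754 round-to-nearest in some calculations and the exponent range checks described above:

```python
import numpy as np

def truncate_significand(x, bits):
    """Zero all significand bits of a float64 beyond the given count.

    The 11-bit double-precision exponent is kept, mimicking the
    emulator's significand reduction without rounding or range checks.
    """
    u = np.asarray(x, dtype=np.float64).view(np.uint64)
    mask = ~np.uint64((1 << (52 - bits)) - 1)  # keep top `bits` fraction bits
    return (u & mask).view(np.float64)

# With a 10-bit significand (as in half-precision), the spacing of
# representable numbers at 8.0 is 8 * 2**-10 = 0.0078125, so adding
# 0.001 has no effect.
print(truncate_significand(8.0 + 0.001, 10))  # 8.0
print(truncate_significand(1.5, 10))          # 1.5 (exactly representable)
```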
In the experiments that follow, all calculations in the main loop (the forecast/update cycle) are performed at the specified precision. The nature run is always generated using double-precision. Thornes et al. (2017) found error metrics to be particularly sensitive to the numerical precision of the X tier in the same Lorenz ’96 system used here, reasoning that large-scale dynamics are more important than small-scale dynamics and therefore require more precise computations. Here, we do not consider such a scale-selective precision, and both tiers of the forecast model are computed at the same precision.
3. Results

a. Reduced precision analyses
Figure 6 shows the analysis ensemble mean root-mean-square error (RMSE) averaged over 5000 analyses of a data assimilation cycle for three different ensembles—40, 80, and 160 members—as a function of the number of significand bits. Assimilation is performed every 0.05 MTUs (every 10 time steps), so the total length of the simulations is 250 MTUs, or around 4 atmospheric years. The ensemble mean RMSE at time k is given by
RMSE_k = sqrt[ (1/I) Σ_{i=1}^{I} (x̄_{i,k} − x^t_{i,k})^2 ],

where I is the number of X variables (8 in this case) and x̄_{i,k} and x^t_{i,k} are the ith X variables of the analysis ensemble mean and the truth at analysis time k, respectively. The RMSE is proportional to the distance between the analysis ensemble mean and the truth, and therefore gives the simplest measure of the performance of an assimilation system. No half-precision range checking is performed in this case, so the exponent is still represented by 11 bits. The significand width can be reduced to around 8 bits with no discernible increase in the ensemble mean RMSE. Beyond this point, the error explodes and the system becomes unstable. This indicates that half-precision assimilation may be possible, as the half-precision significand width is 10 bits. To verify this, it is necessary to also emulate a 5-bit exponent, by using half-precision range checking.
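This error metric can be computed as, for example:

```python
import numpy as np

def ensemble_mean_rmse(analysis_ens, truth):
    """RMSE between the analysis ensemble mean and the truth.

    analysis_ens : (N, I) array of N members over I verification variables
    truth        : (I,) true state at the same analysis time
    """
    mean = analysis_ens.mean(axis=0)
    return np.sqrt(np.mean((mean - truth) ** 2))

# Two members whose mean is [1, 1]; a truth of [0, 0] gives RMSE 1.
ens = np.array([[0.5, 1.5],
                [1.5, 0.5]])
print(ensemble_mean_rmse(ens, np.zeros(2)))  # 1.0
```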
Figure 7 shows the analysis ensemble mean RMSE with respect to that of the double-precision, 40-member reference, “D40,” as a function of precision for three ensemble sizes and three levels of observation error. Values are stated as a fraction of the ensemble mean RMSE of the D40 reference, for each level of observation error. The values shown were obtained by averaging over 10 experiments of 250 MTUs (5000 analyses) each, and the error bars were computed from the standard deviation of the same 10 experiments. When the observation error is 30% of the natural RMS variability, there is no noticeable degradation in the quality of the analyses as precision is reduced from double- to half-precision, for any of the three ensemble sizes considered. When lowering the observation error to 5% of the natural RMS variability, the gains from increasing ensemble size diminish. Additionally, there is a slight, but significant, degradation in the analyses when using half-precision, which increases further as the observation error is lowered to 1% of the natural RMS variability. Our results are therefore sensitive to the observation error chosen. We consider observation errors of 5% and 1% to be unrealistically low, and only consider observation errors of 30% from now on, as is done in other studies (Ott et al. 2004; Anderson et al. 2009). This may seem high compared with the error of a typical atmospheric instrument, but our study does not consider representativity error or errors in the observation operator. These two sources of error are far from negligible in atmospheric data assimilation (Janjić and Cohn 2006).
An assimilation system should be judged not just by mean-error statistics, but also by the reliability of the ensembles that it produces. Rank histograms allow us to measure this (Hamill et al. 2001). At each analysis time, the values of a given X variable reported by each analysis ensemble member are used to define bin limits. Thus, for an ensemble of 40 members there will be 41 bins. The truth at that time is then placed in a bin according to its value. This is repeated, in this case, for 5000 analyses (as before), and for all X variables, to produce a histogram. If this histogram is flat then the ensemble has an ideal spread—the truth is, on average, a plausible member of the ensemble. If the histogram slopes up at the edges, then the analyses are underdispersive, and if it slopes down at the edges then it is overdispersive. Figure 8 shows rank histograms of analyses generated from the same setups considered in Fig. 7 with an observation error of 30% the natural RMS variability. For the 80- and 160-member ensembles, 40 members are randomly sampled in order to determine the bin limits. Figure 8 indicates some underdispersiveness for all setups, with the degree of underdispersiveness decreasing as the ensemble size is increased, but crucially there is no apparent increase or decrease in dispersiveness as the numerical precision is altered.
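The rank computation underlying these histograms can be sketched as follows (one rank per verification variable per analysis time):

```python
import numpy as np

def rank_of_truth(members, truth):
    """Return the bin index of the truth among the sorted member values.

    With N members there are N + 1 bins: rank 0 means the truth lies
    below every member, rank N means it lies above every member.
    """
    return int(np.searchsorted(np.sort(members), truth))

# A truth inside the ensemble range falls in an interior bin...
print(rank_of_truth(np.array([0.2, 0.8, 0.5]), 0.6))  # 2
# ...while a truth below all members falls in bin 0; if that happens
# too often, the histogram slopes up at the edge (underdispersion).
print(rank_of_truth(np.array([0.2, 0.8, 0.5]), 0.1))  # 0
```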
b. Sensitivity to the length of the assimilation window
The results presented so far indicate that half-precision data assimilation is feasible, at least for a toy model. However, if the level of uncertainty in the system is reduced, the errors introduced by half-precision arithmetic become more noticeable. One way to reduce uncertainty is to shorten the assimilation window or, in other words, to perform assimilation more frequently. The model is then corrected more often, and the average analysis error is therefore reduced. Figure 9 shows the analysis ensemble mean RMSE when the assimilation window is made 10 times shorter. Clearly, the analyses generated using half-precision arithmetic are substantially worse than those generated using single- or double-precision. In fact, even with 160 ensemble members, half-precision is worse than single-precision with only 40 members.
As previous studies have shown, some operations are affected more than others when precision is reduced (Düben et al. 2017; Dawson et al. 2017). Therefore, when a degradation occurs, it is worthwhile to look closer and try to find the most sensitive operations. For example, a precision reduction can have a large impact on additions involving numbers of considerably different magnitude. Consider the operation a + b, where a and b are half-precision floating-point numbers with values 8 and 0.001, respectively. The next representable floating-point number above 8 is 8.007 812 5, which is further from the true value of the sum, 8.001, than 8 itself. The increment b will therefore be rounded to zero, and the addition will have no effect.
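This rounding can be reproduced directly with IEEE half-precision arithmetic, for example via NumPy:

```python
import numpy as np

a = np.float16(8.0)
b = np.float16(0.001)  # representable: roughly 0.001 in half-precision
c = a + b              # a half-precision addition

# The spacing between representable half-precision numbers at 8.0 is
# 2**-7 = 0.0078125, so the sum 8.001 rounds back down to 8.0.
print(c == np.float16(8.0))                 # True: b was lost entirely
print(float(np.spacing(np.float16(8.0))))   # 0.0078125
```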
Düben et al. (2017) identified a similar problem within the time-stepping scheme of a cloud-resolving model. Consider the Euler time-stepping scheme, given by

ψ_{t+Δt} = ψ_t + Δt F(ψ_t),

where ψ_t is a field at time t, Δt is the time step, F(ψ_t) is the tendency, and all variables are at half-precision. If the increment Δt F(ψ_t) is smaller than the floating-point spacing at ψ_t then it will be rounded to zero, and ψ_{t+Δt} will be equal to ψ_t. The field will therefore not advance in time, and significant errors will accrue.
One candidate for a precision-sensitive calculation in this particular setup is the Kalman filter update equation for the ensemble mean,

x̄_a = x̄_b + K(y − Hx̄_b),

where x̄_a and x̄_b refer to the analysis and background ensemble mean state vectors, respectively; y is a scalar observation; H is the observation operator; and K is the Kalman gain. In a low-error assimilation regime (e.g., when assimilation is performed frequently) the analysis increments will be very small. If an increment is too small, it will be rounded to zero, and that observation will therefore have no effect. Figure 10 illustrates this. Over 100 forecast/assimilation cycles, the analysis increments [the individual elements of K(y − Hx̄_b)] were divided based on whether they were resolved or unresolved (rounded to zero), and then plotted on a histogram (which is normalized by the total number of increments considered). Clearly a significant proportion of increments are rounded to zero. Note that some analysis increments were exactly zero already, because of the localization scheme, and are not shown on the histogram.
If the ensemble means x̄_a and x̄_b are kept at higher precision, however, the addition of the analysis increment to the ensemble mean can be performed with no rounding error. When doing this, the analysis increments are all resolved, as shown in Fig. 11. Only one variable, representing the background and analysis ensemble means, must be changed from half-precision to double-precision. Düben et al. (2017) found a similar solution for the cloud-resolving model: the contribution of advection to the tendency could be computed in half-precision, as long as the field ψ itself was kept at a higher precision.
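The fix can be illustrated in miniature: compute the increment at reduced precision but accumulate it into a higher-precision container (here `np.float16` stands in for the emulated half-precision type):

```python
import numpy as np

increment = np.float16(0.001)   # a small analysis increment
mean_half = np.float16(8.0)     # half-precision ensemble mean
mean_double = np.float64(8.0)   # double-precision container for the mean

# Accumulating in half-precision loses the increment entirely...
print(mean_half + increment == np.float16(8.0))   # True: no effect
# ...while the double-precision container resolves it.
print(mean_double + np.float64(increment) > 8.0)  # True: resolved
```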
Figure 12 gives the analysis error for frequent assimilation with the correction just mentioned. The error at half-precision is considerably lower than in Fig. 9, though still slightly higher than at double- and single-precision. It may be that, in this low-error assimilation regime, sensitivities of the forecast model to numerical precision become more noticeable, and that the “scale selective” approach advocated by Thornes et al. (2017), which would keep the X variables at a higher precision, is warranted. In any case, the frequent assimilation case corresponds to assimilation approximately every 30 min, which is far more frequent than the global assimilation performed in operational centers. The results for frequent assimilation therefore do not undermine those presented in section 3a, as the scenario presented there is more realistic.
c. Trading numerical precision for ensemble size
The scaling of computational cost with numerical precision is difficult to predict. The speed-up obtained by moving from double- to single-precision, for example, will depend on the degree of vectorization and the arithmetic intensity of the model (the number of floating-point operations for every byte transferred from memory). To acknowledge the uncertainty in the cost estimate when reducing precision, we compare reduced precision data assimilation setups assuming a range of cost reductions. For single-precision, we assume that the speed-up will lie somewhere between 0% and 50%, and for half-precision, we consider speed-up factors between 50% and 75%.
Figure 13a shows the average analysis error, with respect to a double-precision, 40-member reference experiment (“D40”), for a variety of assumptions about how the computational cost falls when precision is reduced. For each assumption, the ensemble size is increased in proportion to the reduction in computational cost, so that each reduced-precision setup has the same computational cost as the reference, D40. For all ensemble sizes, we again use an observation error standard deviation of 0.1 and perform assimilation every 0.05 MTUs. Figure 13b shows the forecast RMSE at day 3 of single-member forecasts initialized from the same assimilation setups as in Fig. 13a. These forecasts were started from the ensemble mean of each analysis in an analysis cycle of 1000 MTUs (around 200 000 forecasts in total), excluding an assimilation spinup period of 10 MTUs. Note that the forecast error reaches its minimum at around 140 ensemble members; the slight increase in forecast error at 160 members is simply a fluctuation about this saturation level.
Figure 13 allows one to estimate the gains from reinvesting precision into the ensemble size, for a given speed-up that is measured for the model on a given hardware architecture. For example, if we assume that single-precision computations would allow a reduction in wall-clock time of 20%, then we would be able to afford to use an additional 10 members, and the analysis error would fall by around 20% as a result. This would deliver a 5% reduction in day-3 RMSE of deterministic forecasts. Note that Fig. 13 presumes a linear scaling in the cost of the data assimilation system as ensemble size is increased. This is true for the ensemble forecast, as each member is independent, and it is also true for the EnSRF algorithm, as computational complexity arguments predict (Tippett et al. 2003). The cost of the full data assimilation system therefore also scales linearly with the number of ensemble members, and we measured the speed of our configuration to confirm that this is the case.
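The reinvestment arithmetic is simple: with a linear cost scaling in ensemble size, a fractional speed-up s lets one afford N/(1 − s) members for the cost of an N-member double-precision system. A sketch:

```python
def affordable_members(n_ref, speedup):
    """Ensemble size affordable at the cost of an n_ref-member
    double-precision system, given a fractional speed-up from reduced
    precision and a linear cost scaling in ensemble size."""
    return round(n_ref / (1.0 - speedup))

# The trade-offs discussed in the text, relative to the 40-member
# double-precision reference D40:
print(affordable_members(40, 0.20))  # 50  (single-precision, 20% faster)
print(affordable_members(40, 0.50))  # 80  (single-precision, 50% faster)
print(affordable_members(40, 0.75))  # 160 (half-precision, 75% faster)
```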
What degree of cost reduction should we expect for a real atmospheric data assimilation system? It can be argued that weather and climate models, being, in general, memory-bound, will display a linear scaling of compute time with the width of the floating-point number. This would give roughly a 50% reduction in wall-clock time when reducing precision from double to single, and another 50% reduction when moving from single- to half-precision. Experiments with a single-precision version of IFS at ECMWF found a reduction in execution time of 40%, which is close to a 50% cost saving (Váňa et al. 2017). Furthermore, recent developments in the graphics processing unit market suggest that another reduction of 50% in compute time from single- to half-precision can be achieved (NVIDIA 2017). However, to the best of our knowledge, no fluid dynamical model has been tested on this hardware, so naturally any results that follow from the linear scaling assumption should be considered hypothetical. Although some groups experimenting with single-precision data assimilation report a computation time reduction of as little as 10%–20% (J. Anderson 2017, personal communication), our results show that even a very small speed-up would be beneficial, if savings are reinvested. Also, note that, in our case, we do not realize any computational savings at half-precision, because half-precision arithmetic is emulated in software. A final caveat to this discussion of cost savings is that data input and output are not considered, and these can be a significant expense in numerical weather prediction.
If a 50% reduction in wall-clock time could be realized by using single-precision instead of double-precision (in other words, if computations can be made 2 times faster), the analysis and forecast errors would fall by around 30% and 25%, respectively, if the savings were reinvested into the ensemble size. Furthermore, if half-precision computations could deliver a 75% reduction in wall-clock time, with respect to double-precision, the analysis and forecast errors could be reduced by 55% and 40%, respectively, through another ensemble size reinvestment. Though we cannot predict the exact reduced-precision speed-up of an operational data assimilation system, we feel that these results should encourage operational centers to consider this trade-off seriously.
Using a toy model of atmospheric dynamics we have carried out an initial investigation of the effects of reducing numerical precision from double-precision to half-precision on the performance of an ensemble data assimilation routine. We found that the analysis error was largely insensitive to a significant reduction in the width of the floating-point significand, with a tolerance to as few as 8 bits. We then considered three IEEE 754 types: half, single, and double. The half-precision floating-point variables were emulated. We again found no increase in the analysis error as the precision was reduced from double to half, for a regime designed to approximate an operational data assimilation scenario (e.g., using an observation error of 30% the natural RMS variability of the prognostic variables). These results were sensitive to the level of observation error: when reducing the noise in the observations, the analysis error increased for half-precision assimilation.
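The significand-width experiments rely on software emulation of reduced precision: after each operation, the double-precision result is rounded to a shorter significand. The core rounding step can be sketched as follows (a minimal illustration of the principle only, not the actual emulator used in this study):

```python
import struct

def round_significand(x, n_bits):
    """Round a double to n_bits significand bits by manipulating its
    IEEE 754 bit pattern: add half a unit in the last kept place, then
    truncate the discarded bits (round-to-nearest; a carry into the
    exponent field propagates correctly by construction)."""
    bits, = struct.unpack("<Q", struct.pack("<d", x))
    shift = 52 - n_bits              # significand bits to discard
    bits += 1 << (shift - 1)         # add half an ulp at the kept width
    bits &= ~((1 << shift) - 1)      # zero out the discarded bits
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

# With an 8-bit significand, 1 + 2**-12 is indistinguishable from 1,
# but 1 + 2**-8 is still representable:
assert round_significand(1.0 + 2**-12, 8) == 1.0
assert round_significand(1.0 + 2**-8, 8) == 1.0 + 2**-8
```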
When the length of the assimilation window was reduced, half-precision assimilation gave a higher analysis error than single- and double-precision. We conjectured that this was because half-precision variables were unable to resolve the small analysis increments that result from such a low-error assimilation regime. However, the positive results from half-precision observed in the previous experiment can be partly regained by keeping just one key operation in double-precision: the Kalman filter update equation.
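The conjecture about unresolved increments is easy to demonstrate. Near a state value of 2, the spacing of half-precision floats is 2^-9 ≈ 0.002, so any increment smaller than about half that spacing is rounded away entirely (the magnitudes below are illustrative, not taken from the experiments):

```python
import struct

def to_half(x):
    """Round a Python float to IEEE 754 half precision and back,
    using the struct module's native half-precision format."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

state, increment = 2.0, 5e-4   # illustrative magnitudes
# Applying the update in half precision loses the increment entirely...
assert to_half(state + increment) == state
# ...whereas a double-precision update retains it:
assert state + increment > state
```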
Finally, we considered a trade-off between numerical precision and ensemble size. By reducing precision and reinvesting the resulting cost saving into more ensemble members, the analysis and forecast errors can be reduced. The exact error reduction depends on one’s assumption about how much the cost will fall when reducing precision. For example, if single-precision computations are 20% faster, the analysis error is reduced by 20%, but if they are 50% faster, the error falls by 30%. Additionally, if half-precision computations are 75% faster, then the analysis error can be reduced by 55%. It remains to be seen, however, whether the cost of a real atmospheric model will scale so favorably when reducing to half-precision.
These results demonstrate that forecast models are subject to different constraints when used for data assimilation than when used for, say, medium-range forecasting. Thornes et al. (2017) reported “deleterious results” when computing the evolution of the X variables in the two-tier Lorenz ’96 system at half-precision, meaning the X variables had to be promoted to single-precision (even though the Y variables could be kept at half-precision). We experienced no such degradation, finding half-precision sufficient for both scales of the Lorenz ’96 system. This suggests that a higher level of model error can be tolerated within an assimilation context, perhaps because it is partially masked by other inherent error sources, such as observation error.
The cost of the serial EnSRF update algorithm scales linearly with the ensemble size (Tippett et al. 2003). Therefore, the precision–ensemble size trade-off described in section 3c can deliver a significant increase in the ensemble size. However, not all algorithms scale so favorably. For example, the scaling of the cost of the local ensemble transform Kalman filter with ensemble size is between quadratic and cubic (Hunt et al. 2007), and therefore a reduction in precision may not deliver gains as significant as those for the EnSRF. Nonetheless, if the algorithm can be made faster through a precision reduction, without impacting the quality of analyses, this would be beneficial for operational data assimilation, where the analyses must be computed within an allotted time.
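The effect of the scaling exponent on reinvestment can be sketched under the same fixed-budget assumption (treating the cubic exponent as an upper bound for the LETKF):

```python
def reinvested_members(n_ref, speedup, p):
    """Ensemble size affordable after a `speedup`-times cost reduction
    when the update cost scales as (ensemble size)**p."""
    return n_ref * speedup ** (1.0 / p)

# A 2x speedup doubles a linearly scaling EnSRF ensemble (p = 1),
# but grows a cubically scaling LETKF ensemble (p = 3) by only ~26%:
assert reinvested_members(40, 2.0, 1) == 80.0
assert round(reinvested_members(40, 2.0, 3)) == 50
```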
The results presented here suggest that precision in data storage should also be reconsidered and adjusted to the minimal level that can be justified by the level of model error. If a model can run at half-precision, then the restart files could also be reduced to half-precision, halving the file size with respect to single-precision output.
In this paper, we have demonstrated that reducing precision could provide the computational resources necessary to boost the ensemble size, and that such a reinvestment procedure would improve weather forecast skill. In the future, we plan to extend this study to a more complicated data assimilation scenario, with a general circulation model (GCM) instead of the Lorenz ’96 system. We hope the results presented here will encourage operational forecasting centers to consider more closely the potential benefits of reducing precision and to make more variable-precision hardware available to modelers.
Many thanks are owed to Andrew Dawson, for his help in using the reduced precision emulator. The authors also thank three anonymous reviewers and Jeffrey Anderson for their very helpful reviews. Sam Hatfield is funded by NERC Grant NE/L002612/1. Peter Düben gratefully acknowledges the funding from the ESIWACE project under Grant 675191. Peter Düben, Tim Palmer, and Aneesh Subramanian received funding from an ERC grant (Towards the Prototype Probabilistic Earth-System Model for Climate Prediction, Project Reference 291406). The code used in this study is available online (Hatfield 2017).
The IEEE 754 Standard for Floating-Point Numbers
Under the IEEE 754 standard, a floating-point number $x$ has three components: a sign, a significand, and an exponent, related according to
$$x = (-1)^{s}\left(1 + \sum_{i=1}^{N_{\mathrm{sig}}} m_i 2^{-i}\right) \times 2^{e}. \quad \text{(A1)}$$
The sign is determined by a single bit $s$ for all precision levels. The significand is the second term in the product in Eq. (A1), determined by $N_{\mathrm{sig}}$ bits $\mathbf{m}$ with elements $m_i$. The exponent is determined by $N_{\mathrm{exp}}$ bits $\mathbf{e}$ with elements $e_i$ and is given by
$$e = \sum_{i=1}^{N_{\mathrm{exp}}} e_i 2^{N_{\mathrm{exp}}-i} - \left(2^{N_{\mathrm{exp}}-1} - 1\right). \quad \text{(A2)}$$
Double-, single-, and half-precision variables have $(N_{\mathrm{sig}}, N_{\mathrm{exp}}) = (52, 11)$, $(23, 8)$, and $(10, 5)$, respectively.
The precision of a “float” depends on both $N_{\mathrm{sig}}$ and $e$. For example, for a fixed exponent $e$, the distance between two consecutive floats is $2^{e-N_{\mathrm{sig}}}$. The exponent also determines the range of representable values, outside of which a float will overflow or underflow. The properties of half-, single-, and double-precision floats are summarized in Table A1. Note that values below the given ranges are possible, but these so-called subnormal values are handled differently.
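As a concrete check of Eq. (A1) and the half-precision parameters, the bit fields of a float can be decoded by hand (Python's struct module supports the half-precision format directly; subnormals are ignored in this sketch):

```python
import struct

def decode(bit_pattern, n_sig, n_exp):
    """Reconstruct a normal float's value from its IEEE 754 bit fields:
    sign, biased exponent, and significand, per Eqs. (A1) and (A2)."""
    sig = bit_pattern & ((1 << n_sig) - 1)           # low n_sig bits
    exp = (bit_pattern >> n_sig) & ((1 << n_exp) - 1)
    sign = bit_pattern >> (n_sig + n_exp)            # top bit
    bias = 2 ** (n_exp - 1) - 1
    return (-1) ** sign * (1 + sig / 2 ** n_sig) * 2.0 ** (exp - bias)

# Round-trip 1.5 through half precision (N_sig = 10, N_exp = 5):
bits, = struct.unpack("<H", struct.pack("<e", 1.5))
assert decode(bits, 10, 5) == 1.5
# Consecutive half-precision floats near 1 differ by 2**(0 - 10):
assert decode(0x3C01, 10, 5) - decode(0x3C00, 10, 5) == 2 ** -10
```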