Two lines of research are combined in this study: first, the development of tools for the temporal disaggregation of precipitation, and second, some newer results on the exponential scaling of heavy short-term precipitation with temperature, roughly following the Clausius–Clapeyron (CC) relation. The traditional disaggregation schemes, having no explicit temperature dependence, are shown to lack the crucial CC-type scaling. The authors introduce a proof-of-concept adjustment of an existing disaggregation tool, the multiplicative cascade model of Olsson, and show that, in principle, it is possible to include temperature dependence in the disaggregation step, resulting in a fairly realistic temperature dependence of the CC type. They conclude by outlining the main calibration steps necessary to develop a full-fledged CC disaggregation scheme and discuss possible applications.
1. Introduction
The environmental and societal importance of extreme precipitation is obvious, and so is the importance of obtaining sound projections of its future behavior under a warmer climate. Extreme precipitation events are usually understood as occurring within a time frame of a few days down to hours or even just a few minutes. The impact of an event is felt as flooding, on spatial scales that are (roughly) proportional to the duration: the smaller a catchment, the shorter the time scale that typically brings the heaviest damage (e.g., Blöschl and Sivapalan 1995). As a consequence, rainfall events of subdaily duration (minutes to hours) present the most serious hazard for catchments with subdaily concentration time. This type of quick runoff concentration (flash flood) is typical for mountainous headwater catchments as well as urban catchments. Flash floods are particularly destructive because of high flow velocities, heavy debris loads, and very low predictability (Collier 2007; Borga et al. 2011; Mueller and Pfister 2011).
Unfortunately, such subdaily events are also characterized by a considerable lack of both observations and models, which likely represents two sides of the same coin (given the need for model verification). While high-resolution observations do exist, digitized versions, if they exist at all, are not as easily accessible as daily observations. On the modeling side, considerable progress has recently been made with respect to weather forecasts. For example, Baldauf et al. (2011) employ a model with a grid resolution of only 2.8 km and an internal time step of 30 s. The dynamics of atmospheric convection are thus more realistically resolved and can be better predicted, including the occurrence of heavy storms and flash floods. However, reliable estimates of the climate of extremes require much longer simulations that are (as yet) not available.
Encountering scale gaps is not uncommon in climate modeling and will, for many readers, ring the bell of “downscaling” (see, e.g., Fowler et al. 2007). In the context of subdaily precipitation, downscaling usually appears in the form of temporal disaggregation. There is a large body of literature on the temporal disaggregation of rainfall, going back to the 1970s and ranging from simple to very sophisticated approaches (see Koutsoyiannis 2003 for a comprehensive review). In all cases, a statistical model is sought that operates between aggregated (e.g., daily) and unaggregated (e.g., hourly) data. Such a model is calibrated using sufficiently long series of high-resolution observations that are characteristic for the region, and the disaggregation is applied to nearby data that exist, observed or simulated, only in aggregated form. A typical disaggregation tool is the multiplicative cascade (MC) model of Olsson (1998), as described in section 2a. Disaggregation forms the first of two threads in this study.
The other thread is formed by the Clausius–Clapeyron (CC) relation, which is broadly known to describe the physics of phase transitions. In a meteorological context, it relates the saturated vapor pressure es to temperature T as

es(T) = 6.112 hPa × exp[17.62 T / (243.12 + T)]  (1)
(in the so-called August–Roche–Magnus approximation, with T given in °C). It means that saturated vapor pressure increases roughly by 7% per degree temperature increase, or more formally, Δes/ΔT = 7% K−1. As Fig. 1 shows, this increase is not exactly constant but becomes weaker toward higher temperatures; for example, there is a 7% K−1 vapor increase near 10°C and a 6% K−1 increase near 30°C. Consequently, for the planet, the increase is generally higher at higher latitudes.
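The rate quoted above can be checked numerically. The following sketch evaluates the August–Roche–Magnus approximation of Eq. (1) (with the commonly used constant set 6.112 hPa, 17.62, and 243.12°C) and the relative increase of es per kelvin; the function names are ours.

```python
import math

def e_sat(t_celsius):
    """Saturation vapor pressure (hPa), August-Roche-Magnus approximation."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def cc_rate(t_celsius, dt=1.0):
    """Relative increase of e_sat per kelvin, evaluated around t_celsius."""
    return e_sat(t_celsius + dt) / e_sat(t_celsius) - 1.0
```

Evaluating `cc_rate(10.0)` gives roughly 0.07 and `cc_rate(30.0)` roughly 0.06, reproducing the weakening of the CC rate toward higher temperatures described in the text.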
Early on in the literature on CO2-induced global warming, (global) precipitation increase was linked to the CC relation (Mitchell et al. 1987; Boer 1993; Allen and Ingram 2002; Held and Soden 2006). But while es and T are closely linked, the sensitivity of precipitation to temperature, ΔP/ΔT, is more complex on almost all scales. Precipitation amounts obviously depend not only on the prevailing temperature but even more on the large-scale atmospheric stability and circulation processes that advect moisture into the area. On the global average, ΔP/ΔT = 2% K−1, a rate that partly follows from the surface energy budget (Mitchell et al. 1987) and is considerably below the CC rate.
The publication of Lenderink and van Meijgaard (2008) offered a fresh view on CC and its link to precipitation. By analyzing 50 years of hourly observations from Delft, the Netherlands, they indeed obtain a close link between short-duration local extreme events and daily mean temperature. The idea is to confine the analysis to sufficiently intense rainfall alone and completely neglect weak events. This way, the complex circulation conditions that may or may not lead to a rainfall event are bypassed. For the intensity of strong rainfall events, so goes the argument, the main constraint is the available moisture in the atmospheric column, which itself is constrained by the saturation level of the CC relation. In short: the most extreme events occur when the entire moisture content falls out from a saturated atmosphere, and that amount mainly depends on temperature. However, what Lenderink and van Meijgaard (2008) found was not the CC rate of 7% K−1, but a “super CC rate” of up to 14% K−1. Between these two, a whole range of numbers has been reported elsewhere (Pall et al. 2007; Haerter and Berg 2009; Haerter et al. 2010; Lenderink and van Meijgaard 2010; Utsumi et al. 2011; Berg and Haerter 2013; Berg et al. 2013). Hence, while constraining the analysis to short-term extreme events has “cleared” the T–P relation of all values below the CC rate, it opens considerable room for values above. This makes the use of the term “CC scaling” somewhat questionable, which is why we prefer to use the term ΔP/ΔT instead. Obviously, ΔP/ΔT > 7% K−1 indicates the presence of agents other than CC for the influence of temperature, with the main candidates being convective processes (Berg et al. 2013). Specifically, high surface temperatures enhance atmospheric instability in two ways: by increasing the environmental lapse rate and by decreasing the saturated adiabatic lapse rate (Salby 2012, 141–143). 
The resulting intensification of convective updrafts not only amplifies condensation rates but also favors the generation of large rain drops and, as a consequence, extreme subhourly rainfall intensities; similar results are described by Loriaux et al. (2013). Altogether, temperature effects of different origin appear to accumulate on very short time scales, eventually causing rates of ΔP/ΔT > 7% K−1. Apart from the above, the daily mean surface temperature is only a rough proxy for the short-term temperature profile that determines the saturated vapor pressure in the atmospheric column above.
Regardless of its specific nature, the temperature sensitivity of heavy subhourly rainfall is strong. Simulations thereof, whether for present or for future climate, should be able to reproduce that sensitivity. This is especially important for future simulations where climate is expected to be warmer. For daily disaggregation, therefore, this trickles down to the following questions:
Does disaggregation preserve ΔP/ΔT rates?
If not, how can one fix it?
In other words, for the first question, using the daily aggregates of the Delft record (or any other) and redisaggregating them, will the hourly values reveal the same ΔP/ΔT rates? Very likely not, because temperature is not among the drivers in these techniques. For the second question, one needs to find a handle on the disaggregation scheme where temperature might enter in a meaningful way, so that the original and disaggregated high-resolution series become statistically indistinguishable (ideally). To conduct the study, we had access to three sources of high-resolution precipitation data, two from the Alps (Austria and Switzerland) and one from the Westphalian Lowland (Germany), all of which contain several decades of high-resolution data.
2. Methods and data
a. MC disaggregation
Since our goal is not to improve the distinctive features of rainfall disaggregation but to add temperature as a driver (to any scheme), our choice of scheme is somewhat arbitrary. The availability of code was a necessary condition, and we selected the MC model of Olsson (1998) (J. Olsson 2008, internal report) as a classic example (see also section 4). We employ MC with a branching number of 2 and with exact conservation of mass. This means that, recursively, a given period is split in two and the corresponding precipitation is stochastically redistributed, either completely to one-half [left or right with equal probability p10, the so-called 1/0 division of Olsson (1998)] or to both halves using a weighting W that is picked from a probability distribution pxx(W). The variable p10 is actually regressed on the current time resolution (period length) r, whereas pxx(W) follows some predefined “cascade generator” distribution (such as beta). Specifically,

p10(r) = c1 + c2 log(r),  pxx(W) = W^(a−1) (1 − W)^(a−1) / B(a, a),  0 < W < 1,  (2)
where B(a,b) denotes the beta function. It is generally assumed that the shape parameter a also depends on the resolution, as follows:

a(r) = aS r^H,  (3)
so that the disaggregation model (in this basic form) is determined by the four parameters c1, c2, aS, and H. The specific form of Eqs. (2) and (3) follows J. Olsson (2008, internal report); their validity, based on 2 years of 8-min data, had been estimated to be roughly between 1 week and 1 h, or even down to 8 min (Olsson 1995). For the purpose of this study, that is, introducing an influence of temperature on the disaggregated extremes, that validity range should be sufficient. Calibration of the model requires sufficiently long high-resolution time series. The series are successively aggregated, and at each step relative frequencies are calculated. From these, the four parameters are estimated via beta distribution fits and regression. For our purpose, we have calibrated MC for each of the three regions separately by using the longest available series.
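The recursive splitting can be sketched as follows. This is a minimal, illustrative implementation of a branching-2, mass-conserving cascade; the parameter values and the clipping of p10 to [0, 1] are our assumptions for demonstration, not calibrated values.

```python
import math
import random

def p10_of_r(r, c1, c2):
    # Probability of a 1/0 division, regressed on (log) resolution r; clipped for safety.
    return min(max(c1 + c2 * math.log(r), 0.0), 1.0)

def a_of_r(r, a_s, h):
    # Shape parameter of the symmetric beta cascade generator, scaling with resolution r.
    return a_s * r ** h

def disaggregate(amount, r, target_r, params, rng=random):
    """Recursively split `amount` (rainfall over a period of length r) down to target_r.
    Mass is conserved exactly: the leaf values always sum to `amount`."""
    if r <= target_r:
        return [amount]
    if rng.random() < p10_of_r(r, params["c1"], params["c2"]):
        w = 0.0 if rng.random() < 0.5 else 1.0   # 1/0 division, left or right half
    else:
        a = a_of_r(r, params["aS"], params["H"])
        w = rng.betavariate(a, a)                # x/x division, symmetric beta weight
    return (disaggregate(w * amount, r / 2, target_r, params, rng)
            + disaggregate((1 - w) * amount, r / 2, target_r, params, rng))
```

Disaggregating a 24-h total down to 1.5-h boxes, for example, yields 16 values whose sum equals the daily amount, illustrating the exact conservation of mass.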
b. CC disaggregation
Temperature information can be incorporated into the MC model by altering the estimated probabilities. One way to do this is in the calibration step: let the four parameters c1, c2, aS, and H depend on T. The most straightforward approach would be to include T as a regressor. But that model would optimize a T dependence for all rainfall events regardless of scale, not only for the most extreme events, and would therefore likely fail. The next simplest approach is that of trial and error, adjusting the probabilities directly. It is the model of our choice, and it provides a proof-of-concept method that is sufficient for the purpose of this study, that is, it answers the two questions posed in section 1.
To have strong events become sensitive to temperature, we adjust the probability of a full binary splitting, p10, relative to T accordingly. This is illustrated in Fig. 2. It shows a mapping between temperature and probability. Two parameters, q and T̂, govern the way in which temperatures are translated into splitting probabilities: very cold events (T → −∞) never undergo a binary split, average events (T = T̂) are split according to the original p10, and very warm events (T → ∞) are split, for q = 0 with probability p10 and always for q = 1. Formally, that is

p̃10(T̂) = p10;  p̃10(T) → 0 (T → −∞) and p̃10(T) → 1 (T → +∞) for q > 0;  p̃10 ≡ p10 for q = 0.  (4a)
The temperature T̂ is the threshold above which p10 gets amplified, and q determines the amount of amplification. Accordingly, if T̂ is not too far from climatology, the correction to p10 is roughly zero on average. Setting q = 0 gives the traditional disaggregation without CC effects. The required functional behavior can be achieved using sigmoid functions. With atanh as sigmoid, this gives

p̃10(T) = ½ {1 + tanh[q (T − T̂) + atanh(2 p10 − 1)]},  (4b)
which is not exactly a simple term. We are not aware of any sigmoidal term that is both simple itself and whose parameters have a simple geometric interpretation, so we decided in favor of the latter, with q and T̂ providing a direct interpretation of the p10 transformation. Temperature T is generally given in normalized units.
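As a minimal sketch, the temperature-to-probability mapping can be coded as below. The functional form is our reconstruction of the sigmoid described above and should be read as one realization of the required limiting behavior, not as the definitive MC+ formula: q = 0 reproduces the original p10 exactly, warm days (T > T̂) amplify the splitting probability, and cold days damp it.

```python
import math

def p10_adjusted(t, p10, q, t_hat):
    """Temperature-adjusted probability of a 1/0 division (sketch).
    t: daily temperature in normalized units; t_hat: threshold temperature;
    q: amount of amplification (q = 0 recovers the original p10)."""
    # atanh anchors the sigmoid so that it passes through p10 at t = t_hat
    return 0.5 * (1.0 + math.tanh(q * (t - t_hat) + math.atanh(2.0 * p10 - 1.0)))
```

Note that the mapping is not symmetric in T − T̂ unless p10 = 0.5, so even a climatologically centered T̂ leaves a slight bias in the average probabilities, as discussed in the concluding section.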
To keep things manageable, we abstained from altering pxx(W). We will generally refer to this temperature-enhanced MC model as MC+.
Calibration of (q, T̂) is solely based on trial and error. For each region, we used the parameters c1, c2, aS, and H as calibrated “classically” following the MC procedure outlined in section 2a, and we applied MC+ for the disaggregation of daily P and T. Starting with some reasonable choice of q and T̂, for example, (q, T̂) = (0.5, 0), the disaggregated P data were iteratively compared with high-resolution observations, using a simple visual inspection of the resulting T dependence of P extremes (as shown later in Fig. 6). If the result, at any step, did not improve, we either stopped or continued by reverting the change along with a slight adjustment. Note that this is basically a manual form of the classic simplex optimization, and we found it sufficient for our proof-of-concept approach as it kept things simple.
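The manual procedure above could be automated as a simple coordinate-wise search: perturb one parameter at a time, keep the change if the fit improves, otherwise revert and shrink the step. The sketch below assumes a user-supplied `mismatch(q, t_hat)` cost function (a hypothetical stand-in for the visual comparison against observed extremes); everything else is illustrative.

```python
def calibrate(mismatch, q0=0.5, t_hat0=0.0, step=0.25, shrink=0.5, tol=1e-3):
    """Coordinate-wise trial and error over (q, t_hat).
    Accept a perturbation if it lowers the mismatch; if no perturbation of the
    current size helps, shrink the step (a crude manual-simplex analogue)."""
    params = [q0, t_hat0]
    best = mismatch(*params)
    while step > tol:
        improved = False
        for i in range(2):                 # one coordinate at a time
            for sign in (+1.0, -1.0):
                trial = list(params)
                trial[i] += sign * step
                cost = mismatch(*trial)
                if cost < best:            # keep the change
                    params, best = trial, cost
                    improved = True
        if not improved:                   # revert implicitly, refine the step
            step *= shrink
    return tuple(params), best
```

On a smooth, single-minimum cost surface this converges to within roughly the final step size; for the noisy, visually judged mismatch described in the text it should be read only as a formalization of the manual loop.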
c. Data

We have used 15-min records from 16 sites in Austria, partly going back to the 1950s (www.uibk.ac.at/geographie/serac-cc), and fifteen 10-min records from Switzerland (IDAweb, www.meteosuisse.admin.ch/web/en/services/data_portal/idaweb.html) that start in 1980. Additionally, we used 10 series from North Rhine–Westphalia (NRW), Germany, one of which is a 1-min series extending back to 1930. For better comparison, all NRW series were aggregated to 10-min resolution. To have a maximum amount of data, for each region all records were lumped together for the subsequent analyses, which follows our general assumption that short-term precipitation events are governed by the CC relation and less by geography; whether this is in fact true will be investigated below. The effect of temporal resolution, even between 10- and 15-min data, should nevertheless be felt, so that independent analyses for the Austrian and Swiss data are indicated. An overview of all stations, in terms of the number of wet events per station versus station height, is shown in Fig. 3. Note the relatively small scatter of the Swiss data, which likely reflects the homogeneity of the IDAweb database.
The data processing generally follows the procedures applied in Lenderink and van Meijgaard (2008), that is, stratifying the high-resolution precipitation data into equally spaced bins of the average temperature of the corresponding day and considering the 99.9% percentile of the distribution of wet events. For stations that did not have sufficient T records, we selected the nearest station that did, which was usually within a radius of a few kilometers; here we used either daily measurements directly or daily means from subhourly measurements. For all daily T data, we employed 35 equidistant bins [instead of the 2°C-wide bins of Lenderink and van Meijgaard (2008)] and confined each analysis to pairs with T > 0 and P > 0. The MC model was calibrated using the longest available record.
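The stratification step can be sketched as follows, assuming paired arrays of wet-event intensities and the daily mean temperature of the day each event fell on; the function name and binning details (equidistant edges over the observed range) are our assumptions.

```python
import numpy as np

def scaling_curve(daily_t, wet_p, n_bins=35, pct=99.9):
    """Stratify wet-event intensities by the mean temperature of their day and
    return, per equidistant T bin, the pct-percentile intensity.
    Pairs with T <= 0 or P <= 0 are discarded, as in the text."""
    daily_t = np.asarray(daily_t, float)
    wet_p = np.asarray(wet_p, float)
    keep = (daily_t > 0) & (wet_p > 0)
    t, p = daily_t[keep], wet_p[keep]
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(t, edges) - 1, 0, n_bins - 1)
    extremes = np.array([np.percentile(p[idx == b], pct) if np.any(idx == b) else np.nan
                         for b in range(n_bins)])
    return centers, extremes
```

Plotting `extremes` against `centers` on a logarithmic intensity axis makes a constant ΔP/ΔT rate appear as a straight line, which is how super-CC slopes are usually diagnosed.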
3. Results

We start by comparing the observed high-resolution data with those disaggregated by MC and focus on extremes. It is not the general scale of events that is important here but the sensitivity to temperature. For that, we see a major difference between original and disaggregated data. Since the disaggregation is done via a binary tree, the closest time resolution for the disaggregated data is 1/128 day = 11 min. Figure 4 displays the 99.9% percentiles of both observed and disaggregated data. On average, that is, when disregarding temperature dependence, MC does a satisfactory job of representing the scale of extremes. This holds except for the Austrian case, where the disaggregated data are persistently larger than the observed intensities; this is perhaps due to the slight mismatch in temporal resolution (11 min versus 15 min). For the other two, the overall scale is well reproduced, with about 1.0 mm min−1 for the Swiss and 0.7 mm min−1 for the NRW data. The observed extremes show a clear temperature dependence that is above the CC rate of 7% K−1. This does not hold for the disaggregated extremes. While ΔP/ΔT is small but uniformly positive for NRW, the Austrian MC data even exhibit a nonuniform T dependence, with a peak-like shape and optimum temperatures somewhere between 10° and 15°C; the Swiss data are somewhat in between.
Since there is no explicit temperature dependence within the MC scheme, the shape of Fig. 4 must have been inherited from the daily data. In fact, the daily data analysis of Fig. 5 reveals a strikingly similar T dependence for each of the three regions, with a flat but uniform dependence for NRW and a bell-like shape for Austria. Similar bell-shaped daily results are reported, for example, by Berg et al. (2009) and Utsumi et al. (2011), with an optimum somewhere between 10° and 20°C, after which P values drop again. The highest P values are observed for the two Alpine regions at temperatures of about 18° and 13°C for the Swiss and Austrian stations, respectively; the NRW values are smaller and have no T optimum. The drop for warmer days has been attributed to a decrease in the duration of storm events, rather than a weakening of intensity itself (Utsumi et al. 2011). But since we are looking at the daily time scale, the entire atmospheric machinery of rainfall formation comes into play. This likely creates differences between regions as well as the described decline with T. As indicated in section 1, this phenomenon will remain a subject of research for some time.
The trial-and-error-estimated parameters for each region are shown in Table 1, and they should be taken with caution and not overinterpreted. They indicate a generally moderate amplification q, especially for the Swiss region; the T̂ values are positive throughout, so that the probabilities are amplified only on very warm days; this is the case in particular for the Austrian region. The results for the MC+ disaggregated data are shown in Fig. 6. Compared to the MC results of Fig. 4, the extra effect of temperature is now evident. Despite some residual miscalibration of MC+, as, for example, revealed by the slight over- and underestimation of the Austrian and Swiss events, respectively (which, however, may again be caused by the time resolution mismatch), or by the almost linear behavior toward lower T for NRW, the general shapes of the simulation curves closely follow those of the observations; the average sensitivity of ΔP/ΔT > 7% K−1 is especially well reproduced for each region. What this means, specifically, is that for days with a temperature of, say, 25°C, a typical extreme event of 1 in 1000 (99.9% percentile) increases from about 1.0 mm min−1 as generated by MC (Fig. 4) to 1.3 mm min−1, or plus 30%, for MC+ (Fig. 6). In terms of risk, this represents a stark difference.
4. Summary and discussion

That extreme precipitation events are more intense at warmer temperatures follows partly from the Clausius–Clapeyron relation and can be observed in records of daily and especially subdaily time resolution. Given the evidence that relative humidity remains fairly constant in a future climate (Solomon et al. 2007), this dependence directly implies an intensification of short-term extreme precipitation under the projected warming. Current disaggregation tools do not implement a direct influence of temperature and thus reflect only the influence from the original time scale, which in most cases is daily. Extreme daily precipitation, however, is complicated by circulation processes that distribute rainfall within the 24 h of the day and for which rainfall duration, which is not known to depend on temperature, is a major constraint. As a consequence, the influence of temperature is underestimated in these tools.
The aim of this study was twofold: first, to demonstrate that classic tools indeed fail to represent temperature influence adequately (for which they cannot be blamed, of course), and second, to provide a proof-of-concept enhancement of these tools, here the multiplicative cascade (MC) model of Olsson (1998), to implement a direct influence of temperature. It turned out, perhaps coincidentally, that the MC model was easily adaptable for the implementation of temperature dependence. The implementation was fairly straightforward, requiring only the dependence of the final result on the original (aggregated) value. We would suppose that this holds similarly for other disaggregation schemes, but we cannot say for sure, as such things usually hinge on the details of the coded algorithm. The enhancement, MC+, is based on two parameters, q and T̂, that control the sensitivity of extreme precipitation to daily temperature. Their calibration was done separately for three relatively different regions and in a purely manual way that serves to prove our concept. We have demonstrated that in all regions there is too little temperature influence for the classic tool MC and a fairly realistic influence for MC+, so our approach points in the right direction.
However, the present model setup is still in a rather incomplete and ad hoc state and must be refined before being applied, for example, to impact models. Most notably, the present form of MC+ introduces a bias in the climatological probabilities and corresponding intensities. That bias comes from an imperfection in the guiding Eq. (4b): even if T̂ is taken as the long-term temperature mean, so that T − T̂ cancels out on average, Eq. (4b) is not fully symmetric in T − T̂, which will slightly bias the single adjustments and eventually the overall rainfall probabilities. This can very likely be fixed by introducing additional parameters or by using a symmetric function right away (but probably losing the intuitive meaning of the parameters; see Fig. 2). But even that would leave the model in an ad hoc state, consisting of a mixture of fixed, MC-calibrated parameters and two additional parameters that require extra calibration.
That extra calibration, which we did manually, could certainly be automated by defining appropriate cost functions for the mismatch that we saw in Fig. 6, plus, for example, additional terms for the aforementioned probability bias. However, a much more elegant solution would be to combine the MC calibration with that of MC+, leading to a unified calibration of the full parameter set. But, because the criteria for MC relate to all events and those of MC+ relate only to extreme events, the most straightforward approach, to use temperature as an additional regressor, will not work. Considerably more work is needed to overcome that. Nevertheless, a few preliminary checks indicate that MC+ did at least not degrade the performance reported for MC (Olsson 1998): based on random 2-yr periods for every region, the rate of MC+-simulated zero values was comparable to, if not better than, that of MC. We are fairly confident that this will stand when using a fully calibrated MC+ model.
It may be worth mentioning that even with the enlarged set of parameters—c1, c2, aS, H, q, and T̂—there is little danger of overfitting MC+ and a corresponding drop in model quality when applied to independent data. Nevertheless, their estimation errors may not be independent, which potentially explains the regional variation of the q and T̂ estimates displayed in Table 1. Understanding those variations requires a systematic analysis of the estimation error, something that evidently cannot be done within a trial-and-error setting. In designing the unified model mentioned previously, it will be crucial to identify model parameters that are largely independent.
Some uncertainty remains with regard to the spatial structure of the sensitivities, ΔP/ΔT. While beyond the scope of the present study, these sensitivities nevertheless affect all MC+ simulations, as they dictate the target scale of precipitation extremes. Perhaps the most important question here is: how does ΔP/ΔT vary in space, and does it become spatially uniform for increasingly short intervals (minutes)?
Once refined and completed, the model can be applied like any other disaggregation tool. It should give more realistic results as long as temperature dependence is an issue, such as for the diurnal and seasonal cycles or, most notably, for simulations of a warmer future climate. Under the general assumption that future relative humidity stays roughly the same, the strong temperature influence on subdaily extreme precipitation should provide a more realistic assessment of future precipitation extremes. The other assumption is—as always for empirical models—that the MC+ parameters remain valid in the future, but that will remain an assumption for some time until finally (perhaps) sufficiently long dynamic future climate simulations of subdaily or even subhourly resolution become available.
This study was made possible through generous support from the Austrian Climate and Energy Fund as part of the Austrian Climate Research Programme. The Swiss data were freely available through the IDAweb service of MeteoSwiss; the Austrian data were kindly provided by Hydrographischer Dienst Tirol, Tiroler Wasserkraft AG, and Landeswasserbauamt Vorarlberg; and the German data were prepared by Angela Pfister (Emschergenossenschaft/Lippeverband). We are especially grateful to Jonas Olsson, who provided a version of his MC code.