Precipitation displays a remarkable variability in space and time. An important yet poorly documented aspect of this variability is intermittency. In this paper, a new way of quantifying intermittency based on the burstiness B and memory M of interamount times is proposed. The method is applied to a unique dataset of 325 high-resolution rain gauges in the United States and Europe. Results show that the M–B diagram provides useful insight into local precipitation patterns and can be used to study intermittency over a wide range of temporal scales. It is found that precipitation tends to be more intermittent in warm and dry climates with the largest observed values in the southwest of the United States (i.e., California, Nevada, Arizona, and Texas). Low-to-moderate values are reported for the northeastern United States, the United Kingdom, the Netherlands, and Germany. In the second half of the paper, the new metrics are applied to daily rainfall data for 1954–2013 to investigate regional trends in intermittency due to climate variability and global warming. No evidence is found of a global shift in intermittency, but a weak trend toward burstier precipitation patterns and longer dry spells is observed in the south of Europe (i.e., Portugal, Spain, and Italy), together with an opposite trend toward steadier and more correlated precipitation patterns in Norway, Sweden, and Finland.
Precipitation is a highly variable process in space and time. An important but often neglected aspect of this variability is intermittency. Intermittency limits water resources in space and time and directly affects streamflow, surface runoff, infiltration, soil moisture, and vegetation cover (e.g., Pitman et al. 1990; Baudena et al. 2007; Kletter et al. 2009; Nikolopoulos et al. 2011). It is characteristic of the local climatology and topography and strongly depends on the dominant synoptic conditions (e.g., Alyamani and Sen 1997; Cindrić et al. 2010; Haile et al. 2011; Ruiz-Sinoga et al. 2012). Despite its importance, intermittency is poorly understood and few methods have been proposed to quantify it objectively. Progress is hindered by the fact that many of the fluctuations in rainfall intensity and occurrence only become visible at high spatial and temporal resolutions. At the same time, long data records are necessary to adequately capture seasonal, annual, and decennial variations.
Finding appropriate and elegant ways of modeling and simulating intermittency across scales is a fascinating and challenging problem. Many techniques have been proposed, including Poisson cluster models, (multi)fractals, power spectral densities, wavelets, and geostatistics (e.g., Barancourt et al. 1992; Olsson et al. 1993; Kumar and Foufoula-Georgiou 1994; Schmitt et al. 1998; Pavlopoulos and Gritsis 1999; Molini et al. 2002; Kundu and Siddani 2011; Schleiss et al. 2011; Veneziano and Lepore 2012; Gires et al. 2013; Schleiss et al. 2014). The main goal of this paper is not to model intermittency, which is very difficult, but to summarize and quantify its essence using two easily assimilated and understood metrics. The approach is motivated by the growing demand for fast and simple diagnostic tools for assessing the outputs of numerical weather models and stochastic rainfall generators.
The most common way of measuring intermittency is to look at the statistical distribution of dry and wet periods. Major quantities of interest in this approach are the transition probabilities between dry and wet states and the length of dry–wet spells, that is, the number of consecutive days during which the precipitation amount remains below or above a certain threshold (e.g., Chatfield 1966; Alyamani and Sen 1997; Anagnostopoulou et al. 2003; Schmidli and Frei 2005; Cindrić et al. 2010; Deni et al. 2010; Zolina et al. 2013; Serra et al. 2013; Kutiel et al. 2015). Dry–wet spell analyses are useful for assessing drought and flood risks and studying climate extremes. Their main limitation is that the results strongly depend on the threshold used to separate dry and wet periods as well as the temporal resolution of the data (Ignaccolo et al. 2009; De Michele and Ignaccolo 2013; Mascaro et al. 2013).
Looking at the variation of rainfall intensity and occurrence at a fixed point in space, Rodríguez-Iturbe et al. (1987) proposed to represent intermittency using a stochastic point process model. In their approach, storms and rain cells with random durations and depths arrive according to a Poisson process. Subsequent applications and developments of this model, among many others, can be found in Cox and Isham (1988), Rodríguez-Iturbe et al. (1988), Cowpertwait (1995), Onof et al. (2000), Pegram and Clothier (2001), De Michele and Salvadori (2003), and Ramesh et al. (2013). Using the same point process formalism but sampling at higher temporal resolutions, Smith (1993) and Lavergnat and Golé (1998) showed that it is possible to characterize intermittency solely in terms of (marked) drop arrival times. Their approach is appealing but suffers from practical limitations, like the fact that most operational precipitation sensors do not have the capability to resolve single raindrops and only provide integrated quantities over coarser scales. This makes it hard to objectively define events and to separate dry periods from wet ones.
Although Poisson cluster models have proven useful for simulating and downscaling precipitation time series, they often involve too many parameters to efficiently quantify and compare intermittency across the globe. Universal multifractals (Schertzer and Lovejoy 2011), on the other hand, offer a more parsimonious way of analyzing intermittency across scales. But for the purpose of this paper, the three model parameters (i.e., the degree of nonconservation H, the codimension C1, and the multifractality α) are still too numerous and abstract to be easily compared. It is also worth recalling that such metrics rely on the strong assumption that rainfall is indeed fractal. This has been repeatedly challenged, especially at small spatial and temporal scales, where there is strong disagreement about how to correctly handle the large number of zero rain-rate values (e.g., Verrier et al. 2011; Gires et al. 2012b; Veneziano and Lepore 2012).
In this paper, we introduce a new and simple way of quantifying intermittency based on the burstiness B and memory M of interamount times. Unlike previous metrics, our approach has the advantage of being free of any model assumption and arbitrary dry–wet classification threshold(s).
This article is structured as follows. In section 2 we present the methodology and metrics used to quantify intermittency across scales. Section 3 applies the new tools to a set of 325 high-resolution rain gauges in the United States and Europe and presents the main results. The conclusions and some perspective for future work are given in section 4.
Consider a continuous time series of accumulated precipitation amounts Y(t) at time t ≥ 0:

Y(t) = ∫₀ᵗ R(u) du,   (1)
where R(u) denotes the instantaneous precipitation rate at time u, and R(u) ≥ 0. Traditional ways of analyzing intermittency mainly focus on the rainfall occurrence process for a given sampling resolution Δt and detection threshold y0:

I(t; Δt, y0) = 1 if Y(t) − Y(t − Δt) ≥ y0 and 0 otherwise,   (2)
where I(t; Δt, y0) represents the binary occurrence process with respect to Δt and y0. Important quantities in this approach are the (time varying) transition probabilities between dry and wet states (for various time lags) as well as the statistical distributions of dry–wet spell lengths (e.g., Weiss 1964; Foufoula-Georgiou and Lettenmaier 1987; Jakubowski 1988; Serra et al. 2013; Hannachi 2014). Such analyses are rather straightforward to implement but must be interpreted carefully because of the strong dependence on the two parameters Δt and y0. Another limitation is the fact that regions with different rainfall amounts (e.g., 50 vs 3000 mm yr−1) cannot be compared because of the dependence of dry–wet probabilities on the average annual precipitation amount.
a. Definition of interamount times
In the following, we propose an alternative way of quantifying intermittency that does not involve an arbitrary dry–wet detection threshold and can be used to compare regions with different total rainfall amounts. The method is inspired by survival analysis and focuses on the waiting times between successive amounts of precipitation. Let a > 0 denote a fixed precipitation amount. We define the series of interamount times τk(a) with respect to a as follows (where ℕ denotes the set of all natural numbers; i.e., 0, 1, 2, 3, 4, ...):

τk(a) = tk(a) − tk−1(a) for k ∈ ℕ, k ≥ 1,   (3)
where tk(a) denotes the time at which the total precipitation amount first exceeded k times a (where curly brackets indicate a set of values):

tk(a) = min{t ≥ 0 : Y(t) ≥ ka},   (4)

with t0(a) = 0.
A steady rainfall pattern with constant intensity and occurrence has equal interamount times for all values of a. An intermittent pattern, on the other hand, is characterized by a more variable interamount time distribution (see Fig. 1 for an example). The idea of characterizing intermittency in terms of interamount times instead of rainfall amounts, intensities, or occurrences offers several advantages. Unlike rainfall amounts, interamount times are always positive. This greatly simplifies the statistical analysis and makes it unnecessary to separate dry periods from wet ones.
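For a regularly sampled record, the interamount times of Eqs. (3) and (4) can be computed with a short script. The sketch below is a minimal illustration, not the authors' implementation; the function name and the end-of-interval timing convention are our own assumptions.

```python
def interamount_times(amounts, dt, a):
    """Waiting times between successive exceedances of multiples of a
    fixed accumulation a (Eq. 3), for a regularly sampled record.

    amounts : precipitation amount per time step (mm)
    dt      : sampling interval (e.g., hours)
    a       : fixed precipitation amount a > 0 (mm)
    """
    times = []          # t_k: first time the cumulative amount reaches k*a
    cum, k = 0.0, 1
    for i, y in enumerate(amounts):
        cum += y
        while cum >= k * a:
            times.append((i + 1) * dt)   # threshold k*a crossed during step i
            k += 1
    # tau_k = t_k - t_{k-1}, with t_0 = 0
    return [t1 - t0 for t0, t1 in zip([0.0] + times[:-1], times)]
```

A steady pattern yields identical interamount times, whereas an intermittent one yields a mix of short and long waiting times.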
b. Normalized interamounts
Similarly to dry–wet probabilities, interamount times strongly depend on the total precipitation amount at the considered location. To overcome this scale dependence and compare intermittency for regions with different precipitation totals, one needs to normalize the interamount times with respect to a common time scale. A possible way to do this is to fix an average interamount time μ (e.g., 24 h) and determine the interamount aμ at this time scale:

aμ = μY/L,   (5)
where Y denotes the total rainfall amount at the considered location and L is the length of the studied time period. In other words, instead of comparing interamount times for a fixed accumulation, we choose the mean interamount time and compute aμ such that the series of interamount times has mean μ. Two locations with different rainfall totals (e.g., 50 vs 3000 mm yr−1) therefore have different normalized interamounts (e.g., 0.14 and 8.21 mm for μ = 24 h). For more details about the range of time scales that can be studied depending on the available data, the reader is referred to section 2d.
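Eq. (5) amounts to a single line of code. The sketch below (hypothetical function name; millimeters and hours assumed as units) reproduces the example values quoted above:

```python
def normalized_interamount(total_mm, record_hours, mu_hours):
    """Interamount a_mu (Eq. 5) such that the series of interamount
    times has mean mu: a_mu = mu * Y / L."""
    return mu_hours * total_mm / record_hours

# Two stations with 50 and 3000 mm/yr, normalized to mu = 24 h
a_dry = normalized_interamount(50.0, 8760.0, 24.0)    # about 0.14 mm
a_wet = normalized_interamount(3000.0, 8760.0, 24.0)  # about 8.2 mm
```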
c. Burstiness and memory
The metrics used to quantify rainfall intermittency in this paper are inspired by the work of Goh and Barabási (2008). They were introduced as simple tools to summarize the properties of complex intermittent systems such as earthquakes, neuron activity, or e-mail patterns. Our approach is similar to that of Goh and Barabási (2008), but instead of separating the rainfall time series into arbitrary events, we use the more general notion of interamount times defined in Eq. (3). We also normalize each interamount with respect to a common time scale μ, as in Eq. (5).
Let τ1, τ2, ..., τn be a series of normalized interamount times for a fixed mean interamount time μ (see section 2b). The burstiness of the interamount times at time scale μ is defined as

B(μ) = (σμ − μ)/(σμ + μ),   (6)
where σμ denotes the standard deviation of the interamount times (for a known mean μ):

σμ = [(1/n) Σk=1,...,n (τk − μ)²]^(1/2).   (7)
The variable B(μ) is a normalized measure of the dispersion of interamount times at time scale μ. It is bounded between −1 and 1. A steady precipitation pattern with equal interamount times has a burstiness of −1. A Poisson process has an intermediate burstiness of 0 and a standard lognormal distribution has a burstiness of approximately 0.135. The longer and fatter the right tail of the interamount time distribution, the higher the burstiness value.
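Eq. (6) is straightforward to estimate from a sample of interamount times. The following sketch is our own minimal implementation (using the population standard deviation) and illustrates the bounds discussed above:

```python
def burstiness(taus):
    """B = (sigma - mu) / (sigma + mu) of the interamount times (Eq. 6)."""
    n = len(taus)
    mu = sum(taus) / n
    sigma = (sum((t - mu) ** 2 for t in taus) / n) ** 0.5
    return (sigma - mu) / (sigma + mu)

# Equal interamount times (steady rain) give the minimum value of -1;
# a long, fat right tail pushes B toward +1.
```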
On its own, B(μ) is not sufficient to fully understand the origin of intermittency. The time ordering of the interamount times also needs to be considered. The temporal ordering of the interamount times is quantified by the memory at time scale μ:

M(μ) = ρ1(μ),   (8)
where ρ1(μ) denotes the lag-1 Spearman rank autocorrelation of the interamount times, that is, the standard correlation coefficient between the ranks of successive interamount times. Note that Goh and Barabási (2008) used the standard linear correlation. But because of the highly skewed distribution of interamount times, we prefer the more robust rank correlation. Also, autocorrelations at lags greater than 1 are not considered as they usually contain little additional information.
Similarly to the burstiness, M(μ) is also between −1 and 1. Positive memory means that short interamount times tend to be followed by short ones and long ones by long ones. Negative memory means that short interamount times tend to be followed by long ones. A homogeneous Poisson process, that is, a stationary process with independent and exponential interevent times, has zero memory and zero burstiness. Precipitation that is perceived as patchy tends to be characterized by larger memory values. Strongly positive or negative memory values can also be a sign of daily or seasonal cycles, depending on the time scale μ at which they are computed.
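The memory of Eq. (8) can be estimated by ranking the two lag-shifted subseries and computing their linear correlation. The sketch below follows one common convention for the Spearman coefficient (average ranks for ties, the two subseries ranked separately); the authors' exact implementation may differ in these details.

```python
def _ranks(x):
    """Average (1-based) ranks; tied values share the mean of their positions."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1                         # extend the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def memory(taus):
    """Lag-1 Spearman rank autocorrelation of the interamount times (Eq. 8)."""
    x, y = _ranks(taus[:-1]), _ranks(taus[1:])   # ranks of (tau_k, tau_{k+1})
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

A monotonically increasing series of interamount times yields a memory close to +1, while a strictly alternating short–long pattern yields a memory close to −1.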
Given that the total intermittency of a precipitation time series can be attributed to at least two different origins, it can be informative to represent each series in an M–B diagram (for a fixed value of μ). As pointed out by Goh and Barabási (2008), natural phenomena like earthquakes and weather patterns tend to be dispersed around the diagonal in this plane, suggesting a strong link between burstiness and memory. Results presented in section 3 corroborate this idea but also show that the interplay between B(μ) and M(μ) strongly depends on the time scale μ.
d. Some practical considerations
While the estimation of burstiness and memory from sample time series is relatively straightforward, there are some practical considerations that need to be discussed here. One of these issues concerns the handling of missing values. In this paper, all datasets were preselected to limit the total number of missing values as well as the maximum length of these gaps (see sections 3a,b for more details). The few missing values were replaced by zeros, which is a fairly reasonable assumption at high temporal resolutions where most measured precipitation amounts are zero anyway. A sensitivity analysis conducted in appendix B shows that this particular strategy for dealing with data gaps leads to maximum relative errors on the order of 3.7% for B and 7.2% for M (see Fig. B1 in appendix B). Because these are worst-case scenarios for up to 5% missing values, we can state with high confidence that a few missing values (i.e., <1%) will not radically alter the results.
A slightly more elaborate way of dealing with missing values consists of correcting the estimated interamount times depending on the number and order in which the gaps occurred. For example, if it takes n observations to exceed a fixed threshold but m of these observations were missing values, each missing value can be assigned the average value of the n − m valid observations and the interamount time can be recalculated on that basis. Because the order in which the missing values occurred plays an important role, results may vary from one case to another, even for identical pairs of n and m.
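The gap-filling step of this correction can be sketched as follows (hypothetical helper; None marks a missing observation). Each gap in the window is filled with the mean of the valid observations before the interamount time is recomputed:

```python
def fill_window_gaps(window):
    """Replace missing observations (None) in a window of n values,
    m of which are missing, by the average of the n - m valid ones."""
    valid = [v for v in window if v is not None]
    fill = sum(valid) / len(valid)       # mean of the valid observations
    return [fill if v is None else v for v in window]
```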
The second issue that needs to be addressed is the range over which B(μ) and M(μ) can be reliably estimated. Most precipitation time series have a fixed sampling resolution Δt and a minimum detectable precipitation amount y0. Obviously, the interamount aμ needs to be larger than y0. This is equivalent to saying that the average interamount time must satisfy

μ ≥ y0L/Y,   (9)
where L still denotes the length of the time series and Y still denotes the total precipitation amount. At the same time, we have to take into account the fact that interamount times can only be measured by steps of at least Δt. To avoid major biases related to time discretization, the mean interamount time should be significantly larger than Δt (e.g., at least 5–10 times larger than the minimum measurable interamount time Δt). This is particularly important for M, which is more sensitive to discretization effects than B.
On the other hand, the length of the time series strongly affects the maximum time scale μ for which reliable results can be obtained. Fair estimates of B(μ) and M(μ) require a sample size on the order of 100. This represents about 8 years of data for μ = 30 days and 100 years of data for μ = 1 year. In summary, the interval over which B(μ) and M(μ) can be reliably estimated is approximately given by

max{10Δt, y0L/Y} ≤ μ ≤ L/100,   (10)
which depends not only on the sensor characteristics but also on the total annual precipitation (where curly brackets indicate a set of values). As an example, note that for L = 10 years of data with Y/L = 1000 mm average annual precipitation, a temporal resolution of Δt = 1 h, and a minimum detectable precipitation amount of y0 = 0.1 mm, the range of reasonable values for μ is approximately between 10 h and 36 days. For daily data, the range is approximately between 10 days and 36 days.
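The admissible range of time scales can be computed directly from the record characteristics. In the sketch below (hypothetical function; the factor of 10 on Δt and the sample size of 100 are taken from the guidance above), the example numbers for hourly and daily data are reproduced:

```python
def mu_range(record_hours, total_mm, dt_hours, y0_mm):
    """Approximate interval of mean interamount times mu (in hours) over
    which B(mu) and M(mu) can be estimated reliably [cf. Eq. (10)]."""
    lower = max(10 * dt_hours,                    # avoid discretization biases
                y0_mm * record_hours / total_mm)  # a_mu must exceed y0
    upper = record_hours / 100                    # need ~100 interamount times
    return lower, upper

# 10 years of data, 1000 mm/yr, hourly sampling, 0.1-mm detection threshold
lo, hi = mu_range(10 * 8760.0, 10 * 1000.0, 1.0, 0.1)  # ~10 h to ~36.5 days
```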
In this section, we apply the new metrics defined in section 2 to various rain gauge datasets in the United States and parts of Europe. Most of the analyses are performed on high-resolution data (i.e., 5–15 years at hourly or subhourly resolutions), except for the last part, which focuses on long-term trends and is based on 60 years of data with a daily resolution. A short description of each dataset is provided below. For more details on these datasets, the reader is referred to appendix A.
a. High-resolution data
We consider a set of 325 high-resolution rain gauge time series in the United States, Switzerland, Germany, the United Kingdom, and the Netherlands. Each gauge is part of a bigger network of high-quality automatic weather stations operated by regional and national meteorological services. They are heated to prevent snow and ice buildup and are capable of measuring both solid and liquid precipitation. Each time series consists of accumulated precipitation amounts measured in millimeters, every 5–60 min with a minimum detectable amount of 0.1–0.2 mm (see appendix A for more details). The time series for the United States covers a 5-yr period between 2009 and 2013. The time series in Europe covers a 15-yr period between 1999 and 2013. The percentage of missing values for each station is less than 1%. For a summary of the most important information about this dataset, the reader is referred to Table 1. Figure 2 shows the location of each gauge in the contiguous United States and Fig. 3 shows the location of the gauges in Europe.
b. Low-resolution data
We consider a total of 552 daily rain gauge time series between 1954 and 2013 in the United States (321 gauges) and Europe (231 gauges). The data for the United States were extracted from the U.S. Historical Climatology Network (Hughes et al. 1992; Williams et al. 2006). The data for Europe were extracted from the European Climate Assessment & Dataset (Klein Tank et al. 2002). There are 66 gauges in Norway, 53 in Germany, 33 in Spain, 19 in France, 19 in Russia, 14 in Finland, 14 in Ukraine, 5 in Great Britain, 3 in Sweden, 2 in Italy, 2 in Latvia, and 1 in Portugal. Because this dataset is only used for preliminary analyses, slightly less strict selection criteria were applied. Specifically, each time series in the U.S. dataset has less than 5% missing values and no more than 31 missing values in a row. Each time series in the European dataset has less than 1% missing values and no more than 14 missing values in a row. This allows us to include more gauges in the analyses and have more representative results.
c. Burstiness and memory across time scales
Figure 4 shows the estimated B(μ) and M(μ) as a function of the time scale μ for two of the gauges in the high-resolution dataset: Hilo, Hawaii, and Darrington, Washington. The lines represent the 10%, 25%, 50%, 75%, and 90% quantiles of B(μ) and M(μ) obtained by considering all the gauges in the high-resolution dataset. With an average annual precipitation of 3253 mm during 2009–13, Hilo is one of the wettest places in the United States. Darrington is also a wet place, but its average annual rainfall amount (i.e., 2107 mm yr−1) is about 1.5 times smaller than in Hilo. To account for these differences, the interamounts (i.e., aμ) are 1.5 times smaller in Darrington than in Hilo (e.g., 5.8 vs 8.9 mm for μ = 24 h).
Figure 4 shows that B(μ) is consistently larger in Darrington than in Hilo across all time scales. This can be explained by the large seasonal variability of precipitation in Darrington, with July and August being very dry and November, December, and January being about 5–10 times wetter. These seasonal differences are responsible for a larger dispersion of interamount times that translates into more burstiness, regardless of the considered time scale μ. In contrast, the precipitation patterns in Hilo are much steadier and more predictable (i.e., low burstiness and large memory). Most of the intermittency can be attributed to memory effects, that is, the clustering of precipitation in time, caused by a strong diurnal cycle and relatively small seasonal variations.
Comparing with the other gauges in the high-resolution dataset, we can see that Darrington has a larger-than-average burstiness on the order of the 75% quantile while Hilo has a significantly lower-than-average burstiness (i.e., below the 10% quantile). At the same time, Hilo also has one of the most predictable patterns across all time scales, with memory values above the 90% quantile. The interamount times in Darrington are also relatively correlated, with memory values above the 75% quantile for time scales between 0 and 7 days and above the 90% quantile at larger time scales, which is characteristic of a strong seasonal cycle.
In practice, the range of time scales over which B(μ) and M(μ) can be computed is limited (see section 2d for more details). Nevertheless, it is also interesting to think about what happens to B(μ) and M(μ) at very small and large time scales. The first case, μ → 0, corresponds to waiting times between infinitesimally small rainfall amounts, that is, interdrop arrival times (or even fractions of drops). The second case, μ → ∞, corresponds to waiting times between infinitely large rainfall amounts. Assuming that interdrop arrival times are finite and positive, one can show that B(μ) admits the following limits:

lim μ→0 B(μ) = 1 and lim μ→∞ B(μ) = −1.   (11)
The latter is obtained by assuming that σμ is dominated by μ for large values of μ, that is, σμ/μ → 0 as μ → ∞.
The limit of M(μ) when μ tends to 0 is not well defined, but we can assume that the interamount times at large time scales are uncorrelated:

lim μ→∞ M(μ) = 0.   (12)
In other words, the memory of the process naturally decreases when moving from fine scales to coarser ones, that is, from individual drops to rain cells, up to storm systems, and finally, seasonal variations and decennial oscillations.
d. Scatterplot of burstiness and memory
In the following, we adopt the approach proposed by Goh and Barabási (2008) and represent each rain gauge in the M–B diagram (for a fixed value of μ). Such scatterplots are useful to study the interplay between burstiness and memory and identify regions with similar intermittency patterns.
Figure 5 shows B(μ) versus M(μ) for a mean interamount time of μ = 24 h. The daily time scale was chosen because it corresponds to the smallest possible time scale at which reasonable estimates of B and M can be obtained for all the gauges in the high-resolution dataset (including the ones with small annual rainfall totals). Note how each country or state covers a different region in the M–B plane. Except for a few places (Hilo; Stovepipe Wells, California; Yuma, Arizona; and Mercury, Nevada), most points are fairly well aligned along the diagonal (correlation coefficient of 0.71). California, Nevada, Arizona, and Texas occupy the upper-right corner of the plot with large values of memory and burstiness. Precipitation patterns in North Dakota and Minnesota have similarly large memory but much lower burstiness. Oregon and Washington have moderate memory but relatively large burstiness. Germany, the United Kingdom, and the Netherlands have low burstiness and low-to-moderate memory. Although Switzerland is a relatively small country, it covers a remarkably large region in the M–B plane. This shows that the complex topography and mountainous terrain not only affect the precipitation totals but also have a strong influence on the intermittency.
Figure 6 shows B(μ) versus M(μ) for a slightly larger time scale of μ = 168 h (1 week). Overall, the clustering is similar to Fig. 5 but the scatter is larger (correlation coefficient of 0.58). The median burstiness value is 0.03, suggesting that the interamount times approximately obey a Tweedie distribution with the mean equal to the standard deviation (Tweedie 1984). Most interamount times are still positively correlated (the median of M is 0.17), which is a good indicator of daily and seasonal variability. This residual memory eventually vanishes at larger time scales, but the data record is not long enough to determine the exact time scale at which this happens for each gauge. However, assuming an exponential decrease, we can estimate that most of the memory will have disappeared at time scales between 1 month and 1 year.
Figure 7 shows a map of the burstiness values for μ = 24 h over the contiguous United States. It confirms what we have seen before, namely, that the southwest of the United States (e.g., California, Nevada, Arizona, New Mexico, and Texas) has the burstiest precipitation patterns while the north and northeast of the United States are characterized by steadier patterns comparable in magnitude to those found in Germany, the United Kingdom, and the Netherlands (see Fig. 8). Note that the memory map (not shown) at this time scale looks similar. For a list of the top 10 burstiness and memory values, the reader is referred to Tables 2 and 3.
e. Sensitivity of B(μ) and M(μ) to sampling resolution
Since the results presented above are based on gauges with different sampling resolutions (i.e., 5–60 min), it is worth verifying that the resolution of the data does not affect the estimates of B(μ) and M(μ) too much. To investigate this issue, a sensitivity analysis was performed using the U.S. rain gauge data, which has the highest sampling resolution (i.e., 5 min) among all available datasets. The sensitivity of B(μ) and M(μ) with respect to the sampling resolution was investigated by resampling each 5-min time series to a 60-min resolution, computing the corresponding burstiness and memory values, and comparing the results to those for the 5-min data.
Figure 9 shows the burstiness and memory values at 5- and 60-min temporal resolutions and the daily time scale (μ = 24 h). The absolute bias in B(μ) when going from 5- to 60-min resolution is very small (i.e., −0.0002 or −0.03%). Indeed, the burstiness is mostly determined by the large interamount times for which the sampling resolution does not play a big role. The memory, on the other hand, is more sensitive to the sampling resolution. It has an average estimation bias of −0.06 (i.e., −11.95%) when going from 5 to 60 min. The bias is primarily due to the left censoring of interamount times, that is, the fact that interamount times between 5 and 60 min become indistinguishable when viewed at an hourly resolution. This makes it harder to estimate the lag-1 autocorrelation and causes the memory to decrease. It also explains some of the small shifts in memory values between the different datasets in Fig. 5. Fortunately, the amount of left censoring, and thus the bias, rapidly vanishes at larger time scales. For example, the bias at the weekly time scale is only −0.3%. In conclusion, we can say that the sampling resolution only has a limited effect on M(μ) on the order of 10% (mostly for small time scales) and almost no effect on B(μ). Therefore, as long as μ stays within the interval defined in Eq. (10), the sensitivity of B(μ) and M(μ) with respect to the sampling resolution will remain fairly low.
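The resampling step itself is a simple block aggregation; the sketch below (our own helper, not the authors' code) sums twelve 5-min amounts into one 60-min amount before B and M are recomputed:

```python
def aggregate(amounts, factor):
    """Sum consecutive blocks of `factor` samples
    (e.g., factor=12 turns 5-min amounts into 60-min amounts).
    Any trailing partial block is discarded."""
    n = len(amounts) // factor
    return [sum(amounts[i * factor:(i + 1) * factor]) for i in range(n)]
```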
f. Seasonal variability
The goal of this section is to analyze the seasonal variability of B(μ) and M(μ). To simplify the analysis, only two seasons are considered: summer (i.e., July–August) and winter (i.e., January–February). Because precipitation totals vary wildly from one season to another, two different interamounts (i.e., aμ), one for summer and one for winter, are considered for each gauge. This puts more emphasis on intermittency, that is, the way the precipitation is distributed over time, rather than variations in total rainfall amounts from one season to another.
Figure 10 shows the seasonal differences in burstiness and memory for μ = 24 h. Except for Switzerland, most gauges in Europe recorded slightly burstier patterns in summer than in winter. The memory values for Switzerland also appear to be larger during winter than summer. The magnitude of the seasonal differences in burstiness and memory in Europe is relatively small compared with certain locations in the United States (e.g., Coos Bay, Corvallis, Stovepipe Wells, the Everglades, Harrison, and Barrow). This suggests that most of the seasonal variability in intermittency in Europe can be explained by simple scaling of precipitation amounts rather than changes in occurrences. More importantly, there appear to be different ways intermittency can change over time, that is, either in burstiness, memory, or both. While this is well known from dry–wet spell analyses, it is often neglected when simulating or downscaling precipitation time series. This is a serious issue given that small-scale intermittency has been shown to be very relevant for urban water management and flash flood prediction (Ormsbee 1989; Gires et al. 2012a; Veneziano and Lepore 2012).
g. Intermittency versus total rainfall amount
Figure 11 shows B(μ) and M(μ) at the daily time scale (μ = 24 h) with respect to the average annual precipitation amount for each of the gauges in the high-resolution dataset. We can see a clear increase in burstiness with decreasing precipitation amounts (overall correlation of −0.41), especially for locations with less than 500 mm yr−1 (correlation of −0.69). By contrast, the memory at this time scale only weakly depends on the rainfall amount (−0.23 overall and 0.10 for the gauges with <500 mm yr−1). The relatively large scatter in memory and burstiness for a given rainfall amount is also a good reminder of the fact that there are many different ways to distribute a given amount of water over time. Some of these patterns are more dangerous than others as they increase the chances of droughts and floods. The burstiness and memory provide additional insight into the way precipitation is distributed over time (independently of the amount) and could therefore be used to better identify hazardous patterns. However, this is beyond the scope of this paper and will have to be addressed in more detail in the future.
h. Trends for 1954–2013
Climate change is likely to modify local precipitation patterns, both in intensity and frequency. Monitoring and understanding these changes is of primary importance for society and local authorities. While trends in precipitation intensities and amounts have received a lot of attention, intermittency remains poorly studied. Some models and numerical simulations suggest a general shift toward less frequent but more intense rainfall and greater contrast in intermittency between wet and dry regions in the future (e.g., Trenberth et al. 2003; Royer et al. 2008; Harding and Snyder 2014). This is partly supported by observational evidence of changing dry–wet spell durations (e.g., Schmidli and Frei 2005; Groisman and Knight 2008; Cindrić et al. 2010; Zolina et al. 2013; Rajah et al. 2014). In general, however, there is a lack of data and quantitative methods to objectively assess changes in intermittency, especially at hydrologically relevant scales.
In the following, we investigate recent trends in intermittency by analyzing the burstiness and memory of 552 daily rain gauges (321 in the United States and 231 in Europe) over a 60-yr time period from 1954 to 2013. Each gauge provides the daily precipitation amount with a resolution of 0.1–0.2 mm. For more information about this dataset, the reader is referred to section 3b. Because of the daily resolution of the data, the burstiness and memory were computed for a mean interamount time of μ = 240 h. This corresponds to normalized interamounts of roughly 1.4–82.1 mm (i.e., 50–3000 mm yr−1). To quantify changes in mean intermittency during the last 60 years, we decided to compute B(μ) and M(μ) for two successive 30-yr time intervals (1954–83 and 1984–2013). The values for 1984–2013 were then compared with those for 1954–83 to identify possible increases or decreases over time.
Figure 12 shows the change in B(μ) and M(μ) between 1954–83 and 1984–2013 for the summer and winter seasons. We can see that the south of Europe, that is, Spain, Portugal, and Italy, experienced a slight shift toward higher values of burstiness (0.04 on average in winter and 0.03 in summer). Norway, Finland, and Sweden, on the other hand, seem to be headed in the opposite direction (−0.03 on average in winter and −0.01 in summer). However, none of these trends are significant at the 5% level. The three largest increases in burstiness were observed at Svalbard Airport, Norway (0.148); Miles City Airport, Montana (0.133); and Bologna, Italy (0.114). The three largest decreases in burstiness were recorded at Lysebotn, Norway (−0.125); Poltava, Ukraine (−0.123); and Mount Mary College, Milwaukee, Wisconsin (−0.118).
Overall, neither the burstiness nor the memory seems to have changed significantly between 1954–83 and 1984–2013. Local changes can be observed, but there are not enough data to draw robust conclusions at the global scale yet. More detailed analyses will have to be performed to better highlight and understand these trends.
Precipitation is a highly variable and complex process in space and time. An important but often neglected aspect of this variability is intermittency. In this paper, a new method for quantifying intermittency based on the burstiness and memory of interamount times has been presented. An important aspect of the proposed method is that it emphasizes the way precipitation is spread over time rather than focusing on total amounts. This leads to a more meaningful and fairer comparison of intermittency across a wide range of climatic regimes. Interamount times also offer several practical advantages over more traditional methods such as dry–wet analyses and discrete Markov chains. One of these advantages is that interamount times, unlike rain rates or amounts, are always positive. This makes it unnecessary to separate dry periods from wet ones and removes the complication of having to deal with mixed-type distributions. Measures based on interamount times are also less sensitive to sampling resolutions and do not depend on arbitrary dry–wet detection thresholds.
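The contrast between mixed-type rain amounts and strictly positive interamount times can be illustrated with a toy example. All numbers below (the 20% wet probability, the gamma parameters, and the 50 mm target) are arbitrary choices for illustration, not values taken from the datasets analyzed in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy hourly record: roughly 80% dry hours, gamma-distributed wet amounts (mm)
wet = rng.random(10_000) < 0.2
amounts = np.where(wet, rng.gamma(0.7, 2.0, 10_000), 0.0)

# rain amounts follow a mixed-type distribution: a point mass at zero
# plus a continuous part, so dry periods must be modeled separately
dry_fraction = (amounts == 0.0).mean()  # close to 0.8 in this toy example

# interamount times (hours needed to accumulate 50 mm) are strictly
# positive by construction: no dry-wet threshold or mixed-type model needed
cum = np.cumsum(amounts)
levels = np.arange(50.0, cum[-1], 50.0)
tau = np.diff(np.searchsorted(cum, levels)).astype(float)
```

Here the crossing times are simply rounded to the sampling resolution via `searchsorted`; finer estimates could be obtained by interpolating within each hour.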
The analyses in section 3 showed that the burstiness and memory are useful metrics for understanding the nature of intermittency at various scales. They can be used to study differences in precipitation patterns independently of amounts and to investigate possible changes in rainfall occurrences due to global warming. The scatterplots of B(μ) and M(μ) for a large number of rain gauges in Europe and the United States showed that intermittency patterns vary widely from one place to another and from one season to another. Most importantly, intensities and occurrences do not necessarily vary in the same way. This is often not sufficiently considered in rainfall downscaling techniques and can lead to severe misrepresentations of intermittency at small scales.
The analysis of 60 years of daily precipitation data showed no evidence of a general increase or decrease in intermittency. Some dry places in the south of Europe might be headed toward more sporadic rainfall patterns in the future. At the same time, colder regions at higher latitudes might experience more frequent and regular rain. Hopefully, as time goes by, more high-resolution data will be available to better quantify these recent trends.
The main limitation of the proposed approach is that the burstiness and memory only offer a relatively simple, first-order description of intermittency. Moreover, the time scales over which relevant results can be obtained are limited by the sampling resolution and the average annual precipitation amount. In particular, dry regions are more difficult to analyze than wet ones, especially at small time scales. Although daily data can be used to analyze long-term trends, higher resolutions on the order of 5–15 min are required to retrieve useful information at hydrologically relevant scales.
Finally, while this paper exclusively focused on temporal intermittency, it is worth pointing out that a similar methodology could be used to study spatial intermittency over a given area. Future work will mainly focus on applying the same metrics to gridded precipitation datasets derived from weather radars, satellites, or numerical weather prediction models. We will also investigate how the proposed tools can be used to improve the representation of small-scale intermittency in stochastic rainfall simulators and assess how accurately disaggregation schemes reproduce intermittency patterns across different scales.
This work is a contribution to projects STORM and STORMS, funded by Grants P2ELP2_148878 and P300P2_158499 of the Swiss National Science Foundation. The authors thank the Swiss Federal Office of Meteorology and Climatology (MeteoSwiss), the Met Office of the United Kingdom, the Deutscher Wetterdienst (DWD), the Royal Netherlands Meteorological Institute (KNMI), the European Climate Assessment & Dataset (ECA&D) program, the National Oceanic and Atmospheric Administration (NOAA), and all the people who helped collect and provide the high-quality datasets used in this study. All data are available free of charge for research purposes. For more information, please contact the different agencies mentioned above.
a. High-resolution data for the United States
The data for the United States consist of 129 stations, including four in Alaska (Barrow, Fairbanks, Sitka, and St. Paul) and two in Hawaii (Hilo and Mauna Loa). They are taken from the U.S. Climate Reference Network (Diamond et al. 2013) and cover a 5-yr period between 2009 and 2013 with a temporal resolution of 5 min and a minimum recorded precipitation amount of 0.2 mm. The instrument used to measure the precipitation amounts is a heated Geonor T-200B precipitation weighing gauge. There are 98 gauges below 1000 m, 24 between 1000 and 2000 m, and 7 above 2000 m MSL. The lowest gauge is in the Everglades, Florida (1 m), and the highest is at Mauna Loa, Hawaii (3407 m).
b. High-resolution data for Switzerland
The data for Switzerland (65 stations) are taken from the automatic meteorological measurement network of MeteoSwiss (Suter et al. 2006). They cover a 15-yr period between 1999 and 2013 with a temporal resolution of 10 min and a minimum recorded precipitation amount of 0.1 mm. The instruments used to measure the precipitation amounts are Lambrecht (1518 H3 and 15188) tipping-bucket rain gauges and Pluvio2 weighing gauges produced by OTT Hydromet. There are 40 gauges below 1000 m, 21 between 1000 and 2000 m, and 4 above 2000 m MSL. The lowest station (Magadino–Cadenazzo) is at 203 m and the highest (Piz Corvatsch) is at 3305 m MSL.
c. High-resolution data for Germany
The data for Germany (61 stations) are taken from the German Meteorological Service [Deutscher Wetterdienst (DWD)], Climate Data Center (CDC). They cover a 15-yr period between 1999 and 2013 with a temporal resolution of 60 min and a minimum recorded precipitation amount of 0.1 mm. The instrument used to measure the precipitation amounts is a Pluvio2 weighing gauge produced by OTT Hydromet. The lowest station (Emden) is at 0 m and the highest (Hohenpeissenberg) is at 977 m MSL.
d. High-resolution data for the United Kingdom
The data for the United Kingdom (42 stations) are taken from the Met Office Integrated Data Archive System (MIDAS). They cover a 15-yr period between 1999 and 2013 with a temporal resolution of 60 min and a minimum recorded precipitation amount of 0.1 mm. The instruments used to measure the precipitation amounts vary from one location to another. Most of them are heated “Mk” tipping-bucket rain gauges. The lowest station (Kinloss, Scotland) is at 5 m and the highest (Tulloch Bridge, Scotland) is at 237 m MSL.
e. High-resolution data for the Netherlands
The data for the Netherlands (28 stations) are taken from the Royal Netherlands Meteorological Institute (KNMI). They cover a 15-yr period between 1999 and 2013 with a temporal resolution of 60 min and a minimum recorded precipitation amount of 0.1 mm. The instrument used to measure the precipitation amounts is a Pluvio2 weighing gauge produced by OTT Hydromet. The lowest station (Rotterdam) is at −4.8 m and the highest (Maastricht) is at 114 m MSL.
Sensitivity of B(μ) and M(μ) to missing values
To quantify the effect of missing values on B(μ) and M(μ), we consider a 15-yr time series of hourly precipitation amounts in Valkenburg, Netherlands (see appendix A). The time series consists of 131 495 hourly precipitation accumulations, 26 166 of which are positive and 105 329 of which are equal to zero. The advantage of this station is that it has no missing values over the entire 15-yr period.
The effect of data gaps is assessed by randomly assigning missing values to a fixed number of measurements in the original time series (e.g., 0.1%, 1%, or 5%). The gaps are then replaced by zeros, which is the general strategy in this paper for dealing with missing values, and the sample estimates of burstiness and memory are compared with their true values derived using the original time series. For a more representative error distribution, the approach is repeated a million times for a fixed percentage of missing values. Because estimation errors depend both on the number of missing values and on their temporal ordering, the sensitivity is estimated by keeping the highest relative errors among all simulated scenarios.
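The resampling experiment described above can be sketched as follows. The statistic passed as `stat` would be B(24) or M(24) evaluated on the gapped hourly series; the function name, seed, and default simulation count are placeholders (the analysis in the paper uses a million repetitions per missing-value percentage):

```python
import numpy as np

def max_relative_error(series, stat, frac_missing, n_sim=1000, seed=1):
    """Worst-case relative error of `stat` when a fraction `frac_missing`
    of the record is randomly declared missing and replaced by zeros,
    the gap-filling strategy used in the paper."""
    rng = np.random.default_rng(seed)
    true_val = stat(series)
    n_gaps = int(round(frac_missing * series.size))
    worst = 0.0
    for _ in range(n_sim):
        gapped = series.copy()
        idx = rng.choice(series.size, size=n_gaps, replace=False)
        gapped[idx] = 0.0  # gaps treated as dry hours
        worst = max(worst, abs(stat(gapped) - true_val) / abs(true_val))
    return worst
```

Keeping the maximum over all simulations, rather than the mean, reflects the conservative choice made here: the error depends on which values go missing, so the worst case bounds the effect of any gap configuration of the given size.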
Figure B1 shows the maximum relative error of B(μ) and M(μ) at the daily time scale (i.e., μ = 24 h and normalized interamount of 2.542 mm). The burstiness and memory values of the original time series without gaps are B(24) = 0.396 and M(24) = 0.353. The analyses show that the maximum relative errors affecting B(24) are between 2.8% and 3.7% (for up to 5% missing values). The memory seems to be slightly more sensitive with maximum relative errors between 4.4% and 7.2%. Further analyses (not shown here) show that, on average, B(μ) is overestimated and M(μ) is underestimated. The positive bias of B(μ) can be explained by the addition of artificial zeros into the time series, which results in slightly longer dry periods and more dispersed interamount times. The negative bias of M(μ) can be explained by the introduction of artificial zeros into an otherwise correlated time series. Additional analyses also show that the sensitivity of B(μ) and M(μ) to missing values decreases with time scale μ. Therefore, only the results for μ = 24 h are shown.