## 1. Introduction

Drop size distributions (DSDs) are crucial in many meteorological applications, for example, radar meteorology or the parameterization of cloud physical processes, and in applied sciences, for example, to describe how rain erodes soil or how rain absorbs electromagnetic waves. It is hard to obtain information about the constitution of hydrometeors aloft, because the location of interest is difficult to reach. What we can easily get are measurements of raindrops as they reach the ground. In this sense, falling raindrops are sedimenting information. Their DSDs in particular may be treated as the output of a complicated process chain, much like the extraterrestrial radiation patterns that physicists evaluate to validate their theories about “what the world contains in its innermost heart and finer veins” (Goethe 1808). We can use the DSDs to test our understanding of the precipitation process.

Our situation is significantly worse than that of the physicists mentioned above: we lack a priori knowledge of the analytical function and of the conditions that determine the DSD. Even our measurements are inexact: very small drops (typically those with diameter *D* < 0.3 mm) are not detected because of their insufficient signal-to-noise ratio (the “truncation effect”), and large drops (*D* > ca. 3 mm) are sampled with poor statistics because they appear rather rarely (the “sampling problem”).

An analytic description of DSDs will make use of a set of parameters that depend on the respective rain event. Such a set of parameters should be

- as small as possible (there should be no more parameters than the observed DSDs have degrees of freedom),
- linearly independent,
- sufficient to describe all of the observed DSDs, and
- physically meaningful.

However, even without knowing the analytical form of the distribution, for several applications we are interested in the moments of the DSD. Cloud physicists use the number density (zeroth moment) and the liquid water content (proportional to the third moment) to describe DSDs and the microphysics (Seifert and Beheng 2006). The liquid water content determines the absorption of radar and telecommunication radiation. Radar measures reflectivity (sixth moment, in the Rayleigh approximation), and lidar is sensitive to the second moment of the DSD.

The moments themselves are not a suitable set of parameters, because they correlate strongly. Nevertheless, they are often used to determine the DSD parameters. In addition to being physically meaningful, they reduce noise because they are calculated by integrating the DSD. We have a certain degree of latitude when choosing the moments for the calculations, and we tend to use the moments that are most relevant for the current application. Because this is not necessarily the best choice for achieving accuracy, in this paper we study the following: How large are the errors (biases and RMS errors) in the derived DSD parameters if we use a certain combination of moments? Which combination produces the smallest errors? Do the results depend on the DSD itself? Also, how does the relative importance of the truncation effect and the sampling problem depend on the rain event that we consider?

Throughout this paper, we presume that DSDs are gamma shaped. This is commonly assumed (e.g., Willis 1984; Testud et al. 2001; Illingworth and Blackman 2002; Smith and Kliche 2005; Kliche et al. 2008; Brawn and Upton 2008; Cao and Zhang 2009; Smith et al. 2009; Mallet and Barthes 2009, hereafter MB09), but is not a proven fact. There are other representations proposed (e.g., Feingold and Levin 1986; Torres et al. 1994), and some approaches even avoid assuming an analytical form of the DSDs as far as possible [e.g., by normalizing parameters of the distribution function by its appropriate moments (Illingworth and Blackman 2002; Willis 1984; Testud et al. 2001; Handwerker et al. 2004; Lee et al. 2004)]. Here, we do not discuss a physical interpretation of the calculated parameters, and therefore we do not need to normalize the distribution function.

It should be noted that we only use the gamma distribution as an example within this study. We had to choose an analytical description of the DSDs, and thus we took a frequently used form. Several findings from our study may be transferred directly to a different analytical shape of the DSDs.

To be able to compare derived parameters to the “truth” we need independent knowledge about the raindrop population. This is achieved through numerical experiments. We construct a random population of raindrops that obeys a predefined gamma-shaped DSD. From this population a sample is taken by emulating a disdrometer measurement; that is, we consider how each individual drop sediments and impacts the measuring surface, while at the same time ruling out collisions and the break up of drops.

The properties of our “numerical measuring device” are comparable to those of a Joss–Waldvogel disdrometer (Joss and Waldvogel 1967). We categorize drop counts in 20 diameter classes with diameters ranging between 0.3 and 5.5 mm; smaller and larger drops are neglected. It should be noted that other types of disdrometers, for example, those based on an optical principle, suffer from the same limitations. Therefore, the use of this study is not limited to a specific measuring device.

Impact disdrometers are not the only way to measure DSDs; there are different methods, for example, photographing the drops in a certain volume. In the latter case the measuring volume is independent of the drop size, whereas for impact disdrometers the measuring volume increases with drop size as the fall velocity increases with drop size. This could alleviate the sampling problem, because the rare large drops are better represented in the measurement. To quantify the differences between volume DSD measurements and impact flux measurements we also simulated volume measurements for a subset of cases.

We can show the alleviation of the sampling problem for large-order moments (and, in contrast, a little intensification of uncertainties in small-order moments), but in anticipating our results we can state that even for impact disdrometer measurements the sampling problem produces larger errors than the truncation problem.
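This velocity weighting can be illustrated numerically: in a flux (impact) measurement, each drop's chance of reaching the instrument within a given time is proportional to its fall velocity, so the sampled spectrum is *n*(*D*)*υ*(*D*) rather than *n*(*D*). A Python sketch of our own (the standard-case gamma parameters and the Atlas et al. (1973) fall velocity relation are used; the diameter grid and variable names are our choices):

```python
import numpy as np

# Standard-case gamma DSD n(D) ~ D^mu * exp(-lam * D), D in mm
mu, lam = 2.0, 6.0
d = np.linspace(0.3, 5.5, 1000)          # instrument measuring range (mm)
n = d**mu * np.exp(-lam * d)             # relative volume DSD
v = 9.65 - 10.3 * np.exp(-0.6 * d)       # Atlas et al. (1973) fall velocity (m/s)
flux = n * v                             # relative DSD seen by an impact disdrometer

# Flux sampling weights each drop by its fall velocity, so the
# sampled spectrum shifts toward the faster, larger drops:
mean_d_volume = (d * n).sum() / n.sum()
mean_d_flux = (d * flux).sum() / flux.sum()
```

Because *υ*(*D*) increases with *D*, the flux-weighted mean diameter exceeds the volume-weighted one, which is exactly the better representation of rare large drops noted above.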

Earlier studies (e.g., Smith and Kliche 2005) investigated the gamma parameter retrieval based on the moments method (MM) and reported that MM leads to biased results. The biases (and random errors) increase with an increasing order of the applied moments. The authors recommend using moments 2, 3, and 4 (M234 hereafter) to minimize the errors. They avoid smaller moments because of the measuring limitations of disdrometers, especially at the low drop size end. Their simulated DSDs are very finely resolved, with 150 classes between zero diameter and 3 times the reference diameter.

In Kliche et al. (2008) similar DSDs are used to compare results from M234 with the maximum likelihood method (ML) and the L-moments method (LM; Hosking 1990). If the full-scale DSD is available, ML and LM perform much better than M234, especially in the case of small sample sizes. However, when DSDs are truncated at the small-drop end, results from ML and LM are strongly affected.

The calculations of Brawn and Upton (2008) are based on simulated impact disdrometer data. They assume that the ratio of the two (generalized) moments *M _{k}* and *M*_{k−1} (where *k* and *k* − 1 are the orders of the moments) is well approximated by the ratio of the sample estimates of these moments, even for truncated DSDs. They propose a new retrieval method using this ratio and show that it outperforms M234; moments method 2, 4, and 6 (M246); moments method 3, 4, and 6 (M346); and moments method 4, 5, and 6 (M456), even if drops smaller than 0.6 mm are ignored.

Measurements of small drops are more affected by (relative) measuring errors than are measurements of large drops. Cao and Zhang (2009) increase the (statistical) error of small drop counts to simulate this effect. Comparing moments methods 0, 1, and 2 (M012), M234, M246, M346, and M456 with ML and LM, they find that M234 has the smallest estimated bias of the shape parameter. ML and LM have larger errors. The authors conclude that ML and LM estimators depend heavily on the accuracy of the measurement of the number of small drops.

The dependency of errors on the population size and on the shape parameter is discussed by Smith et al. (2009). Using M234, M246, and M346 based on DSDs that are only truncated at the upper drop size end, they find that the errors increase with a decreasing shape parameter (and with decreasing sample size). A small shape parameter corresponds to a larger number of small and large drops, whereas a large shape parameter describes a narrow DSD. In special cases the results might even be beyond the range of possible physical values.

None of the above-mentioned retrieval algorithms takes the truncation of real DSD measurements explicitly into account. Even when the DSDs, which are fed into the analyses, are truncated, the algorithms are formulated for complete DSD measurements. The authors either hope or assume that errors resulting from undetected drops are negligible. If necessary, they avoid the use of low-order moments.

An exception to this strategy is presented by MB09. They compare M234 to ML and the least squares fit (LS), based on (synthetic) impact disdrometer measurements. In contrast to the present study they additionally consider measurement uncertainties and vary the size of the instruments’ active measuring area. They find that the ML performs best in determining the intercept parameter of the distribution, and it provides the smallest standard deviation in determining the shape parameter. The smallest bias for the shape parameter is achieved by M234. The results of MB09 may be compared nearly directly to our results. This comparison classifies ML and the LS method relative to the best moments method applied in this study.

Like MB09, we reproduce the truncation of DSDs in our retrieval algorithm. We can show that this reduces the errors in the DSD parameters significantly. Neglecting the truncation leads to a (negative) bias in all of the derived moments. Taking the truncation into account removes this bias, leaving only an increased random error (compared to a perfect measurement of all drops), because the contribution of the small drops is estimated rather than measured.

We compare, without preselection, all possible combinations of three moments for the moments method. The results show that the increased noise in the moments mentioned above is not very significant. Using the lowest-order moments often leads to good, or even the best possible, results. We can show that M012 performs comparably to ML as presented by MB09, if truncation is accounted for in the retrieval.

One could argue that inadequate instrument responses to very small (though measured) drops can pose difficulties (e.g., Smith et al. 2009). Here we extend our argument: the retrieval can probably take these difficulties into account and overcome them. The difficulties are a motivation to improve the retrieval, not an exclusion criterion for small-order moments.

The next section lists the assumptions that we made as a basis for our investigations. The list is followed by a mathematical description of the DSDs and their measurement. We describe our method of simulating the measurements in section 4. This procedure ends in random samples taken from well-described populations. The further investigations are based on these samples. We evaluate them just as we would evaluate disdrometer measurements: We choose a triplet of moments, calculate them, and derive the parameters of the gamma distribution. Here, we perform these calculations with each possible triplet of moments. This method is described in section 5.

In section 6, we quantify the errors that occur in the determination of the gamma parameters and discuss which combination of moments minimizes these errors. Finally, we give a summary of our investigations.

## 2. Assumptions

Our experiment emulates real measurements. Nevertheless, several uncertainties inherent to reality are not reproduced in this study, partly intentionally and partly out of necessity. The idealizations we are aware of are as follows:

- DSDs are gamma shaped. This is a strong prerequisite for our study.
- Terminal fall velocity is given by Atlas et al. (1973). Fall velocities depend solely on the drop diameter. Neither a vertical wind component, break up, nor drop collision immediately above the instrument is taken into account.
- The instrument works perfectly. The numerical instrument works without any noise (each particle is counted in the right size class), it has no dead time (each particle is individually counted), and it has no problems resulting from particles hitting only the edge of the active area. The only restriction of the instrument is its limited measuring range.
- There is no wind and no drop sorting. There is no impact of any airflow on the drops, neither during sedimentation (see above) nor in the vicinity of the measuring device.
- The rain events are homogeneous. Our precipitation is stable (i.e., homogeneous rain, whose statistical properties do not vary with time).

These idealizations are needed to limit the effort. Real precipitation and real instruments introduce their own additional perturbations, which have to be ruled out in this study.

## 3. Analytical description of the measurement

The DSD *n*(*D*) is assumed to be described by a gamma distribution,

$$n(D) = N_0\,D^{\mu}\,e^{-\lambda D},\tag{1}$$

determined by intercept parameter *N*_{0}, shape parameter *μ*, and slope parameter *λ*. It describes the number of drops within a unit volume and a certain diameter range.

The spectral number flux *c*(*D*) = *n*(*D*)*υ*(*D*) is coupled to the DSD by the fall velocity *υ*(*D*) of the drops. We assume *υ*(*D*) is given by the relation from Atlas et al. (1973),

$$\upsilon(D) = \upsilon_{\max} - \upsilon_1\,e^{-\alpha D},\tag{2}$$

with *υ*_{max} = 9.65 m s^{−1}, *υ*_{1} = 10.3 m s^{−1}, and *α* = 0.6 mm^{−1}.

In contrast to the continuous flux *c*(*D*), disdrometers measure discrete values. The unitless number of drops *C*_{j} counted in the *j*th drop size class [i.e., between diameters *D*_{j} − (Δ*D*)_{j}/2 and *D*_{j} + (Δ*D*)_{j}/2] is proportional to the spectral number flux, the active area *A* of the instrument, the integration time interval *T*, and the width of the drop size class (Δ*D*)_{j}, and is given by

$$C_j = A\,T \int_{D_j-(\Delta D)_j/2}^{D_j+(\Delta D)_j/2} n(D)\,\upsilon(D)\,dD \approx A\,T\,n(D_j)\,\upsilon(D_j)\,(\Delta D)_j.\tag{3}$$

Conversely, the DSD is estimated from the drop counts by approximating *n*(*D*) and *υ*(*D*) to be constant within each drop size class as

$$n(D_j) = \frac{C_j}{A\,T\,\upsilon(D_j)\,(\Delta D)_j}.\tag{4}$$
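The forward relation from a prescribed gamma DSD to expected class counts [Eqs. (1)–(3)] can be sketched in a few lines of Python (our own illustration; the function names and the unit bookkeeping for *N*_{0} are assumptions):

```python
import numpy as np

# Terminal fall velocity after Atlas et al. (1973) [Eq. (2)]; D in mm, v in m/s
V_MAX, V_1, ALPHA = 9.65, 10.3, 0.6

def fall_velocity(d_mm):
    return V_MAX - V_1 * np.exp(-ALPHA * np.asarray(d_mm, dtype=float))

def expected_counts(n0, mu, lam, edges_mm, area_m2, time_s):
    """Expected drop counts per size class, C_j = A T n(D_j) v(D_j) (Delta D)_j
    [Eq. (3)], with n(D) and v(D) taken as constant within each class.
    n0 is assumed in units of m^-3 mm^-(1+mu), so counts come out unitless."""
    edges = np.asarray(edges_mm, dtype=float)
    centers = 0.5 * (edges[:-1] + edges[1:])       # class centers D_j
    widths = np.diff(edges)                        # class widths (Delta D)_j
    n = n0 * centers**mu * np.exp(-lam * centers)  # gamma DSD [Eq. (1)]
    return area_m2 * time_s * n * fall_velocity(centers) * widths
```

A call with, say, a 50 cm² active area (0.005 m²) and a 60 s integration returns one expected count per size class; comparing these expectations with the randomized counts of section 4 is exactly the sampling noise studied below.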

## 4. Numerical simulation of disdrometer measurements

This section describes the generation of the artificial disdrometer measurements. The idea is to have virtual precipitation events whose DSDs are prescribed and thus known. Each individual drop gets a random diameter and a random height above the measuring instrument. The size distribution is gamma shaped and the heights are equally distributed.

The virtual disdrometer measurement generates a sample from the raindrop population. The size distribution of the raindrops in the sample does not necessarily correspond to the prescribed gamma shape. As for real disdrometer measurements the representativity of the sample for the population is limited.

In this way we produce 1000 independent, randomized samples for each prescribed drop size distribution. From each of the 1000 samples the parameters of the prescribed gamma distribution of the population will be estimated afterward using different combinations of moments. A comparison of the derived parameters with the prescribed ones allows for the appraisal of the quality that is achieved with each combination of moments.

First, we have to define which parameters shall be prescribed for the numerical experiment. The gamma distribution is defined by three parameters [cf. Eq. (1): *N*_{0}, *μ*, and *λ*]. We assume that these parameters are not the best choice for our simulations for the following two reasons: (i) They are (partly) not very intuitive. The shape and slope parameters are correlated, and together they describe the mean drop diameter of the distribution. It seems much more straightforward to use a reference drop diameter directly. (ii) The uncertainties are not strongly coupled to these parameters. The amount of independent knowledge on the population is proportional to the size of the sample, so the errors should decrease with increasing sample size. Furthermore, the size of the sample is easily calculated from the measurement. These considerations lead us to use (i) the shape parameter *μ*, (ii) the reference diameter *D _{m}*, and (iii) the sample size 2^{η} as independent prescribed parameters. The reference diameter is related to the gamma parameters by

$$D_m = \frac{\mu+4}{\lambda}.\tag{5}$$

We use *η* as a measure of the sample size, so that

$$2^{\eta} = \sum_{j=1}^{K} C_j$$

(cf. Eq. 3) is the total number of drops (*K* is the number of drop size classes). If measurement time *T* and active area *A* of the instrument are given, *η* determines the last free parameter of the precipitation (*N*_{0}). Nevertheless, *η* directly describes the sample without assumptions regarding either the instrument or the temporal resolution.

The reference diameter *D _{m}* is varied between 0.3 and 2.5 mm in 0.1-mm steps (23 values), the shape parameter *μ* is varied between −0.75 and 30.00 in steps of 0.25 (124 values), and the sample size parameter *η* is varied between 2 and 12 in steps of 1 (11 values); thus, the smallest samples contain 4 drops, and the largest contain 4096 drops. These parameters cover a range that is wider than that of real measurements. All together we have 23 × 124 × 11 different parameter combinations, and for each combination 1000 samples were calculated by the procedure described as follows.

The samples are generated in three steps, which are performed for each prescribed tuple of *D _{m}*, *μ*, and *η*:

- First, a random population of drops obeying the prescribed gamma distribution is generated; that is, we generate random numbers that are interpreted as drop diameters. The number of generated drops is initially 10 times 2^{η}. In step 3 below, we will decide whether the size of the population was large enough; otherwise, the procedure is repeated with a doubled number of drops in the population. Drops having a diameter outside the measuring range (*D*_{min} … *D*_{max}) are then removed from the population.
- Second, for each remaining drop a second random number is generated, representing the initial height of the drop above the active measuring area of the device. These heights are equally distributed between 0 and *υ*_{max}*T*^{★}, with *υ*_{max} from Eq. (2) (the maximum terminal fall velocity). The (arbitrary) time unit *T*^{★} is taken as 1 (e.g., 1 min). Thus, every drop is specified by two random numbers: its diameter and its height.
- Third, for each drop the arrival time is calculated from its diameter and its initial height using Eq. (2). Then the drops are sorted by their arrival time and the first 2^{η} drops are taken as one measured sample. Precaution is taken that the last drop must not arrive later than *T*^{★}: at time *T*^{★} all generated large drops (*D* ≈ *D*_{max}) have reached the instrument, and from this time on large drops would be missing, warping the measurement. If the arrival time of the last drop is later than *T*^{★}, the procedure is restarted with the first step, increasing the number of initially generated drops by a factor of 2.

In this way, measurements of impact disdrometers are reproduced. For a subset of cases we also generated volume measurements; to this end, simply the lowest 2^{η} drops are taken as the sample.
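The three-step procedure can be sketched as follows (a minimal Python sketch, not the authors' code; the random number generator, the seed, and the function name are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)

V_MAX, V_1, ALPHA = 9.65, 10.3, 0.6      # Atlas et al. (1973) [Eq. (2)]
D_MIN, D_MAX = 0.3, 5.5                  # instrument measuring range (mm)

def simulate_impact_sample(mu, lam, eta, t_star=1.0):
    """One virtual impact-disdrometer sample of 2**eta drop diameters (mm)."""
    n_sample = 2 ** eta
    n_pop = 10 * n_sample                # initial population size
    while True:
        # Step 1: gamma-distributed diameters; n(D) ~ D^mu e^{-lam D}
        # corresponds to shape mu + 1 and scale 1 / lam.
        d = rng.gamma(mu + 1.0, 1.0 / lam, size=n_pop)
        d = d[(d >= D_MIN) & (d <= D_MAX)]                 # measuring range
        # Step 2: initial heights, uniform between 0 and v_max * T*
        v = V_MAX - V_1 * np.exp(-ALPHA * d)               # fall velocity
        h = rng.uniform(0.0, V_MAX * t_star, size=d.size)
        # Step 3: sort by arrival time; take the first 2**eta arrivals,
        # but only if the last of them arrives no later than T*.
        t_arrival = h / v
        order = np.argsort(t_arrival)
        if d.size >= n_sample and t_arrival[order[n_sample - 1]] <= t_star:
            return d[order[:n_sample]]
        n_pop *= 2                       # population too small: double and retry

sample = simulate_impact_sample(mu=2.0, lam=6.0, eta=7)
```

A volume measurement would instead take the 2^{η} drops with the lowest initial heights, independent of their fall velocities.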

The drops in each sample are categorized in 20 size classes whose limits vary between *D*_{min} = 0.3 mm and *D*_{max} = 5.5 mm. The interval limits of the drop size classes are given in Table 1. Note that the class width varies from 0.1 up to 0.5 mm in discrete steps. The 20 counted numbers *C _{j}* [cf. Eq. (3)] represent one sample from the population. We again emphasize that the drop size distribution of a sample will deviate from the prescribed size distribution of the population by chance.

Bin interval limits of the 20 drop size classes (mm) used in this study.

## 5. The moments evaluation procedure

These randomized samples are evaluated in the same way as real measurements are. The evaluation procedure is most intuitively described by an example. The results of the study show that the rough number of drops needed to get good estimates of the gamma parameters is on the order of 100. Thus, we choose a sample with *η* = 7, that is, 128 drops, to demonstrate the procedure. As further parameters we choose *μ* = 2 and *D _{m}* = 1.0 mm (i.e., *λ* = 6.0 mm^{−1}). This spectrum will be referred to as the *standard case* and will be analyzed in the following section, not because of a special meteorological interest in such precipitation, but as a case that lies in the area between “easily evaluated” and “too noisy.” Assuming an active area of the instrument of 50 cm^{2} and a measuring time *T* of 60 s, the standard case corresponds to a radar reflectivity of 20 dB*Z* and a rain intensity of nearly 0.5 mm h^{−1}.

According to the prescribed parameters, 1000 samples (“numerical measurements”) are generated as described in section 4 and shown in Fig. 1 as light gray lines. The discontinuity near a diameter of 0.8 mm is caused by the considerable change in bin width (cf. Table 1) at that diameter. Further bin size changes cannot be seen because the number of counted drops at larger diameters is too small.

The average of these 1000 samples is shown as the solid black line in Fig. 1. The dashed lines indicate one standard deviation. The gray dots coinciding with the average spectrum indicate what is expected analytically [Eq. (3), normalized to 2^{η} drops]. On average, the random samples reproduce the analytical expectation, whereas each single realization deviates randomly from it.

The *i*th moment of the DSD is defined by

$$M_i = \int_0^{\infty} D^i\,n(D)\,dD.$$

Consequently, the zeroth moment is the total number of drops, the first moment is the sum of all of the drop diameters, the second moment is proportional to the sum of the drops' surfaces (all of which are normalized by the unit volume), the third moment is proportional to the liquid water content, and the sixth moment is the radar reflectivity factor (in the Rayleigh approximation). There is no intuitive interpretation of the fourth and fifth moments, albeit the fourth moment is close to the rain rate. For the gamma distribution of Eq. (1) the moments are

$$M_i = N_0\,\frac{\Gamma(\mu+i+1)}{\lambda^{\mu+i+1}}.\tag{7}$$

Any triple of moments *M*_{k}, *M*_{l}, and *M*_{m} (*k* < *l* < *m*) allows us to solve Eq. (7) for *μ*, *λ*, and *N*_{0}. A suitable ratio of the three moments depends on *μ* alone,

$$\frac{M_l^{\,m-k}}{M_k^{\,m-l}\,M_m^{\,l-k}} = \frac{\Gamma(\mu+l+1)^{\,m-k}}{\Gamma(\mu+k+1)^{\,m-l}\,\Gamma(\mu+m+1)^{\,l-k}},\tag{8}$$

which determines the shape parameter; then

$$\lambda = \left[\frac{M_k}{M_l}\,\frac{\Gamma(\mu+l+1)}{\Gamma(\mu+k+1)}\right]^{1/(l-k)}\tag{9}$$

and

$$N_0 = M_k\,\frac{\lambda^{\mu+k+1}}{\Gamma(\mu+k+1)}.\tag{10}$$
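A suitable ratio of the three moments depends on *μ* alone, so *μ* can be found by one-dimensional root finding, after which *λ* and *N*_{0} follow directly. A Python sketch (our own illustration, not necessarily the authors' implementation; the root-finding bracket for *μ* is an assumption):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def gamma_params_from_moments(mk, ml, mm_, k, l, m):
    """Solve for (mu, lam, n0) from three complete moments
    M_k, M_l, M_m of a gamma DSD (k < l < m)."""
    # Log of the moment ratio that depends on mu only.
    lhs = (m - k) * np.log(ml) - (m - l) * np.log(mk) - (l - k) * np.log(mm_)

    def f(mu):
        return ((m - k) * gammaln(mu + l + 1)
                - (m - l) * gammaln(mu + k + 1)
                - (l - k) * gammaln(mu + m + 1) - lhs)

    mu = brentq(f, -0.99, 100.0)             # shape parameter (monotone f)
    lam = (mk / ml * np.exp(gammaln(mu + l + 1)
                            - gammaln(mu + k + 1))) ** (1.0 / (l - k))
    n0 = mk * lam ** (mu + k + 1) / np.exp(gammaln(mu + k + 1))
    return mu, lam, n0
```

Feeding in the complete moments of the standard case (*μ* = 2, *λ* = 6 mm^{−1}) recovers the prescribed parameters; log-gamma arithmetic keeps the evaluation stable for high moment orders.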

These analytically calculated moments will be called *complete moments* hereafter. In contrast, measurements are limited to certain diameters ranging from *D*_{min} = *D*_{1} − (Δ*D*)_{1}/2 to *D*_{max} = *D*_{K} + (Δ*D*)_{K}/2 (*K* is the number of drop size classes).^{1} Accordingly, in this study we introduce the measured portion of the *i*th moment, *M*_{meas,i}, which is defined as

$$M_{\mathrm{meas},i} = \sum_{j=1}^{K} D_j^{\,i}\,n(D_j)\,(\Delta D)_j,\tag{11}$$

the so-called *measured moments*, with *n*(*D*_{j}) taken from the drop counts via Eq. (4).

The truncation splits each *complete moment M*_{i} into

$$M_i = M_{\mathrm{missed},i} + M_{\mathrm{tr},i},\tag{12}$$

where *M*_{missed,i} denotes the fraction that could not be measured because the corresponding drops have diameters outside of the measuring range of the instrument, and the *truncated moments M*_{tr,i} are, for the gamma distribution,

$$M_{\mathrm{tr},i} = \int_{D_{\min}}^{D_{\max}} D^i\,n(D)\,dD = \frac{N_0}{\lambda^{\mu+i+1}}\left[\gamma(\mu+i+1,\lambda D_{\max}) - \gamma(\mu+i+1,\lambda D_{\min})\right].\tag{13}$$

Taking the measured moments from Eq. (11) as an estimation of the *truncated moments* determines the *approximated moments M*_{approx,i} as

$$M_{\mathrm{approx},i} = M_{\mathrm{meas},i}\,\frac{\Gamma(\mu+i+1)}{\gamma(\mu+i+1,\lambda D_{\max}) - \gamma(\mu+i+1,\lambda D_{\min})}.\tag{14}$$

Here *γ* is the (lower) incomplete gamma function [cf. Eqs. (10) and (13) in MB09].

To solve Eq. (14) for *M*_{approx,i} we need to know *μ* and *λ* (respectively, *μ* and *D _{m}*), though these are the parameters we want to calculate afterward via Eqs. (8) and (9) using the *approximated moments*. There are several methods to solve such a mutual dependency. One procedure is to apply genetic algorithms to find *μ* and *λ* solving the system of equations. Another approach is to solve the problem iteratively, as is done here. We start by approximating *M*_{approx,i} ≈ *M*_{meas,i}, determine *μ* and *λ*, and find an improved approximation for *M*_{approx,i}. This iteration is repeated until the relative variations of *μ* and *λ* are both smaller than 1% from one iteration to the next.
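The iteration can be sketched as follows (our own Python illustration; `solve` stands for an arbitrary moments-method inversion and is an assumed callable, and SciPy's regularized incomplete gamma function supplies the ratio of truncated to complete moments):

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

D_MIN, D_MAX = 0.3, 5.5             # instrument measuring range (mm)

def truncation_factor(mu, lam, i):
    """Fraction M_tr,i / M_i of the i-th moment that falls inside the
    measuring range of a gamma DSD; cf. Eqs. (12)-(14)."""
    a = mu + i + 1.0
    return gammainc(a, lam * D_MAX) - gammainc(a, lam * D_MIN)

def iterate_truncation(m_meas, orders, solve, tol=0.01, max_iter=100):
    """Iterate until mu and lambda both change by less than 1%.
    `solve` maps three moments and their orders to (mu, lam, n0)."""
    m_approx = np.asarray(m_meas, dtype=float)
    mu, lam, n0 = solve(*m_approx, *orders)       # first guess: M_approx = M_meas
    for _ in range(max_iter):
        # Correct the measured moments for the unobserved tails [Eq. (14)].
        m_approx = np.array([m / truncation_factor(mu, lam, i)
                             for m, i in zip(m_meas, orders)])
        mu_new, lam_new, n0 = solve(*m_approx, *orders)
        if abs(mu_new - mu) <= tol * abs(mu) and abs(lam_new - lam) <= tol * abs(lam):
            return mu_new, lam_new, n0
        mu, lam = mu_new, lam_new
    return mu, lam, n0
```

Note that the truncation factor approaches 1 for high-order moments (large drops dominate them and lie inside the measuring range), whereas low-order moments lose a substantial fraction below *D*_{min}, which is the truncation problem discussed above.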

The evaluation starts by calculating the measured moments *M*_{meas,i} [Eq. (11)] for each of the 1000 samples that were generated for a certain tuple of prescribed *D _{m}*, *μ*, and *η*. Moments of orders higher than 6 are not considered here; it will be shown subsequently that it is not useful to do so. Thus, we only investigate methods that make use of the first seven moments of the distribution. A triple of moments applied in the calculation will be called a *method* from here on. There are 35 different methods, and each individual moment is used within 15 of them. As noted in the introduction, a certain method is identified by the letter M and the orders of the three applied moments (e.g., M234).
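For reference, the 35 methods are simply the triples drawn from the first seven moment orders (a trivial Python sketch; the `label` helper is our own naming):

```python
from itertools import combinations

# All 35 triples of moment orders 0..6; each order appears in 15 of them.
methods = list(combinations(range(7), 3))

def label(triple):
    """e.g. (2, 3, 4) -> 'M234'."""
    return "M" + "".join(str(i) for i in triple)
```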

The values of the individual measured moments for each of the 1000 realizations of the standard case are connected by the light gray curves in Fig. 2. The black solid line connects the averages of the measured moments, and the dashed lines indicate the standard deviation. The thick gray line (which is nearly invisible behind the black solid line) shows the truncated moments [Eqs. (12) and (13)]. Again, the correspondence between the average realization and the analytical expectation is very good. It is eye-catching that the scatter of the small-order moments is comparably small, whereas the high-order moments partly deviate significantly from the average (by up to a factor of 5).

All 35 methods are applied to the 1000 samples produced for a certain tuple of *D _{m}*, *μ*, and *η*. As a result we get 35 000 estimates each of *μ* and *λ*. To distinguish the prescribed *μ* and *λ* from the estimated ones, the latter are marked with a hat (*μ̂*, *λ̂*). Additionally, the iteration provides approximated moments *M*_{approx,i} for each *i*. The averages of the 1000 approximated moments for each of the 15 methods applied on the standard case are shown as gray circles in Fig. 2. The dark gray line connects the complete moments, calculated analytically [Eq. (7)].

The dilemma of parameter determination is now visible in Fig. 2: the measured moments of the 1000 individual samples [1000 thin gray lines, see Eq. (11)] show a large scatter for high-order moments, whereas for low-order moments the scatter is hardly visible. The higher the order of a moment, the more severe the sampling problem becomes. The mean measured moments (solid black line) always correspond very well to the truncated moments [thick gray line, Eq. (13)], but the standard deviation (dashed black lines) increases with an increasing order of moment.

Then again, the approximated moments [dark gray circles, Eq. (14)] show scatter only for the low-order moments; for high orders they are unique and coincide with the (analytical) complete moments [dark gray line, Eq. (7)]. For low orders the correspondence between the approximated and complete moments is not convincing: low-order moments are more affected by the truncation problem than high-order moments.

The more important output of the iterative process is the estimates of *μ* and *λ* themselves, shown in Fig. 3 for the standard case (*μ* = 2, *D _{m}* = 1 mm, *λ* = 6 mm^{−1}) as a function of the applied method. Light gray lines again show the results for the 1000 individual measurements, black lines are averages and standard deviations, and the straight gray lines mark the predefined values. Whereas *D _{m}* is determined with high accuracy nearly independent of the applied method, we see local maxima in the errors of *μ* and *λ* where the sixth moment is applied.

One recognizes from Fig. 3 that M346, which is frequently applied in the literature to deduce the parameters, leads to strongly biased results for *μ* and *λ*.

During the further discussion we will describe the statistics of the derived parameters mainly in terms of biases and root-mean-square errors (RMSEs). The bias is the deviation of the mean derived value from the prescribed one, for example, the mean derived shape parameter minus the prescribed *μ*.
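Both error measures can be stated compactly (a Python sketch with made-up example values):

```python
import numpy as np

def bias_and_rmse(estimates, truth):
    """Bias: mean derived value minus the prescribed one.
    RMSE: root-mean-square of the individual deviations."""
    err = np.asarray(estimates, dtype=float) - truth
    return err.mean(), np.sqrt((err ** 2).mean())

# e.g., four hypothetical estimates of the shape parameter mu = 2
bias, rmse = bias_and_rmse([2.3, 1.8, 2.6, 2.1], truth=2.0)
```

Note that the RMSE includes the bias; an unbiased estimator can still have a large RMSE when the individual estimates scatter widely.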

The procedure described so far for the standard case is applied to each of the 23 × 124 × 11 different combinations of prescribed parameters (cf. section 4), producing a frequency distribution of the derived values for each parameter.

## 6. Attainable precision and optimum choice of moments

There is no unambiguous, objective criterion to identify the best way to determine the gamma distribution parameters. To give an example: How should one objectively weigh biases in *D _{m}* against the RMSE of *μ*? For this reason we will discuss the errors occurring in estimating the different parameters separately.

For each prescribed DSD, that is, for each prescribed tuple of *D _{m}*, *μ*, and *η*, and for each applied method, we determined the biases and RMSEs in estimating the parameters. The method producing the smallest error will be called the *best method*. Note that we get up to four different best methods for each tuple of prescribed DSD parameters: for the smallest bias and the smallest RMSE of *D _{m}* and of *μ*, respectively.

The unavoidable error, that is, the error occurring when applying the best method, will be given as well as information on the best method. To reduce the complexity of the figures we do not show the best method, but instead the highest order of a moment applied within the best method. There are 35 different methods but only 5 different possible highest orders of a moment (from the second to the sixth). If, for example, M134 is the best method to estimate a certain parameter, then we will indicate the highest moment to be 4, making it impossible to distinguish between M014, M024, M034, M124, M134, and M234. Nevertheless, our impression is that the highest applied moment determines the quality of the estimates.

In the following we start by discussing results from the standard case, introducing the procedure. Afterward, we discuss the quality of the estimates for *D _{m}*, followed by the results for *μ*. Finally, we compare our results both to those published by other authors and to volume measurements.

### a. Results for the standard case

The results for the standard case are displayed in Fig. 3. We see that *D _{m}* is determined with low bias and low RMSE, nearly independent of the applied method. Biases vary between −9 and +12 *μ*m. The minimum absolute bias is reached by M023 (15 nm), although this result is by chance (see the RMSE values below). We would call M023 the best method to determine *D _{m}* with a low bias for the standard case.

Equally, the RMSE is nearly independent of the method; it is always small, but nevertheless always significantly larger than the bias. The RMSE varies between 65 (M012) and 77 (M456) *μ*m.

Errors in *μ* and *λ* depend on the applied method. They obviously correlate, so that in the determination of *D _{m}* the errors in *μ* and *λ* cancel each other. We therefore only discuss the results for *μ*.

The bias in *μ* is always positive. It varies between 0.25 (M012) and 2.23 (M456). The RMSE reaches values from 0.93 (M012) up to 3.32 (M456). Altogether, M012 produces the best results, except for the bias in *D _{m}*, which is always negligibly small.

The frequency distributions of the derived values for *D _{m}* are symmetric and unbiased, and the standard deviations are nearly independent of the applied method (results from M012, M234, and M346 are shown in Fig. 4). The frequency distributions of the derived slope and shape parameters are more complicated, that is, asymmetric and biased. This tendency holds true for nearly the total dataset. Equally, the errors in *μ* and *λ* correlate not only for the standard case, so that in the determination of *D _{m}* they cancel each other.

### b. Deriving the reference diameter D_{m}

As mentioned above, reference diameters *D _{m}* are derived with negligible biases and comparably low RMSE. Although *D _{m}* is not directly derived from the measurements but is calculated from *λ* and *μ*, its accuracy is better than that of the latter two. Errors in *λ* and *μ* compensate.
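The compensation can be made explicit. Assuming Eq. (5) is the standard gamma-DSD relation for the mass-weighted mean diameter, *D _{m}* = *M*_{4}/*M*_{3}, first-order error propagation gives:

```latex
D_m = \frac{\mu + 4}{\lambda},
\qquad
\delta D_m \approx \frac{\delta\mu}{\lambda} - \frac{(\mu+4)\,\delta\lambda}{\lambda^{2}}
 = D_m \left( \frac{\delta\mu}{\mu+4} - \frac{\delta\lambda}{\lambda} \right),
```

so positively correlated errors *δμ* and *δλ* largely cancel in *D _{m}*; this is exactly the correlation visible along the lines of constant *D _{m}* in Fig. 7.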

Especially for very small samples (*η* ≤ 3) and for small reference diameters (*D _{m}* ≤ 0.5 mm) the retrieval algorithm often did not converge fast enough. Thus, we only have results for 81.5% of all cases. The absolute bias in *D _{m}* based on the best method is smaller than 0.1 mm in 99.9% of these cases and smaller than 0.01 mm in 76.4%. The best method to determine *D _{m}* with a low bias is M056 in 29.3% of all cases, followed by M012 (25.4%) and M456 (12.3%). Every other method is best in only 3% or less of all cases. Deriving *D _{m}* is possible without a distinct bias. Thus, no further discussion is presented here.

Deriving *D _{m}* with a small RMSE is a different task and leads to different best methods. In 97.7% of all cases the RMSE of *D _{m}* is smaller than 0.3 mm, and in 76.1% it is smaller than 0.1 mm. The mean value is 0.07 mm. The RMSE of *D _{m}* is larger than 0.1 mm especially for small samples (*η* ≤ 5) with simultaneously small *μ* (depending on *η*; *η* = 5, *μ* ≤ 4 or *η* = 4, *μ* ≤ 8 are examples) and for large prescribed *D _{m}* with simultaneously small *μ* (e.g., *D _{m}* = 2 mm and *μ* ≤ 5). Small values of *μ* mean a broad DSD, so it is not surprising that determining *D _{m}* is more difficult in these cases.

The best method to determine *D _{m}* with a small RMSE is M012 in 55.2% of all cases, M056 in 11.5%, M013 in 9%, and M456 in 5.4%. Every other method is best in less than 2% of all cases.

If we use only one fixed method for the total dataset, the mean RMSE in determining *D _{m}* varies between 0.073 and 0.090 mm for most methods. Larger mean RMSE (0.138 … 0.188 mm) occur only with methods that rely solely on the third to the sixth moment.

We conclude this analysis by stating that *D _{m}* can be determined without a (relevant) bias and with a reasonably small RMSE.

### c. Deriving the shape parameter μ

As mentioned above, errors in *μ* and *λ* compensate, leading to a good estimation of *D _{m}* via Eq. (5). Hence, we can concentrate on the determination of *μ*. The results for *λ* follow correspondingly.

The errors in *μ* prove to be more serious than those in *D _{m}*. They depend strongly on the number of drops per sample.
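To make the role of a moment triple concrete, the following sketch implements the closed-form M234 estimator for the untruncated case (the retrieval actually used in this study additionally accounts for truncation via Eqs. (8)–(10), which we do not reproduce; the function name and the numpy-based sampling are ours):

```python
import numpy as np

def gamma_params_m234(diams):
    """Estimate the gamma-DSD shape mu and slope lam (1/mm) from drop
    diameters (mm) with the untruncated M234 moment method: for
    n(D) = N0 * D**mu * exp(-lam*D), the ratio
    G = M3**2/(M2*M4) = (mu+3)/(mu+4), hence mu = (4G-3)/(1-G)
    and lam = (mu+4)*M3/M4."""
    d = np.asarray(diams, dtype=float)
    m2, m3, m4 = (np.mean(d**k) for k in (2, 3, 4))
    g = m3**2 / (m2 * m4)
    mu = (4.0 * g - 3.0) / (1.0 - g)
    lam = (mu + 4.0) * m3 / m4
    return mu, lam

# Consistency check on a large sample drawn from a prescribed DSD
# (mu = 5, Dm = 1.4 mm, i.e., lam = (mu+4)/Dm):
rng = np.random.default_rng(0)
mu_true, dm_true = 5.0, 1.4
lam_true = (mu_true + 4.0) / dm_true
drops = rng.gamma(mu_true + 1.0, 1.0 / lam_true, size=100_000)
mu_est, lam_est = gamma_params_m234(drops)
dm_est = (mu_est + 4.0) / lam_est
```

Replacing moments 2–4 by another triple (e.g., 0–2 for M012) changes the moment-ratio algebra but not the principle.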

First, we have to exclude samples with a prescribed reference diameter of *D _{m}* = 0.3 mm from our study. In most cases the retrieval algorithm did not converge for these samples. Nevertheless, some large samples (*η* ≥ 9) with large prescribed values of *μ* (>18) could be evaluated, leading to extraordinarily large estimates of *μ*.

For these cases the bias of *μ* determined by the best method varies between −4 and +25. The large (absolute) biases occur nearly exclusively for small samples. Nearly all estimates of *μ* for samples consisting of either 4 or 8 drops have absolute biases larger than 3, and half of the samples with 16 drops reach such large biases. Samples containing 32 or more drops (*η* ≥ 5) reach such large biases in less than 1% of all cases.

Restricting the analysis further to samples with 32 drops or more, 91.8% of the absolute biases are smaller than 1.0, and the mean absolute bias is 0.32 if the best method is applied. In 34.1% of these cases M012 performs best. M456 follows with 19.9%, M123 with 6.3%, and M013 with 5.4%. All of the other methods perform best in 3% or less of the cases.

An overview of the distribution of the biases is given in Fig. 5. The left column shows results for samples with a prescribed *D _{m}* = 1 mm; the right column shows results for samples consisting of 128 drops (*η* = 7).

The biases that result from the particular best method are shown in the second row. For low *η* they are all positive and larger than 1. For 128 and more drops (*η* ≥ 7) there is a large (gray) area with absolute biases smaller than 0.1. For larger samples with large values of *μ*, the biases become negative. Samples containing 128 drops show a slight positive bias in most cases. Only for very small drops (*D _{m}* ≤ 0.7 mm) do biases reach absolute values of 1.

The upper row indicates the best method by the highest moment of which it makes use. For large areas the dark blue color indicates the highest order of a moment to be 2, which unambiguously indicates M012 as being the best method. Nevertheless, there are large areas where higher-order moments should be applied. Comparing the results from the best method in that area (second row) to those from M012 (third row) allows for the estimation of the additional bias.

The third and fourth rows of the figure show the biases resulting from M012 (third row) and M234 (fourth row), allowing for a comparison of the results. The differences are partially subtle. For example, for *D _{m}* = 1 mm (left column) and *η* = 6 the biases for the best method as well as for M012 fall below 1, whereas for M234 they reach higher values. In the right column we see for large areas that the biases for M234 are larger (in absolute value) than those for M012. Nevertheless, especially for large values of *μ* as well as for small values of *D _{m}*, M012 performs significantly worse than the best method.

Analyzing the results for the RMSE in *μ*, we again first exclude samples with *D _{m}* = 0.3 mm. It is self-evident that RMSEs are large for small sample sizes. For eight or fewer drops per sample, nearly all RMSEs in *μ* are larger than 5. The same is valid for 80% of the samples with 16 drops and 52% of the samples with 32 drops. We conclude that one needs at least 64 drops in a sample to be able to derive *μ* with an acceptable RMSE. Under this precondition 97.6% of the RMSE values are below 5 and 42.8% are below 1. All of these results are valid for the particular best method, which is M012 in 54.7% of the cases, followed by M013 (23.8%), M023 (4.6%), and M123 (3.5%). All of the other methods perform best in 3% or less of all cases.

The distribution of the RMSE in *μ* is given in Fig. 6. The first row again shows the highest moment from the best method, the second row shows the RMSE determined by the best method, and the third and fourth rows show the results from M012 and M234, respectively. The left column is for *D _{m}* = 1 mm, and the right column shows samples with 128 drops (*η* = 7). An RMSE around 1 is emphasized.

In contrast to the distribution of the biases, we see the RMSE getting systematically smaller with increasing sample size (left column). For the best method and for M012 it is sufficient to have *η* = 7 (128 drops) for a chance that the RMSE of *μ* falls below 1 (for small *μ*). We need twice the number of drops for M234. Nearly all of the best methods apply only the zeroth to third moments.

For reference drop diameters above 1 mm the RMSE of *μ* is nearly independent of *D _{m}*; for samples containing mainly very small drops the RMSE increases (see right column). In all of the subfigures we see that the RMSE of *μ* increases with *μ*.

There are almost no differences between the images in the second row (best method) and the third row (M012); that is, applying M012 always introduces only very weak additional deterioration compared to the best method. In 90% of all cases the additional error is below 0.09, in 98% it is below 0.94. Applying M234 instead of the best method increases the RMSE by more than 0.09 in 90% of all cases. These details are valid for samples containing 64 drops or more.

Thus far we can recommend that, if one wants to determine *μ* (and *λ*) from measurements, then applying moments of orders above 3 should be avoided. The sampling error in the higher-order moments exceeds the problems of undetected small drops.

### d. Classifying the results

The presented results shall finally be compared to those from other studies. The most similar study is MB09, who compared results from M234 with those from ML and LS. As in the present study, they took the truncation into account in their retrievals.

MB09 prescribe the measuring time (60 s), the active area of the disdrometer (mostly 100 cm^{2}), and the parameters *D _{m}*, *N*_{0} (more precisely …), and *μ*. The number of drops in a sample therefore varies, whereas in the present study it is held fixed. For each triple of *D _{m}*, *N*_{0}, and *μ* they produced 50 realizations and derived biases, standard deviations, and maximum errors.

For three cases (called *q*_{1}: {*D _{m}* = 1.4 mm; *μ* = 5; …}, *q*_{2}, and *q*_{3}) MB09 report detailed results for *μ*. The estimated numbers of drops per sample are given as 283 (*q*_{1}), 1130 (*q*_{2}), and 3133 (*q*_{3}), respectively. A comparison of these three cases with the best-suiting cases of this study is given in Table 2, although our investigation does not use precisely the same values. Note that our results are based on 1000, not 50, realizations, making outliers significantly more probable (cf. maximum error in Table 2).

Bias, standard deviation, and maximum observed error in estimating the shape parameter *μ* for different prescribed DSDs using different estimating methods. The first line of each pair reports the data from MB09; the second line is the best-fitting case from this study. M012, M234, and M346 are moments methods; ML is the maximum likelihood and LS the least squares method.

The direct comparison of MB09 and this study is possible for M234. It shows that our results are comparable to those presented by MB09. Only the very low biases for *q*_{2} and *q*_{3} are conspicuous. Our formulation of the recursive procedure was probably a little more successful. MB09 found that the moments method results in the smallest biases (compared to ML and LS). We find that using M012 instead of M234 further improves the good results of the latter. In addition, it reaches nearly the same small standard deviations as the maximum likelihood method. The larger maximum errors may be caused by the larger number of realizations.

As another example we compare the scatter of the derived parameters *μ* and *λ* for these examples (Fig. 7). For each of the three cases (the first) 50 out of 1000 values are marked for M012, M234, and M346, respectively. A strong correlation of errors in *μ* and *λ*, lying nearly along the line of constant *D _{m}*, is obvious. This explains the very low errors appearing in the estimation of *D _{m}*. Figure 7 allows a direct comparison to Fig. 2 in MB09. The moments method shows a more symmetric scattering around the prescribed values than ML and LS (cf. MB09, their Fig. 2), corresponding to the small biases. The region of scattered data increases from M012 over M234 to M346.

Many studies do not use impact disdrometer measurements but volume-sampled data. For the latter the measurement volume is constant, independent of the drop size. In impact measurements the larger (i.e., faster) drops are collected from a larger volume than the smaller (i.e., slower) drops. The sampling problem for larger drops is thus reduced in impact measurements compared to volume measurements. As we saw, the sampling problem is nevertheless still more serious than the truncation problem.
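The difference between the two measurement types can be sketched as a sampling experiment: a volume sensor sees the DSD directly, whereas an impact disdrometer catches drops with a probability proportional to their fall speed. As a fall-speed model we assume the Atlas et al. (1973) approximation *v*(*D*) = 9.65 − 10.3 exp(−0.6*D*) (*D* in mm, *v* in m s^{−1}); the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def fall_speed(d_mm):
    """Terminal fall speed (m/s); Atlas et al. (1973) approximation,
    clipped at zero for very small drops."""
    return np.maximum(9.65 - 10.3 * np.exp(-0.6 * d_mm), 0.0)

def sample_drops(n, mu, lam, impact):
    """Draw n drop diameters (mm) from a gamma DSD ~ D**mu * exp(-lam*D).
    For an impact disdrometer the catch probability is additionally
    proportional to v(D) (faster drops are swept from a larger volume),
    emulated here by rejection sampling."""
    out = np.empty(0)
    v_max = 9.65  # upper bound of fall_speed, used for rejection
    while out.size < n:
        d = rng.gamma(mu + 1.0, 1.0 / lam, size=4 * n)
        if impact:
            d = d[rng.uniform(0.0, v_max, d.size) < fall_speed(d)]
        out = np.concatenate([out, d])
    return out[:n]

# Impact sampling shifts the observed spectrum toward larger drops,
# which mitigates the sampling problem at the large-drop end.
d_vol = sample_drops(10_000, mu=2.0, lam=6.0, impact=False)
d_imp = sample_drops(10_000, mu=2.0, lam=6.0, impact=True)
```

The mean diameter of the impact-sampled spectrum exceeds that of the volume-sampled one, illustrating why the large-drop sampling problem is less severe for impact instruments.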

We compare the results from impact measurements to those from volume measurements. A first insight is given by Fig. 8, which compares the RMSE in determining *μ* from impact measurements and volume measurements for four different cases (our standard case and the three example cases from MB09) and all 35 different moment methods. For the methods producing the lowest RMSE in *μ* (i.e., the best method), the results from impact and volume measurements are very similar (near the identity line). As the errors from a certain method increase for impact measurements, they also increase for volume measurements, but the worse a certain method performs, the larger the difference between impact and volume measurements gets. Volume measurements lead to an RMSE in *μ* that is in some cases 50% above the RMSE from impact measurements.

These results are reflected if we look at the achievable quality and the best methods. There is no significant difference between the (best) results from volume and impact measurements. The particular best method for volume measurements uses the low-order moments even more frequently than we found for impact measurements. For either *D _{m}* = 1 mm or *η* = 7, the best method to derive *μ* with a small RMSE from volume measurements is M012 for nearly all measurements. Only for large sample sizes (2048 and more drops) and *μ* > 7 or *μ* < 0 do methods applying the third moment outperform M012 (not shown here). But methods based on medium- or high-order moments indicate an advantage of impact measurements. For example, applying M234 to volume measurements with 128 drops (*η* = 7) always produces RMSEs larger than 1.5 (not shown), whereas corresponding impact measurements fall below the limit of 1 at least for some cases (see Fig. 6, lowest right panel).

It should be mentioned that these differences occur for measurements with the same number of counted drops. There is no reason why an impact measurement and a volume measurement should produce the same sample size for the same rain event. Depending on the measuring device, there can be differences. Thus, our investigation does not prove that impact measurements are superior to volume measurements.

Smith et al. (2009), Smith and Kliche (2005), and Kliche et al. (2008) investigate retrieval methods used to derive gamma parameters from volume measurements in several studies. They normalize the drop diameter by *D _{m}* and sample the drops in 150 classes from 0 up to 3 *D _{m}*, with a constant class width of 0.02 *D _{m}*.

In Smith and Kliche (2005) they use samples for an exponential DSD (*μ* = 0) without truncation at the lower end. With their Fig. 9 they show, for samples with 100 drops and for M234, M246, and M346, that “the general tendency is to overestimate *μ*.” They find that lower moments yield smaller biases. Our Fig. 9 shows the corresponding cumulative distributions of estimated values of *μ* for the same methods and additionally for M012. Whereas Smith and Kliche (2005) get only positive values of *μ* with M246 and M346, our results are a little more balanced. And whereas Smith and Kliche (2005) find that roughly 20% of all estimates with M346 are above 7, we find only 5% of these far outliers. Finally, M012 outperforms all of the other methods.

Figure 10 shows the cumulative distribution of the derived values of *D _{m}*. The four black lines show results for *μ* = 0 as a comparison to Fig. 10 in Smith and Kliche (2005). The spread of our derived values is significantly smaller and the mean value is closer to the prescribed value of 1 mm. Finally, in Fig. 11 we compare the median value of *μ* for the four methods as a function of the sample size with those presented by Smith and Kliche (2005) in their Fig. 12. Again, our values are significantly closer to the prescribed value *μ* = 0.

There is no reason why our estimates should be better than those from Smith and Kliche (2005). They did not truncate their samples, so there was no need to account for truncation in their retrieval. Certainly, their retrieval as outlined in the appendix of their paper is not identical to that in Eqs. (8)–(10).

In Kliche et al. (2008) mean, median, and RMSE values for *μ* are presented based on M234, ML, and LM for prescribed values of *μ* = 2 and *μ* = 5 and varying sample sizes. Kliche et al. (2008) determine these values for untruncated samples and for samples with truncation at *D*/*D _{m} =* 0.2. Their retrieval does not take the truncation into account. Our estimates for these values are given in Table 3.

Mean, median, and RMSE for M234 estimated *μ* vs sample size for two different prescribed values *μ* = 2 and *μ* = 5 and a prescribed reference diameter of *D _{m}* = 1 mm.

The sample sizes in Kliche et al. (2008) do not correspond to ours. Nevertheless, we can see that our results are comparable to or even better than the results found there for untruncated samples using M234. The results for truncated samples in Kliche et al. (2008) are significantly worse than those presented here. Obviously, it is necessary to take the truncation into account in the retrieval.

For untruncated samples, LM and ML outperform M234 and even M012 (not shown). For truncated samples this still holds for M234 at small sample sizes. For medium or large sample sizes (depending on *μ*) our M234 performs comparably to or even better than ML and LM, and M012 is in most cases a little better.

Finally, Smith et al. (2009) investigate volume measurements without truncation at the low end. In their Fig. 4 they present the cumulative distribution of *D _{m}* for 50 drops and prescribed values of the shape parameter of *μ* = 0, *μ* = 2, and *μ* = 5, determined with both M234 and M346. Our Fig. 10 already showed all of these distributions; they are closer to the prescribed value (*D _{m}* = 1 mm) and are less biased. For *μ* = 0 we find the mean of *D _{m}* between 0.97 and 1.00 mm [Smith et al. (2009) report 0.94 mm], and between 57% and 64% of the estimates are smaller than 1 mm [Smith et al. (2009) report more than two-thirds]. For *μ* = 2 the mean value is between 0.99 and 1.00 mm [Smith et al. (2009) report 0.97 mm], and for *μ* = 5 it is between 1.00 and 1.01 mm [Smith et al. (2009) report 0.98 mm]. The standard deviation decreases from around 0.13 to 0.10 and then 0.07 mm [Smith et al. (2009) report, respectively, 0.27, 0.18, and 0.11 mm].

Whereas *D _{m}* is the parameter that can be determined with the highest quality, the worst results occur for *μ*. The distributions of derived shape parameters are shown in Fig. 5 of Smith et al. (2009). Our Fig. 12 shows our distributions for comparison.

For *μ* = 2 we find that 70% of the derived values for μ based on M234 are overestimates [Smith et al. (2009) have almost 80%], and the mean estimate is 3.00 [3.80 for Smith et al. (2009)]. For M346 the mean even increases to 4.38 [6.77 for Smith et al. (2009)] and the fraction of overestimates is 81% [nearly 95% for Smith et al. (2009)]. Applying M012 reduces the mean to 2.45 and “only” 60% overestimates.

Data quality improves for *μ* = 5. Using M346 the mean is 7.09 and 72% of the values are overestimates [respectively, 9.29 and about 85% for Smith et al. (2009)]. With M246 the mean is 5.84 with 63% of the values being overestimates [respectively, 6.77 and 72% for Smith et al. (2009)], and M012 leads to a mean of 5.32 and 53% overestimates.

It is still astonishing that our estimates tend to be better than those published by Smith et al. (2009), Smith and Kliche (2005), and Kliche et al. (2008). Obviously, the coarse size resolution of the Joss–Waldvogel disdrometer introduces no significant limitation to our measurements, in contrast to a comment in Smith et al. (2009). This encourages us to assume that the uncertainties of real disdrometers in counting small drops are a limitation that can be dealt with. Counting drops in the wrong size class mainly introduces noise (not biases) into the measurements. The fraction of uncounted drops (resulting from dead times and reduced sensitivity) can probably be estimated and taken into account in the retrieval.

The main insight of this investigation is that taking the limitations of the measurement into account during the retrieval improves the results significantly. This will still be true when applied to further retrieval methods such as L moments, maximum likelihood, or least squares fits. Considering the truncation improves even the worst-affected method, M012, so that it now performs best. Disregarding the truncation introduces biases in all of the applied moments. Considering the truncation reduces the problem to increased noise (compared to a perfect measurement without truncation) but removes most of the biases.

## 7. Summary and conclusions

Estimating DSD properties from disdrometer measurements suffers from (at least) two problems: (i) difficulties in measuring very small drops (truncation problem) and (ii) a lack of statistical significance in measuring drops of large sizes resulting from low absolute counts (sampling problem). If we want to fit analytical functions to measurements, we have to deal with these limitations. With this study we investigate how, and to what extent, errors in DSD parameters can be kept small.

Our investigation uses the gamma distribution and the moments method as an example for rating truncation and sampling problems by searching for the best performing triple of moments to derive the parameters of the gamma distribution. By simulating a disdrometer measurement we drew random samples from prescribed DSDs as the basis for the evaluation process.
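The evaluation loop behind these numbers can be sketched as follows. This is a simplified stand-in (fixed-size samples drawn directly from the prescribed gamma DSD and an untruncated M234 estimate; names ours), whereas the actual study additionally simulates the instrument's size classes and truncation:

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_mu_m234(d):
    """Untruncated M234 moment estimate of the gamma shape parameter:
    G = M3**2/(M2*M4) = (mu+3)/(mu+4) for a gamma DSD."""
    m2, m3, m4 = (np.mean(d**k) for k in (2, 3, 4))
    g = m3**2 / (m2 * m4)
    return (4.0 * g - 3.0) / (1.0 - g)

# Prescribed DSD: mu = 5, Dm = 1 mm  ->  lam = (mu+4)/Dm = 9 per mm
mu_true, lam_true = 5.0, 9.0
n_drops, n_real = 128, 1000  # one case: eta = 7, 1000 realizations

estimates = np.array([
    estimate_mu_m234(rng.gamma(mu_true + 1.0, 1.0 / lam_true, n_drops))
    for _ in range(n_real)
])
bias = estimates.mean() - mu_true
rmse = float(np.sqrt(np.mean((estimates - mu_true) ** 2)))
```

For 128-drop samples this reproduces the qualitative picture reported above: a positive bias in *μ* and an RMSE well above the absolute bias.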

We found that the truncation problem can be overcome much more efficiently than the sampling problem. Using low-order moments leads to less biased and especially less noisy estimates of the gamma parameters *D _{m}*, *μ*, and *λ* than high-order moments. It is important to take the truncation into account in the retrieval algorithm. If we neglect the limitation of the instruments’ measuring range, then all of the moments will be biased (underestimated). The bias reproduces itself in the derived parameters. Taking the truncation into account means estimating the unmeasured fraction of the moments; this estimation might be of worse quality than an optimal measurement, but it only introduces noise, not biases.
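The "unmeasured fraction" has a closed form for the gamma DSD: the share of the *k*th moment below a diameter *D* is the regularized lower incomplete gamma function *P*(*μ* + *k* + 1, *λD*). The following sketch uses our own formulation (the study's Eqs. (8)–(10) implement the correction in their own way):

```python
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def measured_fraction(k, mu, lam, d_min=0.3, d_max=None):
    """Fraction of the k-th moment of a gamma DSD
    n(D) = N0 * D**mu * exp(-lam*D) that falls inside the measuring
    range [d_min, d_max] (mm): M_k(0, D) is proportional to
    P(mu + k + 1, lam * D)."""
    a = mu + k + 1.0
    upper = 1.0 if d_max is None else gammainc(a, lam * d_max)
    return upper - gammainc(a, lam * d_min)

# Example (mu = 2, Dm = 1 mm, i.e., lam = 6 per mm): truncation at
# D < 0.3 mm removes far more of the low-order moments than of the
# high-order ones, so neglecting it biases the low moments most.
f0 = measured_fraction(0, mu=2.0, lam=6.0)  # zeroth moment, ~0.73
f6 = measured_fraction(6, mu=2.0, lam=6.0)  # sixth moment, ~1.00
```

This illustrates why, without the truncation correction, methods built on low-order moments such as M012 would be the worst affected.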

As long as only one combination of moments shall be applied for the analysis, we recommend using M012. There are cases where different methods perform a little better than M012, but especially with regard to the RMSE in *μ*, a comparison of the second row (best method) and the third row (M012) of Fig. 6 indicates that the additional noise caused by M012 compared to the best method is not substantial.

Based on a comparable study, MB09 propose the use of the maximum likelihood method. The direct comparison shows that M012 outperforms ML with respect to biases of *μ* and works as well as ML with respect to the RMSE in *μ*.

The derivation of the reference diameter *D _{m}* proved to be rather easy and is not very sensitive to the applied moments. Determining the shape parameter *μ* and the slope parameter *λ* turns out to be more challenging. They are statistically coupled and tend to positive biases, at least in the region of the most realistic small values (*μ* < 5).

The achievable quality of the parameters depends most significantly on the number of measured drops, described here by *η*. Increasing the measurement time (i.e., decreasing the temporal resolution) is no solution because it breaks the precondition of a homogeneous rain event during the measuring time. We should use temporal resolutions that resolve the variations of the shape of the DSD, recognizable by “smooth” changes of *D _{m}* and *μ* in time. Even with 1-min measurements we do not reach this goal. Reducing the temporal resolution thus provides no more representative estimates of the DSD. The only solution is an increase in the active area of the instrument. Because real instruments suffer from a dead time, this change will result in the use of more than a single disdrometer at a certain measuring location. The error analyses shown in the second images in the left columns of Figs. 5 and 6 may help to estimate the requirements.

Comparing the results from impact disdrometers with those from volume measurements shows that impact measurements indeed reduce the sampling problem; nevertheless, low-order moments should still be favored.

## Acknowledgments

The authors want to thank Paul Smith, who triggered the idea for this investigation with a talk in Bologna, Italy; Till Grünewald, who did most of the diligent work; and Klaus D. Beheng, who improved the text significantly.

## REFERENCES

Atlas, D., R. Srivastava, and R. Sekhon, 1973: Doppler radar characteristics of precipitation at vertical incidence. *Rev. Geophys. Space Phys.*, **11**, 1–35.

Brawn, D., and G. Upton, 2008: Estimation of an atmospheric gamma drop size distribution using disdrometer data. *Atmos. Res.*, **87**, 66–79.

Cao, Q., and G. Zhang, 2009: Errors in estimating raindrop size distribution parameters employing disdrometer and simulated raindrop spectra. *J. Appl. Meteor. Climatol.*, **48**, 406–425.

Feingold, G., and Z. Levin, 1986: The lognormal fit to raindrop spectra from frontal convective clouds in Israel. *J. Climate Appl. Meteor.*, **25**, 1346–1364.

Goethe, J. W., 1808: *Faust—Eine Tragödie von Goethe* (*Faust—A Tragedy, Translated from the German of Goethe*). Tübingen, Cotta’sche Verlagsbuchhandlung, 309 pp.

Handwerker, J., W. Straub, and K. D. Beheng, 2004: Normalized drop size distributions. *Proc. 14th Int. Conf. on Clouds and Precipitation*, Bologna, Italy, International Commission on Clouds and Precipitation, 537–549.

Hosking, J. R. M., 1990: L-moments: Analysis and estimation of distributions using linear combinations of order statistics. *J. Roy. Stat. Soc.*, **52B**, 105–124.

Illingworth, A. J., and T. M. Blackman, 2002: The need to represent raindrop size spectra as normalized gamma distributions for the interpretation of polarization radar observations. *J. Appl. Meteor.*, **41**, 286–297.

Joss, J., and A. Waldvogel, 1967: Ein Spektrograph für Niederschlagstropfen mit automatischer Auswertung. *Pure Appl. Geophys.*, **68**, 240–246.

Kliche, D. V., P. L. Smith, and R. W. Johnson, 2008: L-moment estimators as applied to gamma drop size distributions. *J. Appl. Meteor. Climatol.*, **47**, 3117–3130.

Lee, G., I. Zawadzki, W. Szyrmer, D. Sempere-Torres, and R. Uijlenhoet, 2004: A general approach to double-moment normalization of drop size distributions. *J. Appl. Meteor.*, **43**, 264–281.

Mallet, C., and L. Barthes, 2009: Estimation of gamma raindrop size distribution parameters: Statistical fluctuations and estimation errors. *J. Atmos. Oceanic Technol.*, **26**, 1572–1584.

Seifert, A., and K. D. Beheng, 2006: A two-moment cloud microphysics parameterization for mixed-phase clouds. Part 1: Model description. *Meteor. Atmos. Phys.*, **92**, 45–66.

Smith, P. L., and D. V. Kliche, 2005: The bias in moment estimators for parameters of drop size distribution functions: Sampling from exponential distributions. *J. Appl. Meteor.*, **44**, 1195–1205.

Smith, P. L., D. V. Kliche, and R. W. Johnson, 2009: The bias and error in moment estimators for parameters of drop size distribution functions: Sampling from gamma distributions. *J. Appl. Meteor. Climatol.*, **48**, 2118–2126.

Testud, J., S. Oury, R. Black, P. Amayenc, and X. Dou, 2001: The concept of “normalized” distribution to describe raindrop spectra: A tool for cloud physics and cloud remote sensing. *J. Appl. Meteor.*, **40**, 1118–1140.

Torres, D. S., J. M. Porrà, and J.-D. Creutin, 1994: A general formulation for raindrop size distribution. *J. Appl. Meteor.*, **33**, 1494–1502.

Willis, P., 1984: Functional fits to some observed drop size distributions and parameterization of rain. *J. Atmos. Sci.*, **41**, 1648–1661.

^{1} How drops larger than *D*_{max} are treated depends on the measuring device. They may be counted as drops in the last size class [leading to *D*_{K} and (Δ*D*)_{K} approaching ∞], or they may be neglected. In this study, they are neglected.