Optimal Nonlinear Estimation for Cloud Particle Measurements

H. Pawlowska, Météo-France, Centre National de Recherches Météorologiques/GMEI, Toulouse, France

J. L. Brenguier, Météo-France, Centre National de Recherches Météorologiques/GMEI, Toulouse, France

G. Salut, Laboratoire d’Analyse et d’Architecture des Systèmes, CNRS, Toulouse, France


Abstract

Particle concentration is generally derived from measurements by cumulating particle counts over a given sampling period and dividing this particle number by the corresponding sampled volume. Such a procedure is a poor estimate of the concentration when the number of counts per sample is too low. It is shown that counting particles in a cloud is a conditionally Poisson random process given its intensity, which is proportional to the local average concentration of particles. Because of turbulence and mixing processes, the particle concentration in clouds fluctuates, and so does the intensity of the counting process, which is then referred to as an inhomogeneous Poisson process. The series of counts during a cloud traverse is a unique realization of the process. The estimation of the expected number of particles is thus a Bayesian procedure that consists in the estimation of the intensity of an a priori random inhomogeneous Poisson process from a unique realization of the process. This implies, of course, an a priori model for the possible variations of this intensity. The general theory of optimal estimation for point processes addresses the above problem. It is briefly recalled here, and its application to particle measurements in the atmosphere is tested with simulated series of particle counts. Two examples of estimation from droplet measurements in clouds are also shown and compared to the current method. Nonlinear estimation removes the noise inherent to the counting process while preserving sharp discontinuities in the droplet concentration.

* On leave from Institute of Geophysics, University of Warsaw, Warsaw, Poland.

† Affiliate scientist at NCAR/ATDMMM.

Corresponding author address: Dr. Hanna Pawlowska, CNRM/GMEI/MNP, 42, Av. Gustave Coriolis, 31057 Toulouse, Cedex, France.


1. Introduction

Particle concentration is a key parameter in atmospheric physics. It represents the number of particles of a given type per unit volume of air (volumetric concentration) or per unit mass of air (specific concentration). In cloud physics, droplet concentration refers to the concentration of liquid water droplets with a diameter smaller than about 50 μm; size distribution or spectrum refers to the relative concentration per unit particle diameter. Particles can also be sorted according to their shape (ice crystals) or their chemical composition (aerosols). Particle concentration is usually measured with single particle spectrometers: In a given sampled volume of air, particles are detected and classified according to their size, shape, or chemical composition. Concentration is then calculated as the ratio of the number N of particles counted during a given sampling period T to the corresponding sampled volume of air V, or when the air is flowing into the counter, as the ratio of the measured particle rate across the probe sensitive section λ = N/T to the sampled volume flow rate V/T through that section:
$$C = \frac{N}{V} = \frac{\lambda}{V/T}, \quad (1)$$

where V depends on the characteristics of the detector.

The accuracy of this measurement is strongly affected by the fact that, no matter what the physical process is that drives particle concentration, the occurrence of particles is a random process. If particles are independently distributed in space, which is generally the case in the atmosphere, arrivals of particles into the counter are independent events, and the counting is a Poisson process for a given local concentration. The sampling period must be chosen long enough for the number of counted particles to be a good statistical estimator of the concentration. However, if the sampling period is too long, information about variations of the local concentration is lost.

The choice of the sampling period is particularly difficult when measuring particles with very different concentrations. For example, in cloud physics, the droplet concentration often decreases rapidly with size, so that the difference between the counted rate at the mode of the distribution and the rate in the last size class of an airborne spectrometer reaches one to three orders of magnitude. The most commonly used sampling rate is 1 Hz (a 1-s sampling period, about 100-m resolution along the aircraft trajectory). This value is satisfactory for an accurate evaluation of the total concentration (over the whole size range covered by the spectrometer) or of the concentration at the mode of the distribution, but it is insufficient for the measurement of the concentration of large particles (less than 100 counts per size class). This uncertainty is particularly critical when deriving, from the measured spectrum, parameters such as the liquid water mixing ratio or the reflectivity, which are proportional to the third and sixth moments of the size distribution, respectively.

The same problem occurs when looking at low concentration regions, as, for example, at the interface between a cloud and the surrounding clear air. Measurements at the finest scale through such transitions are crucial for understanding the mixing process, but the relative error on the evaluation of low values of the concentration is so high that it is impossible to decide whether the observed variations are physically significant or if they reflect only the randomness of the counting.

A rigorous calculation of particle concentrations should thus include an estimation of the uncertainty for every particle category, based on the number of counts, so that when this uncertainty becomes too high, the concentration would be calculated over longer sampling periods until the prescribed level of significance has been reached.
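As an illustration of this adaptive scheme, the Python sketch below accumulates hypothetical per-sample counts until the Poisson relative error 1/√N falls below a prescribed level; the function name, input layout, and the 10% default are illustrative choices, not part of the paper.

```python
import math

def adaptive_concentration(counts, volume_per_sample, max_rel_error=0.1):
    """Accumulate elementary samples until the Poisson relative error
    1/sqrt(N) of the accumulated count falls below max_rel_error,
    then emit one concentration value."""
    results, start, acc = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        # For a Poisson count N, std(N) = sqrt(N), so the relative
        # error of the estimate is 1/sqrt(N).
        if acc > 0 and 1.0 / math.sqrt(acc) <= max_rel_error:
            sampled_volume = (i - start + 1) * volume_per_sample
            results.append((start, i, acc / sampled_volume))
            start, acc = i + 1, 0
    return results  # (first sample, last sample, concentration) tuples
```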

The first attempts to document small scales were limited to checking that the long samples needed for the calculations were homogeneous. Brenguier (1989) described a method based on the comparison of the counted particle rate and the probe activity. Concurrently, Paluch and Baumgardner (1989) proposed an alternative method based on the frequency distribution of interarrival times between particles in the counter. More recently, Baker (1992) developed an efficient test that also provides an evaluation of the scales of inhomogeneities. All these methods are based on statistical properties of the measured series of counts, and they are limited to scales long enough for these statistics to be significant. A good compromise is obtained when the samples contain more than 1000 detections. Smaller scales have been documented from the raw series of particle arrivals. Brenguier (1993) showed that the interface between clear and cloudy air can be shorter than a centimeter, with a very high level of confidence (see section 4). However, the methodology was quite tedious, since the interface was first identified from data processed at high rate, and the hypothesis was then tested separately from the raw data at the interface. In fact, these results motivated the present approach toward an operational optimal estimation of the particle concentration at the smallest resolvable scale.

The above discussion illustrates the limitations of concentration calculations based upon the counting of particles over fixed sampling periods. A more general definition of the particle concentration is proposed in the following section, and measurement accuracy and statistical significance are discussed in more detail. In section 3, a solution based on a Bayesian approach is presented, and it is evaluated in section 4. Finally, an example of application to cloud droplet measurements is shown in section 5, before the conclusions.

2. Measurements of the particle concentration

a. Is the particle concentration a measurable parameter?

As discussed above, if particle concentration is defined as a number of particles per volume of air ΔV, it does not converge when ΔV becomes so small that the number of counted particles is not statistically significant. In the limit, when ΔV tends to 0, the estimated concentration fluctuates between zero (no count) and infinity (one count). A regularizing definition must take into account the random nature of the actual number of particles per volume. Thus, particle concentration C(r, t) at point r and time t is better defined as the expectation of the number of particles whose center is inside the volume ΔV around r, during the period Δt around t. This definition converges when ΔV and Δt tend to 0 but requires a large number of independent realizations for a significant evaluation of the expectation by the law of large numbers. Such a situation is generally unattainable in the atmosphere, because phenomena are not reproducible and most of them are too nonstationary to allow more than one sampling. This is particularly true for airborne particle measurements in clouds, for which each cloud penetration is a unique realization. In such cases, the calculation of particle concentration is a Bayesian procedure that consists in deriving its probability density function (PDF), conditional on the unique data series taken along the aircraft trajectory and on a certain probability model of the a priori assumptions. From this point of view, the concentration itself is a random variable that can be estimated from raw data but that is not directly measurable.

b. Particle counting as a Poisson process

If particle counts are Poisson distributed, the counting process is characterized by the expected counted rate through the counter λ, also called the intensity of the Poisson process. When the intensity is a constant independent of time, the process is said to be homogeneous. Whenever λ is not constant, the process is called inhomogeneous.

For a homogeneous Poisson process, the intensity λ can be defined as the expected number of particles N counted during the sampling period T, divided by T:

$$\lambda = \frac{\overline{N}}{T}. \quad (2)$$

For a Poisson process, the variance of the number of counts is equal to its mean value:

$$\operatorname{var}(N) = \overline{N} = \lambda T. \quad (3)$$

When the expectation in definition (2) is replaced by the experimental average, the standard deviation in the estimation of λ decreases as T^{−1/2} when the sampling period T increases.

This simple relation allows one to estimate the minimum sampling period needed for the measurement of a given particle rate with a given accuracy, assuming the expected particle rate is constant.
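For instance, under the homogeneity assumption, the relation 1/√(λT) ≤ ε gives T ≥ 1/(λε²); a minimal helper (not from the paper) makes the arithmetic explicit:

```python
def min_sampling_period(expected_rate, rel_error):
    """Smallest T such that the Poisson relative error 1/sqrt(lambda*T)
    is at most rel_error, for a constant intensity expected_rate (s^-1)."""
    return 1.0 / (expected_rate * rel_error ** 2)

# Example: a droplet rate of 50 000 s^-1 measured to 5% needs T >= 8 ms.
print(min_sampling_period(5e4, 0.05))  # -> 0.008 (seconds)
```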

The conditions for an arbitrary counting process to be Poisson distributed are stated explicitly by Snyder (1975, theorem 2.2.7). For single particle counters they are obviously verified, except for the condition of evolution without aftereffects. This assumption may not be valid if there is a physical dependence between particles (e.g., particle clustering) or if the particle counter itself generates such a dependence (e.g., a dead time after each particle detection).

c. In the atmosphere, particle counting is a conditional Poisson process

In the atmosphere, particle counting with an airborne spectrometer is a Poisson process because counts are independent events. Corrections of dead-time effects have been proposed by Baumgardner et al. (1993). They are not discussed here since the data presented in this paper have been sampled with a counter without dead time. We tested the lack of aftereffects by selecting long series of counts that show a high level of homogeneity. Counts have been accumulated over varying sampling periods. They show that the standard deviation of the counting rate is inversely proportional to the square root of the sampling period. Similarly, Brenguier (1993) showed that interarrival times between counts are exponentially distributed down to one-fourth of a microsecond. These two results indicate that the Poisson process is a reasonable model for the particle counting with an airborne spectrometer.

However, particle concentration in the atmosphere is generally not homogeneous, and particle measurements are precisely intended to document its spatial variation. The intensity of the process varies with space and time (since the aircraft is fast, the field can be regarded as frozen during a traverse, and space and time are related by: distance = aircraft velocity × time), and the simple formula (3) is no longer valid for long time intervals.

Particle counting is thus a Poisson random process conditional on the time-varying value of the intensity. The usual estimation of this intensity is a linear smoothing with a rectangular window of fixed sampling period, which corresponds to a low-pass filter. If the sampling period is lengthened, the accuracy of the intensity estimation is improved according to (3), but information on variations at scales smaller than T is lost. If, on the contrary, the sampling period is shortened to resolve smaller scales, variations of the measured concentration that are due only to the randomness of the counting interfere with physically significant time variations of the mean rate. Figure 1 illustrates the difficulty of selecting a good compromise that removes the noise intrinsic to random counting while preserving the detection of sharp variations in particle concentration. For the example shown in Fig. 1, the reference intensity has been set to a constant value of 50 000 s−1 (a value typical of airborne droplet measurements), except for a short interval of 1 ms, during which the intensity has been doubled (100 000 s−1). The estimated rate has been calculated as the ratio of the number of arrivals to a given sampling period T (equal to 1 ms in Fig. 1a and 2 ms in Fig. 1b, respectively).
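The experiment of Fig. 1 is straightforward to reproduce. The sketch below draws arrivals from an inhomogeneous Poisson process by thinning (a standard technique, equivalent in law to the generator of appendix A) and forms the usual fixed-window estimate; the intensity values match the figure, while everything else is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def thinned_arrivals(rate_fn, t_end, rate_max):
    """Inhomogeneous Poisson arrivals by thinning: draw candidates at the
    constant rate rate_max, keep each with probability rate_fn(t)/rate_max."""
    t, out = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)
        if t >= t_end:
            return np.array(out)
        if rng.random() < rate_fn(t) / rate_max:
            out.append(t)

# Reference intensity of Fig. 1: 50 000 s^-1, doubled during 1 ms.
rate = lambda t: 1e5 if 0.010 <= t < 0.011 else 5e4
arrivals = thinned_arrivals(rate, t_end=0.02, rate_max=1e5)

# Current method: counts per fixed window T, divided by T (Fig. 1a: T = 1 ms).
T = 1e-3
counts, _ = np.histogram(arrivals, bins=np.arange(0.0, 0.02 + T, T))
print(counts / T)  # noisy around 5e4 s^-1, higher near the jump
```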

d. Discussion

If a large number of independent realizations of the same process were available, it would be possible to estimate the expected particle rate λ at any time t, with any time resolution δt, by calculating the expectation of counting a particle between t and t + δt, without regard to the adjacent values. Accuracy and time resolution would depend only on how large the number of realizations is. When a single realization is available (that is, a unique series of interarrival times between particles in the counter), such a procedure is meaningless. The whole information, in these conditions, is contained in the conditional probability distribution of λ with respect to (i) the available measurements and (ii) the available a priori assumptions. This calls for a Bayesian approach, which proceeds as follows.

We consider an inhomogeneous Poisson process whose time-varying intensity is itself governed by a process, called the underlying process (the Poisson process is said to be conditional on the underlying process). It is important to mention that additional a priori information about this (possibly random) underlying process must be provided. This point is crucial because it implies that the expected particle rate cannot be estimated if nothing is known a priori about the underlying process. Indeed, estimation with no constraint at all is an ill-posed problem, since there always exists a fast-varying rate that ideally fits each particle occurrence. The usual solution may be recognized as a limiting case corresponding to a crude assumption about the a priori information. Estimating the local expected particle rate as the number of counts in a window of period T implies, from the statistical point of view, that the particle rate during T is completely independent of what happened before and after that period. It is obvious that such a hypothesis is inadequate when the sampling period becomes too small.

The Bayesian approach is more rigorous, as it allows a clear and coherent mathematical model of the a priori assumptions that are used to define optimal estimation of the intensity of an inhomogeneous Poisson process. Estimation is said to be optimal whenever it relies upon the conditional probability measure of the variable of interest with respect to the observations considered. An exhaustive study of the subject is given in the books Random Point Processes by Snyder (1975) and Random Point Processes in Time and Space by Snyder and Miller (1991). The next section describes the application to the specific problem of particle concentration measurements in the atmosphere. In Bayesian filtering, the estimation is determined only by the data preceding the current location, which makes it suitable for real-time processing. In Bayesian smoothing, the estimation is determined by preceding data as well as any desired section of forthcoming data. Smoothing is more complex than filtering and is suited only to off-line processing. Both methods are described in the next section.

3. Optimal estimation of the particle rate

It has been discussed in the previous section that the counting of particles is a stochastic inhomogeneous Poisson process, conditional on its intensity, and that the evaluation of particle concentration from these measurements is formally equivalent to an estimation of the intensity process from a unique realization of the counting process (the series of particle interarrival times). This estimation is feasible only if a priori information is provided about the underlying process that governs the changes of the intensity. Here the underlying process is itself considered a stochastic process, in the sense that the laws that govern the changes of the particle concentration are not precisely known but only follow an a priori random drift. Such a counting process is often referred to as a doubly stochastic Poisson process. Extracting an estimate of this intensity from raw data is a Bayesian procedure called filtering or smoothing, according to the manner in which measurements are delivered, as explained below.

a. The optimal nonlinear filtering equation

The filtering problem associated with a Poisson process is described as follows: Let {Nt; t ≥ t0} be a stochastic Poisson process conditional on its intensity {λt; t ≥ t0}. We assume that {N} is observed on the interval [t0, t] and that the entire counting path {Nσ; t0 ≤ σ < t} is available. The endpoint time t corresponds to a real-time parameter, which increases from t0 as additional data are accumulated. The objective is to evaluate the conditional PDF of λt with respect to the observed measurements on N up to time t. It obviously evolves in time as (i) λt is a priori time varying and (ii) new data become available. It is important to note that, in the filtering process, only the current value λt of λ is to be estimated. Past values are not specifically reconsidered after new data have been obtained, as opposed to smoothing, which will be described later on.

As mentioned before, one has to assume a possible type of randomness for λt as a stochastic process. In the absence of any information, the obvious choice is to assume a random drift with independent increments (random walk). In continuous time, one has only two possibilities: a Poisson process if the drift is made of jumps or a Brownian motion if the drift has no discontinuities. In both cases, these are Markov processes represented by a linear evolution operator L on their a priori PDF pt(λ) (Chamon et al. 1994):

$$dp_t(\lambda) = Lp_t(\lambda)\,dt. \quad (4)$$
The conditional distribution pt ≡ p(λt | {Nσ; t0 ≤ σ ≤ t}), with respect to the observed measurements {N}, obeys the following filtering equation [Snyder 1975, chapter 6, Eq. (6.142)], whose main feature is its forcing term (the second term on the right-hand side):

$$dp_t(\lambda) = Lp_t(\lambda)\,dt + p_t(\lambda)\,\frac{\lambda - \hat{\lambda}_t}{\hat{\lambda}_t}\,\bigl(dN_t - \hat{\lambda}_t\,dt\bigr), \quad (5)$$

where $\hat{\lambda}_t = \int_{\lambda_{\min}}^{\lambda_{\max}} \lambda\,p_t(\lambda)\,d\lambda$ is the conditional mean of λ and dNt is the stochastic increment of N in the sense of jump differentials (dNt = 1 if there is a new particle at t; dNt = 0 otherwise). Note that the filtering equation is nonlinear in pt, owing to the presence of λ̂t as well as pt in the forcing term.

The operator L for continuous time random walk has two possible expressions.

(i) If the a priori random changes of λ are parameterized as a Brownian process, L has the following form:

$$Lp_t(\lambda) = \frac{q}{2}\,\frac{\partial^2 p_t(\lambda)}{\partial \lambda^2}, \quad (6a)$$

where q is the a priori variance rate of the changes.
(ii) If the a priori random changes of λ are themselves parameterized as a Poisson process, L has the following form:

$$Lp_t(\lambda) = -\Lambda\,p_t(\lambda) + \Lambda \int_{\lambda_{\min}}^{\lambda_{\max}} f(\lambda \mid \xi)\,p_t(\xi)\,d\xi, \quad (6b)$$

where Λ is the a priori mean rate of jumps of the intensity process and f(λ | ξ) is the probability distribution of the amplitude of these jumps. Here λmin and λmax are the limits of all possible values of λ; λmin is obviously equal to zero (no particles), and λmax shall be taken large enough to cover all possible values of λ, in agreement with the expected experimental conditions.
If no information is available on the underlying process, one can only admit a priori that all jumps are equally probable. Equation (6b) then takes the form

$$Lp_t(\lambda) = -\Lambda\,p_t(\lambda) + \frac{\Lambda}{\lambda_{\max} - \lambda_{\min}}. \quad (6c)$$

Computations in this paper are mainly based on this simplified form that corresponds to the simplest a priori hypothesis for droplet measurements in clouds. An example of filtering is given in the next section, for continuous changes of the intensity [Eq. (6a)] as well as an example with jumps of intensity [Eq. (6b)].
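To make the scheme concrete, one Euler step of Eq. (5) with the uniform-jump operator (6c) on a discretized λ grid might look as follows; this anticipates the discretization of appendix B, and the final clipping and renormalization are numerical safeguards, not part of the equation.

```python
import numpy as np

def filter_step(p, lam_grid, dt, dN, Lambda):
    """One Euler step of Eq. (5) with operator (6c) on n intensity classes.
    p sums to 1; dN is 1 if a particle arrived during dt, else 0."""
    n = p.size
    lam_hat = np.dot(lam_grid, p)                # conditional mean of lambda
    prior = (-Lambda * p + Lambda / n) * dt      # a priori jump term, Eq. (6c)
    forcing = p * (lam_grid - lam_hat) / lam_hat * (dN - lam_hat * dt)
    p = np.clip(p + prior + forcing, 0.0, None)  # guard against round-off
    return p / p.sum()                           # renormalize
```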

b. Remarks on the different terms of the equation

The first term of Eq. (5) does not contain any information on the counted events and represents the a priori evolution of the λ PDF [Eq. (4)]. When λ is a random walk, its free evolution leads asymptotically to a flat distribution, as expected. This is shown by solving Eq. (4), where the operator L is given by Eq. (6c):

$$p_t(\lambda) = p_0(\lambda)\,e^{-\Lambda t} + \frac{1 - e^{-\Lambda t}}{\lambda_{\max} - \lambda_{\min}}, \quad (7)$$

where p0(λ) is the initial value of the λ PDF at time t = 0. Here τ = 1/Λ appears as a mean characteristic time. The distribution is asymptotically flattened as e^{−Λt}.

On the other hand, if one admits that λ is a priori subject to a continuous Brownian drift rather than Poisson jumps, then one recognizes in Eq. (6a) the diffusion equation, whose asymptotic behavior is well known. The distribution is asymptotically flattened as well, to a degenerate Gaussian distribution with infinite variance.

The second term of Eq. (5) contains a posteriori information upon the counting process gained from measurements and prevents pt(λ) from flattening, as follows. If the average particle counting rate is as expected, the last term is null on average (dNt − λ̂t dt = 0). If particles arrive more often than predicted by the current value of λ̂, this last term becomes positive on average, and the values of pt for λ > λ̂ are increased, and conversely for smaller values of λ. It follows that λ̂ moves progressively toward larger values. This is true also for the mode, which represents the most likely value of the intensity. A similar behavior arises (toward smaller values) when particles arrive more slowly than predicted by λ̂.

c. Optimal nonlinear smoothing

Filtering is the way to process data in real time, as it is by definition nonanticipatory of the future. Since the method described here aims at postprocessing, one should use information from the entire data series. Such a procedure is called smoothing. As the local intensity mainly correlates with its close future (as well as its past), only a short section of the data following the current location needs to be used for updating the estimation derived from filtering. This numerical scheme is called fixed-lag smoothing, and it provides pt(λ | {Nσ; t0 ≤ σ ≤ t1}) at time t in an observation interval [t0, t1], where t1 increases with t as additional data are taken. In practice, the postinterval [t, t1] for smoothing does not need to be very long to update the filtered data efficiently.

The exact solution of the smoothing problem can be found in Snyder (1975, chapter 6). Here the smoothing methodology used in this paper is presented, with notation similar to Snyder's.

To write the smoothing equation, we need to introduce some terminology and notation. Let {Nt; t ≥ t0} be a doubly stochastic Poisson process with intensity {xt; t ≥ t0}, where xt is any of n values of intensity λ between given limits λmin and λmax. Denote the counting path observed on the interval [t0, t1] by Πt0,t1 = {Nσ; t0 ≤ σ < t1}. Suppose t is a time in [t0, t1] and split the observed counting path into two subrecords Πt0,t = {Nσ; t0 ≤ σ < t} and Πt,t1 = {Nσ; t ≤ σ < t1}. Let pt(λ | Πt0,t1) be the conditional probability density of xt given Πt0,t1. It will be called the conditional density of smoothing. The conditional densities pt(λ | Πt0,t) and pt(λ | Πt,t1) of xt given Πt0,t and Πt,t1 will be called the conditional density of filtering and the conditional density of intensity estimation, respectively. The conditional probability density for xu, t ≤ u < t1, given xt = λ and Πt,u = {Nσ; t ≤ σ < u}, will be denoted by pu(ξ | xt = λ, Πt,u) and called the pinned conditional density of filtering. The conditional density of smoothing evolves with increasing t1, for t1 ≥ t, as

$$dp_t(\lambda \mid \Pi_{t_0,t_1}) = p_t(\lambda \mid \Pi_{t_0,t_1})\,\frac{\hat{\lambda}_{t_1\mid(t,t_1)}(t,\lambda) - \hat{\lambda}_{t_1\mid(t_0,t_1)}}{\hat{\lambda}_{t_1\mid(t_0,t_1)}}\,\bigl(dN_{t_1} - \hat{\lambda}_{t_1\mid(t_0,t_1)}\,dt_1\bigr). \quad (8)$$

The estimates of the intensity process that are required on the right-hand side of this equation are the filtering estimates of Eq. (5). Here λ̂t1|(t0,t1) is the mean value of λ with respect to the probability density of filtering calculated for the whole path Πt0,t1; λ̂t1|(t,t1)(t, λ) is the mean value of λ with respect to the pinned conditional density of filtering pt1(ξ | xt = λ, Πt,t1). This density is obtained from Eq. (5) with the initial condition pt(ξ) = δλ at t1 = t. Equation (8) updates the density at time t, smoothly changing the probability density distribution as new data are taken into account. The procedure is similar to that given by the second term of Eq. (5); the difference is that λ̂t1|(t,t1)(t, λ) stands in place of λ. The initial value of λ̂t1|(t,t1)(t, λ) is λ, and for t1 > t it then evolves according to the filtering procedure as additional data are taken into account. At each time step the change of probability density is proportional to the difference between the mean value of λ with respect to the probability density of filtering calculated for the whole counting path Πt0,t1 and the mean value of λ with respect to the pinned conditional density of filtering calculated for the counting path Πt,t1. After some time, the filtering procedure "forgets" the initial conditions, the two mean values λ̂t1|(t,t1)(t, λ) and λ̂t1|(t0,t1) become equal, and the updating is finished. Therefore, there is no need to continue smoothing for a long time. In the present paper the smoothing time ts = t1 − t was fixed and depended only on the choice of the parameter Λ.

4. Application to simulated data

The method cannot be directly evaluated with experimental data, since the sought intensity (the particle concentration in the atmosphere) is unknown and not measurable. Tests are thus performed with simulated data, using a generator as described in appendix A. After the range of possible λ values has been discretized, the filtering or smoothing equation is numerically integrated. Details about the numerics are given in appendix B.

a. Intensity changes parameterized as a Poisson process

The estimator presented in this paper has been developed especially to process airborne droplet measurements. The variations of the droplet rate through the counter are thus modeled as a stochastic process. In situ observations have shown that a change can occur at any time. For example, entering a cloud does not mean that there is a minimal distance before the next jump. Short bursts of droplets are very common, corresponding probably to cloud remnants. This leads to the choice of a Poisson process for describing the statistics of the intensity changes. The next step is to identify the probability density function of the amplitude of the changes. Here again, in situ observations are valuable. Inside cloud cells, the droplet concentration often varies continuously, but examples of sharp jumps are also common. Brenguier (1993) documented an interface where the counted droplet rate increases from 0 to its maximum value (about 200 000 s−1 in this case) in less than 0.1 ms. It has thus been assumed in a first step that all values of sudden variations from 0 to the maximum intensity are a priori equiprobable, and the linear operator that describes the statistics of the intensity changes takes the simplified form (6c).

Examples:

(i) The first example describes the filtering of a simulated series of arrival times for an intensity of 150 000 s−1 that falls suddenly to a value of 50 000 s−1. Figure 2a shows the time evolution of the λ PDF pt(λ). Figure 2b shows the true value of the intensity (dashed line) and the estimates represented by the value at the peak of the distribution, or most probable value λm. The values λ− and λ+ on either side of λm delimit the interval containing 80% of the probability (80% confidence interval). They provide information about the flattening of the probability density distribution in the region of the peak. The calculation procedure makes λ− and λ+ symmetric with respect to λm. Figure 2c represents the probability at the peak, pt(λm)Δλ. After 0.6 ms of filtering, a mode appears and the estimates are stable around the true value. The probability at the peak increases progressively, showing that the longer the intensity is constant, the higher the likelihood of the estimate. Less than 0.1 ms after the intensity has fallen to a smaller value, a second mode appears in the λ PDF; the density at the first mode decreases until the second mode at 50 000 s−1 becomes higher than the previous one. About 0.2 ms after the jump, the estimates have moved to the new mode and the corresponding density increases rapidly. This figure shows that only 10 events, arriving three times less often than previously, are enough for a significant detection of the change in the intensity of the process. Figure 2a clearly shows the nonlinear behavior of the equation. The mode does not slide slowly from 150 000 to 50 000 s−1; on the contrary, it almost jumps from the previous value to the new one.

(ii) The second example (Fig. 3) illustrates the role of the parameter Λ, which represents the expected rate of change of the intensity. A series of random particle arrivals has been generated with a constant intensity of 50 000 s−1. The column on the left-hand side corresponds to Λ = 300 s−1, while on the right-hand side Λ = 5000 s−1. From top to bottom, the figures represent the most probable value λm after filtering only (Figs. 3a,b) and after filtering and smoothing on a 1-ms postinterval (Figs. 3c,d). The small bias in the estimation of λ in Figs. 3a and 3c is due to the discretization of the λ range. The last figures show the density at the peak of the PDF for nonlinear filtering and smoothing (Figs. 3e,f). Smoothing aims at reducing the lag between actual changes and their detection, but these figures show that it also reduces spurious gaps in the estimate, such as the null value of the intensity at 10 ms in Fig. 3a. The comparison between the left- and right-hand side figures shows that the higher the Λ value, the more sensitive the estimator. For Λ = 300 s−1, after a few milliseconds of stabilization, all the fluctuations of the intensity that are due only to the randomness of the counting are smoothed; for Λ = 5000 s−1 most of them are selected as significant. Concurrently, the significance of the result is decreased: at Λ = 300 s−1, the estimated value reaches a likelihood higher than 75%, while at Λ = 5000 s−1, it remains below 25%. Therefore, Λ is a tuning parameter whose optimal value is dictated by the frequency of jumps expected for the intensity λ. If the value is too high, many structures will be selected by the estimator, but it will be difficult to decide whether they are physically significant or whether they reflect only the randomness of the counting. This is similar to reducing the width of the sampling period in the usual method, except that the latter is a linear processing that cannot track possible jumps of intensity within a few measurements, as nonlinear optimal processing can. Only the results for two values of Λ are presented as examples, but one can imagine an experiment where Λ is a parameter of the probability density and one searches for the best choice of Λ.

(iii) The nonlinear estimator is much more effective than the usual method for distinguishing between randomness of the counting and physically significant variations of the intensity. This is illustrated in Fig. 4, which shows the true value of the intensity (dashed line), the most probable value, and the 80% confidence interval after filtering and smoothing, for a simulation similar to Fig. 1, with various widths of the jump, from 1 ms in Fig. 4a to 0.5 ms in Fig. 4c. In this example, Λ has been set to 300 s−1. The comparison of Fig. 4a with Fig. 1, for the same width of the jump (1 ms), shows that in the region of constant intensity the fluctuations due to the counting have been completely smoothed, while the estimator remains capable of detecting precisely the amplitude of the jump and its sharpness. When compared to Fig. 2b, this example shows how smoothing eliminates the lag in the detection of sharp changes. When the width of the jump becomes so small (Fig. 4b) that it contains fewer than 65 events on average, the mode of the λ PDF stays at the value 50 000 s−1, but the sudden broadening of the 80% confidence interval suggests that some fluctuation occurred in the intensity (precise information can be obtained by consulting the probability density distribution). Finally, when the width reaches 0.5 ms (Fig. 4c), the estimator misses the intensity variation.

(iv) Figure 5, for the same jump width as in Fig. 4b, shows how to analyze the λ PDF in order to get additional information about questionable estimates. Figure 5a shows the two λ values, λm and λ′m, corresponding to the two highest peaks in the λ PDF. Figure 5b represents the corresponding probabilities. In Fig. 5a, λ′m suggests that the fluctuation revealed by the broadening of the 80% confidence interval in Fig. 4b corresponds to an increase of the intensity. Figure 5b provides an estimate of the confidence and attests that the assumption that nothing happened at this location (λ constant and equal to 50 000 s−1) has the same level of likelihood as the assumption that the intensity has increased, but that this level is quite low (between 10% and 20%).

(v) The above examples illustrate the potential of the optimal nonlinear estimator when little is known about the statistics of the changes in the particle rate, whereas such information can be provided to the filter through Eq. (6b). In the next experiment, the intensity varies randomly, but its power spectrum decreases with a slope equal to −2. This is simulated again as a random process with a given intensity jump distribution. The resulting time series λ(t) is shown in Fig. 6a. As for the previous tests, it feeds the Poisson random generator for generating a series of interarrival times of particles in a counter. It must be noted here that the Poisson generator reacts to changes in the intensity only after an event has been generated. It follows that the statistics of λ are valid only at the scale of a few events or larger (200 Hz in the case where the mean intensity is 1000 s−1). The series of interarrival times has then been processed by the current method with sampling periods of 10, 50 (Fig. 6b), and 100 ms (Fig. 6c). The result has not been plotted for T = 10 ms because the fluctuations are so large that the figure would be unusable.
The probability density function of the intensity changes has been empirically derived from the original time series λ(t) as a Gaussian distribution of the jump amplitudes,

$$f(\lambda \mid \xi) \propto \exp\!\left[-\frac{(\lambda - \xi)^2}{2\sigma^2}\right],$$

where σ is the empirically fitted width, and introduced in Eq. (6b). The series of interarrival times has then been processed by the optimal estimator with Λ = 100 s−1 (Fig. 6d) and Λ = 1000 s−1 (Fig. 6e).
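For illustration, the discretized operator (6b) with such a Gaussian jump-amplitude PDF can be assembled as a matrix; σ below stands for the empirically fitted width, whose value is not given in the text.

```python
import numpy as np

def gaussian_jump_operator(lam_grid, Lambda, sigma):
    """Matrix form of the discretized operator (6b) with a Gaussian
    jump-amplitude PDF f(lambda | xi) centered on xi with width sigma.
    Between arrivals the a priori evolution is dp = (L @ p) * dt."""
    n = lam_grid.size
    f = np.exp(-(lam_grid[:, None] - lam_grid[None, :]) ** 2
               / (2.0 * sigma ** 2))
    f /= f.sum(axis=0, keepdims=True)  # each column is a PDF over destinations
    return Lambda * (f - np.eye(n))

# Usage: p += (gaussian_jump_operator(...) @ p) * dt between arrivals; the
# forcing term of Eq. (5) is applied at each step as in the uniform-jump case.
```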

From Fig. 6 alone, it is difficult to decide which solution is closest to the original series, especially between Figs. 6b (linear) and 6e (nonlinear).

However, there is a large discrepancy between the estimates when the power spectrum is considered. Figure 7a shows the power spectrum (times the frequency) for the estimates made with the current method, and Fig. 7b shows the original series (solid line) and the two nonlinear estimates.

The linear average over fixed sampling periods is characterized by minima at the inverse of the period and its harmonics. When the period is too long (T = 100 ms), the signal is attenuated. When it is too short (T = 10 ms), the white noise of the Poisson process hides the underlying process. Finally, for the optimum value of the sampling period (T = 50 ms), the power spectrum is significant only at frequencies smaller than 6 Hz (a period of about 200 ms), then it becomes oscillatory.

On the contrary, the nonlinear estimation preserves the slope of the power spectrum. If Λ is too small, the signal is attenuated; and for the optimum value (Λ = 1000 s−1) the power spectrum of the estimate is identical to the power spectrum of the original series, up to the maximum frequency (500 Hz in this case).

The examples presented show that the nonlinear estimator is much more effective than the usual linear method for the estimation of processes that are characterized by sharp discontinuities in their intensity. When the a priori spectral properties of the process are known, the nonlinear filtering is capable of retrieving an estimate that retains the same properties. The expected rate of change of the intensity, Λ, is a tuning parameter that characterizes optimality in the compromise between the sensitivity of the filter and the level of likelihood of the estimate. It should be stressed, once again, that this is equivalent to choosing a time constant for the frequency cutoff of the λ process, but nonlinear filtering enables one to fully utilize jump models for λ.

b. Intensity changes parameterized as a Brownian process

In the examples shown above, the intensity of the underlying process is subject to discontinuities. If it is known a priori that such discontinuities cannot exist, the operator that describes the intensity changes takes the form given by Eq. (6a). This is the case when the underlying process is governed by diffusion. In the absence of random Poisson discontinuities, linear processing with a window of adequate width (the inverse of the spectral cutoff frequency) might seem appropriate. The next example shows that even in this case, the nonlinear filter still provides a better estimate.

The original intensity λ(t) varies as a sinusoidal function between the values 50 000 s−1 and 150 000 s−1, with a period of 4 ms from t = 0 to t = 16 ms, 2 ms from 16 to 24 ms, and finally 1 ms from 24 to 28 ms. Figure 8 shows the estimates by the current method, with T equal to (a) 0.5 ms, (b) 1 ms, and (c) 2 ms, respectively, and by nonlinear smoothing with (d) q = 10^11 s−3 and (e) q = 10^12 s−3. In Fig. 8e, the Poisson noise has been removed better than in Fig. 8b, while the detection of the highest frequency is almost as good as in Fig. 8a.

5. Application to actual data

In the previous section, the optimal estimator has been tested with artificial series of particle interarrival times whose intensity is known. The potential of the method is now illustrated with two samples of droplet measurements collected in actual clouds with the fast forward-scattering spectrometer probe (Fast FSSP) (Brenguier 1993). This probe detects water droplets crossing a laser beam and provides, for each detection, the amplitude of the pulse (directly related to the droplet diameter), the pulse duration, the interarrival time between successive pulses, and a flag that depends on the position of the droplet within the detection beam. That flag is used for selecting only droplets crossing the beam in the depth of field (DOF), where they are correctly sized. The total beam-sensitive section is equal to about 2 mm2, that is, a total volume flow rate of 200 cm3 s−1 for an aircraft flying at 100 m s−1, while the DOF has a cross section between 0.4 and 0.05 mm2, depending on the probe configuration, that is, a sampled volume flow rate between 40 and 5 cm3 s−1.

The data have been processed in order to produce a set of series of interarrival times. The first series contains all the detections selected in the DOF, and each interarrival time is calculated as the sum of all the measured pulse durations and interarrival times between two successive selected detections. The second series is derived from the previous one after selection of only droplets with a diameter larger than a given threshold and the corresponding interarrival times are similarly calculated. Successive series are thus derived for increasing values of the diameter threshold. The objective of the processing is to estimate from this set of interarrival time series a set of time series of droplet rates λϕi(t), where the ϕi are the successive diameter thresholds. Droplet rates are converted into concentrations Nϕi after division by the sampled volume flow rate. At any time t, the λϕi(t) (or the Nϕi) represent the cumulated droplet size distribution. Obviously, the higher the diameter threshold, the lower is the corresponding droplet rate. Therefore, if the droplet rates are estimated by averaging over a fixed sampling period, the statistical significance of the estimates decreases with increasing diameter thresholds. The following examples illustrate this limitation of the current method and the potential of the nonlinear estimator.
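A sketch of this bookkeeping follows; the record field names are hypothetical, and the assignment of a selected droplet's own pulse duration to the next interval is our reading of the description above.

```python
def threshold_series(detections, diameter_threshold):
    """Interarrival-time series for droplets above a diameter threshold.
    detections: DOF-selected droplets in arrival order, as dicts with the
    (hypothetical) keys 'diameter' (um), 'gap' (interarrival time since the
    previous pulse), and 'duration' (pulse duration)."""
    series, acc = [], 0.0
    for d in detections:
        acc += d['gap']
        if d['diameter'] >= diameter_threshold:
            series.append(acc)    # time elapsed since the last selected droplet
            acc = d['duration']   # its pulse counts toward the next interval
        else:
            acc += d['duration']  # rejected droplet: absorb pulse and gap
    return series
```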

a. Level penetration through a convective cell

During the Sensors Performance in Cloud Experiment performed in November 1993 at Palm Beach, Florida, the Fast FSSP was mounted on the NCAR King Air. The probe was set with a diameter resolution of 32 classes between 1 and 40 μm and a DOF section equal to 0.4 mm2. On 17 November (flight 6), the King Air flew through nonprecipitating cumuli with very high droplet concentrations (Brenguier and Rodi 1995). The traverse through an isolated cell at 1921:59 is shown in Fig. 9a, with eight series of concentrations Nϕi calculated by the current method with a sampling period of 10 ms (100 Hz). The diameter scale is indicated in the figure. The total droplet rate ranges between 20 000 and 60 000 s−1 (between 200 and 600 particles per sample on average for the total rate, and less for the other series). The missing data at t = 3000 ms correspond to a saturation of the acquisition system.

On the left-hand side of the figure, the transition from clear to cloudy air is sharp. The first 200-m section (until t = 2900 ms) corresponds to an ascending cell where total concentration and size distribution are relatively uniform. The second part of the traverse shows a decrease of the total concentration and inhomogeneities in the size distribution at various scales. An interesting feature appears at t = 5000 ms, where a more uniform structure, about 15 m long (200 ms), is embedded in a region of small-scale fluctuations. However, it is difficult to get more information from this sample, especially on the characteristic size of the microphysical structures because small-scale fluctuations are present all over the sample. Such fluctuations are typical of the randomness of the counting.

Figure 9b shows the same sample processed with the nonlinear estimator. Most of the small-scale fluctuations have been rejected except where it is more likely for these structures to be physically significant. The main cell appears more uniform than with the current method, and the difference between the short uniform structure at t = 5000 ms and its fluctuating environment is reinforced. Details upon this section are given in Fig. 9c. Finally, Fig. 9d, which shows a very short (30 ms, i.e., less than 3 m) region of reduced concentration, illustrates the potential of the estimator for studying the mixing process at the very small scale and its effect on droplet sizes.

b. Ascent through a stratocumulus layer

During the European Cloud Radiation Experiment (EUCREX) performed in April 1994 at Brest, France, the Fast FSSP was mounted on the Météo-France Merlin-IV. The probe was set with a diameter resolution of 256 classes between 1 and 34 μm and a DOF section equal to 0.05 mm2. On 18 April (flight 11), the Merlin-IV flew through an extended stratocumulus layer with high droplet concentrations. (The air was flowing from the continent on the northeast.) The scientific objective here is to describe the microphysical structure in relation with the cloud radiative properties. To document the vertical profile of the droplet size distribution, several ascents and descents were made across the cloud layer. An example is shown in Fig. 10 for an ascent.

The reduced DOF section during this experiment offers a better selection of droplets before sizing, but it concurrently reduces the measured droplet rate and consequently the statistical significance of the data. The Merlin-IV was flying at about 85 m s−1, and the maximum droplet rate in the DOF is between 1000 and 1500 s−1 (a droplet concentration slightly larger than 500 cm−3). In Fig. 10a, the DOF droplet rate has been processed by averaging over a sampling period of 50 ms (about 4 m in spatial resolution along the flight path). The fluctuations in the rate are characteristic of the Poisson noise and suggest that this sampling period is too short. This is obviously worse for the following series derived with higher diameter thresholds (not plotted in the figure), since the corresponding rates are lower. However, the complete ascent lasts less than 30 s, and reducing the spatial resolution is not appropriate for a detailed description of the vertical evolution of the droplet size distribution. Figure 10b shows the same sample processed by nonlinear filtering. Saturation periods of the acquisition system have been marked in the figure. As previously, series corresponding to increasing diameter thresholds have been processed, and the size distribution has been derived from the differences between the series. Its time evolution along the ascent is reproduced in Fig. 10c. The droplets grow from about 4 μm at cloud base to more than 10 μm at cloud top. As already observed in stratocumulus, the droplet spectrum broadens with altitude. The contour plot of the spectra smooths the details that can be observed in a representation such as Fig. 9b, but it offers a better visualization of the modes. Even in this case, the figure shows many details in the spectral evolution that could not be observed with the current method.

6. Conclusions

Most of the acquisition systems used for airborne particle measurements accumulate particle counts over fixed sampling periods, and local concentrations are then derived as the ratio of this counted number to the corresponding sampled volume of air. It is possible to record particle arrival times with improved acquisition systems, such as the 2D imaging PMS probes for large particles, the DSM acquisition system (Baumgardner et al. 1993), or the Fast FSSP (Brenguier et al. 1993) for cloud droplets, but particle concentrations are still derived by estimating the number of counts during a given sample, from either the counted particle number, the probe activity, or the slope of the interarrival time distribution (Brenguier et al. 1994).

However, particle detections are random discrete events, and the number of particles counted during a given sample is not necessarily equal to the expected value. Counting is a Poisson process, and consequently the standard deviation of the estimate decreases as the counted number increases (a value between 1000 and 5000 counts per sample can be chosen as a good compromise). In cloud physics, it is not always possible to guarantee a locally statistically significant number of counts in every sample when the particle concentration decreases. For example, during measurements performed in isolated clouds, the number of particles counted in every cloud can be too small, especially for large particles. To reach a satisfactory statistical significance, it is thus necessary to cumulate measurements over many cloud traverses, but the usefulness of the data is then reduced. This constraint strongly limits the spatial resolution of airborne particle measurements.

From the statistical point of view, each cloud sample is a unique realization of a random inhomogeneous Poisson process, whose intensity (the droplet rate through the counter) is an unknown process that is not directly measurable. The problem is thus to derive from this unique realization the best estimation of the intensity, viewed as an a priori random process, which leads to a Bayesian approach.

It has been shown in this paper that cumulating the particle counts over a given sampling period is a crude procedure that does not allow one to discriminate physically significant variations of the concentration from variations due only to the randomness of the counting process. This dilemma is better solved with optimal nonlinear filtering or smoothing, by calculating the probability density function of the possible values of the intensity as a function of time. Of course, this Bayesian approach is more expensive in terms of computation time, but it provides an estimate of the most probable value and its likelihood. Interpretation of the data is thus more objective.

A priori information about the dynamical statistics of the intensity process is a crucial step in such an approach. If nothing is known, tracking of a time-varying intensity is meaningless. Defining an optimal estimation implies that some assumptions have been made upon these statistics. For example, the current linear method of cumulating over adjacent samples is equivalent to the unrealistic assumption that the spectral density of the solution is limited to periods longer than the sample duration and that solutions in adjacent samples are fully independent. More generally, optimal linear filtering is suited when it is possible to postulate a linear dynamical model for the intensity process, which is not the case for the droplet concentration in clouds.

If the sought solution cannot be described as a deterministic parametric function, only two models are available, for the intensity process to have independent increments: It must be either a Poisson process or a Brownian process. Numerical solutions have been given for both models. The Poisson model is more suited for cloud measurements where sudden changes of intensity (jump model) are of interest.

Tests performed with simulated series have shown that, in all possible conditions, this nonlinear Bayesian approach efficiently removes the noise due to the counting while preserving the detection of sharp variations in the intensity. Additional tests have been performed with a simulated series whose spectral density, and consequently the probability density function of the changes, is known. In such a case, the method provides an optimal solution whose spectral density is similar to that of the original series.

Finally, the method has been applied to actual measurements of cloud droplets performed with the Fast FSSP. Two examples have been shown: a traverse across a cumulus cell and an ascent into a stratocumulus layer. They illustrate the potential of the method for studies at the smallest scales and for improved accuracy in the evaluation of the low concentration of large particles.

The conclusion of the paper is that optimal nonlinear estimation is far superior to usual linear windowing for detecting microstructures in clouds. Indeed, nonlinear estimation behaves in that case as a permanent jump detection of a state variable (λ), when a suitable a priori model is provided from a probabilistic point of view.

The most important feature of the method is its potential for improvement. Results shown in this paper for actual measurements have been obtained with very little a priori information about the jump amplitude statistics of the droplet concentration, namely, that it is subject to random changes and that any amplitude of change is equiprobable. It is also necessary to fix the value of the parameter Λ corresponding to the expected rate of change of the intensity. For too high values of Λ, the estimator detects small-scale fluctuations, but their likelihood is low. For too low values of Λ, the estimator misses significant changes in the intensity, and the likelihood is also low. As Λ approaches its optimal value, the likelihood of the estimated value of the intensity increases. When the probability density function of the jumps is known, as in the simulated example described in Fig. 6, the likelihood of the estimated value is improved. It follows that, as our knowledge of the microphysical processes increases, the performance of the estimator progressively improves. Finally, if the particle concentration is itself modeled as a parametric function, the estimator can be extended to a parametric estimation that retrieves from the particle counts the conditional distribution of the parameters.

Acknowledgments

This research was supported by Météo-France and INSU under Grant 95317. The authors acknowledge Dr. R. Lenschow and the reviewers for their valuable remarks and suggestions.

REFERENCES

  • Baker, B. A., 1992: Turbulent entrainment and mixing in clouds: A new observational approach. J. Atmos. Sci., 49, 387–404.

  • Baumgardner, D., K. Weaver, and B. Baker, 1993: A technique for the measurement of cloud structure on centimeter scales. J. Atmos. Oceanic Technol., 10, 557–563.

  • Brenguier, J. L., 1989: Coincidence and dead-time corrections for particle counters. Part II: High concentration measurements with an FSSP. J. Atmos. Oceanic Technol., 6, 585–598.

  • ———, 1993: Observations of cloud microstructure at the centimeter scale. J. Appl. Meteor., 32, 783–793.

  • ———, and A. Rodi, 1995: Evolution of cloud droplets in small Florida cumuli. Preprints, Conf. on Cloud Physics, Dallas, TX, Amer. Meteor. Soc., 361–365.

  • ———, D. Baumgardner, and B. Baker, 1994: A review and discussion of processing algorithms for FSSP concentration measurements. J. Atmos. Oceanic Technol., 11, 1409–1414.

  • Chamon, M., A. Monin, and G. Salut, 1994: Estimation/détection non-linéaire: Théorie, algorithmes et applications. Rapport LAAS No. 94.507, 96 pp. [Available from G. Salut, LAAS, 7 Avenue du Colonel Roche, 31077 Toulouse Cedex, France.]

  • Paluch, I. R., and D. G. Baumgardner, 1989: Entrainment and mixing in a continental convective cloud. J. Atmos. Sci., 46, 261–278.

  • Snyder, D. L., 1975: Random Point Processes. Wiley and Sons, 495 pp.

  • ———, and M. J. Miller, 1991: Random Point Processes in Time and Space. Springer-Verlag, 481 pp.

APPENDIX A

Poisson Generator

Let us suppose that {u_i} is a sequence of independent variables uniformly distributed on [0, 1]. Then a sequence of independent, exponentially distributed variables can be obtained by the transformation t_i = −(1/λ) ln(u_i). Under this transformation, the probability density of t_i is λ exp(−λT), for T > 0.

A Poisson process with intensity λ (which can be time dependent) can be simulated by generating u_1 and transforming it to obtain t_1, which is assigned as the time from t_0 to the first occurrence; then generating u_2 and transforming it to obtain t_2, which is assigned as the time from the first to the second occurrence; and so forth.
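
For concreteness, here is a minimal Python sketch of this generator (the function name, the use of NumPy, and the example values are ours):

    import numpy as np

    def poisson_arrival_times(lam, t_max, rng=None):
        """Simulate arrival times of a Poisson process of intensity lam (s^-1)
        on [0, t_max]: u ~ U(0, 1) gives an Exp(lam) inter-arrival -ln(u)/lam."""
        rng = np.random.default_rng() if rng is None else rng
        times = []
        t = 0.0
        while True:
            t += -np.log(rng.uniform()) / lam   # exponential inter-arrival time
            if t > t_max:
                return np.asarray(times)
            times.append(t)

    # Example: about 50 counts expected in 1 ms at 50 000 s^-1
    arrivals = poisson_arrival_times(50_000.0, 1e-3)

For a time-dependent intensity, the recipe above amounts to drawing each interval with the current value of λ, which is adequate when λ varies slowly between arrivals.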

APPENDIX B

Details about the Numerics of the Filtering and Smoothing Procedure

Discretization

To apply the filtering or smoothing procedure, Eqs. (5) and (8) must be discretized over the range (λmin, λmax) of all possible intensity values. The equations are then solved for each of n equally spaced intensity classes {λ_i} [class width Δλ = (λmax − λmin)/n]. The probability density p_i is assumed constant inside each class; p_i Δλ thus gives the probability of having an intensity in the ith class. After multiplying Eqs. (5) and (8) by Δλ and henceforth denoting this probability simply by p_i, one obtains

  • the discretized version of Eq. (5) at time t ≥ t0 [with the operator L given by Eq. (6c)],

        dp_i = L(p_i) dt + p_i [(λ_i − λ̂)/λ̂] (dN_t − λ̂ dt),        (B1)

    where λ̂ = Σ_{i=1}^{n} λ_i p_i; and
  • the discretized version of Eq. (8) at time t1 (t ≤ t1 ≤ t + ts, ts being the smoothing time),

        dp_i = p_i [(λ̂^p_i − λ̂_f)/λ̂_f] (dN_t − λ̂_f dt),        (B2)

    where λ̂_f = Σ_{i=1}^{n} λ_i p^f_i [p^f_i is simply the probability distribution of the filtering Eq. (B1) at time t1] and λ̂^p_i is the mean value of λ with respect to the pinned conditional density of the filtering; that is, λ̂^p_i = Σ_{j=1}^{n} λ_j p^p_{ij}, where

        dp^p_{ij} = L(p^p_{ij}) dt + p^p_{ij} [(λ_j − λ̂^p_i)/λ̂^p_i] (dN_t − λ̂^p_i dt),        (B3)

    with initial condition p^p_{ij} = δ_ij at t = t1.
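
As an illustration, here is a minimal Python sketch of one Euler step of a filter of the form (B1). It assumes the equiprobable-jump model for the generator, L(p_i) = Λ(1/n − p_i); the names are ours, and the clipping and renormalization are numerical safeguards rather than part of the equation:

    import numpy as np

    def filter_step(p, lam_grid, dN, dt, Lam):
        """One Euler step of the discretized filtering equation (B1).
        p        -- probabilities of the n intensity classes (sum to 1)
        lam_grid -- class-center intensities lambda_i (s^-1)
        dN       -- 1 if a droplet arrived during dt, 0 otherwise
        Lam      -- a priori jump rate of the intensity (Lambda)
        """
        lam_hat = max(float(np.dot(lam_grid, p)), 1e-12)  # current estimate of lambda
        Lp = Lam * (1.0 / p.size - p)                     # equiprobable-jump generator
        p = p + Lp * dt + p * (lam_grid - lam_hat) / lam_hat * (dN - lam_hat * dt)
        p = np.clip(p, 0.0, None)                         # guard against round-off
        return p / p.sum()

Both the generator term and the innovation term sum to zero over the classes, so in exact arithmetic the step preserves the normalization of p.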

Set of parameters

The only parameter of the filter is Λ, but it is also necessary to fix the values of λmin, λmax, the number of classes n, the time of smoothing ts, and the time integration step Δt.

The choice of the parameter values depends on the mean count rate λ̄ in a given region. The natural choice for the minimum value is λmin = 0. The maximum value λmax should be chosen large enough that the probability of reaching it is close to zero. In the studies of the simulated time series it was set at λmax = 5λ̄; in the studies of the real cases, λmax = 8λ̄ was used. The number of classes n is a compromise between the desired resolution and the computation time; in our case n = 50. The time step of integration (Δt and Δt1) is taken as one-tenth of the distance between two successive droplets, but not more than a given maximum value corresponding to one-tenth of the mean distance between droplets. The value of Δt is recalculated after each droplet arrival. The smoothing time was fixed at 50 times the mean distance between droplets (ts = 1 ms for a mean count rate of 50 000 s−1). The optimal value of Λ depends on the statistics of the changes in the intensity; for cloud droplet measurements, a value of λ̄/200 provides satisfactory results.
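
Gathered as a Python sketch (the variable names are ours; the values follow the choices described above, for λ̄ = 50 000 s−1):

    import numpy as np

    lam_bar = 50_000.0                  # mean count rate (s^-1)
    n       = 50                        # number of intensity classes
    lam_min = 0.0
    lam_max = 8.0 * lam_bar             # 5*lam_bar sufficed for the simulated series
    Lam     = lam_bar / 200.0           # expected jump rate of the intensity
    t_s     = 50.0 / lam_bar            # smoothing time: 50 mean inter-arrival times
    dt_max  = 0.1 / lam_bar             # cap: 1/10 of the mean inter-arrival time
    # class-center intensities lambda_i
    lam_grid = lam_min + (np.arange(n) + 0.5) * (lam_max - lam_min) / n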

Algorithm

The outline of the algorithm is presented in Fig. B1. The filtering procedure is started with a uniform probability distribution; that is, pi = 1/n. The calculations start at time t0 with the filtering procedure. While integrating in time, ΔN takes the value 1 if a droplet arrives during Δt or 0 if there is no droplet. After each droplet arrival, the new value of Δt is calculated. At any instant in time it is possible to update the results by running the smoothing procedure. In practice, the updating is not done after each time step Δt, but only when the results are needed.
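
A Python sketch of this filtering loop, reusing poisson_arrival_times, filter_step, and the parameters defined in the sketches above (an illustration of the procedure of Fig. B1, not the authors' code):

    import numpy as np

    def run_filter(arrival_times, lam_grid, Lam, dt_max):
        """Filtering loop: start from a uniform PDF, integrate with dN = 0
        between droplet arrivals, apply the update with dN = 1 at each
        arrival, and recompute dt after every arrival."""
        p = np.full(lam_grid.size, 1.0 / lam_grid.size)  # uniform initial PDF
        t = 0.0
        for t_arr in arrival_times:
            dt = min(0.1 * (t_arr - t), dt_max)          # new dt after each arrival
            while t + dt < t_arr:                        # steps with no droplet
                p = filter_step(p, lam_grid, dN=0, dt=dt, Lam=Lam)
                t += dt
            p = filter_step(p, lam_grid, dN=1, dt=t_arr - t, Lam=Lam)
            t = t_arr
        return p                                         # lambda PDF at the final time

    p_final = run_filter(arrivals, lam_grid, Lam, dt_max)

The smoothing pass of Eqs. (B2) and (B3) would then be run from any time t1 of interest over the window (t1, t1 + ts), starting from the filtered distribution, whenever smoothed results are needed.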

Fig. 1. Results of the moving average procedure (solid line) for sampling periods (a) 1 ms and (b) 2 ms for a simulated time series of intensity given by a function subject to jumps (dashed line).

Fig. 2. Results of the filtering of a simulated series of arrival times for an intensity changing as a step function. (a) Time evolution of the λ PDF. (b) The most likely value of intensity λm (solid line), the two values of intensity (λ−, λ+) giving the 80% confidence interval (dotted lines), and the true value of intensity (dashed line). (c) The probability peak of the λ PDF.

Fig. 3. Results of the filtering [(a), (b)] and smoothing [(c), (d)] of a simulated series of arrival times for a constant intensity (dashed line) for two values of parameter Λ: 300 s−1 (left column) and 5000 s−1 (right column). Panels (e) and (f) represent the probability peak of the λ PDF obtained by the smoothing method.

Fig. 4. Results of the smoothing for simulated series of arrival times for intensities in the form of a jump function with different jump widths: (a) 1 ms, (b) 0.65 ms, (c) 0.5 ms. In each panel, solid lines represent the most likely value of intensity, dotted lines the 80% confidence interval, and dashed lines the true value of intensity.

Fig. 5. The same jump function as in Fig. 4b. (a) The values of the two peaks of the λ PDF: the highest peak is represented by a solid line (λm), the second peak by dots (λ′m). (b) Probabilities corresponding to the peaks.

Fig. 6. (a) Simulated time series representing a randomly varying intensity with a power spectrum decreasing with a slope of −2. Results of the moving average with sampling periods (b) T = 50 ms and (c) T = 100 ms applied to the original time series. Results of the filtering with parameters (d) Λ = 100 s−1 and (e) Λ = 1000 s−1.

Fig. 7. The power spectra (FFT) times frequency for the intensities shown in Fig. 6: (a) for the intensities obtained by the moving average with sampling periods T = 10 ms (solid line), T = 50 ms (dashed line), and T = 100 ms (dotted line); (b) for the original intensity (solid line) and the filtered intensity with Λ = 1000 s−1 (dashed line) and Λ = 100 s−1 (dotted line).

Fig. 8. Simulated time series with intensity in the form of a sinusoidal function with amplitude varying between 50 000 and 150 000 s−1 and periods of 4, 2, and 1 ms (dashed lines). Results of the moving average for sampling periods (a) 0.5 ms, (b) 1 ms, and (c) 2 ms, and of nonlinear smoothing with (d) q = 10^11 s−3 and (e) q = 10^12 s−3.

Fig. 9. Traverse through an isolated cumulus cell during the SPICE experiment. (a) Results of the moving average with a sampling period of 10 ms. (b) Results of nonlinear smoothing for the different diameter thresholds indicated in the figures. Some details of (b) are shown in (c) and (d).

Fig. 10. Ascent through an extended stratocumulus layer during EUCREX. (a) Total intensity calculated by the moving average with a sampling period of 50 ms. (b) Results of the nonlinear smoother; regions where the measuring instrument saturates are marked by flashes. (c) Contours of the droplet spectrum distribution; the scale of the probability density distribution is shown on the right of the figure.

Fig. B1. The outline of the algorithm of the filtering and smoothing procedure.

1. This Poisson process, which describes the occurrence of changes of the intensity λ, must be clearly distinguished from the Poisson process that describes the arrival of particles in the counter, whose intensity is λ.

2. Since the λ PDF has been discretized, this value is not a density but the probability for the intensity to belong to the class λm of width Δλ.
