Abstract

Microphysical schemes based on the scaling normalization of the particle size distribution (PSD) are cast into a variational data assimilation method to assess their ability to retrieve the precipitation structure and humidity from moments of the PSD that can be derived from radar and ground-based disdrometer measurements. Sedimentation and evaporation, the main processes below the cloud base, are examined. Various identical twin experiments are presented in the context of a time-dependent column model used to simulate the passage of precipitating cells over a short period of time. The relative humidity profile is assumed constant. The feedback of the microphysical processes on the thermodynamic fields is ignored. Observations are generated from a three-moment scheme having the zeroth, third, and sixth moments of the PSD as prognostic variables. The model is discretized in terms of the logarithms of the predictive moments, which makes the adjustment of the model variables to the observations easier. An upper bound for the characteristic diameter associated with the sixth moment is, however, necessary to prevent numerical instabilities from developing during the data assimilation process.

The tangent linear model of the three-moment scheme reproduces well the difference between two nonlinear integrations over the assimilation window (8 min), which validates the use of its adjoint in the minimization of the cost function that measures the misfit between observations and corresponding model variables. A weak smoothness penalty function should be added to the cost function when noisy observations are assimilated.

When all the predicted moments are observed and assimilated, the minimization converges very well, even with 40% observation error. In this case, the reflectivity factor, which is related to the sixth moment, can be retrieved with 0.2-dB accuracy. When only the sixth moment is observed, the total number concentration (related to the zeroth moment) cannot be recovered. However, the constant relative humidity can be obtained with 1% accuracy. When simpler one-moment and two-moment schemes are used to retrieve the precipitation structure from the observed sixth moment, the model error projects strongly onto the nonobserved moments of the PSD.

1. Introduction

The assimilation of observations in cloud and precipitation into numerical weather prediction (NWP) models is foreseen as a key ingredient for improving the forecast accuracy of significant weather systems, especially at short range. An increasing number of remote sensing measurements in precipitation is available for data assimilation, such as ground- and satellite-based radar reflectivity and rainy microwave radiances, but the proper extraction of their information content is challenging. The variational data assimilation technique, which is now operational or under development in many meteorological centers, allows the direct assimilation of such precipitation measurements at all atmospheric scales. However, precipitation is not a continuous field in space and time, like wind and temperature for instance: it is rather a collection of solid or liquid particles that develop within weather systems that typically have lifetimes of a few hours to a few days. Furthermore, the formation and evolution of precipitation is governed by several nonlinear microphysical processes. This may introduce inherent discontinuities and pathologies that could be difficult to handle in the variational data assimilation framework, which requires the linearization of these physical parameterizations for solving the analysis problem.

In recent years, the variational assimilation of surface precipitation rate (from rain gauges or estimated from satellite and radar data) in large-scale and mesoscale NWP models has been the subject of several investigations (e.g., Zou and Kuo 1996; Fillion and Errico 1997; Zupanski et al. 2002; Marécal and Mahfouf 2002). In this case, precipitation is the product of subgrid cloud and convection parameterization schemes in which the precipitation amount primarily depends on the temperature and moisture profiles. The presence of thresholds and strong nonlinearities in these schemes makes their linearization difficult and sometimes impractical without drastic simplifications. Nevertheless, encouraging results were obtained when temperature and moisture profiles are first retrieved from the assimilation of precipitation rates using a one-dimensional variational data assimilation (1DVAR) method. The total column water vapor derived from these profiles is then assimilated, along with all the other observations, in 4DVAR (Marécal and Mahfouf 2002). A similar approach was also employed by Benedetti et al. (2004) to directly assimilate radar reflectivity measurements from the Tropical Rainfall Measuring Mission (TRMM) instead of rainfall rates.

The assimilation of radar reflectivity at the cloud-resolving scale has also been the subject of several publications. The works of Rutledge and Hobbs (1983), Ziegler (1985), and Roux (1985) showed how detailed thermodynamic and microphysical fields can be retrieved using dual-Doppler data. Using crude microphysical parameterizations, attempts in the context of single-Doppler observations were not as successful, as shown by Verlinde and Cotton (1993), Sun and Crook (1997), and Wu et al. (2000) in the 4DVAR context. In particular, they concluded that the assimilation of radar reflectivity demonstrates the need for improved microphysical parameterizations that would produce more realistic model predictions. Along these lines, Benedetti et al. (2003) developed a detailed double-moment cirrus model and showed how millimeter radar reflectivity can then be successfully assimilated. They also showed that the specific humidity field in cloud can be retrieved with good accuracy.

In Part I of this study (Szyrmer et al. 2005, hereafter SLZ), we showed that an accurate representation of the particle size distribution (PSD) is crucial for developing a microphysical scheme and also for relating model variables to bulk observations such as radar reflectivity. The concept of scaling normalization has been found very convenient for formulating a compact representation of the drop size distribution (DSD) for rain and for studying its variability (Lee et al. 2004). SLZ developed a flexible microphysics formulation that is based on this concept and is entirely formulated in terms of moments of the DSD. The number of predicted moments can be varied from 1 to 3. This can be seen as merging recent observational progress on the DSD with modeling requirements such as developing simplified, though realistic, microphysical schemes. We first focused on the two most important processes below the cloud base: the sedimentation and evaporation of rain. We showed that three predictive moments [i.e., 3-M(0, 3, 6) following the notation introduced in SLZ] can reproduce, with good accuracy, the time evolution of the precipitation as compared to a high-resolution spectral (bin) model. The limitations of using one-moment and two-moment schemes were assessed.

In this companion paper, we cast the various microphysical schemes into a variational data assimilation method and assess their ability to retrieve the precipitation structure and humidity from observed moments that can be obtained from radar and ground-based disdrometer measurements. As a first step, identical twin experiments, in which the observations are generated from a control run, are conducted in a one-dimensional time-dependent framework. Such experiments are ideal for a first examination of the model properties to be used in the data assimilation context.

In section 2, the nonlinear formulation of the schemes is presented along with its linearization. The choice of variables and the regularization procedures that make the numerical scheme robust against numerical instability are discussed. The variational formulation and the observations used in this study are given in section 3. All of the retrieval experiments with the microphysical schemes developed in SLZ are presented in section 4. Finally, conclusions are drawn in section 5.

2. Model description

The full description of the microphysics scheme is presented in SLZ. In this section, we focus on the choice of variables and the discretization that are convenient for the linearization of the scheme.

a. Nonlinear formulation

The time evolution of the reference moment Mm is given by

 
\[
\frac{\partial M_m}{\partial t} = \left(\frac{\partial M_m}{\partial t}\right)_{\mathrm{dyn}} + \sum_{n}\left(\frac{\partial M_m}{\partial t}\right)_{\mathrm{PRC}(n)},
\]

where the first term on the right-hand side is the dynamic (transport) contribution while the second term represents the effect of all the microphysical processes PRC(n). The contribution of each process is obtained from

 
formula

where Mp represents all the nonreference moments that can be obtained from the following power relationships [Eq. (2.10) in SLZ]:

 
\[
M_p = C_p^{\Omega}\prod_{m\in\Omega} M_m^{\,e_{mp}^{\Omega}},
\]

where Ω is the set of reference-moment orders and C_p^Ω and e_mp^Ω are constants.

All moments of the PSD are strictly positive. This restriction can be problematic in the data assimilation context when the adjusted moments approach zero. One way to avoid this problem is to make the discretization in terms of logarithms of the reference moments as in Benedetti et al. (2003). With this change of variables, the time evolution of a given reference moment due to a given microphysical process

 
\[
\left(\frac{\partial M_m}{\partial t}\right)_{\mathrm{PRC}}
\]

becomes

 
\[
\left(\frac{\partial Q_m}{\partial t}\right)_{\mathrm{PRC}} = \frac{1}{M_m}\left(\frac{\partial M_m}{\partial t}\right)_{\mathrm{PRC}} = e^{-Q_m}\left(\frac{\partial M_m}{\partial t}\right)_{\mathrm{PRC}},
\]

where Qm is simply ln(Mm). There are other advantages to adopting the logarithmic representation: all the moments, which may range over several orders of magnitude, become of the same order; also, from a statistical point of view, model error and observation error distributions for precipitation are better represented by lognormal than by normal models (Errico et al. 2001). Although the change of variable prevents the moments from becoming negative, we nevertheless have to specify lower bound values corresponding to the nonprecipitating regime. These values have been chosen small enough to avoid problems when solving the variational problem presented in section 3. They are based on the following criteria: M0 > 10^-4 m^-3, the mean volume diameter, defined as the cube root of the ratio of M3 to M0, greater than 0.08 × 10^-3 m, and the lower bound for M6 corresponding to −100 dBZ.
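As an illustration, the sketch below (not the authors' code) converts a set of reference moments to the logarithmic variables Q_m = ln(M_m) while enforcing the lower bounds quoted above. The floor values for M3 and M6 are derived here from the stated criteria (mean volume diameter of at least 0.08 mm and a reflectivity of at least −100 dBZ), so their exact form is an assumption.

```python
# Minimal sketch of the change of variables Q_m = ln(M_m) with the
# non-precipitating floors described in the text (assumed exact values).
import numpy as np

M0_MIN = 1.0e-4                          # m^-3
D_MV_MIN = 0.08e-3                       # m, minimum mean volume diameter
M3_MIN = M0_MIN * D_MV_MIN**3            # m^3 m^-3, from D_mv = (M3/M0)^(1/3)
M6_MIN = 10.0**(-100.0 / 10.0) * 1e-18   # m^6 m^-3, i.e. -100 dBZ

def to_log_moments(M0, M3, M6):
    """Return Q_m = ln(M_m) after applying the lower bounds."""
    M0 = np.maximum(M0, M0_MIN)
    M3 = np.maximum(M3, M3_MIN)
    M6 = np.maximum(M6, M6_MIN)
    return np.log(M0), np.log(M3), np.log(M6)

Q0, Q3, Q6 = to_log_moments(2.0e3, 1.0e-6, 1.0e-14)
```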

b. Regularizations

The predictive set of moments at every grid point has to correspond to a realistic PSD to prevent numerical instability from developing during the nonlinear model integration. We found that this is not always the case during the iterative process (to be described in section 3), in which unrealistic PSD profiles can be tried by the minimization algorithm. Consequently, additional conditions should be imposed within the various microphysical schemes to enforce numerical stability. This problem can be avoided by constraining the large-size tail of the PSD, which is necessary to obtain reasonable fall speeds of the moments in the sedimentation process. We assume here that the characteristic diameter for M6, defined as D5,7 = [M7/M5]^1/2 [from (2.4) in SLZ], cannot exceed the value of D5,7max = 6 mm for the raindrop distribution. Thus, from (2.10) in SLZ, the following relation has to be satisfied:

 
\[
D_{5,7} = \left[\frac{M_7}{M_5}\right]^{1/2} = \left[\frac{C_7\prod_{m\in\Omega} M_m^{\,e_{m7}}}{C_5\prod_{m\in\Omega} M_m^{\,e_{m5}}}\right]^{1/2} \le D_{5,7\max},
\]

where the superscript Ω, denoting the reference moment order set, has been omitted. If the relation above is not satisfied, one of the moments [M0 if it is a reference moment; M6 in 2-M(3, 6)] is adjusted. For example, in the 2-M(0, 3) scheme with the assumed inverse exponential distribution, (2.6) gives 3.6(M3/M0)^1/3 ≤ D5,7max; when this is not fulfilled, we set M0 = (0.28 D5,7max)^−3 M3, which imposes D5,7 = D5,7max. This adjustment is mostly active on the initial and boundary conditions during the minimization, and thus the conservation of the adjusted moment is not compromised. In the interior of the domain, the error introduced by the adjustment is negligible in the 3-M(0, 3, 6) scheme. Similarly, with the 2-M(3, 6) scheme, the nonconservation error for M6 is very small. In contrast, with the 2-M(0, 3) scheme, the number of drops introduced by the adjustment is very large (about 20% of the drops entering the domain), due to the overestimation of gravitational sorting.
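The following sketch spells out the adjustment for the 2-M(0, 3) case exactly as described above; the function name and the way the check is packaged are my own, not the authors' implementation.

```python
# Sketch of the D_{5,7} regularization for the 2-M(0,3) scheme with an
# inverse-exponential DSD: if D_{5,7} = 3.6 (M3/M0)^(1/3) exceeds
# D_{5,7max} = 6 mm, M0 is reset so that D_{5,7} = D_{5,7max}.
D57_MAX = 6.0e-3  # m

def adjust_M0(M0, M3):
    d57 = 3.6 * (M3 / M0) ** (1.0 / 3.0)     # characteristic diameter (m)
    if d57 > D57_MAX:
        M0 = (0.28 * D57_MAX) ** (-3) * M3   # enforce D_{5,7} = D_{5,7max}
    return M0

# Example: a PSD with too few small drops triggers the adjustment.
M0_adjusted = adjust_M0(M0=1.0e2, M3=1.0e-5)
```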

c. Linearization

The linearization of the above formulation is needed to construct the tangent linear model (TLM) and its adjoint. The linearization of (2.5) is

 
\[
\left(\frac{\partial Q_m}{\partial t}\right)'_{\mathrm{PRC}} = e^{-\overline{Q}_m}\left[\left(\frac{\partial M_m}{\partial t}\right)'_{\mathrm{PRC}} - Q_m'\left(\frac{\partial \overline{M}_m}{\partial t}\right)_{\mathrm{PRC}}\right],
\]

where the prime stands for the perturbation and the overbar represents the nonlinear trajectory around which the linearization is performed. A potential problem may appear when

 
\[
\overline{M}_m = e^{\overline{Q}_m}
\]

becomes very small, corresponding to large negative tendencies in the nonlinear model (2.5). This problem is, however, prevented by restricting the trajectory moment to be greater than or equal to its lower bound value, giving (∂Qm/∂t)′ = 0 in this latter case.

The linearization of (3.5) and (3.7) for the sedimentation and of (3.12) and (3.16) for diffusional growth and evaporation in SLZ, written in terms of Qm, does not require any approximation. The coding of the TLM and its adjoint was performed manually following the procedure of Giering and Kaminski (1998), and several tests to ensure their correctness have been done, as suggested by Thépaut and Courtier (1991).
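One of the standard checks for a hand-coded adjoint is the dot-product (inner-product) test; whether this is exactly the test the authors performed is an assumption, and the matrix L below is only a stand-in for the actual TLM of the scheme.

```python
# Sketch of the adjoint ("dot-product") test: <L dx, dy> must equal
# <dx, L^T dy> to machine precision. tlm() and adj() stand in for the
# scheme's tangent linear and adjoint operators.
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((30, 20))        # placeholder linear operator
tlm = lambda dx: L @ dx
adj = lambda dy: L.T @ dy

dx = rng.standard_normal(20)
dy = rng.standard_normal(30)
lhs = np.dot(tlm(dx), dy)
rhs = np.dot(dx, adj(dy))
assert abs(lhs - rhs) < 1e-10 * max(abs(lhs), 1.0)
```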

d. Control run

All the simulations presented in this paper include only the sedimentation and evaporation processes. As in SLZ, a column model extending from the surface up to 2 km is employed, with a vertical resolution of 25 m and a time step of 2.5 s. The three-moment scheme, which is able to reproduce the results of a bin-resolving model with good accuracy, is used to generate the control run. The initial conditions for each reference moment are set to its lower bound value, while at the model lid the PSD is specified and follows a Marshall–Palmer distribution with a radar reflectivity factor varying in time as

 
formula

where t is time in seconds. This simulates the passage of precipitating cells, as seen in Fig. 7 of SLZ. The temperature profile is nearly pseudoadiabatic with θe = 25°C, and the relative humidity is taken constant at 75% over the whole depth. There is no feedback of the microphysical processes on the humidity and temperature profiles, and no air motion is assumed. The model is integrated for 21 min.
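For reference, the column configuration stated above can be summarized as follows; the variable names are illustrative and not taken from the paper.

```python
# Sketch of the stated column configuration: 2-km depth, 25-m resolution,
# 2.5-s time step, 21-min integration, constant 75% relative humidity,
# and no feedback on the temperature and humidity profiles.
import numpy as np

DZ, ZTOP, DT, T_END = 25.0, 2000.0, 2.5, 21 * 60.0
z_levels = np.arange(0.0, ZTOP + DZ, DZ)   # grid levels (m)
n_steps = int(T_END / DT)                  # 504 time steps
RELATIVE_HUMIDITY = 0.75                   # held fixed in the control run
```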

This control run is considered as the truth in the identical twin experiments presented in section 4. It is also used to generate synthetic observations of the various moments.

3. The data assimilation method

a. Variational formulation

The optimal estimation of the reference moments is obtained by minimizing the weighted least-squares distance between the observations and the corresponding model predictions, defined as

 
\[
J_o = \frac{1}{2}\left[F(\mathbf{x}) - \mathbf{y}\right]^{\mathrm{T}}\mathbf{W}_y^{-1}\left[F(\mathbf{x}) - \mathbf{y}\right],
\]

where the elements of x are the initial and upper boundary conditions (or control variables) of the predictive moments, F is the forward operator that maps the predictive moments from the nonlinear scheme onto the observation space, the elements of y are the observations, and W_y is the observation error covariance matrix. In the experiments presented in this work, we do not take into account a priori (or background) information, whose associated error structure usually imposes a certain degree of smoothing on the analysis. We thus add a penalty function to Jo in order to control the level of noise in the retrieved moments due to the observation errors. The cost function to be minimized then becomes

 
\[
J = J_o + J_p,
\]

where Jp is a spatial smoothness penalty

 
formula

and αm are coefficients that control the degree of smoothing. It is well known that this penalty function modifies the shape of the cost function, making the minimization problem easier or harder depending on the magnitude of αm (Thacker 1988). Consequently, these coefficients should be carefully chosen so that the desired smoothing is achieved while the convergence rate is not compromised (see section 4b). The minimization of (3.2) is performed iteratively with the limited-memory quasi-Newton algorithm developed by Gilbert and Lemaréchal (1989) and employs the adjoint model to calculate the gradient of the cost function.
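The structure of the minimization can be sketched as below; the forward, adjoint, and penalty callables and the observation vector are placeholders for the actual operators of the scheme, and the use of SciPy's L-BFGS-B solver is only a stand-in for the Gilbert and Lemaréchal (1989) algorithm.

```python
# Minimal sketch of minimizing J = J_o + J_p with a limited-memory
# quasi-Newton method, the gradient being supplied by the adjoint.
import numpy as np
from scipy.optimize import minimize

def make_cost(forward, adjoint, y, penalty, penalty_grad):
    def cost_and_grad(x):
        r = forward(x) - y                    # innovation, W_y = I (section 3b)
        J = 0.5 * np.dot(r, r) + penalty(x)   # J_o + J_p
        g = adjoint(x, r) + penalty_grad(x)   # gradient via the adjoint model
        return J, g
    return cost_and_grad

# res = minimize(make_cost(F, F_adj, y, Jp, Jp_grad), x_first_guess,
#                jac=True, method='L-BFGS-B', options={'maxiter': 100})
```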

b. Observations

In this study, we suppose that all or some of the predictive moments of the DSD (actually their logarithms) are observed. This makes the forward model F simpler, which is appropriate for the identical twin experiments conducted in section 4. Observations of Q6 (which can be obtained, for instance, from the reflectivity factor measured by a vertically pointing radar) are always available in space and time. The value of adding disdrometer measurements, from which all the predictive moments of the DSD near the ground can be calculated, to the hypothetical radar is also assessed. In most experiments, we assume a 40% Gaussian observation error for all observed moments. This corresponds to an error of nearly 2 dB in the radar reflectivity factor. We suppose that there are no time and space error correlations in the observations. Since the same level of observation error is assumed for all observed moments, we set all the diagonal values of W_y^-1 to unity. This makes the specification of αm in the smoothness penalty term easier.
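A minimal sketch of how such synthetic observations could be generated is given below; the exact noise model used by the authors (multiplicative Gaussian error applied to the moments) is an assumption, but it illustrates why a 40% moment error corresponds to roughly 2 dB in reflectivity.

```python
# Sketch: 40% Gaussian error applied to a control-run moment, and the
# equivalent reflectivity perturbation in dB.
import numpy as np

rng = np.random.default_rng(1)

def perturb_moment(M_true, rel_err=0.4):
    """Apply a 1-sigma relative Gaussian perturbation to a single moment."""
    M_obs = M_true * (1.0 + rel_err * rng.standard_normal())
    return max(M_obs, 1e-30)          # keep the observed moment positive

M6_true = 1.0e-14                      # m^6 m^-3 (about 40 dBZ)
M6_obs = perturb_moment(M6_true)
dZ = 10.0 * np.log10(M6_obs / M6_true)
print(f"reflectivity perturbation: {dZ:.2f} dB")
```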

c. First guess

The quasi-Newton algorithm requires a first guess of the control variables as a starting point for the minimization. The first guess for the predictive moments is estimated from the observed Q6, assuming a (Marshall–Palmer-like) exponential DSD with fixed N0 = 0.4 × 10^7 m^-4. This value intentionally differs from the one specified at the model lid (equal to 0.8 × 10^7 m^-4 in the Marshall–Palmer distribution) in order to start with a first guess farther from the control run.
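The first-guess recipe above can be written out explicitly using the moments of an exponential DSD, M_n = N0 n! / Λ^(n+1); the function below is a sketch of that recipe, not the authors' code.

```python
# Sketch of the first-guess construction: fit an exponential DSD with
# fixed N0 = 0.4e7 m^-4 to the observed M6, then diagnose M0 and M3.
import math

def first_guess_from_M6(M6_obs, N0=0.4e7):
    lam = (math.factorial(6) * N0 / M6_obs) ** (1.0 / 7.0)
    M0 = N0 / lam
    M3 = math.factorial(3) * N0 / lam ** 4
    return M0, M3, M6_obs

M0_fg, M3_fg, M6_fg = first_guess_from_M6(1.0e-14)   # ~40-dBZ observation
```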

4. Experiments

The retrievals presented in this section are performed over an assimilation time window of 8 min, starting at t = 10 min in the control run. The vertical domain extends from the surface up to 1.75 km. This domain lid is 0.25 km below the one in the control run. Thus, the upper boundary conditions to be retrieved do not correspond to the simple Marshall–Palmer distribution imposed in the control run. The same discretization as in the control run is used in the forward model and its adjoint.

For convenience, measurable bulk quantities such as the radar reflectivity factor (Z), the liquid water content (LWC), and the logarithm of the total number concentration [log(N)] are displayed rather than the predictive moments. These quantities are related to the model variables as follows:

 
\[
Z = 10\log_{10}\!\left(10^{18} M_6\right),
\]
\[
\mathrm{LWC} = \frac{\pi}{6}\,\rho_w\,M_3 \times 10^{3},
\]
\[
\log(N) = \log_{10}(M_0),
\]
where ρ_w is the density of liquid water.

The moments are given in MKS units, that is, [Mm] = m^m m^-3, while Z, LWC, and N are expressed in dBZ, g m^-3, and m^-3, respectively.
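For completeness, the diagnostic relations above translate into the following short sketch (units as stated in the text; the function name is illustrative).

```python
# Sketch of the diagnostics Z (dBZ), LWC (g m^-3), and log(N) from the
# moments in MKS units ([M_m] = m^m m^-3).
import numpy as np

RHO_W = 1.0e3  # kg m^-3, density of liquid water

def diagnostics(M0, M3, M6):
    Z = 10.0 * np.log10(M6 * 1.0e18)           # M6 converted to mm^6 m^-3
    LWC = (np.pi / 6.0) * RHO_W * M3 * 1.0e3   # kg m^-3 -> g m^-3
    logN = np.log10(M0)
    return Z, LWC, logN

print(diagnostics(2.0e3, 1.0e-6, 1.0e-14))     # ~40 dBZ, ~0.5 g m^-3
```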

Figure 1 shows the time evolution of Z, LWC, and log(N) over the assimilation window for the control run, the 3-M(0, 3, 6) simulation started from the first guess, and the observations with 40% error that are used in this section.

Fig. 1.

Vertical profiles of (left) Z, (middle) LWC, and (right) log(N) for the control run (solid curves), the simulation with 3-M(0, 3, 6) from the first guess (dashed curves), and the observations (dotted curves). Profiles at (upper) 10, (middle) 14, and (lower) 18 min.


a. Linearization error

Before doing variational data assimilation or retrieval experiments, it is of interest to assess the degree of nonlinearity of the forward model that is used to fit the observations. This can be achieved by comparing the TLM integration with the difference between two nonlinear integrations over the assimilation period of interest. When the model solution is nearly linear, an initial perturbation propagated in time by the TLM corresponds well to the difference between two nonlinear solutions whose initial conditions differ only by that same perturbation. In this case, the cost function (3.2) is nearly quadratic and is easier to minimize with an algorithm such as the quasi-Newton one used in this study. In addition, the cost function is likely to have a single minimum. Figure 2 shows vertical profiles of perturbations from two nonlinear trajectories and from the TLM for Z, LWC, and log(N) at 10, 14, and 18 min in the assimilation window with the 3-M(0, 3, 6) scheme. The initial perturbations shown are the differences between the first guess and the control. At the initial time (10 min), the perturbations of Z correspond to the observation errors (which were added to the control to generate the synthetic moment observations). The initial perturbations for LWC and log(N) result from the difference between the control and profiles calculated from the observed reflectivity assuming a Marshall–Palmer-like DSD. These calculations give smoother initial LWC and log(N) profiles than if they were themselves perturbed randomly. The perturbations from the TLM remain close to those from the difference between the two nonlinear integrations. All details in the profiles are well reproduced by the TLM, even at the end of the period (18 min) between 1.25 and 1.75 km in the reflectivity profile, where large vertical fluctuations appear. In that particular location, the TLM slightly overestimates the amplitude. It is interesting to see that the perturbation profiles become smoother in time. Several mechanisms contribute to this: first, evaporation tends to reduce larger number concentrations; second, gravitational sorting disperses the precipitation in the vertical; and third, the numerical treatment of the sedimentation produces implicit diffusion.
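The comparison described above amounts to the following check; nl() and tlm() are placeholders for the nonlinear model and its tangent linear model, so this is only a sketch of the diagnostic, not the authors' code.

```python
# Sketch of the linearization-error check of section 4a: propagate a
# perturbation with the TLM and compare it with the difference between
# perturbed and unperturbed nonlinear integrations.
import numpy as np

def linearization_error(nl, tlm, x0, dx):
    d_nl = nl(x0 + dx) - nl(x0)     # finite difference of nonlinear runs
    d_tl = tlm(x0, dx)              # TLM-propagated perturbation
    return np.linalg.norm(d_nl - d_tl) / np.linalg.norm(d_nl)

# A relative error much smaller than one over the 8-min window indicates
# the near-linear behaviour reported in the text.
```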

Fig. 2.

Vertical profiles of perturbations from two nonlinear trajectories (solid line) and from the TLM (dashed line) for (left) Z, (middle) LWC, and (right) log(N). Perturbations at (upper) 10, (middle) 14, and (lower) 18 min.


Overall, the TLM is able to reproduce with good accuracy the evolution of perturbations of the size corresponding to the difference between the first guess and the control. This means that the topology of the cost function in the vicinity of the first guess, the control (truth), and the minimum is nearly quadratic, and that the adjoint model (which is the transpose of the TLM) is able to provide a useful gradient of the cost function to the minimization algorithm.

b. Retrieval with the three-moment scheme

When all the predictive moments are observed in space and time without observational error and the same 3-M(0, 3, 6) scheme used to generate the observations is employed in the forward model F, all the moments are retrieved with high accuracy. Without the smoothness penalty in the cost function and starting from the first guess, the solution converges within 100 iterations with a reduction of the cost function value of seven orders of magnitude. When all predictive moments are measured with observation errors, the final solution without the smoothness penalty term (i.e., α = {α0, α3, α6} = {0.0, 0.0, 0.0}) is noisy, especially for the total number concentration over the whole period, as shown in Fig. 3. The best set of {α0, α3, α6} was determined by trial and error under the restriction that the ratio of the penalty term to the cost function is nearly equal for each moment. The rms errors of Z, LWC, and log(N) over the vertical profile as a function of time for the best set of coefficients (α = {2.0, 1.0, 0.2}) are shown in Fig. 3. In general, the noise is considerably, though not completely, reduced. The error level seen in Fig. 3 for Z (∼0.2 dB) is one order of magnitude smaller than the observation error (∼2.0 dB). The ratio of the penalty term to the cost function with this set of coefficients remains fairly constant during the minimization and is around 0.02. The contribution of the penalty function to the cost function is thus small. When the values of the coefficients are increased, the noise level decreases, but at the expense of a good recovery of Z at the beginning of the assimilation window. The convergence rate also becomes slower in this case, as expected when the smoothness penalty is too strong. In all the other experiments presented below, the optimal set of coefficients is used in the cost function and the moments are assimilated with 40% observation error.
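A second-difference (curvature) penalty is one common way to realize the spatial smoothness term; the exact form used in the paper is not reproduced here, so the sketch below is an assumed implementation that only illustrates how the reported coefficients α = {2.0, 1.0, 0.2} would enter.

```python
# Sketch (assumed second-difference form) of a spatial smoothness penalty
# J_p with one coefficient per control variable, as in the text.
import numpy as np

ALPHA = {"Q0": 2.0, "Q3": 1.0, "Q6": 0.2}   # reported optimal coefficients

def smoothness_penalty(profiles):
    """profiles: dict mapping 'Q0'/'Q3'/'Q6' to their vertical profiles."""
    Jp = 0.0
    for name, q in profiles.items():
        d2q = q[2:] - 2.0 * q[1:-1] + q[:-2]   # second vertical difference
        Jp += 0.5 * ALPHA[name] * np.sum(d2q ** 2)
    return Jp
```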

Fig. 3.

Rms errors of the retrieved Z, LWC, and log(N) as function of time without smoothing (dotted curves), with optimal smoothing (solid curves), and with strong smoothing (dashed curves).


Figure 4 shows the relative errors of the control variables with respect to the first guess as a function of the number of iterations in the minimization for Z, LWC, and log(N). When all the predictive moments are observed, the values of Z, LWC, and log(N) converge at about the same rate within 50 iterations. When only Q6 is assimilated, the convergence rate for Z remains about the same, but it is slower for LWC and log(N). The total number concentration remains closer to the first guess (lower panel) in this case. It seems that the information content of the observed Q6 is not sufficient to recover log(N) over the assimilation window. The retrieval of log(N) improves when all the moments at the ground are also observed. Figure 5 presents the detailed retrieved profiles of Z, LWC, and log(N) during the assimilation period. When all moments are assimilated, the solution (not shown) is so close to the control that the curves would be indistinguishable in this figure. When only Q6, or both the surface moments and Q6, are assimilated, the solutions at 14 and 18 min are similar. However, at the beginning of the assimilation window (10 min), the retrieval of LWC and log(N) below 0.75 km is substantially improved when the surface moments are assimilated. This improvement is due to the upward propagation of the information contained in the surface moments, thanks to the sedimentation process.

Fig. 4.

Relative errors of the initial and boundary conditions of Q0, Q3, and Q6 as function of number of iterations when all predictive moments are observed in space and time (solid curves), when the moments at the surface in time and the Q6 in space and time are assimilated (dashed curves), and when only Q6 in space and time is assimilated (dotted curves).


Fig. 5.

Retrieved vertical profiles with the 3-M(0, 3, 6) scheme for (left) Z, (middle) LWC, and (right) log(N) when the moments at the surface in time and the Q6 in space and time are assimilated (dashed curves) and when only Q6 in space and time is assimilated (dotted curves). The profiles are shown at (upper) 10, (middle) 14, and (lower) 18 min.


In the last experiment with the 3-M(0, 3, 6) scheme, the ability of the assimilation method to recover the relative humidity is evaluated. In the control run (as well as in all the other experiments presented in this study), the relative humidity is constant over the whole domain at 75%. Starting from a first guess of 100%, the evolution of the relative humidity during the minimization process when only Q6 is assimilated is shown in Fig. 6. After some fluctuations at the beginning of the minimization, the relative humidity decreases steadily until about 50 iterations. The retrieved value then remains near 76%, which is 1% above the control relative humidity. This encouraging result is obtained even though the observations of Q6 have a 40% error, which explains the small discrepancy between the retrieved relative humidity and the control value. The mechanism responsible for this success is the evaporation process, which directly depends on the relative humidity (see SLZ). Finally, attempts to recover the full moisture profile will be reported in a future study, when the feedback of the microphysical processes on the thermodynamic fields is included in our model.

Fig. 6.

Recovery of the relative humidity (in %) as function of number of iterations when only Q6 is assimilated in space and time.


c. Retrieval with the two-moment schemes

The purpose of the experiments presented in this subsection is to assess the ability to retrieve the Z, LWC, and log(N) profiles when the forward model is a two-moment scheme. We suppose here that only Q6 is observed and that it is generated from the 3-M(0, 3, 6) scheme, with 40% observation error added. Figure 7 shows the retrieved profiles, after 100 iterations, when the 2-M(0, 3) and the 2-M(3, 6) schemes are used as forward models. For both schemes, the retrieved Z profiles correspond very well to the control. This is not surprising since the Q6 values predicted by the schemes are fitted to the observations in space and time. However, the two-moment schemes are not able to reproduce the evolution of the moments predicted by the three-moment scheme. The model error projects strongly onto the other retrieved moments, as shown for LWC and log(N) in Fig. 7. In general, LWC and log(N) are largely underestimated with the 2-M(0, 3) scheme, while they are overestimated with the 2-M(3, 6) scheme. This may be explained by the deviations of the "true" DSDs from the exponential form imposed in these schemes. In the case of 2-M(0, 3), it may also be explained by the adjustment procedure described above, which prevents unrealistic DSDs but does not ensure conservation of the total number concentration.

Fig. 7.

Retrieved vertical profiles with the 2-M(0, 3) scheme (dotted curves) and the 2-M(3, 6) scheme (dashed curves) for (left) Z, (middle) LWC, and (right) log(N) when only Q6 in space and time is assimilated. The profiles are shown at (upper) 10, (middle) 14, and (lower) 18 min.


d. Retrieval with the one-moment schemes

Experiments similar to those in the last subsection are presented here, but with the one-moment schemes. Figure 8 displays the retrieved profiles when the 1-M(3) and the 1-M(6) schemes are used as forward models. As opposed to the two-moment schemes, both one-moment schemes behave similarly in general: log(N) is systematically overestimated at all times, and LWC is overestimated in the lower levels at 14 min and underestimated at the end of the assimilation window (18 min). There are, however, some differences. With the 1-M(3) scheme, strong anomalies appear in the Z and LWC profiles between 1.24 and 1.25 km. After investigation, we found that these anomalies develop to better fit the observations of Q6 below 1.24 km. These anomalies are thus not numerical artifacts. To better illustrate this unexpected result, Fig. 9 displays the full simulations from the control run and from the 1-M(3) scheme initialized with the first guess and with the retrieved fields. We can see that large values of Z reach the surface faster in the control than with the 1-M(3) scheme started from the first guess. This is particularly obvious at 14 min near the surface. Since only Q6 is assimilated in this experiment (which is closely related to Z), the predictive moment in 1-M(3) is adjusted to fill the missing amount of reflectivity predicted by the control (from which the observations are generated) near the surface around 14 min. The only efficient way to achieve this with the 1-M(3) scheme is by creating the anomaly of Q3, seen in the LWC in the bottom central panel of Fig. 9, in the initial conditions between 1.24 and 1.25 km. This anomaly appears to compensate for the deficiency of the 1-M(3) scheme in propagating the largest drops downward, since sedimentation is driven by the third moment in this case. In the control run, the sixth moment is also predicted, and its associated fall speed is higher in the presence of large drops in the DSD, which yields the correct downward propagation, as shown in SLZ. This is why no anomaly develops with the 1-M(6) scheme. Figure 10 shows in more detail the time evolution over the assimilation window of Z, LWC, and log(N) at the surface from the control and from the 1-M(3) scheme when the initial and boundary conditions are taken from the first guess, from the full retrieval, and from the retrieval in which the anomaly in Q3 between 1.24 and 1.25 km is removed by interpolating from the values below and above these levels (dash–dotted curves). We can see that the solution near the ground is farther from the control when the anomaly is removed from the profile. The results are especially degraded for Z.

Fig. 8.

Retrieved vertical profiles with the 1-M(3) scheme (dotted curves) and the 1-M(6) scheme (dashed curves) for (left) Z, (middle) LWC, and (right) log(N) when only Q6 in space and time is assimilated. The profiles are shown at (upper) 10, (middle) 14, and (lower) 18 min.


Fig. 9.

Full simulation of Z, LWC, and log(N) from (upper) the control, (middle) the 1-M(3) scheme using the first guess as initial and boundary conditions, and (lower) the assimilation of Q6 in space and time using the 1-M(3) scheme.


Fig. 10.

Surface Z, LWC, and log(N) as function of time obtained with the 1-M(3) scheme from the first guess (dotted curves), when Q6 in space and time is assimilated (dashed curves) as in Fig. 6 and when the anomaly between 1.24- and 1.25-km height in the retrieved initial conditions is removed (dash–dotted curves). The solid curves are the true surface variable as in Fig. 1.


5. Conclusions

The various microphysical schemes developed in SLZ have been tested in a data assimilation framework. The ability to retrieve the precipitation structure and a constant relative humidity below the cloud base has been investigated in the context of identical twin experiments. In these experiments, only the sedimentation and evaporation of rain are considered in a one-dimensional time-dependent domain. The control run (used as the truth) is generated with the 3-M(0, 3, 6) scheme, which is able to reproduce simulations from a bin-resolving model with good accuracy, as shown in SLZ.

The discretization of the microphysical processes requires some care in the context of variational data assimilation, where a large variety of initial and boundary conditions (including unrealistic ones) can be tried by the minimization algorithm, and where the adjoint of the linearized formulation is required to calculate the gradient of the cost function. To make the schemes robust against negative values of the predictive moments, the processes are written in terms of the logarithms of the moments. To prevent numerical instability due to unrealistic initial and boundary conditions, a maximum value for the characteristic diameter associated with M6 should be imposed.

The linearization properties of the 3-M(0, 3, 6) scheme over an assimilation window of 8 min have been examined. Starting from a perturbation corresponding to the difference between the first guess and the control, the TLM is able to predict the difference between two nonlinear integrations with good accuracy. This means that the evolution of the predictive moments in the 3-M(0, 3, 6) scheme is nearly linear over the assimilation window and that the shape of the cost function is thus nearly quadratic with a unique minimum. This is in agreement with the fast-converging and well-behaved results obtained in the retrieval experiment in which all the predictive moments, taken from the control run, are assimilated without observation errors. When 40% error is added to the observed moments, the retrieval is somewhat noisy, especially for Q6 over the whole period and for Q3 in the first half of the assimilation window. A weak smoothness penalty function should be added to the cost function to reduce the noise level, which also compensates for the lack of a priori information. With the optimal smoothing, the reflectivity can be retrieved with 0.2-dB precision, which is one order of magnitude smaller than the root-mean-square observation error. When only Q6 is observed, the LWC is still retrieved with good accuracy, but the total number concentration [i.e., log(N)] profile remains close to the first guess. The information content about Q0 in Q6 observed in space and time is therefore weak. In contrast, the information content extracted for Q3 is substantial, which means that the estimation of LWC and rainfall rate from the assimilation method proposed in this study may be better than that from the simple power-law relationships proposed in the past [see Bringi and Chandrasekar (2001) for a comprehensive review]. When all the moments are also measured at the surface (as can be estimated from disdrometer measurements), the retrievals of LWC and log(N) are improved near the surface, up to half the vertical domain, at the beginning of the assimilation window. This is due to the upward propagation of the information from the surface observations. However, the measurement uncertainty of the zeroth moment from disdrometers is usually very large. Consequently, the conclusions drawn here about the retrieval of log(N) may be too optimistic. Finally, a constant relative humidity can also be retrieved with good accuracy with the 3-M(0, 3, 6) scheme from observations of Q6 only.

Retrieval of the precipitation structure using the simpler one-moment and two-moment schemes has been examined when only observations of Q6 are available. The results with the two-moment schemes are disappointing: both LWC and log(N) are either largely overestimated or underestimated depending on the scheme. This is the consequence of model errors. Their impacts appear neither as systematic biases nor as random signals. Thus, taking such model errors into account in a weak-constraint variational formulation would not be easy. The results obtained with the one-moment schemes are somewhat better, even though these schemes are simpler. However, strong anomalies (which look like numerical artifacts) can develop in the upper part of the domain with the 1-M(3) scheme in order to better fit the observations below. These anomalies in the LWC could be detrimental to the retrieved thermodynamic and dynamic fields when the feedback of the microphysical processes, through latent heat release, is accounted for in the model. However, it is important to note that the conclusions drawn here are for precipitating cells. For other atmospheric conditions, it is not clear that the deficiencies in the one-moment and two-moment schemes would impact the retrievals in the same way. In such cases, it is possible that the results would be better with the two-moment schemes.

Finally, the column model developed so far is still too simple to be used for the assimilation of real data. For example, mechanisms like the feedback of the evaporation process on the thermodynamic profiles and vertical motions should be included for more realistic simulations. In addition, a priori information as well as the observation error statistics should be carefully evaluated and introduced in the variational formulation. These tasks are beyond the scope of this paper but will be tackled in the future.

Acknowledgments

We thank Jean-François Mahfouf for valuable comments on this manuscript. This research was supported by the Canadian Foundation for Climate and Atmospheric Sciences.

REFERENCES

Benedetti, A., G. L. Stephens, and T. Vukicevic, 2003: Variational assimilation of radar reflectivities in a cirrus model. II: Optimal initialization and model bias estimation. Quart. J. Roy. Meteor. Soc., 129, 301–319.

Benedetti, A., P. Lopez, P. Bauer, and E. Moreau, 2004: Experimental use of TRMM precipitation radar observations in 1D+4D-Var assimilation. ECMWF Tech. Memo. 448, 21 pp.

Bringi, V. N., and V. Chandrasekar, 2001: Polarimetric Doppler Weather Radar: Principles and Applications. Cambridge University Press, 636 pp.

Errico, R. M., D. J. Stensrud, and K. D. Raeder, 2001: Estimation of the error distributions of the precipitation produced by convective parametrization schemes. Quart. J. Roy. Meteor. Soc., 127, 2495–2512.

Fillion, L., and R. M. Errico, 1997: Variational assimilation of precipitation data using moist convective parameterization schemes: A 1D-Var study. Mon. Wea. Rev., 125, 2917–2942.

Giering, R., and T. Kaminski, 1998: Recipes for adjoint code construction. ACM Trans. Math. Software, 24, 437–474.

Gilbert, J. C., and C. Lemaréchal, 1989: Some numerical experiments with variable-storage quasi-Newton algorithms. Math. Prog., 45, 407–435.

Lee, G. W., I. Zawadzki, W. Szyrmer, D. Sempere-Torres, and R. Uijlenhoet, 2004: A general approach to double-moment normalization of drop size distributions. J. Appl. Meteor., 43, 264–281.

Marécal, V., and J.-F. Mahfouf, 2002: Four-dimensional variational assimilation of total column water vapor in rainy areas. Mon. Wea. Rev., 130, 43–58.

Roux, F., 1985: Retrieval of thermodynamic fields from multiple-Doppler radar data using the equation of motion and thermodynamic equation. Mon. Wea. Rev., 113, 2142–2157.

Rutledge, S. A., and P. V. Hobbs, 1983: The mesoscale and microscale structure and organization of clouds and precipitation in midlatitude cyclones. VIII: A model for the "seeder-feeder" process in warm-frontal rainbands. J. Atmos. Sci., 40, 1185–1206.

Sun, J., and N. A. Crook, 1997: Dynamical and microphysical retrieval from Doppler radar observations using a cloud model and its adjoint. Part I: Model development and simulated data experiments. J. Atmos. Sci., 54, 1642–1661.

Szyrmer, W., S. Laroche, and I. Zawadzki, 2005: A microphysical bulk formulation based on scaling normalization of the particle size distribution. Part I: Description. J. Atmos. Sci., 62, 4206–4221.

Thacker, W. C., 1988: Fitting models to inadequate data by enforcing spatial and temporal smoothness. J. Geophys. Res., 93, 10655–10665.

Thépaut, J.-N., and P. Courtier, 1991: Four-dimensional data assimilation using the adjoint of a multilevel primitive equation model. Quart. J. Roy. Meteor. Soc., 117, 1225–1254.

Verlinde, J., and W. R. Cotton, 1993: Fitting microphysical observations of nonsteady convection clouds to a numerical model: An application of the adjoint technique of data assimilation to a kinematic model. Mon. Wea. Rev., 121, 2776–2793.

Wu, B., J. Verlinde, and J. Sun, 2000: Dynamical and microphysical retrievals from Doppler radar observations of a deep convection cloud. J. Atmos. Sci., 57, 262–283.

Ziegler, C. L., 1985: Retrieval of thermal and microphysical variables in observed convective storms. Part I: Model development and preliminary testing. J. Atmos. Sci., 42, 1487–1509.

Zou, X., and Y.-H. Kuo, 1996: Rainfall assimilation through an optimal control of initial and boundary conditions in a limited-area mesoscale model. Mon. Wea. Rev., 124, 2859–2882.

Zupanski, D., M. Zupanski, E. Rogers, D. F. Parrish, and G. J. DiMego, 2002: Fine-resolution 4DVAR data assimilation for the Great Plains tornado outbreak of 3 May 1999. Wea. Forecasting, 17, 506–525.

Footnotes

Corresponding author address: Wanda Szyrmer, Dept. of Atmospheric and Oceanic Sciences, McGill University, Montreal, QC H3A 2K6, Canada. Email: wanda.szyrmer@mcgill.ca