Calibration of Parameter Perturbations for Ensemble Prediction Using Data-Consistent Inversion

Axelle Fleury (https://orcid.org/0000-0002-3158-9925), François Bouttier (https://orcid.org/0000-0001-6148-4510), and Thierry Bergot (https://orcid.org/0000-0002-7704-5974)

CNRM, Université de Toulouse, Météo-France, CNRS, Toulouse, France

Abstract

Parameter perturbations are an attractive way to represent model errors in an ensemble prediction system due to their ability to target precise sources of uncertainty. However, most parameters do not have a linear impact on the model outputs, and therefore the distributions chosen to perturb their value influence the climatology of the ensemble system. In particular, distributions centered on the parameter’s default value are not always sufficient to prevent the ensemble average from deviating from the deterministic model. In this study, we propose to use inverse problem theory to adapt the parameter distributions in order to produce unbiased ensembles. Specifically, we use a method called data-consistent inversion to solve the inverse problem in a simplified framework of low dimensions. The updated distribution of two microphysical parameters of the model AROME is computed thanks to an ensemble of single-column simulations of a radiation fog case. This updated distribution, as well as two other standard distributions—uniform and lognormal—is then used to produce ensembles of single-column and 3D simulations. Results indicate that both 1D and 3D ensembles produced with the updated distribution are better centered on the deterministic AROME model than with the uniform or lognormal distributions, which demonstrates the potential benefit of the data-consistent inversion framework for designing parameter perturbations. However, many challenges remain to be addressed to apply this method in operational ensemble systems, especially if a large number of parameters need to be perturbed.

Significance Statement

Meteorological forecast uncertainty is often estimated with the help of ensemble systems, which provide sets of alternative scenarios for a given prediction. Ensembles are produced by introducing perturbations in the system which represent various sources of error. Among them, parameter perturbations are a popular way to target uncertainty coming from the meteorological model itself. However, the way in which parameter values are randomly perturbed influences the ensemble output statistics because the response of the model to changes in a parameter’s value is often nonlinear. The purpose of this study is to design appropriate distributions from which parameter values can be sampled to produce unbiased ensembles. We show how, in a simplified framework, using inverse problem theory can help to construct such distributions.

Fleury’s current affiliation: Laboratoire d’Ecogéochimie des Environnements Benthiques (LECOB), Observatoire Océanologique de Banyuls, CNRS, Sorbonne Université, Banyuls-sur-Mer, France.

© 2025 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: François Bouttier, francois.bouttier@meteo.fr


1. Introduction

Since the 1990s, ensemble prediction has been used to estimate forecast uncertainties arising from errors made by weather prediction systems and their interaction with the atmospheric flow. An ensemble is a set of alternative scenarios, whose dispersion should indicate the degree of confidence to be given in the prediction. The scenarios (“members”) are produced using different techniques that aim to represent various sources of uncertainty in the system. These include uncertainty in the initial state and boundary conditions, as well as errors coming from the numerical weather prediction (NWP) model itself.

Numerous strategies exist to represent the latter: one can combine the outputs of different NWP models or use different versions of some parts of a model to cover different simulation choices (the so-called multimodel and multiphysics approaches, see, for example, Stensrud et al. 2000; Arribas et al. 2005; García-Moya et al. 2011). It is also possible to use a single NWP model and to introduce stochastic perturbations at some steps of the calculations. This last approach is nowadays widely used in ensemble prediction because it produces statistically equally likely members and is usually less expensive than maintaining several different model configurations (Jankov et al. 2017). Among stochastic perturbations, a very well-known and used method is the stochastically perturbed parameterization tendencies (SPPT) scheme (Buizza et al. 1999; Palmer et al. 2009; Lock et al. 2019), which consists in multiplying the net tendencies computed by the parameterization schemes of a model by a random number field with prescribed spatial and temporal correlations. The simplicity of its implementation and its ability to generate large perturbations are clear advantages; however, limits have also been outlined in the literature (see, e.g., Leutbecher et al. 2017). One of them is the “bulk” nature of the method, as a single random perturbation is used to represent all the uncertainty, which in practice limits its physical interpretation. Several studies have therefore focused on alternative approaches to target more precisely the sources of uncertainty inside a NWP model.

One such approach is the parameter perturbation method, which dates back to the early days of ensemble prediction (Houtekamer et al. 1996) and has been the focus of several recent studies (e.g., Baker et al. 2014; Christensen et al. 2015; Ollinaho et al. 2017; Kalina et al. 2021; Frogner et al. 2022). Meteorological models use a number of parameters in their equations, whose values are not always known with confidence. Parameters are indeed rarely directly measurable, and it is usually a hard task to find suitable values for all the parameters in a model. Despite ongoing efforts on automatic tuning tools (Couvreux et al. 2021; Tuppi et al. 2023), setting parameter values is still often done by hand. The fact that numerous parameter values may not be adequate is therefore a potential source of model error, which parameter perturbation methods try to represent by allowing some parameters to change value, within a range that is deemed acceptable. This approach has the advantage of targeting the uncertainty by selecting which parameters are perturbed and of keeping a clear physical interpretation since the parameters are each linked to specific physical processes.

Different versions of parameter perturbation methods have been proposed in the literature. In the “multiparameter” approach, specific parameter values are given to each member of an ensemble and kept constant over time and space (Gebhardt et al. 2011; Fresnay et al. 2012; Shiogama et al. 2014). In the “random parameters” (RP) scheme proposed by Bowler et al. (2008), parameters are treated as random variables and their values evolve in time according to a first-order autoregressive process. Wimmer (2021) proposes the term “random perturbed parameter” (RPP) when parameter values are randomly chosen for each ensemble member and each forecast but held constant during the model integration. Finally, in the stochastically perturbed parameterization (SPP) method, introduced by Ollinaho et al. (2017), parameters vary in space and time. SPP has been used in several recent studies where it proved to give comparable or better results in terms of ensemble forecast skill than SPPT (Lang et al. 2021; Frogner et al. 2022; McTaggart-Cowan et al. 2022a,b).

Despite these positive results, parameter perturbation methods are still hampered by some technical aspects. Identifying the parameters to perturb, as well as their range of acceptable values, is a time-consuming task. It is usually done manually, following the advice of experts. Sometimes, sensitivity analysis can be used to refine the parameter set (Wimmer et al. 2022), but in any case, the work may have to be done again if modifications are introduced in the model (e.g., with a new parameterization scheme). Another difficulty can be the choice of the distributions from which the parameter values are sampled. In most studies, lognormal (Ollinaho et al. 2017; Frogner et al. 2022), Gaussian (Jankov et al. 2017, 2019; Thompson et al. 2021), or uniform (Bowler et al. 2008; McTaggart-Cowan et al. 2022a) distributions are used. These are empirical and practical choices, as the “true” distributions of the parameter values are not known. A uniform distribution enables coverage of the whole range of values of a parameter with the same probability, but there is a risk of introducing bias in the system if the range is not centered on the parameter’s default value (Bowler et al. 2008; Bouttier et al. 2022). This led McCabe et al. (2016) to introduce a mapping in the RP scheme to ensure that both sides around the default value are equally sampled. However, even when the distribution is centered on the default value, parameter perturbations can increase the ensemble bias as reported by Frogner et al. (2022). This could be related to uneven impacts of the parameter values around their standard setting due to nonlinearities in the model.

To represent model uncertainty with parameter perturbations, there may therefore be an interest in estimating parameter uncertainty in the form of probability distributions. This can typically be achieved using inverse problem theory. In an inverse problem, one seeks to determine the value of some parameters of a model given observed outputs and their associated uncertainty. In several formalisms, like Bayesian inference, the solution takes the form of a probability distribution which fully characterizes the parameter uncertainty. Numerous studies dealing with uncertainty quantification and parameter estimation make use of inverse problems to explore parametric uncertainty in climate models (Jackson et al. 2008; Solonen et al. 2012; Yang et al. 2013; Qian et al. 2018), NWP models (Ollinaho et al. 2013b), or cloud models (Golaz et al. 2007; Posselt and Vukicevic 2010; Posselt 2016; van Lier-Walqui et al. 2020). For example, Posselt and Vukicevic (2010) estimate the uncertainty of 10 microphysical parameters thanks to synthetic observations of precipitation, liquid and ice water paths, and radiative fluxes in the context of a squall-line simulation. The authors use the formalism of Mosegaard and Tarantola (2002) and a Markov chain Monte Carlo (MCMC) algorithm to sample the posterior joint parameter distribution. van Lier-Walqui et al. (2012) extend the study by using synthetic radar observations, and van Lier-Walqui et al. (2014) use the same inverse method applied to “process parameters,” i.e., multiplicative factors of hydrometeor tendencies from several processes. Järvinen et al. (2010) use Bayesian inference and MCMC to estimate four parameters of the climate model ECHAM5 (Roeckner et al. 2003) using observations of the radiative net fluxes at the top of the atmosphere. An online parameter estimation system, inspired from Bayesian inference and MCMC methods, is also proposed by Järvinen et al. (2012) and Laine et al. (2012). Ollinaho et al. (2013b,a) use it to estimate four parameters of the ECHAM5 model and ECMWF Integrated Forecasting System (IFS), respectively, and the uncertainty estimate given by the method is used by Christensen et al. (2015) to construct perturbations of the ECMWF convection scheme. To the authors’ knowledge, this is the only example where the uncertainty given by solving an inverse problem has been used to design parameter perturbations for the representation of model uncertainty in ensemble prediction.

In light of the work of Ollinaho et al. (2013a) and Christensen et al. (2015), we propose to use the probability distribution provided by solving an inverse problem to generate parameter perturbations in a convection-permitting ensemble. A particular approach called data-consistent inversion, introduced by Butler et al. (2018, hereafter BJW18), is used to solve the inverse problem because of its interesting ability to produce an “updated” parameter distribution such that the model output distribution exactly matches the distribution of observations (BJW18, Butler et al. 2020). Our aim in this study is to obtain an ensemble centered around a reference deterministic model; therefore, the “observations” used to solve the inverse problem are reference model outputs instead of real meteorological observations. In addition, any reference to bias in this article refers to a bias with regard to the deterministic model rather than real observations.

This study is based on the limited-area model AROME of Météo-France (Seity et al. 2011; Termonia et al. 2018). Ensembles of AROME simulations are produced by perturbing two parameters of its microphysics scheme, linked to the definition of the cloud droplet size distribution. This small number of perturbed parameters keeps the problem simple for a first implementation of Butler et al.’s method. The perturbations applied to the parameters are of an RPP type: the parameter values are randomly sampled for each member and simulation but held constant during the model integration. Two different frameworks are used successively: (i) a simplified one-dimensional (1D) framework, using a single-column version of AROME to simulate a radiation fog case, and (ii) a framework closer to the operational context with a three-dimensional (3D) version of the model.

Two main questions are addressed in the study. First, we investigate whether it is possible to reduce the bias of an RPP ensemble, with regard to a reference unperturbed simulation, by sampling parameter values in the distribution given by solving an inverse problem with the data-consistent inversion framework. Second, we study whether the distribution found in a simplified 1D framework is still relevant to produce ensembles of 3D simulations of a real case.

The paper is organized as follows. First, the method of BJW18 is introduced, and motivations for using it are highlighted. Then, details are given on the model AROME, the radiation fog case, and the choice of the two microphysical parameters to perturb. The impact of these parameters on the model is further described in the next section. Then, the results of the inverse problem are presented for a simplified 1D framework. These results are then used to generate ensembles of 3D simulations of a real case. The last section is dedicated to discussion and conclusions.

2. The data-consistent inversion approach

In this section, we introduce the method for solving an inverse problem that is at the heart of our study. A problem is referred to as an inverse problem when one wants to estimate the parameters θ of a model G based on observations of some quantities of interest (QoI) d that can be predicted by this model. Equation (1) is called the forward model:
G(θ)=d.

Here and in the following, we consider a discrete inverse problem where there are a finite number of parameters to estimate and a finite number of observations (θ and d are vectors). We emphasize that d does not always correspond to observations in the meteorological sense but should be regarded more generally as training data. A common method for solving such a problem is Bayesian inference. It provides an estimate of the parameters θ in the form of a probability distribution given by Bayes’ law. The latter involves prior knowledge of the distribution of the parameters, as well as a likelihood function which represents the compatibility of a QoI with a given parameter value. The likelihood is constructed by considering that there is noise in the observed data. An observation d can therefore be modeled as a random variable: d = G(θ) + ϵ, where ϵ is a random variable representing the uncertainty (see, e.g., Aster et al. 2018). The uncertainty of the parameter estimate, reflected in the posterior distribution, will depend on this noise as well as on the information contained in the prior distribution.

In this study, we want to estimate the distribution of values of two microphysical parameters θ of a NWP model G and use these distributions to sample several parameter values in order to create RPP ensembles. Our interest in solving an inverse problem to estimate the distribution of θ is to produce RPP ensembles with no bias in the QoI d. Importantly, we also want the ensembles to have a nonnegligible spread with regard to the QoI. In other words, we can define a “target” distribution of d with desired mean and spread, and our goal is to be able to reproduce this distribution via a distribution of θ. We argue that classical Bayesian inference is not the most appropriate approach to solve this particular problem. Indeed, Bayesian inference is mainly a parameter identification method, designed to find the true value of parameters from noisy data. The posterior distribution is used to give an estimate of the parameter value as well as an indication of the uncertainty of this estimate. With this approach, the uncertainty is considered reducible: by using more observations, a better estimate of the parameter value can be obtained, which is reflected in the posterior distribution becoming concentrated around this value as explained by the Bernstein–von Mises theorem (van der Vaart 1998). In our case, the philosophy is different: the parameters are considered fundamentally uncertain, and their distribution is supposed to explain the distribution of the observations. The posterior distribution of θ should, when propagated through the model G, reproduce the distribution of the training data d. This property is not guaranteed by Bayesian inference, contrary to the alternative approach called data-consistent inversion, proposed by BJW18, and described in the rest of this section.

Like classical Bayesian inference, data-consistent inversion estimates a parameter distribution given observational data of some QoI. However, the construction of the parameter distribution is different, and one of its main properties is to be “consistent” with the observations, in the sense that the forward propagation of the parameter distribution through the model reproduces the distribution of the observations. In our case, using data-consistent inversion with a prescribed distribution of observations centered on a reference value and with nonnegligible spread should allow us to construct an unbiased ensemble of AROME simulations with nonnegligible spread.

Data-consistent inversion can be summarized by Eq. (2). Although the approach was originally introduced by BJW18, the terminology used by the authors to present their method has evolved in subsequent studies in order to avoid confusion with Bayesian inference (e.g., Butler et al. 2020, 2022). Here, we adopt the notations of Butler et al. (2020), which are summarized in Table 1 of their paper:
πupdate(θ) = πinit(θ) πobs[G(θ)] / πpredict[G(θ)].

The term πinit, called the initial density of the parameters, represents a prior knowledge of the parameter uncertainty. The propagation of the initial density through the model G, illustrated in Fig. 1, gives a predicted density of QoI πpredict (which is referred to as the “push-forward of the prior” in BJW18). The term πobs represents the distribution of the observations, defined on the QoI space. Finally, πupdate is the updated parameter density that we are interested in.

Fig. 1. Illustration of the propagation of the initial density through the model. Parameter values are sampled from (left) πinit. The QoI associated with these parameter values, evaluated using (middle) the forward model, are a sample of (right) the predicted density.

The idea of propagating a prior knowledge on the parameters through the model and using the distribution obtained in the QoI space to weight an updated parameter distribution is already present in Poole and Raftery (2000) in their so-called Bayesian melding approach. However, BJW18 are the first to propose a rigorous framework for the definition of this updated distribution.

The main difficulty in obtaining the updated density πupdate with the BJW18 method is to calculate the predicted density. Indeed, while πinit and πobs are prescribed and can be simple probability density functions, it is usually difficult to get an analytical form of πpredict, which has, therefore, to be estimated numerically. One possibility suggested by BJW18 is to use Monte Carlo sampling and kernel density estimation (KDE). Then, by combining it with rejection sampling, they established a simple algorithm to sample the updated parameter density πupdate. This algorithm is used in the present study (section 5); therefore, the main steps are recalled below for clarity. As in BJW18 and Butler et al. (2020), we define the ratio r(θ) = πobs[G(θ)]/πpredict[G(θ)].

  1. Estimation of πpredict: A sample of parameter values θi, i = 1, …, N1, is drawn from πinit, and the corresponding QoI are computed with the model G. The resulting sample of QoI values is used to estimate πpredict using KDE.

  2. Estimation of a constant M from the sample of parameter values produced during step 1: M = max_{i=1,…,N1} r(θi). This constant is used in the rejection sampling method.

  3. Rejection sampling: In this process, πupdate is the density function associated with the target distribution, and πinit is the density function associated with the proposal distribution. A sample of parameter values θj, j = 1, …, N2, is drawn from πinit, and each θj is accepted with probability πupdate(θj)/[M πinit(θj)] = r(θj)/M. For this, a random number ξj is generated from a uniform distribution on the interval [0, 1] for each θj. If r(θj)/M > ξj, θj is accepted; otherwise, it is rejected. The subset of accepted θj is a sample of the updated distribution.

In the above steps, the same sample of parameter values can be used for estimating both πpredict and πupdate (Butler et al. 2020), in which case i = j and N1 = N2.

To use the BJW18 method, a “predictability assumption” (Butler et al. 2020) must be verified to guarantee that πupdate in Eq. (2) is indeed a density. This assumption states that there exists C > 0 such that πobs(q) ≤ C πpredict(q) for (almost every) q ∈ D, where D denotes the space of observable QoI. A practical way to verify that the predictability assumption holds is given by BJW18: the integral of the updated density can be approximated by the sample average of r(θ), which therefore should be close to 1. An important consequence of this assumption is that the existence of the constant M used in the rejection sampling algorithm is ensured.
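As an illustration, the three steps above can be implemented in a few lines of Python. The sketch below is not the authors’ code: the names forward (the model G), sample_init (a sampler of πinit), and pdf_obs (the density πobs) are placeholders for problem-specific ingredients. It uses a Gaussian KDE for πpredict, reuses the same parameter sample for the rejection step, and includes the numerical check of the predictability assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

def data_consistent_sample(forward, sample_init, pdf_obs, n=10_000, seed=0):
    """Sample the updated parameter density by KDE and rejection sampling."""
    rng = np.random.default_rng(seed)

    # Step 1: propagate a sample of pi_init through the model and estimate
    # the predicted density pi_predict with a Gaussian KDE on the QoI sample.
    theta = sample_init(n)                      # shape (n, n_parameters)
    q = forward(theta)                          # shape (n,)
    pi_predict = gaussian_kde(q)

    # Ratio r(theta) = pi_obs[G(theta)] / pi_predict[G(theta)]
    r = pdf_obs(q) / pi_predict(q)

    # Numerical check of the predictability assumption: E[r] should be close to 1.
    print("sample average of r(theta):", r.mean())

    # Step 2: constant M used in the rejection step.
    M = r.max()

    # Step 3: accept theta_j with probability r(theta_j) / M; the accepted
    # subset is a sample of the updated parameter density.
    xi = rng.uniform(size=n)
    return theta[r / M > xi]
```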

3. Model and case study

a. The AROME model

All the results presented in this study are produced with the AROME NWP model of Météo-France (Seity et al. 2011; Termonia et al. 2018) and the associated ensemble prediction system AROME–Ensemble Prediction System (AROME-EPS) (Bouttier et al. 2012). AROME is a nonhydrostatic, convection-permitting model. It is used operationally to produce 2-day forecasts over limited areas such as metropolitan France. It has 13 prognostic variables, among which are the temperature, specific humidity, horizontal components of the wind, five species of hydrometeors, and the turbulent kinetic energy. The compressible Euler equation system is solved using a semi-implicit semi-Lagrangian discretization scheme, and several parameterization schemes are used to represent unresolved processes such as turbulence, shallow convection, radiation, and microphysics. Interactions with the surface are provided through a coupling with the externalized surface scheme Surface Externalisée (SURFEX). Both the deterministic model AROME and the ensemble AROME-EPS use 90 vertical levels. Since June 2022, they have the same horizontal resolution of 1.3 km, which has been used in this study.

Three sources of uncertainty are represented operationally in AROME-EPS. The initial state uncertainty is represented by perturbing the operational analysis for each ensemble member thanks to an ensemble data assimilation system (Brousseau et al. 2011). Boundary conditions are provided by members from the global ensemble Prévision d’Ensemble ARPEGE (PEARP) (Descamps et al. 2015), selected with a clustering method (Bouttier and Raynaud 2018). In addition, perturbations are applied to several surface parameters and surface variables of the SURFEX scheme in order to perturb the deterministic surface analysis (Bouttier et al. 2016). Finally, model uncertainty is represented by the SPPT method, which applies stochastic perturbations to the total tendency given by the parameterization schemes of the model (Bouttier et al. 2012).

b. Perturbed microphysical parameters

In the present study, perturbations are applied to two microphysical parameters of AROME. These parameters appear in the one-moment microphysics scheme ICE3 (Pinty and Jabouille 1998), which is used to predict the evolution of the mixing ratios of water vapor and five classes of hydrometeors (cloud liquid water, cloud ice, rain, snow, and graupel). The size distribution of hydrometeors is modeled by a generalized gamma distribution: for a hydrometeor class j, the number of particles of diameter D per unit volume of air is nj(D)dD, with
nj(D) = N0,j [αj/Γ(νj)] λj^(αjνj) D^(αjνj−1) exp[−(λjD)^αj],
where Γ is the gamma function. The parameters of this distribution are N0,j, representing the total number of hydrometeors per unit volume, αj and νj, which control the shape of the distribution, and λj, which is a scale parameter. In ICE3, λj is diagnosed from the hydrometeor prognostic mixing ratio, whereas the other parameters N0,j, αj, and νj are held constant, with a distinction made between oceanic and continental areas (see Table 1 for the cloud liquid water category).
Table 1. Values of the parameters N0,c, αc, and νc of the cloud liquid water droplet size distribution used operationally in the microphysics scheme ICE3 of AROME.

In this study, the parameters N0,c and νc of the cloud liquid water droplet size distribution are perturbed. This choice is motivated by the fact that both parameter values are uncertain (Miles et al. 2000; Igel and van den Heever 2017) and have been shown to have an important impact on the simulation of fog (e.g., Boutle et al. 2022; Osorio et al. 2022). Additionally, these parameters have not been considered in the 21 AROME parameters studied by Wimmer et al. (2022), so this study broadens the field of parameters that could be perturbed in AROME in the context of ensemble prediction.

Because only the cloud liquid water distribution is perturbed in this work, the “c” subscript is dropped from the parameter names, which are written N0 and ν for simplicity in the rest of the paper.

c. One-dimensional framework and case study

In the first part of this study, a simplified framework is used to test parameter perturbations and solve the inverse problem with the BJW18 method. A single-column version of AROME (AROME-SCM; Malardel 2008) is employed, in which the dynamical core of the model is replaced by external forcing while the parameterization schemes remain active. AROME-SCM is used to simulate a radiation fog case from the intercomparison study of Boutle et al. (2022). This case is derived from the first intensive observation period (IOP1) of the Local and Nonlocal Fog Experiment (LANFEX) field campaign (Price et al. 2018). The observations, made in Cardington (United Kingdom) on 24–25 November 2014, show a shallow fog layer developing around 1800 UTC 24 November and disappearing around 0800 UTC 25 November. The fog layer remains stable for several hours before evolving into a well-mixed, optically thick layer around 100 m deep by 0400 UTC (Boutle et al. 2018). A number of simulations of this radiation fog case have already been published (e.g., Smith et al. 2018; Boutle et al. 2018; Poku et al. 2019), including the recent intercomparison study by Boutle et al. (2022). The simulation of this case (denoted the “LANFEX case” in the following) by AROME-SCM in the present study follows the specifications of Boutle et al. (2022). The only external forcing given to the model is the surface temperature; no advection forcing is considered. The case starts at 1700 UTC with initial profiles of temperature, humidity, and wind derived from a radiosonde profile. In the microphysics scheme, the total number concentration of cloud droplets N0 is set to 50 cm−3 instead of the operational value of 300 cm−3, following Boutle et al.’s recommendations. This value of 50 cm−3 is taken as the parameter’s default value throughout this study, including the 3D tests in section 6.

d. Reference simulation of the fog case

Figure 2 shows the profiles of potential temperature and cloud liquid water content obtained in the simulation of the LANFEX case by AROME-SCM.

Fig. 2. Evolution of (a) potential temperature and (b) cloud liquid water content profiles during the simulation of the LANFEX case by AROME-SCM. The simulation is initialized at 1700 UTC, and times indicated on the graphs are simulation times.

The vertical profiles of potential temperature (Fig. 2a) show that the case starts with a stable stratification (blue curve). A mixed fog layer appears after about 5 h of simulation, which is too early compared to the observations, and is a shortcoming already observed with several models for the simulation of this case (Boutle et al. 2018, 2022). Dissipation does not occur in our simulation, whereas in IOP1, the fog disappears around 0800 UTC. This clearance is related to the arrival of overlying clouds which are not represented in this simplified simulation, following Boutle et al. (2022). The time–height graph of liquid water content is roughly similar to the one obtained by Smith et al. (2018), but the height reached by the fog layer at the end of the simulation and the cloud liquid water content are lower in our case. The liquid water path (LWP; not shown) increases throughout the night up to 35 g m−2, which is within the range of LWPs simulated by various single-column models for this case [see Fig. 1d in Boutle et al. (2022)].

4. Preliminary study

a. Sensitivity tests

Among the reasons for choosing to perturb the parameters N0 and ν is the sensitivity of fog simulations to the value of these two parameters, as demonstrated by previous studies (e.g., Boutle et al. 2022). We illustrate in this section the sensitivity of AROME-SCM simulation of the LANFEX case to these two parameters.

Figure 3 shows the impact of N0 and ν on the cloud droplet size distribution. The curves are calculated from Eq. (3), using the cloud water mixing ratio of the reference LANFEX simulation at +18 h and approximately 50 m, together with different values of N0 and ν. The parameter α is set to 1 for all the results presented in this study.

Fig. 3. Sensitivity of the cloud droplet size distribution to different values of the parameters (a) N0 and (b) ν.

Figure 3a shows the size spectra obtained with three different values of N0: N0 = 50 cm−3 (the reference value in this study), N0 = 10 cm−3 (another value used in the intercomparison study of Boutle et al. 2022), and N0 = 300 cm−3 (the operational AROME value for continental areas). The three curves show that, for a given cloud liquid water content, increasing N0 results in an increase in the abundance of small droplets, with a distribution maximum shifted toward smaller droplets and a reduced distribution tail. Figure 3b shows the same kind of graph for different values of ν: ν = 3 (the reference value in this study); ν = 1, which corresponds, if α = 1, to an exponential distribution; and ν = 15, which is within the range of values found in several observational studies (Miles et al. 2000; Igel and van den Heever 2017). The three curves show that the shape of the distribution is strongly affected by the value of ν. When ν decreases, the distribution maximum is shifted toward smaller droplet sizes, eventually becoming an exponential distribution for ν = 1. However, the tail of the distribution also becomes heavier when ν decreases.
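Spectra of this kind can be reproduced from Eq. (3) once λ has been diagnosed from the cloud water mixing ratio. The Python sketch below is illustrative only: it assumes a standard third-moment closure to diagnose λ (the exact ICE3 formulation may include additional coefficients), and the density and mixing ratio values are placeholders rather than values taken from the simulations.

```python
import numpy as np
from scipy.special import gamma as Gamma

RHO_W, RHO_AIR = 1000.0, 1.2        # water and air densities (kg m-3), illustrative

def diagnose_lambda(ql, N0, alpha, nu):
    """Scale parameter lambda diagnosed from the cloud water mixing ratio ql
    (kg kg-1), assuming the third moment of Eq. (3) carries the liquid water."""
    m3 = 6.0 * RHO_AIR * ql / (np.pi * RHO_W)            # required third moment
    return (N0 * Gamma(nu + 3.0 / alpha) / (Gamma(nu) * m3)) ** (1.0 / 3.0)

def droplet_spectrum(D, ql, N0, alpha=1.0, nu=3.0):
    """Number of droplets of diameter D per unit volume of air, Eq. (3)."""
    lam = diagnose_lambda(ql, N0, alpha, nu)
    return (N0 * alpha / Gamma(nu) * lam ** (alpha * nu)
            * D ** (alpha * nu - 1.0) * np.exp(-(lam * D) ** alpha))

# Spectra analogous to Fig. 3a: same mixing ratio, three droplet concentrations
D = np.linspace(1e-6, 60e-6, 200)                         # diameters (m)
spectra = {n0: droplet_spectrum(D, ql=2e-4, N0=n0 * 1e6)  # cm-3 converted to m-3
           for n0 in (10, 50, 300)}
```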

To evaluate the impact of these different droplet size distributions on the LANFEX case, five simulations are made with AROME-SCM using the values of N0 and ν studied above. The results in terms of liquid water path are presented in Fig. 4.

Fig. 4. Sensitivity of the LWP to different values of the parameters (a) N0 and (b) ν in the simulation of the LANFEX case by AROME-SCM. The simulation is initialized at 1700 UTC, and times indicated on the graphs are simulation times.

The black curve on the graphs corresponds to the reference simulation using N0 = 50 cm−3 and ν = 3. The red and blue curves show that increasing either N0 or ν results in an increase in the LWP and vice versa. This is linked to the strong impact of N0 and ν on the sedimentation of cloud droplets, which is one of the main microphysics processes that control the fog simulation. Sedimentation strongly depends on the size of the particles, with bigger droplets falling faster. The cloud water sink represented by the sedimentation process is therefore more important when droplets are larger. According to Fig. 3a, decreasing N0 results in bigger droplets, thus a higher sedimentation rate, which explains that the LWP is lower for the simulation N0 = 10 cm−3 (blue curve). Conversely, increasing N0 results in a lower sedimentation rate and a higher LWP (red curve). According to Fig. 3b, when ν decreases, the distribution maximum is shifted toward smaller droplets, but the right tail of the distribution is heavier, which indicates the presence of big droplets. Most of the water is contained in these big droplets, which fall faster; therefore, the sedimentation rate is higher and the LWP is lower. Conversely, a higher ν results in a higher LWP as can be seen in Fig. 4b. These results are consistent with those presented in Boutle et al. (2022) (see their Fig. 12).

The oscillations that can be seen on the LWP toward the end of the simulation are caused by sporadic activation of the shallow convection scheme and disappear if this scheme is deactivated in AROME-SCM. They do not significantly affect the results presented in this study.

Overall, these tests confirm the sensitivity of the cloud liquid water content of the LANFEX fog simulated by AROME-SCM to changes in the value of the parameters N0 and ν. Other aspects of the fog are less clearly affected: the formation time remains similar in all the simulations (Fig. 4), and the height reached by the fog layer varies by no more than one model level between our simulations, which represents a maximum difference of around 30 m (not shown). A finer vertical resolution might be necessary to study this latter aspect in more detail. On the other hand, there is a clear sensitivity of the fog liquid water content to the value of N0 and ν. According to Fig. 4, the LWP of the simulation using N0 = 300 cm−3 is almost 4 times greater, at the end of the simulation, than with N0 = 10 cm−3. Changing the value of ν from 1 to 15 results in an LWP more than 2 times greater. These values are within the range of LWP values produced by 10 different SCM models presented in the intercomparison study of Boutle et al. (2022). This is a motivation for applying perturbations on N0 and ν to represent the model uncertainty associated with the simulation of this fog case.

b. First ensembles

Perturbing the parameters of a model to generate ensembles requires specifying the sampling distribution for the random parameters. As a first step, two AROME-SCM ensembles are constructed by sampling N0 and ν from two classical distributions used for parameter perturbations. Each ensemble consists of 100 simulations. In the first one, called “UNIF_1D,” the values are independently sampled from two uniform distributions defined on the intervals [10, 350] cm−3 and [1, 15] for N0 and ν, respectively. In the second one, “LOG_1D,” independent samples are taken from two lognormal distributions, centered on the parameter default values, and with prescribed standard deviations (50 cm−3 and 2 for N0 and ν, respectively). The graphs in Fig. 5 show the mean profiles as well as the dispersion of cloud liquid water content obtained in the two ensembles.
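For illustration, the parameter draws for these two ensembles can be generated as in the sketch below. The exact parameterization of the lognormal distributions is not specified above, so equating “centered on the default value” with a lognormal whose mean equals the default value is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
N_MEMBERS = 100

def lognormal_from_mean_std(mean, std, size):
    """Lognormal sample with a prescribed mean and standard deviation
    (assumed interpretation of 'centered on the default value')."""
    sigma2 = np.log(1.0 + (std / mean) ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

# UNIF_1D: independent uniform draws for each member
n0_unif = rng.uniform(10.0, 350.0, N_MEMBERS)             # cm-3
nu_unif = rng.uniform(1.0, 15.0, N_MEMBERS)

# LOG_1D: independent lognormal draws around the default values (50 cm-3 and 3)
n0_log = lognormal_from_mean_std(50.0, 50.0, N_MEMBERS)   # cm-3
nu_log = lognormal_from_mean_std(3.0, 2.0, N_MEMBERS)
```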

Fig. 5. Cloud liquid water profiles at +18 h in two RPP ensembles of 100 AROME-SCM simulations of the LANFEX case, using (a) uniform or (b) lognormal distributions. The black line represents the reference (unperturbed) AROME-SCM simulation. The red dashed line is the ensemble mean, and the red shaded area corresponds to the interval between the 10th and 90th percentiles of the ensemble.

The first striking observation that can be made is that the mean of the ensemble UNIF_1D (red dashed curve in Fig. 5a) is significantly different from the reference simulation (black curve). The latter is not even included in the 10th–90th percentile range of cloud liquid water content found in the ensemble (red shaded area). This can be explained by the fact that the default value of the parameters (50 cm−3 and 3 for N0 and ν, respectively, in this study) has a relatively off-center position in the intervals used to construct the UNIF_1D ensemble. However, the profiles in Fig. 5b indicate that using lognormal distributions, centered on the default values, also results in an ensemble that is biased with regard to the reference simulation. We can thus hypothesize that the bias is also caused by model nonlinearities and a nonsymmetrical impact of the parameters around their default values. The negative bias on cloud liquid water in the LOG_1D ensemble is observed throughout the simulation, and other model variables such as temperature or specific humidity are also biased. Additional 100-member ensembles made with other samples of N0 and ν from the same distributions consistently show the same biases (not shown).

These results show that using parameter perturbations can change the ensemble mean climatology, which can deviate from the deterministic model from which the ensemble is constructed. We note that this can happen even if the distributions from which parameter values are sampled are centered on their default value because of the complex and nonlinear impact of the parameters on the model. In the following section, an inverse problem is solved to construct a new distribution for N0 and ν that makes it possible to produce an ensemble that is not biased with regard to the deterministic model.

5. Solving the inverse problem in the single-column framework

a. Data-consistent inversion

Our problem is to find the distribution of two parameters (N0 and ν) that, when propagated through a model (AROME-SCM), gives a distribution of a QoI (cloud liquid water content) that has desired properties (centered on a reference and with nonnegligible spread). As stated in section 2, this problem can be solved using the data-consistent inversion framework proposed by BJW18. In our problem, the parameter space is two-dimensional, with θ = (N0, ν), and we choose to focus on a single QoI: the cloud liquid water content at a given time (+18 h) and a given vertical level (level 84, which corresponds to about 100 m) in the LANFEX simulation, denoted as ql(+18 h, 100 m). This choice of QoI is motivated by the fact that, among model variables, the ensemble bias is most clearly observed in cloud liquid water. The 100-m level lies in the middle of the fog layer, where the bias is strong, and the final time step +18 h is selected because the fog liquid water content, as well as the bias, is highest at that time.

To solve the inverse problem with the BJW18 method, three probability densities involved in Eq. (2) must be determined. The first one is the initial density of parameters πinit. We choose it to be uninformative by assuming that N0 and ν are independent and both follow a uniform distribution, such that πinit(N0, ν) = funif(N0)funif(ν), where funif is the probability density function of the continuous uniform distribution with support [1, 350] cm−3 and [0.25, 15] for N0 and ν, respectively. These intervals are wider than those used for the UNIF_1D ensemble in section 4 in order to explore a bigger range of model outputs. The second element in Eq. (2) is the predicted density πpredict which is the push-forward of the initial density. Obviously, no analytical form can be derived for this density, so it has to be estimated numerically. We follow BJW18 and use Monte Carlo sampling and Gaussian KDE to estimate πpredict. Finally, the last term of Eq. (2) is the observed probability density of QoI πobs. Once the inverse problem is solved, the propagation of the updated parameter distribution through the model should be consistent with πobs, so it represents the distribution of ql(+18 h, 100 m) that we want to obtain in our ensemble of AROME-SCM simulations. Here, we define it to be a simple Gaussian distribution, centered on the reference LANFEX simulation (ql,ref ≈ 1.76 × 10−4 kg kg−1), with a standard deviation equal to that obtained in the LOG_1D ensemble (σLOG_1D ≈ 5 × 10−5 kg kg−1). The range of LWP values of the LOG_1D ensemble is of the same order of magnitude as the uncertainty on the LWP shown in the intercomparison study of Boutle et al. (2022). We subsequently consider the spread of cloud liquid water obtained in this ensemble as a good target.

To perform the kernel density estimation, as well as for the rejection algorithm that is later used to generate a sample of the updated distribution, numerous runs of the forward model are needed. In our case, this means performing AROME-SCM simulations of the LANFEX case with specific values of N0 and ν. To limit this time-consuming task, an emulator of the forward model is constructed with a dataset of 2640 simulations of the LANFEX case, with values of N0 and ν spanning the interval [1, 350] × [0.25, 15]. The QoI—ql(+18 h, 100 m)—obtained in these simulations is presented in Fig. 6. The emulation is done by replacing AROME-SCM with a statistical surrogate model, which, in this study, simply consists of bilinear interpolations on the dataset of 2640 LANFEX simulations. Bilinear interpolation is chosen here for the sake of simplicity, but more elaborate emulators, based on Gaussian processes or neural networks, have been used successfully in the literature in the context of parameter uncertainty quantification (Lee et al. 2011; Johnson et al. 2015; Eidhammer et al. 2024). The advantage of using the surrogate model is that the QoI can be evaluated from any θ = (N0, ν) at a very low cost, which allows us to draw large samples of parameter values as presented below.
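A sketch of this setup is given below. It relies on the data_consistent_sample sketch from section 2, uses SciPy’s RegularGridInterpolator for the bilinear interpolation, and replaces the actual dataset of 2640 AROME-SCM runs by a purely illustrative synthetic grid (both the grid resolution and the response values are placeholders, not simulation output).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.stats import norm

# Placeholder grid and response standing in for the 2640 AROME-SCM runs: in
# practice n0_grid, nu_grid, and qoi_grid would be read from the simulations.
n0_grid = np.linspace(1.0, 350.0, 60)                     # cm-3
nu_grid = np.linspace(0.25, 15.0, 44)
qoi_grid = 1e-4 * (0.5 + 4e-3 * n0_grid[:, None] + 3e-2 * nu_grid[None, :])

# Bilinear surrogate of the forward model: theta = (N0, nu) -> ql(+18 h, 100 m)
surrogate = RegularGridInterpolator((n0_grid, nu_grid), qoi_grid, method="linear")

def forward(theta):
    return surrogate(theta)

def sample_init(n, seed=2):
    """Initial density: independent uniforms on [1, 350] cm-3 x [0.25, 15]."""
    rng = np.random.default_rng(seed)
    return np.column_stack([rng.uniform(1.0, 350.0, n),
                            rng.uniform(0.25, 15.0, n)])

# Observed density of the QoI: Gaussian centered on the reference simulation
pdf_obs = norm(loc=1.76e-4, scale=5e-5).pdf

# Sample of the updated parameter distribution (sketch from section 2)
theta_update = data_consistent_sample(forward, sample_init, pdf_obs, n=10_000)
```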

Fig. 6. The surrogate model constructed with 2640 AROME-SCM simulations of the LANFEX case.

The estimation of the predicted density is represented by the black curve in Fig. 7. To obtain this curve, the domain [1, 350] × [0.25, 15] is regularly sampled (21 000 pairs of values) to get uniform samples of N0 and ν, the corresponding QoI are estimated with bilinear interpolations from the data of Fig. 6, and the density is estimated with a Gaussian KDE. At this point, it is necessary to check the “predictability assumption” mentioned in section 2, which relates πobs to πpredict and reflects the fact that all observed data should be predictable. We perform the numerical check proposed by BJW18 and compute the average of r(θ) on the sample of θ = (N0, ν) values used to estimate πpredict. We obtain E[r(θ)] ≈ 0.968, which is within the range of values reported in other studies and deemed sufficient to consider that the density is well approximated (Butler et al. 2020; Tran and Wildey 2021; Mattis et al. 2022).

Fig. 7. The push-forward of the initial and updated densities.

Then, the steps mentioned in section 2 are followed to generate samples of the updated parameter distribution. Figure 8 shows a final sample of around 1800 values, obtained with the rejection algorithm and an initial set of 10 000 values of θ sampled from the initial distribution (the rejection algorithm, therefore, has an acceptance rate of about 18%). The QoI associated with the final sample of parameter values is a sample of the push-forward of πupdate. It is represented by the blue histogram in Fig. 7 (for this figure, a final sample of around 18 000 values was generated). We see that it closely follows the red curve that represents the Gaussian density chosen for πobs, which is the main feature of data-consistent inversion.

Fig. 8. The sample of around 1800 values from the updated parameter density. The red dots on the graph correspond to the subsample of 100 values used to produce the new RPP ensemble CAL_1D in section 5c.

b. Comparison with Bayesian inference

In order to illustrate the differences between data-consistent inversion and Bayesian inference that were outlined in section 2, we also estimate the posterior parameter density given by Bayes’ rule:
πpost(θ|d) = πprior(θ) πlik(d|θ) / πevi(d).
For the purpose of comparison, we choose πprior to be equal to πinit. The likelihood πlik is constructed by considering that Gaussian errors are associated with the data: d = G(θ) + ϵ, with ϵ ∼ N(0, σLOG_1D). In this way, if we consider an observation d = ql,ref(+18 h, 100 m), then πlik is identical to πobs. On the other hand, the denominator πevi(d), which is a normalizing constant, differs from the denominator of BJW18’s formula. MCMC is used to generate a sample of the posterior density πpost. Figure 9 represents the posterior sample obtained after 10 000 iterations when d = ql,ref(+18 h, 100 m).
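A minimal random-walk Metropolis sketch of this posterior sampling is given below; the particular MCMC algorithm used here is not detailed above, so this is only one possible implementation. It reuses the forward surrogate defined earlier and assumes a uniform prior on [1, 350] cm−3 × [0.25, 15] with independent Gaussian observation errors of standard deviation σLOG_1D.

```python
import numpy as np

def log_posterior(theta, forward, data, sigma, lo, hi):
    """Unnormalized log posterior: uniform prior times Gaussian likelihood."""
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                                   # outside the prior support
    resid = data - forward(theta[None, :])               # d - G(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(forward, data, sigma, lo, hi, n_iter=10_000, step=(10.0, 0.5), seed=3):
    """Random-walk Metropolis sampler of the posterior parameter density."""
    rng = np.random.default_rng(seed)
    theta = 0.5 * (lo + hi)                              # start at the domain center
    logp = log_posterior(theta, forward, data, sigma, lo, hi)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)             # Gaussian random-walk proposal
        logp_prop = log_posterior(prop, forward, data, sigma, lo, hi)
        if np.log(rng.uniform()) < logp_prop - logp:     # Metropolis acceptance rule
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.array(chain)

# One observation d = ql_ref(+18 h, 100 m); sigma is the LOG_1D standard deviation
lo, hi = np.array([1.0, 0.25]), np.array([350.0, 15.0])
chain = metropolis(forward, data=np.array([1.76e-4]), sigma=5e-5, lo=lo, hi=hi)
```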
Fig. 9. The posterior sample obtained using one observation: d = ql,ref(+18 h, 100 m). The histograms represent (a) N0 and (b) ν values of the posterior sample and (c) the push-forward of the posterior sample through the model.

In this figure, we can see that the average QoI obtained after propagating the posterior sample through the model is quite far from the reference ql,ref(+18 h, 100 m), and that the push-forward distribution does not match the target density πobs. As mentioned in section 2, the accuracy of the posterior estimate can be improved when more data are taken into account. We therefore perform another test with nine observations, taken as the 10th–90th percentiles of a Gaussian distribution centered on ql,ref(+18 h, 100 m) with a standard deviation σLOG_1D. The results, presented in Fig. 10, show that with more observations, the mean QoI of the push-forward of the posterior sample is closer to the reference.

Fig. 10. The posterior sample obtained using nine observations. The histograms represent (a) N0 and (b) ν values of the posterior sample and (c) the push-forward of the posterior sample through the model.

The push-forward distribution now has a bell shape, but its dispersion is lower than that of πobs. These two examples illustrate the fact that Bayesian inference does not ensure that the posterior parameter distribution can be used to reproduce the distribution of observations. Neither of these two posterior distributions could be used to generate an RPP ensemble that corresponds to our criteria: either the ensemble would be biased with regard to the QoI (first example) or its dispersion would be too low (second example).

We acknowledge that our choice of an uninformative uniform prior may not be optimal when using Bayesian inference, and that a more appropriate distribution could have been selected after a “prior predictive check” as proposed by Gelman et al. (2020). This check consists in propagating the prior through the model and verifying that the distribution obtained is compatible with what we know about the QoI. Figure 7 clearly shows that the push-forward of a uniform prior gives a distribution that is far from the Gaussian distribution of the observations. It explains the shape of the histogram representing the push-forward of the posterior sample in the first example above (Fig. 9), which is heavily influenced by the uniform prior. With a more appropriate choice of πprior, we could certainly obtain a push-forward of the posterior that is closer to the distribution of the observations. However, we emphasize that this tuning step of the prior is not necessary with the BJW18 method, as it provides, by construction, a data-consistent updated distribution, which makes it particularly attractive for our problem.

c. New RPP ensemble

From the final sample of parameter values obtained with the data-consistent inversion approach in section 5a, a subsample of 100 values is extracted to produce a new RPP ensemble of AROME-SCM simulations (CAL_1D). The results in terms of cloud liquid water content at +18 h for this ensemble are represented in Fig. 11.

Fig. 11. (a) Cloud liquid water profiles and (b) standard deviation at +18 h in two RPP ensembles of 100 AROME-SCM simulations of the LANFEX case: one using N0 and ν values sampled from lognormal distributions (in red) and one using the updated parameter distribution obtained with data-consistent inversion (in blue). The black line represents the reference AROME-SCM simulation. The dashed lines represent ensemble means, and the shaded areas correspond to the interval between the 10th and 90th percentiles for each ensemble.

The mean profile of the new RPP ensemble CAL_1D (in blue) is much closer to the reference simulation (in black) than the LOG_1D ensemble (in red). Even if the QoI used for solving the inverse problem is the cloud liquid water content at a given time and height, the bias is reduced over most of the profile. The bias in other model variables is also reduced (not shown). Moreover, the dispersions of the two ensembles are of the same order of magnitude (Fig. 11b), which is consistent with the settings of the inverse problem (the standard deviation of πobs has been set according to that observed in the LOG_1D ensemble). These results show that, in the simple framework of single-column simulations of a radiation fog case, it is possible to generate an RPP ensemble that is not biased with regard to a reference simulation by using a calibrated parameter distribution obtained with data-consistent inversion. The next question, addressed in section 6, is whether this approach can be of any use for 3D simulations of a real fog case.

6. Application to a three-dimensional case study

Using the data-consistent inversion approach with a 3D version of the AROME model is a much more challenging task than in the one-dimensional framework. Running the forward model is more computationally demanding, and selecting an appropriate QoI may be more difficult. Consequently, we choose in this study, as a first step, to see whether the updated parameter distribution obtained in the 1D framework can show any benefits in a 3D configuration.

We carry out simulations of two recent days (11–12 November 2022) during which fog was observed over a large part of metropolitan France. The horizontal visibility diagnostic simulated by AROME at 0700 UTC 12 November is presented in Fig. 12a and shows large areas of reduced visibility in northern France and the southwest. This is further illustrated by the cloud liquid water content at 10 m simulated by AROME and shown in Fig. 12b.

Fig. 12. (a) Visibility at the surface and (b) cloud liquid water content at 10 m in the reference AROME simulation at +28 h (0700 UTC). The pink marker on the maps indicates the location at which the profiles of Fig. 15 have been plotted.

Three experiments are performed using the ensemble system AROME-EPS (see section 3a). All the simulations start at 0300 UTC 11 November and cover 51 h. To study only the impact of parameter perturbations on the ensemble, the same initial and boundary conditions are given to all the members and are the same as the ones given to the deterministic model used to produce Fig. 12. The operational SPPT method used to represent model uncertainty is deactivated, as well as surface perturbations, so the only difference between the ensemble members is the value of the parameters N0 and ν. The first experiment (named UNIF_3D in the following) uses N0 and ν values selected independently from two uniform distributions, defined on the domain [10, 350] × [1, 15], as for the UNIF_1D ensemble in section 4. The second experiment (named LOG_3D in the following) uses two lognormal distributions identical to those used in the LOG_1D ensemble in section 4. Finally, the third experiment (named CAL_3D in the following) uses the sample from the updated parameter distribution obtained in the previous section. Each experiment consists of a 50-member ensemble.

As in the previous section, we want to study the bias of the ensemble mean with regard to the reference deterministic AROME simulation. This is shown in Fig. 13 for the cloud liquid water content at 10 m. The maps show, for each model grid point, the difference between the mean of the 50 ensemble members and the reference AROME simulation. For the UNIF_3D experiment, the difference is mainly positive over the entire fog-covered area, which means that the ensemble mean is greater than the reference; this is consistent with the results obtained in the single-column framework (see the profiles in Fig. 5a). Similarly, the results for the LOG_3D experiment are consistent with those shown in section 4: the ensemble mean is lower than the control over almost all of the domain where fog occurs. The last map shows that the difference with the control is significantly reduced in experiment CAL_3D, with both positive and negative areas over the domain.

Fig. 13.

Difference of cloud liquid water content at 10 m at +28 h between the ensemble mean and the reference AROME simulation (see Fig. 12b) for three different experiments: (a) UNIF_3D, (b) LOG_3D, and (c) CAL_3D, where N0 and ν values have been sampled from uniform distributions, lognormal distributions, or the data-consistent updated distribution, respectively. The color scale of the first map differs from that of the other two because of the larger differences in the UNIF_3D experiment.

Figure 14 shows the standard deviation between ensemble members in the three experiments. The order of magnitude is roughly similar for the three experiments, although the spatial distribution of the dispersion differs slightly in the UNIF_3D experiment.

Fig. 14.

Standard deviation of cloud liquid water content at 10 m at +28 h between ensemble members for the three experiments (as in Fig. 13). The same color scale is used for the three maps.

To further illustrate these results, vertical profiles of cloud liquid water content are extracted at the point indicated by the marker in Fig. 12. The ensemble mean and the reference profiles are plotted in Fig. 15a, and the ensemble standard deviation in Fig. 15b. As observed in section 5, the mean profile of the CAL_3D experiment is, on most vertical levels, closer to the reference than those of the UNIF_3D and LOG_3D experiments, although a small positive bias remains at this location. The standard deviation profiles show that the dispersion is of the same order of magnitude for the three experiments, with roughly similar profiles.

Fig. 15.

Vertical profiles of cloud liquid water content at +28 h, extracted at the point indicated by the pink star on the maps in Fig. 12. The graphs show (a) the mean profile of each ensemble, as well as the control, and (b) the standard deviation between members. The green, red, and blue lines are for the UNIF_3D, LOG_3D, and CAL_3D experiment, respectively.

Finally, Fig. 16 shows the temporal evolution of the absolute difference between the ensemble mean and the reference simulation (solid lines) and of the ensemble standard deviation (dashed lines), both spatially averaged over all the points of the domain. Throughout the night of 11–12 November, the ensemble bias with regard to the reference is reduced in the LOG_3D experiment compared to the UNIF_3D experiment and is even lower in the CAL_3D experiment, while the standard deviation remains roughly similar for the three experiments. We can therefore conclude that, for this case study, sampling N0 and ν from the data-consistent updated parameter distribution defined in the 1D framework (section 5a) reduces the bias of a 3D RPP ensemble without decreasing the dispersion of its members.
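
As a concrete illustration of these diagnostics, the short sketch below computes the two curves of Fig. 16 from arrays of member and control fields; the array names and shapes are assumptions made for the example, not the actual post-processing code.

```python
import numpy as np

def ensemble_diagnostics(members, control):
    """members: (n_members, n_times, ny, nx); control: (n_times, ny, nx)."""
    ens_mean = members.mean(axis=0)
    # domain-averaged absolute difference between ensemble mean and control (solid lines)
    abs_diff = np.abs(ens_mean - control).mean(axis=(1, 2))
    # domain-averaged standard deviation between members (dashed lines)
    spread = members.std(axis=0, ddof=1).mean(axis=(1, 2))
    return abs_diff, spread
```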

Fig. 16.

Time evolution of the absolute difference of cloud liquid water content at 10 m between the ensemble mean and the control simulation (solid lines) and the standard deviation between ensemble members (dashed lines), both averaged over the simulation domain. The UNIF_3D experiment is represented in green, the LOG_3D experiment is in red, and the CAL_3D experiment is in blue.

7. Discussion and conclusions

Parameter perturbations are an attractive method to represent model uncertainty, as they allow one to control precisely which physical processes are perturbed, through the choice of the parameters and of their range of variation. However, using parameter perturbations in an ensemble may change the climatology of the ensemble mean, depending on the distributions from which the parameter values are sampled. In this article, we investigated the impact of the choice of the parameter distributions on ensembles and the benefit that inverse problem theory can bring to the definition of these distributions.

This study is primarily a proof of concept and therefore used a simplified framework. Two parameters of the one-moment microphysics scheme ICE3 were perturbed: the total number concentration N0 and the shape parameter ν, both involved in the characterization of the cloud liquid water droplet size spectra. A large part of the results was produced with a single-column version of the model AROME (AROME-SCM) and a case study of radiation fog, derived from observations made during the LANFEX campaign (Price et al. 2018). Several ensembles of AROME-SCM simulations of this case were constructed by sampling the values of N0 and ν from various distributions. The main observation from these ensembles is that, when the parameter values are sampled from uniform or lognormal distributions, the ensemble mean is biased compared to the reference (unperturbed) AROME simulation. For the uniform distributions, this can be explained by intervals that are off-centered with respect to the parameter default values, but the nonlinear impact of the parameters on the model outputs must also play a role, since centered lognormal distributions also result in biased ensembles. Results then show how using the updated distribution obtained with the data-consistent inversion approach of BJW18 can significantly reduce this bias.
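
The role of nonlinearity can be illustrated with a toy example: even a parameter distribution centered (here in median) on the default value yields an ensemble mean that differs from the unperturbed output when the response is nonlinear, since E[f(θ)] differs from f(E[θ]). The quadratic response below is purely illustrative and is not meant to represent AROME.

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda theta: theta**2                 # purely illustrative nonlinear response
theta_default = 1.0
theta = rng.lognormal(mean=np.log(theta_default), sigma=0.5, size=100_000)

print(np.median(theta))    # ~1.0: the sampling distribution is centered on the default
print(f(theta_default))    # 1.0: output of the unperturbed "model"
print(f(theta).mean())     # ~1.65: the ensemble mean is biased high
```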

The data-consistent inversion framework was used because it allows us to control the parameter distribution according to the desired distribution of the model output. More precisely, the method requires defining a distribution of one or several observed quantities (the QoI) and ensures that the updated parameter distribution, when propagated through the model, is consistent with this QoI distribution. By setting this target QoI distribution to be centered on a reference (the QoI given by the unperturbed model), with a given spread, it is possible to obtain a parameter distribution that can be used to construct an unbiased ensemble. In this study, Butler et al.'s method was successfully applied in the simplified 1D framework to determine a joint distribution of N0 and ν and construct an unbiased ensemble of simulations of the LANFEX case.
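
As an illustration of the mechanics of this approach, the sketch below implements the BJW18 update, which reweights the initial density by the ratio πobs(Q)/πpred(Q), using a KDE estimate of the predicted density and rejection sampling for a scalar QoI. The Gaussian shape chosen here for the target density and the function names are assumptions made for the example, not the authors' code; `q_of` stands for any map from (N0, ν) samples to the QoI, for instance the surrogate model discussed below.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def data_consistent_update(init_samples, q_of, qoi_ref, qoi_std, rng):
    """Rejection sampling from the updated density (scalar QoI).

    init_samples: (n, 2) array of (N0, nu) pairs drawn from the initial density.
    q_of: callable mapping the samples to the scalar QoI.
    """
    q = q_of(init_samples)                      # push-forward of the initial samples
    pi_pred = gaussian_kde(q)                   # KDE estimate of the predicted density
    pi_obs = norm(loc=qoi_ref, scale=qoi_std)   # target QoI density, centered on the reference
    ratio = pi_obs.pdf(q) / pi_pred(q)          # update weights pi_obs(Q)/pi_pred(Q)
    accept = rng.uniform(size=len(q)) < ratio / ratio.max()
    return init_samples[accept]                 # approximate sample from the updated density
```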

An additional goal of this study was to investigate whether the results obtained in the 1D framework could be used with a 3D configuration of the model, closer to the operational context. To this end, several ensembles of simulations of a real case were produced with the ensemble system AROME-EPS. To isolate the impact of the parameter perturbations, initial and boundary perturbations were deactivated, as well as SPPT, so that the N0 and ν values were the only difference between the ensemble members. The results obtained for a two-day period in November 2022, when fog occurred over large parts of metropolitan France, are consistent with those obtained in the single-column framework: a large positive bias of the ensemble mean with regard to a reference deterministic AROME simulation is observed when N0 and ν are sampled from uniform distributions, whereas a negative bias is obtained with lognormal distributions. Using the data-consistent updated distribution defined in the 1D framework significantly reduces this bias without decreasing the ensemble spread. This suggests that (i) the parameter distribution is not excessively sensitive to the meteorological conditions, since the distribution determined with the LANFEX case is still relevant for another fog situation and another region, and (ii) although determined in a 1D framework, the updated parameter distribution can bring a benefit in a 3D ensemble.

This last result is important because the use of data-consistent inversion, like other approaches to solving inverse problems, may be limited by computational cost. The cost of solving an inverse problem comes from the number of forward model evaluations required to compute the posterior distribution. In the data-consistent inversion approach, BJW18 emphasize that the quality of the updated density is heavily influenced by the estimation of the predicted density. In the first numerical example presented in their paper, where the dimensions of the problem are the same as in our study (two-dimensional parameter space, one-dimensional QoI space), 10 000 samples are generated from the initial density in order to estimate the predicted density, which means 10 000 forward model evaluations. In our case, these were AROME-SCM simulations. To limit the computational cost, we therefore used a surrogate model: 2640 AROME-SCM simulations of the LANFEX case were performed, with N0 and ν values spanning their respective intervals, and bilinear interpolation was used to estimate the QoI associated with any (N0, ν) pair inside these intervals. Producing the 2640 AROME-SCM simulations took only a couple of hours on a regular personal computer. However, even with this simplification, the computational cost would have been prohibitive with a 3D version of AROME. The fact that parameter distributions estimated within the single-column context may still be relevant in 3D is therefore an interesting result.
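
The surrogate itself can be written in a few lines. The sketch below assumes that the 2640 single-column runs were laid out on a regular 60 × 44 grid of (N0, ν) values (the actual grid layout is not specified here) and that their QoI values are stored in a 2D array; `qoi_table` and the grid resolutions are therefore illustrative.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

N0_grid = np.linspace(10.0, 350.0, 60)   # illustrative: 60 x 44 nodes = 2640 AROME-SCM runs
nu_grid = np.linspace(1.0, 15.0, 44)
qoi_table = np.load("qoi_table.npy")     # hypothetical file, shape (60, 44): one QoI per run

surrogate = RegularGridInterpolator((N0_grid, nu_grid), qoi_table, method="linear")

def q_of(samples):
    """samples: (n, 2) array with columns (N0, nu) inside the sampled intervals."""
    return surrogate(samples)            # bilinear interpolation of the QoI
```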

Our study is, however, limited in several respects. First, we only considered one type of meteorological situation (fog), both in the 1D and 3D contexts. A first step in extending this work would therefore be to check whether the distribution obtained with data-consistent inversion in this study is relevant (i.e., could be used to produce unbiased ensembles) in other meteorological situations. For very different meteorological phenomena (such as thunderstorms), the results are likely to differ, so the inverse problem should probably be solved for a set of cases representing a variety of weather situations. In a recent study, van Lier-Walqui et al. (2020) used, for example, 40 synthetic cases to constrain the parameters of their microphysics scheme. Furthermore, we focused on only one model variable, the cloud liquid water content. This is sufficient to reduce the biases in the other model variables in the single-column simulations, but more thorough investigations should be done in 3D, and more variables might need to be included. These two aspects (more test cases and more model variables) are likely to increase the dimensions of the QoI space.

Another clear limitation is the small number of parameters considered in this study. Only two microphysical parameters, N0 and ν, known to be uncertain and to have an impact on fog simulation, were perturbed to construct the ensembles. This is of course insufficient to represent model uncertainty in the context of ensemble prediction: operational ensemble systems that use parameter perturbations typically perturb several dozen parameters [e.g., 27 in Lang et al. (2021)]. This choice was made to keep the inverse problem easy to solve, but the next step would be to extend the approach to more parameters.

Increasing the dimensions of the QoI space and of the parameter space raises a number of challenges for solving the inverse problem. Efficient sampling of the parameter space becomes important and may be achieved with methods such as Markov chain Monte Carlo (MCMC). However, unlike the rejection method, these methods produce dependent samples whose distribution converges to the target distribution, so a number of iterations are required before the estimated distribution is considered acceptable. For example, Posselt (2016) estimates eight microphysical parameters using MCMC and finds that an acceptable level of convergence is achieved for chains of around 10 000 iterations; in total, over one million model runs are performed in that study, which forces the use of a coarse resolution for the cloud-resolving model. A more moderate estimate can, however, be found in Tuppi et al. (2020), where 50 iterations with a 20-member ensemble are sufficient to obtain a good level of convergence for the estimation of two convection parameters in the ECMWF IFS model. In the data-consistent inversion framework, BJW18 show with a numerical example that their algorithm based on KDE and rejection sampling can still be used with a reasonable acceptance rate for a 100-dimensional parameter space. However, they subsequently show how increasing the dimensions of the QoI space affects the convergence rate of the error in the updated density and suggest that alternative methods to KDE could be used. The possibility of reducing the dimensions of the QoI space is explored by Mattis et al. (2022) and Pilosov et al. (2023) using principal component analysis (PCA) as well as clustering methods. This approach is particularly interesting if the inverse problem is to be solved using spatial and temporal outputs of a 3D NWP model. Alternatives to rejection sampling, including MCMC and generative adversarial networks (GANs), have also been proposed recently by Rumbell et al. (2023) to deal with high-dimensional problems. Finally, in all cases, the use of emulators is an interesting strategy to mitigate the cost of numerous forward model evaluations. Various kinds of emulators, such as Gaussian processes, neural networks, or random forests, have been used in the literature in the context of sensitivity analysis, uncertainty quantification, and optimization (Johnson et al. 2015; Couvreux et al. 2021; Eidhammer et al. 2024). In this study, the simple surrogate model consisting of bilinear interpolations and the naive sampling of the parameter space used to construct the dataset of AROME simulations may not be a viable option with more parameters: the number of simulations required to span the parameter intervals grows exponentially, as M^N (with N the number of parameters and M the number of values tested for each of them). More elaborate emulators, together with efficient methods for sampling the parameter space such as Latin hypercube sampling (Mckay et al. 2000), should be considered instead.
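
As an example of one such sampling strategy, Latin hypercube samples for a larger set of parameters can be generated directly with SciPy; the number of parameters and the bounds below are arbitrary placeholders.

```python
import numpy as np
from scipy.stats import qmc

n_params, n_samples = 10, 500                     # placeholder dimensions
sampler = qmc.LatinHypercube(d=n_params, seed=0)
unit_samples = sampler.random(n=n_samples)        # stratified samples in [0, 1]^d

lower = np.full(n_params, 0.5)                    # placeholder parameter bounds
upper = np.full(n_params, 2.0)
param_samples = qmc.scale(unit_samples, lower, upper)   # rescale to the parameter intervals
```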

We finally emphasize that, in this article, ensemble bias has been defined with regard to a reference, namely the deterministic model, under the assumption that the ensemble mean should remain close to it. This choice is questionable: changing the ensemble mean climatology from that of the corresponding deterministic model because of stochastic perturbations may or may not be a problem. It can result in an increased ensemble bias with regard to real observations (Frogner et al. 2022), but stochastic perturbations may, on the contrary, have an average impact that improves on the deterministic model (Sakradzija and Klocke 2018). It therefore remains an open question whether one should ensure, when introducing stochastic perturbations, that the average behavior of the ensemble stays close to the deterministic model. In this study, the deterministic model was taken as the reference, since deterministic NWP models are continually improved and tuned by meteorological centers to give the best possible performance; ensemble prediction was therefore considered as a way to produce alternative scenarios, not as a way to correct the deterministic model. Our objective was to adapt the stochastic perturbations of model parameters so as to produce ensembles that remain centered on the deterministic model, not to improve the realism of the simulations. However, the inverse problem approach presented in this study could be used with real observations as training data instead of deterministic model outputs. Field campaigns dedicated to particular weather situations, such as LANFEX or, more recently, SoFog3D (Burnet et al. 2020), could be an interesting resource to calibrate parameter perturbations with real observations, although care must be taken regarding the representativeness of the observations, which can be an issue for fine-scale phenomena such as fog (Lestringant and Bergot 2021). In this context, intercomparison studies are also a valuable tool; in addition, they provide information on the dispersion between different models and can therefore help in designing model perturbations for ensemble prediction.

Acknowledgments.

The authors thank Grégory Roux, Carole Labadie, and Laurent Descamps for their help with the AROME 3D simulations and Ian Boutle who provided the initial profiles and surface temperature data for running the LANFEX simulation. We also thank Dr. Pirkka Ollinaho and two anonymous reviewers for their constructive and detailed comments which helped to improve this manuscript. This study was supported by Météo-France, the University of Toulouse, and CNRS/LEFE/INSU project IPSIPE.

Data availability statement.

Given the size of the data files used in this study (especially the 3D experiment data), they have not been deposited in an online repository. They are, however, available upon request.

REFERENCES

  • Arribas, A., K. B. Robertson, and K. R. Mylne, 2005: Test of a poor man’s ensemble prediction system for short-range probability forecasting. Mon. Wea. Rev., 133, 1825–1839, https://doi.org/10.1175/MWR2911.1.

  • Aster, R. C., B. Borchers, and C. H. Thurber, 2018: Parameter Estimation and Inverse Problems. Elsevier, 404 pp., https://www.sciencedirect.com/book/9780128046517/parameter-estimation-and-inverse-problems.

  • Baker, L. H., A. C. Rudd, S. Migliorini, and R. N. Bannister, 2014: Representation of model error in a convective-scale ensemble prediction system. Nonlinear Processes Geophys., 21, 19–39, https://doi.org/10.5194/npg-21-19-2014.

  • Boutle, I., J. D. Price, I. Kudzotsa, H. Kokkola, and S. Romakkaniemi, 2018: Aerosol-fog interaction and the transition to well-mixed radiation fog. Atmos. Chem. Phys., 18, 7827–7840, https://doi.org/10.5194/acp-18-7827-2018.

  • Boutle, I., and Coauthors, 2022: Demistify: A large-eddy simulation (LES) and single-column model (SCM) intercomparison of radiation fog. Atmos. Chem. Phys., 22, 319–333, https://doi.org/10.5194/acp-22-319-2022.

  • Bouttier, F., and L. Raynaud, 2018: Clustering and selection of boundary conditions for limited-area ensemble prediction. Quart. J. Roy. Meteor. Soc., 144, 2381–2391, https://doi.org/10.1002/qj.3304.

  • Bouttier, F., B. Vié, O. Nuissier, and L. Raynaud, 2012: Impact of stochastic physics in a convection-permitting ensemble. Mon. Wea. Rev., 140, 3706–3721, https://doi.org/10.1175/MWR-D-12-00031.1.

  • Bouttier, F., L. Raynaud, O. Nuissier, and B. Ménétrier, 2016: Sensitivity of the AROME ensemble to initial and surface perturbations during HyMeX. Quart. J. Roy. Meteor. Soc., 142, 390–403, https://doi.org/10.1002/qj.2622.

  • Bouttier, F., A. Fleury, T. Bergot, and S. Riette, 2022: A single-column comparison of model-error representations for ensemble prediction. Bound.-Layer Meteor., 183, 167–197, https://doi.org/10.1007/s10546-021-00682-6.

  • Bowler, N. E., A. Arribas, K. R. Mylne, K. B. Robertson, and S. E. Beare, 2008: The MOGREPS short-range ensemble prediction system. Quart. J. Roy. Meteor. Soc., 134, 703–722, https://doi.org/10.1002/qj.234.

  • Brousseau, P., L. Berre, F. Bouttier, and G. Desroziers, 2011: Background-error covariances for a convective-scale data-assimilation system: AROME–France 3D-Var. Quart. J. Roy. Meteor. Soc., 137, 409–422, https://doi.org/10.1002/qj.750.

  • Buizza, R., M. Miller, and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 125, 2887–2908, https://doi.org/10.1002/qj.49712556006.

  • Burnet, F., and Coauthors, 2020: The SOuth West FOGs 3D experiment for processes study (SOFOG3D) project. EGU General Assembly 2020, Online, European Geosciences Union, Abstract EGU2020-17836, https://doi.org/10.5194/egusphere-egu2020-17836.

  • Butler, T., J. Jakeman, and T. Wildey, 2018: Combining push-forward measures and Bayes’ rule to construct consistent solutions to stochastic inverse problems. SIAM J. Sci. Comput., 40, A984–A1011, https://doi.org/10.1137/16M1087229.

  • Butler, T., T. Wildey, and T. Y. Yen, 2020: Data-consistent inversion for stochastic input-to-output maps. Inverse Prob., 36, 085015, https://doi.org/10.1088/1361-6420/ab8f83.

  • Butler, T., T. Wildey, and W. Zhang, 2022: LP convergence of approximate maps and probability densities for forward and inverse problems in uncertainty quantification. Int. J. Uncertainty Quantif., 12, 65–92.

  • Christensen, H. M., I. M. Moroz, and T. N. Palmer, 2015: Stochastic and perturbed parameter representations of model uncertainty in convection parameterization. J. Atmos. Sci., 72, 2525–2544, https://doi.org/10.1175/JAS-D-14-0250.1.

  • Couvreux, F., and Coauthors, 2021: Process-based climate model development harnessing machine learning: I. A calibration tool for parameterization improvement. J. Adv. Model. Earth Syst., 13, e2020MS002217, https://doi.org/10.1029/2020MS002217.

  • Descamps, L., C. Labadie, A. Joly, E. Bazile, P. Arbogast, and P. Cébron, 2015: PEARP, the Météo-France short-range ensemble prediction system. Quart. J. Roy. Meteor. Soc., 141, 1671–1685, https://doi.org/10.1002/qj.2469.

  • Eidhammer, T., and Coauthors, 2024: An extensible Perturbed Parameter Ensemble (PPE) for the Community Atmosphere Model version 6. EGUsphere, https://doi.org/10.5194/egusphere-2023-2165.

  • Fresnay, S., A. Hally, C. Garnaud, E. Richard, and D. Lambert, 2012: Heavy precipitation events in the Mediterranean: Sensitivity to cloud physics parameterisation uncertainties. Nat. Hazards Earth Syst. Sci., 12, 2671–2688, https://doi.org/10.5194/nhess-12-2671-2012.

  • Frogner, I.-L., U. Andrae, P. Ollinaho, A. Hally, K. Hämäläinen, J. Kauhanen, K.-I. Ivarsson, and D. Yazgi, 2022: Model uncertainty representation in a convection-permitting ensemble–SPP and SPPT in HarmonEPS. Mon. Wea. Rev., 150, 775–795, https://doi.org/10.1175/MWR-D-21-0099.1.

  • García-Moya, J.-A., A. Callado, P. Escribà, C. Santos, D. Santos-Muñoz, and J. Simarro, 2011: Predictability of short-range forecasting: A multimodel approach. Tellus, 63A, 550–563, https://doi.org/10.1111/j.1600-0870.2010.00506.x.

  • Gebhardt, C., S. E. Theis, M. Paulat, and Z. B. Bouallègue, 2011: Uncertainties in COSMO-DE precipitation forecasts introduced by model perturbations and variation of lateral boundaries. Atmos. Res., 100, 168–177, https://doi.org/10.1016/j.atmosres.2010.12.008.

  • Gelman, A., and Coauthors, 2020: Bayesian workflow. arXiv, 2011.01808v1, https://doi.org/10.48550/arXiv.2011.01808.

  • Golaz, J.-C., V. E. Larson, J. A. Hansen, D. P. Schanen, and B. M. Griffin, 2007: Elucidating model inadequacies in a cloud parameterization by use of an ensemble-based calibration framework. Mon. Wea. Rev., 135, 4077–4096, https://doi.org/10.1175/2007MWR2008.1.

  • Houtekamer, P. L., L. Lefaivre, J. Derome, H. Ritchie, and H. L. Mitchell, 1996: A system simulation approach to ensemble prediction. Mon. Wea. Rev., 124, 1225–1242, https://doi.org/10.1175/1520-0493(1996)124<1225:ASSATE>2.0.CO;2.

  • Igel, A. L., and S. C. van den Heever, 2017: The importance of the shape of cloud droplet size distributions in shallow cumulus clouds. Part I: Bin microphysics simulations. J. Atmos. Sci., 74, 249–258, https://doi.org/10.1175/JAS-D-15-0382.1.

  • Jackson, C. S., M. K. Sen, G. Huerta, Y. Deng, and K. P. Bowman, 2008: Error reduction and convergence in climate prediction. J. Climate, 21, 6698–6709, https://doi.org/10.1175/2008JCLI2112.1.

  • Jankov, I., and Coauthors, 2017: A performance comparison between multiphysics and stochastic approaches within a North American RAP ensemble. Mon. Wea. Rev., 145, 1161–1179, https://doi.org/10.1175/MWR-D-16-0160.1.

  • Jankov, I., J. Beck, J. Wolff, M. Harrold, J. B. Olson, T. Smirnova, C. Alexander, and J. Berner, 2019: Stochastically perturbed parameterizations in an HRRR-based ensemble. Mon. Wea. Rev., 147, 153–173, https://doi.org/10.1175/MWR-D-18-0092.1.

  • Järvinen, H., P. Räisänen, M. Laine, J. Tamminen, A. Ilin, E. Oja, A. Solonen, and H. Haario, 2010: Estimation of ECHAM5 climate model closure parameters with adaptive MCMC. Atmos. Chem. Phys., 10, 9993–10 002, https://doi.org/10.5194/acp-10-9993-2010.

  • Järvinen, H., M. Laine, A. Solonen, and H. Haario, 2012: Ensemble prediction and parameter estimation system: The concept. Quart. J. Roy. Meteor. Soc., 138, 281–288, https://doi.org/10.1002/qj.923.

  • Johnson, J. S., Z. Cui, L. A. Lee, J. P. Gosling, A. M. Blyth, and K. S. Carslaw, 2015: Evaluating uncertainty in convective cloud microphysics using statistical emulation. J. Adv. Model. Earth Syst., 7, 162–187, https://doi.org/10.1002/2014MS000383.

  • Kalina, E. A., I. Jankov, T. Alcott, J. Olson, J. Beck, J. Berner, D. Dowell, and C. Alexander, 2021: A progress report on the development of the High-Resolution Rapid Refresh ensemble. Wea. Forecasting, 36, 791–804, https://doi.org/10.1175/WAF-D-20-0098.1.

  • Laine, M., A. Solonen, H. Haario, and H. Järvinen, 2012: Ensemble prediction and parameter estimation system: The method. Quart. J. Roy. Meteor. Soc., 138, 289–297, https://doi.org/10.1002/qj.922.

  • Lang, S. T. K., S.-J. Lock, M. Leutbecher, P. Bechtold, and R. M. Forbes, 2021: Revision of the stochastically perturbed parametrisations model uncertainty scheme in the integrated forecasting system. Quart. J. Roy. Meteor. Soc., 147, 1364–1381, https://doi.org/10.1002/qj.3978.

  • Lee, L. A., K. S. Carslaw, K. J. Pringle, G. W. Mann, and D. V. Spracklen, 2011: Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters. Atmos. Chem. Phys., 11, 12 253–12 273, https://doi.org/10.5194/acp-11-12253-2011.

  • Lestringant, R., and T. Bergot, 2021: Analysis of small-scale spatial variability of fog at Paris Charles de Gaulle airport. Atmosphere, 12, 1406, https://doi.org/10.3390/atmos12111406.

  • Leutbecher, M., and Coauthors, 2017: Stochastic representations of model uncertainties at ECMWF: State of the art and future vision. Quart. J. Roy. Meteor. Soc., 143, 2315–2339, https://doi.org/10.1002/qj.3094.

  • Lock, S.-J., S. T. K. Lang, M. Leutbecher, R. J. Hogan, and F. Vitart, 2019: Treatment of model uncertainty from radiation by the Stochastically Perturbed Parametrization Tendencies (SPPT) scheme and associated revisions in the ECMWF ensembles. Quart. J. Roy. Meteor. Soc., 145, 75–89, https://doi.org/10.1002/qj.3570.

  • Malardel, S., 2008: MUSC: (Modèle Unifié, Simple Colonne) for Arpege-Aladin-Arome-Alaro-Hirlam-(IFS) (CY31T1 version). Tech. Rep., 17 pp., https://www.umr-cnrm.fr/gmapdoc/IMG/pdf_DOC_1D_MODEL.pdf.

  • Mattis, S. A., K. R. Steffen, T. Butler, C. N. Dawson, and D. Estep, 2022: Learning quantities of interest from dynamical systems for observation-consistent inversion. Comput. Methods Appl. Mech. Eng., 388, 114230, https://doi.org/10.1016/j.cma.2021.114230.

  • McCabe, A., R. Swinbank, W. Tennant, and A. Lock, 2016: Representing model uncertainty in the Met Office convection-permitting ensemble prediction system and its impact on fog forecasting. Quart. J. Roy. Meteor. Soc., 142, 2897–2910, https://doi.org/10.1002/qj.2876.

  • Mckay, M. D., R. J. Beckman, and W. J. Conover, 2000: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics, 42, 55–61, https://doi.org/10.1080/00401706.2000.10485979.

  • McTaggart-Cowan, R., and Coauthors, 2022a: Using stochastically perturbed parameterizations to represent model uncertainty. Part I: Implementation and parameter sensitivity. Mon. Wea. Rev., 150, 2829–2858, https://doi.org/10.1175/MWR-D-21-0315.1.

  • McTaggart-Cowan, R., L. Separovic, M. Charron, X. Deng, N. Gagnon, P. L. Houtekamer, and A. Patoine, 2022b: Using stochastically perturbed parameterizations to represent model uncertainty. Part II: Comparison with existing techniques in an operational ensemble. Mon. Wea. Rev., 150, 2859–2882, https://doi.org/10.1175/MWR-D-21-0316.1.

  • Miles, N. L., J. Verlinde, and E. E. Clothiaux, 2000: Cloud droplet size distributions in low-level stratiform clouds. J. Atmos. Sci., 57, 295–311, https://doi.org/10.1175/1520-0469(2000)057<0295:CDSDIL>2.0.CO;2.

  • Mosegaard, K., and A. Tarantola, 2002: Probabilistic approach to inverse problems. International Handbook of Earthquake and Engineering Seismology, Vol. 1, Academic Press, 237–265, https://www.gfy.ku.dk/∼klaus/papers/B9-RevisedChap16.pdf.

  • Ollinaho, P., P. Bechtold, M. Leutbecher, M. Laine, A. Solonen, H. Haario, and H. Järvinen, 2013a: Parameter variations in prediction skill optimization at ECMWF. Nonlinear Processes Geophys., 20, 1001–1010, https://doi.org/10.5194/npg-20-1001-2013.

  • Ollinaho, P., M. Laine, A. Solonen, H. Haario, and H. Järvinen, 2013b: NWP model forecast skill optimization via closure parameter variations. Quart. J. Roy. Meteor. Soc., 139, 1520–1532, https://doi.org/10.1002/qj.2044.

  • Ollinaho, P., and Coauthors, 2017: Towards process-level representation of model uncertainties: Stochastically perturbed parametrizations in the ECMWF ensemble. Quart. J. Roy. Meteor. Soc., 143, 408–422, https://doi.org/10.1002/qj.2931.

  • Osorio, S. C., D. Martín Pérez, K.-I. Ivarsson, K. P. Nielsen, W. C. de Rooy, E. Gleeson, and E. McAufield, 2022: Impact of the microphysics in HARMONIE-AROME on fog. Atmosphere, 13, 2127, https://doi.org/10.3390/atmos13122127.

  • Palmer, T. N., R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G. J. Shutts, M. Steinheimer, and A. Weisheimer, 2009: Stochastic parametrization and model uncertainty. ECMWF Tech. Memo. 598, 44 pp., https://www.ecmwf.int/en/elibrary/75936-stochastic-parametrization-and-model-uncertainty.

  • Pilosov, M., C. del Castillo-Negrete, T. Y. Yen, T. Butler, and C. Dawson, 2023: Parameter estimation with maximal updated densities. Comput. Methods Appl. Mech. Eng., 407, 115906, https://doi.org/10.1016/j.cma.2023.115906.

  • Pinty, J.-P., and P. Jabouille, 1998: A mixed-phase cloud parameterization for use in mesoscale non-hydrostatic model: Simulations of a squall line and of orographic precipitations. Proc. Conf. on Cloud Physics, Everett, WA, Amer. Meteor. Soc., 217–220, http://mesonh.aero.obs-mip.fr/mesonh/dir_publication/pinty_jabouille_ams_ccp1998.pdf.

  • Poku, C., A. N. Ross, A. M. Blyth, A. A. Hill, and J. D. Price, 2019: How important are aerosol–fog interactions for the successful modelling of nocturnal radiation fog? Weather, 74, 237–243, https://doi.org/10.1002/wea.3503.

  • Poole, D., and A. E. Raftery, 2000: Inference for deterministic simulation models: The Bayesian melding approach. J. Amer. Stat. Assoc., 95, 1244–1255, https://doi.org/10.1080/01621459.2000.10474324.

  • Posselt, D. J., 2016: A Bayesian examination of deep convective squall-line sensitivity to changes in cloud microphysical parameters. J. Atmos. Sci., 73, 637–665, https://doi.org/10.1175/JAS-D-15-0159.1.

  • Posselt, D. J., and T. Vukicevic, 2010: Robust characterization of model physics uncertainty for simulations of deep moist convection. Mon. Wea. Rev., 138, 1513–1535, https://doi.org/10.1175/2009MWR3094.1.

  • Price, J. D., and Coauthors, 2018: LANFEX: A field and modeling study to improve our understanding and forecasting of radiation fog. Bull. Amer. Meteor. Soc., 99, 2061–2077, https://doi.org/10.1175/BAMS-D-16-0299.1.

  • Qian, Y., and Coauthors, 2018: Parametric sensitivity and uncertainty quantification in the version 1 of E3SM atmosphere model based on short perturbed parameter ensemble simulations. J. Geophys. Res. Atmos., 123, 13 046–13 073, https://doi.org/10.1029/2018JD028927.

  • Roeckner, E., and Coauthors, 2003: The atmospheric general circulation model ECHAM5, Part I: Model description. Max-Planck-Institut für Meteorologie Tech. Rep. 349, 140 pp., https://pure.mpg.de/rest/items/item_995269_2/component/file_995268/content.

  • Rumbell, T., J. Parikh, J. Kozloski, and V. Gurev, 2023: Novel and flexible parameter estimation methods for data-consistent inversion in mechanistic modelling. Roy. Soc. Open Sci., 10, 230668, https://doi.org/10.1098/rsos.230668.

  • Sakradzija, M., and D. Klocke, 2018: Physically constrained stochastic shallow convection in realistic kilometer-scale simulations. J. Adv. Model. Earth Syst., 10, 2755–2776, https://doi.org/10.1029/2018MS001358.

  • Seity, Y., P. Brousseau, S. Malardel, G. Hello, P. Bénard, F. Bouttier, C. Lac, and V. Masson, 2011: The AROME-France convective-scale operational model. Mon. Wea. Rev., 139, 976–991, https://doi.org/10.1175/2010MWR3425.1.

  • Shiogama, H., M. Watanabe, T. Ogura, T. Yokohata, and M. Kimoto, 2014: Multi-parameter multi-physics ensemble (MPMPE): A new approach exploring the uncertainties of climate sensitivity. Atmos. Sci. Lett., 15, 97–102, https://doi.org/10.1002/asl2.472.

  • Smith, D. K. E., I. A. Renfrew, J. D. Price, and S. R. Dorling, 2018: Numerical modelling of the evolution of the boundary layer during a radiation fog event. Weather, 73, 310–316, https://doi.org/10.1002/wea.3305.

  • Solonen, A., P. Ollinaho, M. Laine, H. Haario, J. Tamminen, and H. Järvinen, 2012: Efficient MCMC for climate model parameter estimation: Parallel adaptive chains and early rejection. Bayesian Anal., 7, 715–736, https://doi.org/10.1214/12-BA724.

  • Stensrud, D. J., J.-W. Bao, and T. T. Warner, 2000: Using initial condition and model physics perturbations in short-range ensemble simulations of mesoscale convective systems. Mon. Wea. Rev., 128, 2077–2107, https://doi.org/10.1175/1520-0493(2000)128<2077:UICAMP>2.0.CO;2.

  • Termonia, P., and Coauthors, 2018: The ALADIN system and its canonical model configurations AROME CY41T1 and ALARO CY40T1. Geosci. Model Dev., 11, 257–281, https://doi.org/10.5194/gmd-11-257-2018.

  • Thompson, G., J. Berner, M. Frediani, J. A. Otkin, and S. M. Griffin, 2021: A stochastic parameter perturbation method to represent uncertainty in a microphysics scheme. Mon. Wea. Rev., 149, 1481–1497, https://doi.org/10.1175/MWR-D-20-0077.1.

  • Tran, A., and T. Wildey, 2021: Solving stochastic inverse problems for property–structure linkages using data-consistent inversion and machine learning. JOM, 73, 72–89, https://doi.org/10.1007/s11837-020-04432-w.

  • Tuppi, L., P. Ollinaho, M. Ekblom, V. Shemyakin, and H. Järvinen, 2020: Necessary conditions for algorithmic tuning of weather prediction models using OpenIFS as an example. Geosci. Model Dev., 13, 5799–5812, https://doi.org/10.5194/gmd-13-5799-2020.

  • Tuppi, L., M. Ekblom, P. Ollinaho, and H. Järvinen, 2023: Simultaneous optimization of 20 key parameters of the Integrated Forecasting System of ECMWF using OpenIFS. Part I: Effect on deterministic forecasts. Mon. Wea. Rev., 151, 1325–1346, https://doi.org/10.1175/MWR-D-22-0209.1.

  • van der Vaart, A. W., 1998: Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, 443 pp.

  • van Lier-Walqui, M., T. Vukicevic, and D. J. Posselt, 2012: Quantification of cloud microphysical parameterization uncertainty using radar reflectivity. Mon. Wea. Rev., 140, 3442–3466, https://doi.org/10.1175/MWR-D-11-00216.1.

  • van Lier-Walqui, M., T. Vukicevic, and D. J. Posselt, 2014: Linearization of microphysical parameterization uncertainty using multiplicative process perturbation parameters. Mon. Wea. Rev., 142, 401–413, https://doi.org/10.1175/MWR-D-13-00076.1.

  • van Lier-Walqui, M., H. Morrison, M. R. Kumjian, K. J. Reimel, O. P. Prat, S. Lunderman, and M. Morzfeld, 2020: A Bayesian approach for statistical–physical bulk parameterization of rain microphysics. Part II: Idealized Markov chain Monte Carlo experiments. J. Atmos. Sci., 77, 1043–1064, https://doi.org/10.1175/JAS-D-19-0071.1.

  • Wimmer, M., 2021: Représentation des erreurs de modélisation dans le système de prévision d’ensemble régional PEARO. Ph.D. thesis, Université Paul Sabatier-Toulouse III, 293 pp., https://tel.archives-ouvertes.fr/tel-03616084/document.

  • Wimmer, M., L. Raynaud, L. Descamps, L. Berre, and Y. Seity, 2022: Sensitivity analysis of the convective-scale AROME model to physical and dynamical parameters. Quart. J. Roy. Meteor. Soc., 148, 920–942, https://doi.org/10.1002/qj.4239.

  • Yang, B., and Coauthors, 2013: Uncertainty quantification and parameter tuning in the CAM5 Zhang-McFarlane convection scheme and impact of improved convection on the global circulation and climate. J. Geophys. Res. Atmos., 118, 395–415, https://doi.org/10.1029/2012JD018213.
    • Export Citation
  • Stensrud, D. J., J.-W. Bao, and T. T. Warner, 2000: Using initial condition and model physics perturbations in short-range ensemble simulations of mesoscale convective systems. Mon. Wea. Rev., 128, 20772107, https://doi.org/10.1175/1520-0493(2000)128<2077:UICAMP>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Termonia, P., and Coauthors, 2018: The ALADIN system and its canonical model configurations AROME CY41T1 and ALARO CY40T1. Geosci. Model Dev., 11, 257281, https://doi.org/10.5194/gmd-11-257-2018.

    • Search Google Scholar
    • Export Citation
  • Thompson, G., J. Berner, M. Frediani, J. A. Otkin, and S. M. Griffin, 2021: A stochastic parameter perturbation method to represent uncertainty in a microphysics scheme. Mon. Wea. Rev., 149, 14811497, https://doi.org/10.1175/MWR-D-20-0077.1.

    • Search Google Scholar
    • Export Citation
  • Tran, A., and T. Wildey, 2021: Solving stochastic inverse problems for property–Structure linkages using data-consistent inversion and machine learning. JOM, 73, 7289, https://doi.org/10.1007/s11837-020-04432-w.

    • Search Google Scholar
    • Export Citation
  • Tuppi, L., P. Ollinaho, M. Ekblom, V. Shemyakin, and H. Järvinen, 2020: Necessary conditions for algorithmic tuning of weather prediction models using OpenIFS as an example. Geosci. Model Dev., 13, 57995812, https://doi.org/10.5194/gmd-13-5799-2020.

    • Search Google Scholar
    • Export Citation
  • Tuppi, L., M. Ekblom, P. Ollinaho, and H. Järvinen, 2023: Simultaneous optimization of 20 key parameters of the Integrated Forecasting System of ECMWF using OpenIFS. Part I: Effect on deterministic forecasts. Mon. Wea. Rev., 151, 13251346, https://doi.org/10.1175/MWR-D-22-0209.1.

    • Search Google Scholar
    • Export Citation
  • van der Vaart, A. W., 1998: Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics, Cambridge University Press, 443 pp.

  • van Lier-Walqui, M., T. Vukicevic, and D. J. Posselt, 2012: Quantification of cloud microphysical parameterization uncertainty using radar reflectivity. Mon. Wea. Rev., 140, 34423466, https://doi.org/10.1175/MWR-D-11-00216.1.

    • Search Google Scholar
    • Export Citation
  • van Lier-Walqui, M., T. Vukicevic, and D. J. Posselt, 2014: Linearization of microphysical parameterization uncertainty using multiplicative process perturbation parameters. Mon. Wea. Rev., 142, 401413, https://doi.org/10.1175/MWR-D-13-00076.1.

    • Search Google Scholar
    • Export Citation
  • van Lier-Walqui, M., H. Morrison, M. R. Kumjian, K. J. Reimel, O. P. Prat, S. Lunderman, and M. Morzfeld, 2020: A Bayesian approach for statistical–physical bulk parameterization of rain microphysics. Part II: Idealized Markov chain Monte Carlo experiments. J. Atmos. Sci., 77, 10431064, https://doi.org/10.1175/JAS-D-19-0071.1.

    • Search Google Scholar
    • Export Citation
  • Wimmer, M., 2021: Représentation des erreurs de modélisation dans le système de prévision d’ensemble régional PEARO. Ph.D. thesis, Université Paul Sabatier-Toulouse III, 293 pp., https://tel.archives-ouvertes.fr/tel-03616084/document.

  • Wimmer, M., L. Raynaud, L. Descamps, L. Berre, and Y. Seity, 2022: Sensitivity analysis of the convective-scale AROME model to physical and dynamical parameters. Quart. J. Roy. Meteor. Soc., 148, 920942, https://doi.org/10.1002/qj.4239.

    • Search Google Scholar
    • Export Citation
  • Yang, B., and Coauthors, 2013: Uncertainty quantification and parameter tuning in the CAM5 Zhang-McFarlane convection scheme and impact of improved convection on the global circulation and climate. J. Geophys. Res. Atmos., 118, 395415, https://doi.org/10.1029/2012JD018213.

    • Search Google Scholar
    • Export Citation
  • Fig. 1.

    Illustration of the propagation of the initial density through the model. Parameter values are sampled from (left) the initial density π_init. The QoI values associated with these parameter values, evaluated with (middle) the forward model, form a sample of (right) the predicted density.
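The propagation illustrated in Fig. 1 is a push-forward: parameter values are drawn from the initial density, the forward model is evaluated on each draw, and the resulting QoI values sample the predicted density. A minimal sketch of this step, assuming a toy scalar QoI and a placeholder forward_model (illustrative stand-ins, not the AROME-SCM configuration used in the study):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Placeholder forward model mapping the perturbed parameters (N0, nu) to a scalar QoI.
def forward_model(n0, nu):
    return np.log(n0) / (1.0 + 0.1 * nu)

# Draw a sample from the initial density pi_init (illustrative lognormal and uniform choices).
n0_sample = np.exp(rng.normal(np.log(300.0), 0.5, size=5000))
nu_sample = rng.uniform(1.0, 30.0, size=5000)

# Push the sample forward through the model: the QoI values sample the predicted density.
qoi_sample = forward_model(n0_sample, nu_sample)
pi_pred = gaussian_kde(qoi_sample)  # kernel density estimate of pi_pred
```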

  • Fig. 2.

    Evolution of (a) potential temperature and (b) cloud liquid water content profiles during the simulation of the LANFEX case by AROME-SCM. The simulation is initialized at 1700 UTC, and times indicated on the graphs are simulation times.

  • Fig. 3.

    Sensitivity of the cloud droplet size distribution to different values of the parameters (a) N0 and (b) ν.

  • Fig. 4.

    Sensitivity of the LWP to different values of the parameters (a) N0 and (b) ν in the simulation of the LANFEX case by AROME-SCM. The simulation is initialized at 1700 UTC, and times indicated on the graphs are simulation times.

  • Fig. 5.

    Cloud liquid water profiles at +18 h in two RPP ensembles of 100 AROME-SCM simulations of the LANFEX case, using (a) uniform or (b) lognormal distributions. The black line represents the reference (unperturbed) AROME-SCM simulation. The red dashed line is the ensemble mean, and the red shaded area corresponds to the interval between the 10th and 90th percentiles of the ensemble.
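Producing an RPP ensemble such as those in Fig. 5 only requires drawing one (N0, ν) pair per member from the chosen distribution. A minimal sketch with placeholder bounds and lognormal parameters (the actual ranges and default values used in the study are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n_members = 100

# Uniform perturbations of N0 and nu (bounds are illustrative placeholders).
n0_unif = rng.uniform(50.0, 1000.0, size=n_members)
nu_unif = rng.uniform(1.0, 30.0, size=n_members)

# Lognormal perturbations with medians at placeholder default values.
n0_logn = np.exp(rng.normal(np.log(300.0), 0.5, size=n_members))
nu_logn = np.exp(rng.normal(np.log(3.0), 0.5, size=n_members))

# Each (N0, nu) pair would then be passed to one AROME-SCM member.
```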

  • Fig. 6.

    The surrogate model constructed with 2640 AROME-SCM simulations of the LANFEX case.
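A surrogate such as the one in Fig. 6 can be fitted to the design of single-column runs so that the inversion requires no further model integrations. A minimal sketch, assuming a design of 2640 (N0, ν) points with associated QoI values; here a toy function stands in for AROME-SCM, and the radial-basis-function choice is illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Stand-in for the design of single-column runs: 2640 (N0, nu) pairs and their QoI values.
# In the study these come from AROME-SCM; a toy function keeps the sketch self-contained.
params = np.column_stack([np.exp(rng.uniform(np.log(50.0), np.log(1000.0), 2640)),
                          rng.uniform(1.0, 30.0, 2640)])
qoi = np.log(params[:, 0]) / (1.0 + 0.1 * params[:, 1])

# Radial-basis-function surrogate of the parameter-to-QoI map.
surrogate = RBFInterpolator(params, qoi, smoothing=1e-6)

# The surrogate can now be evaluated cheaply at arbitrary parameter values.
print(surrogate(np.array([[300.0, 3.0], [500.0, 10.0]])))
```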

  • Fig. 7.

    The push-forward of the initial and updated densities.
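For reference, the updated density whose push-forward is compared in Fig. 7 takes, in the standard data-consistent inversion formulation, the multiplicative form

$$\pi_{\mathrm{update}}(\lambda) \;=\; \pi_{\mathrm{init}}(\lambda)\,\frac{\pi_{\mathrm{obs}}\big(Q(\lambda)\big)}{\pi_{\mathrm{pred}}\big(Q(\lambda)\big)},$$

where λ denotes the parameter vector (N0, ν), Q the parameter-to-QoI map, π_obs the observed density, and π_pred the push-forward of π_init (notation ours). By construction, the push-forward of π_update through Q reproduces π_obs, provided π_pred is positive wherever π_obs is.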

  • Fig. 8.

    The sample of around 1800 values from the updated parameter density. The red dots on the graph correspond to the subsample of 100 values used to produce the new RPP ensemble CAL_1D in section 5c.
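A sample such as the one in Fig. 8 can be drawn from the updated density by acceptance-rejection on the ratio π_obs(Q(λ))/π_pred(Q(λ)). A minimal, self-contained sketch using the same toy stand-ins as above (the forward map, initial sample, and observed density are illustrative, not those of the study):

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(2)

def forward_model(n0, nu):                     # toy stand-in for the surrogate
    return np.log(n0) / (1.0 + 0.1 * nu)

# Initial sample of (N0, nu) and its push-forward through the (surrogate) model.
n0 = np.exp(rng.normal(np.log(300.0), 0.5, size=20000))
nu = rng.uniform(1.0, 30.0, size=20000)
q = forward_model(n0, nu)

# Observed density of the QoI (assumed Gaussian here) and estimated predicted density.
pi_obs = norm(loc=np.median(q), scale=0.05)
pi_pred = gaussian_kde(q)

# Accept each draw with probability proportional to pi_obs(Q) / pi_pred(Q).
r = pi_obs.pdf(q) / pi_pred(q)
accept = rng.uniform(0.0, r.max(), size=r.size) < r
updated_n0, updated_nu = n0[accept], nu[accept]
print(f"accepted {accept.sum()} of {r.size} draws from the updated density")
```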

  • Fig. 9.

    The posterior sample obtained using one observation: d = q_l,ref(+18 h, 100 m). The histograms represent (a) N0 and (b) ν values of the posterior sample and (c) the push-forward of the posterior sample through the model.
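In contrast to the data-consistent update, the posterior samples of Figs. 9 and 10 follow a Bayesian update. Assuming, for illustration only, independent Gaussian observation errors of standard deviation σ (the likelihood actually used in the study is not restated here), the posterior density would read

$$\pi_{\mathrm{post}}(\lambda) \;\propto\; \pi_{\mathrm{init}}(\lambda)\,\prod_{i=1}^{n}\exp\!\left[-\frac{\big(d_i - Q_i(\lambda)\big)^{2}}{2\sigma^{2}}\right],$$

where the d_i are the observations and the Q_i(λ) the corresponding model outputs (n = 1 for Fig. 9 and n = 9 for Fig. 10).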

  • Fig. 10.

    The posterior sample obtained using nine observations. The histograms represent (a) N0 and (b) ν values of the posterior sample and (c) the push-forward of the posterior sample through the model.

  • Fig. 11.

    (a) Cloud liquid water profiles and (b) standard deviation at +18 h in two RPP ensembles of 100 AROME-SCM simulations of the LANFEX case: one using N0 and ν values sampled from lognormal distributions (in red) and one using the updated parameter distribution obtained with data-consistent inversion (in blue). The black line represents the reference AROME-SCM simulation. The dashed lines represent ensemble means, and the shaded areas correspond to the interval between the 10th and 90th percentiles for each ensemble.

  • Fig. 12.

    (a) Visibility at the surface and (b) cloud liquid water content at 10 m in the reference AROME simulation at +28 h (0700 UTC). The pink marker on the maps indicates the location at which the profiles of Fig. 15 have been plotted.

  • Fig. 13.

    Difference of cloud liquid water content at 10 m at +28 h between the ensemble mean and the reference AROME simulation (see Fig. 12b) for three different experiments: (a) UNIF_3D, (b) LOG_3D, and (c) CAL_3D, where N0 and ν values have been sampled from the uniform, lognormal, or data-consistent updated distribution, respectively. The color scale of the first map differs from that of the other two because the differences are larger in the UNIF_3D experiment.

  • Fig. 14.

    Standard deviation of cloud water content at 10 m at +28 h between ensemble members for the three experiments (as in Fig. 13). The same color scale is used for the three maps.

  • Fig. 15.

    Vertical profiles of cloud liquid water content at +28 h, extracted at the point indicated by the pink marker on the maps in Fig. 12. The graphs show (a) the mean profile of each ensemble, as well as the control, and (b) the standard deviation between members. The green, red, and blue lines correspond to the UNIF_3D, LOG_3D, and CAL_3D experiments, respectively.

  • Fig. 16.

    Time evolution of the absolute difference of cloud liquid water content at 10 m between the ensemble mean and the control simulation (solid lines) and the standard deviation between ensemble members (dashed lines), both averaged over the simulation domain. The UNIF_3D experiment is represented in green, the LOG_3D experiment is in red, and the CAL_3D experiment is in blue.
