Search Results
You are looking at 1–10 of 27 items for
- Author or Editor: Tomislava Vukicevic
- Refine by Access: All Content
Abstract
The possibility of using forecast errors originating from the finite-time dominant linear modes to predict forecast skill for a primitive equation regional forecast model is studied. This is similar to the method for skill prediction suggested by several other authors using simplified models. Two main problems associated with a sophisticated forecast model, not considered in those studies, are investigated: 1) the number of degrees of freedom is typically too large for the evaluation of the spectrum of dominant modes associated with the linear error evolution equation, and 2) many different, physically meaningful error measures may be used for this model, and the dominant linear modes may be sensitive to the selection of the error measure.
It is shown first that the finite-time dominant linear solutions can be computed with sufficient accuracy for the complex forecast model using the standard “power” method with a small number of iterations for all error measures considered in this study. The forecast skill is then estimated using the nonlinear forecast errors originating from initial errors defined by these measure-dependent dominant solutions. The results show that the estimated forecast skill is very sensitive to the choice of error measure used in the computation of the finite-time dominant modes.
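The “power” method referred to above can be sketched as plain power iteration on the composition of the tangent linear propagator with its adjoint; here a small fixed matrix M stands in for the propagator and the Euclidean norm stands in for the error measure (both are illustrative assumptions, not the paper's model):

```python
import numpy as np

def dominant_mode(apply_M, apply_MT, n, iters=50, seed=0):
    """Power iteration for the leading finite-time singular vector.

    apply_M  : tangent linear propagator (initial -> final perturbation)
    apply_MT : its adjoint
    n        : state dimension
    Returns the dominant initial perturbation and its amplification factor.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = apply_MT(apply_M(v))         # one application of M^T M
        v = w / np.linalg.norm(w)
    growth = np.linalg.norm(apply_M(v))  # singular value = finite-time error growth
    return v, growth

# Toy stand-in for the tangent linear model: a fixed 2x2 matrix.
M = np.array([[2.0, 1.0], [0.0, 0.5]])
v, sigma = dominant_mode(lambda x: M @ x, lambda x: M.T @ x, n=2)
```

Choosing a different error measure amounts to wrapping M and its adjoint with the corresponding weighting operator before iterating, which is why the resulting dominant mode can depend on the measure.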
Abstract
The hypothesis that the short-time evolution of forecast errors originating from initial-data uncertainties can be approximated by linear model solutions is investigated using a realistic prognostic model. A tangent linear limited-area model based on a state-of-the-art mesoscale numerical forecast model is developed. The linearization is performed with respect to a temporally and spatially varying basic state, whose fields are produced by the nonlinear model using observed data.
The tangent model solutions are compared with error fields based on nonlinear integrations. The results demonstrate that the initial error evolution is well represented by the tangent model for periods of 1–1.5 days. Linear model solutions based on a time-independent basic state are also good approximations of the real error evolution, provided the prognostic fields are not changing rapidly in time.
The application of the linear model for estimating an appropriate initial perturbation for an initial-error sensitivity study is illustrated using a simple method. Comparison between nonlinear integrations based on the unstable initial perturbation and on an arbitrarily selected initial perturbation shows that the latter initialization can produce misleading results.
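The tangent-linear hypothesis tested in this abstract can be illustrated on a toy system: below, a forward-Euler integration of the Lorenz-63 equations stands in for the nonlinear forecast model, and the exact Jacobian of the discrete step propagates a perturbation along the evolving basic state. The model, time step, and perturbation size are illustrative choices, not those of the study.

```python
import numpy as np

def nl_step(x, dt=0.01, steps=100):
    """Nonlinear model: Lorenz-63 integrated with forward Euler (toy stand-in)."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    x = np.array(x, float)
    for _ in range(steps):
        dx = np.array([s * (x[1] - x[0]), x[0] * (r - x[2]) - x[1], x[0] * x[1] - b * x[2]])
        x = x + dt * dx
    return x

def tl_step(x, p, dt=0.01, steps=100):
    """Tangent linear model: the Jacobian of each Euler step, evaluated
    along the time-varying basic-state trajectory."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    x = np.array(x, float); p = np.array(p, float)
    for _ in range(steps):
        J = np.array([[-s, s, 0.0],
                      [r - x[2], -1.0, -x[0]],
                      [x[1], x[0], -b]])
        dx = np.array([s * (x[1] - x[0]), x[0] * (r - x[2]) - x[1], x[0] * x[1] - b * x[2]])
        p = p + dt * (J @ p)   # propagate perturbation with the basic state at this step
        x = x + dt * dx        # advance the basic state itself
    return p

x0 = np.array([1.0, 1.0, 1.0])
eps, p0 = 1e-6, np.array([1.0, 0.0, 0.0])
nl_err = nl_step(x0 + eps * p0) - nl_step(x0)  # "true" error evolution
tl_err = eps * tl_step(x0, p0)                 # linear prediction of the same error
```

For a small perturbation and a short window the two error fields agree closely; letting the basic state vary in time (as above) is what distinguishes this from the time-independent basic-state approximation discussed in the abstract.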
Abstract
No abstract available
Abstract
Assimilation of cloud-affected infrared radiances from the Geostationary Operational Environmental Satellite-8 (GOES-8) is performed using a four-dimensional variational data assimilation (4DVAR) system designated the Regional Atmospheric Modeling Data Assimilation System (RAMDAS). A cloud mask is introduced to limit the assimilation to points that have the same type of cloud in the model and observations, increasing the linearity of the minimization problem. A series of experiments is performed to determine the sensitivity of the assimilation to factors such as the maximum allowed residual in the assimilation, the magnitude of the background error decorrelation length for water variables, the length of the assimilation window, and the inclusion of ground-based data from the Atmospheric Emitted Radiance Interferometer (AERI), a microwave radiometer, radiosondes, and cloud radar. In addition, visible and near-infrared satellite data are included in a separate experiment. The assimilation results are validated using independent ground-based data. The introduction of the cloud mask with large allowed residuals has the greatest positive impact on the assimilation. Extending the length of the assimilation window in conjunction with the cloud mask results in a better-conditioned minimization, as well as a smoother response of the model state to the assimilation.
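A minimal sketch of the cloud-mask idea, assuming hypothetical cloud-type codes and brightness temperatures: only points where the model and observed cloud types agree, and where the residual does not exceed a maximum allowed value, contribute innovations to the cost function.

```python
import numpy as np

# Hypothetical cloud-type codes: 0 = clear, 1 = water cloud, 2 = ice cloud.
model_cloud = np.array([0, 1, 2, 1, 0, 2])
obs_cloud   = np.array([0, 2, 2, 1, 1, 2])

# Hypothetical brightness temperatures (K) at the same points.
obs_rad = np.array([280.0, 250.0, 220.0, 255.0, 275.0, 218.0])
mod_rad = np.array([281.0, 262.0, 224.0, 251.0, 279.0, 216.0])

max_residual = 10.0  # K; reject innovations larger than this

# Assimilate only where cloud types match AND the residual is acceptable.
mask = (model_cloud == obs_cloud) & (np.abs(obs_rad - mod_rad) <= max_residual)
innovations = np.where(mask, obs_rad - mod_rad, 0.0)  # masked-out points contribute nothing
```

Relaxing `max_residual` corresponds to the "large residuals allowed" configuration that the abstract reports as having the greatest positive impact.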
Abstract
The authors propose a new procedure, designated the adjoint-based genesis diagnostic (AGD) procedure, for studying the triggering mechanism and subsequent genesis of synoptic phenomena of interest. This procedure makes use of a numerical model's sensitivity to initial conditions and of the nonlinear evolution of initial perturbations designed using this sensitivity. The model sensitivity is evaluated using the associated adjoint model. This study uses the dry version of the National Center for Atmospheric Research Mesoscale Adjoint Modeling System (MAMS) for the numerical experiments. The authors apply the AGD procedure to two cases of Alpine lee cyclogenesis observed during the Alpine Experiment special observation period. The results show that the sensitivity fields produced by the adjoint model and the associated initial perturbations are readily related to the probable triggering mechanisms for these cyclones. Additionally, the nonlinear evolution of these initial perturbations points toward the physical processes involved in lee cyclone formation. The AGD experiments for a weak cyclone case indicate that the MAMS forecast model underrepresents the topographic forcing because of its sigma vertical coordinate, and that this model error can be compensated for by adjustments in the initial conditions related to the triggering mechanism, which is not associated with the topographic blocking mechanism.
Abstract
The influence of one-way interacting lateral boundary conditions upon the predictability of flows in bounded domains is studied using the barotropic nondivergent model in global and local domains. Past studies have attempted to reconcile the apparent contradiction between the pessimistic forecasts of predictability theory and the high predictability actually found in regional models. Those investigations have emphasized the rather different spectra and forcing mechanisms that are not considered in the theoretical estimates. We demonstrate that the predictability remains high in an unforced, inertially driven local flow characterized by a typical synoptic-scale spectrum, and constrained only by lateral boundary specification. We also offer a possible reconciliation of these results with the classical theory. It is shown that one-way interacting boundary conditions enhance the predictability of flow in a local region which, without the boundary constraints, has limited predictability. The degree of this boundary constraint is dependent on the size of the domain, on the nature of flow in the domain, and on the scale structure of the error field. The boundary constraint is particularly strong when a substantial portion of the larger-scale flow in the domain is imposed through the boundary condition. In that case, small-scale initial uncertainties have limited interaction with the basic flow field because of scale separation and because the largest scales in the domain do not react to internal dynamics.
Abstract
The authors show that linear approximation errors in the presence of a discontinuous convective parameterization operator are large at a small number of grid points where the noise produced by the convective parameterization is largest. These errors are much smaller at “smooth convective” points in the integration domain and in the nonconvective regions. Decreasing the amplitude of the initial perturbations does not reduce the errors at the noisy points. This result indicates that the tangent linear model solution is erroneous at these points because the linearization does not account for regime changes (i.e., because the standard method is used).
The authors then show that the quality of local four-dimensional variational (4DVAR) data assimilation results is correlated with the linearization errors: slower convergence is associated with large errors. Consequently, the 4DVAR assimilation results differ among convective points in the integration domain. The negative effect of the linearization errors is not, however, significant for the cases studied. Erroneous points slightly degrade the 4DVAR results at the remaining points, a degradation reflected in a less monotonic reduction of the cost function gradient with iterations.
These results suggest a probability of locally poor 4DVAR assimilation results when standard adjoints of discontinuous parameterizations are used. In practice, for example when assimilating real observations, this is unlikely to cause errors larger than those associated with other approximations and uncertainties in the data assimilation integrations, such as the linear approximation errors and the uncertainties in the background and model error statistics. This conclusion is similar to that of prior 4DVAR assimilation studies using standard adjoints but, unlike those studies, the present results show that 1) the linearization errors are nonnegligible for small-amplitude initial perturbations and 2) the assimilation results are locally, and even globally, affected by these errors.
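The behavior described above can be reproduced with a toy discontinuous “convective” operator: its standard tangent linear differentiates within each regime but ignores the switch, so the relative linearization error vanishes at smooth points yet stays order one near the threshold no matter how small the perturbation. The operator and threshold below are invented for illustration.

```python
def conv_param(q, q_crit=0.8):
    """Toy 'convective' operator with an on/off regime switch at q_crit."""
    return 0.5 * q + (0.4 * q if q > q_crit else 0.0)

def conv_param_tl(q, dq, q_crit=0.8):
    """Standard tangent linear: differentiate the active branch, keep the switch fixed."""
    return 0.5 * dq + (0.4 * dq if q > q_crit else 0.0)

def rel_lin_error(q, dq):
    """Relative difference between the nonlinear increment and its TL prediction."""
    nl = conv_param(q + dq) - conv_param(q)
    tl = conv_param_tl(q, dq)
    return abs(nl - tl) / max(abs(nl), 1e-30)

# Smooth point: the error vanishes as the perturbation shrinks.
smooth = [rel_lin_error(0.5, eps) for eps in (1e-2, 1e-4, 1e-6)]

# Near the threshold the regime flips, and the error stays O(1)
# even for a very small perturbation.
noisy = [rel_lin_error(0.799, 1e-2), rel_lin_error(0.7999999, 1e-6)]
```

This mirrors the abstract's finding that reducing the perturbation amplitude does not reduce the errors at the noisy (regime-switching) points.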
Abstract
In the current study, a technique is presented for evaluating ensemble forecast uncertainties produced by initial conditions, by different model versions, or by both. The technique consists of first diagnosing the performance of the forecast ensemble and then optimizing the ensemble forecast using the results of the diagnosis. It is based on the explicit evaluation of probabilities associated with the Gaussian stochastic representation of the weather analysis and forecast, combining an ensemble technique for evaluating the analysis error covariance with the standard Monte Carlo approach for drawing samples from a known Gaussian distribution. The technique is demonstrated in a tutorial manner on two relatively simple examples to illustrate the impact of ensemble characteristics, including ensemble size, various observation strategies, and configurations with different model versions and varying initial conditions. In addition, the authors assess improvements in the consensus forecasts gained by optimally weighting the ensemble members based on time-varying, prior probabilistic skill measures. The results with different observation configurations indicate that, as observations become denser, larger ensembles and/or more accurate individual members are needed for the ensemble forecast to exhibit prediction skill. The main conclusions for ensembles built with different physics configurations were, first, that almost all members typically exhibited some skill at some point in the model run, suggesting that all should be retained to obtain the best consensus forecast; and, second, that the normalized probability metric can be used to determine which sets of weights or physics configurations are performing best.
A comparison of forecasts derived from a simple ensemble mean with forecasts from a mean formed by variably weighting the ensemble members according to their prior performance on the probabilistic measure showed that the latter had substantially reduced mean absolute error. The study also indicates that a weighting scheme utilizing more prior cycles further reduced the forecast error.
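Two ingredients named above, Monte Carlo sampling of initial conditions from a Gaussian analysis distribution and performance-based weighting of ensemble members, can be sketched as follows. The state dimension, covariance, member "model versions" (represented as biases), and inverse-MAE weighting rule are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Gaussian representation of the analysis: a mean state and an error covariance.
x_truth = np.array([1.0, -2.0, 0.5])
P_a = np.diag([0.2, 0.1, 0.3])
analysis = x_truth + rng.multivariate_normal(np.zeros(3), P_a)

# Monte Carlo ensemble of initial conditions drawn from N(analysis, P_a).
n_members = 50
members = rng.multivariate_normal(analysis, P_a, size=n_members)

# "Forecasts": persistence plus a member-dependent bias, standing in for
# the different model versions in the ensemble.
biases = rng.normal(0.0, 0.5, size=n_members)
forecasts = members + biases[:, None]

# Weight members by prior skill: inverse mean absolute error against a
# verifying state (here the truth, purely for illustration).
prior_mae = np.abs(forecasts - x_truth).mean(axis=1)
weights = 1.0 / (prior_mae + 1e-12)
weights /= weights.sum()

simple_mean = forecasts.mean(axis=0)
weighted_mean = (weights[:, None] * forecasts).sum(axis=0)
```

In the study the weights come from time-varying probabilistic skill over prior cycles rather than from a single verification, but the structure, sample, score, reweight, average, is the same.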
Abstract
This study explores the functional relationship between model physics parameters and model output variables for the purpose of 1) characterizing the sensitivity of the simulation output to the model formulation and 2) understanding model uncertainty so that it can be properly accounted for in a data assimilation framework. A Markov chain Monte Carlo algorithm is employed to examine how changes in cloud microphysical parameters map to changes in output precipitation, liquid and ice water path, and radiative fluxes for an idealized deep convective squall line. Exploration of the joint probability density function (PDF) of parameters and model output state variables reveals a complex relationship between parameters and model output that changes dramatically as the system transitions from convective to stratiform. Persistent nonuniqueness in the parameter–state relationships is shown to be inherent in the construction of the cloud microphysical and radiation schemes and cannot be mitigated by reducing observation uncertainty. The results reinforce the importance of including uncertainty in model configuration in ensemble prediction and data assimilation, and they indicate that data assimilation efforts that include parameter estimation would benefit from including additional constraints based on known physical relationships between model physics parameters to render a unique solution.
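The Markov chain Monte Carlo exploration of a parameter–output relationship, including the nonuniqueness the study emphasizes, can be sketched with a random-walk Metropolis sampler on a toy "microphysics" operator whose output depends only on the product of two parameters. The operator, prior bounds, and observation are invented for illustration.

```python
import numpy as np

def model(p):
    """Toy 'microphysics': the output depends only on p[0] * p[1], so the
    parameter-state relationship is nonunique along p1 * p2 = const."""
    return p[0] * p[1]

y_obs, sigma_obs = 2.0, 0.1  # hypothetical observation and its uncertainty

def log_post(p):
    """Gaussian likelihood with a flat prior on (0, 10]^2."""
    if np.any(p <= 0) or np.any(p > 10):
        return -np.inf
    return -0.5 * ((model(p) - y_obs) / sigma_obs) ** 2

rng = np.random.default_rng(1)
p = np.array([1.0, 2.0])
chain = []
for _ in range(20000):
    prop = p + rng.normal(0.0, 0.2, size=2)   # random-walk Metropolis proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop
    chain.append(p.copy())
chain = np.array(chain[5000:])                # discard burn-in
```

The accepted samples concentrate on the ridge p1 * p2 ≈ y_obs while each parameter individually wanders widely, and shrinking sigma_obs only thins the ridge without collapsing it, which is the sense in which reducing observation uncertainty cannot remove the nonuniqueness.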