Search Results
You are looking at 1–10 of 57 items for:
- Author or Editor: Carolyn A. Reynolds
Abstract
The impact of negative dissipation on posttime analysis and forecast correction techniques is examined in a simplified context. The experiments are conducted using a three-level quasigeostrophic model (with a nonsingular tangent propagator matrix) under a perfect-model assumption. Corrections to the initial analysis errors are obtained by operating on the forecast error with (i) the full inverse of the forward tangent propagator, (ii) an inverse composed of a subset of the first leading singular vectors (pseudoinverse), and (iii) the tangent equations with a negative time step (backward integration). When the forecast error is known exactly, using negative dissipation during the full-inverse or backward-integration calculation results in an analysis-error estimate that projects too weakly onto the leading singular vectors and too strongly onto the decaying singular vectors. These discrepancies are small for weak dissipation but increase as the dissipation strength is increased.
When the forecast error is known inexactly, negative dissipation provides a beneficial damping of the backward-in-time growth of uncertainties present in the forecast error. This damping effect is found to be due to a fairly uniform change in the singular values, not changes in the singular vectors. However, even for very strong negative dissipation, the uncertainty in the forecast error still grows during the full inverse or backward integration. Therefore, the analysis error estimate will still be dominated by the trailing singular vectors, which represent the decaying part of the initial error. This is in contrast to the pseudoinverse technique, which, like the adjoint sensitivity, is dominated by the fastest growing part of the initial error, and is therefore relatively insensitive to the analysis uncertainty contained within the forecast error. Thus, while full-inverse and backward-integration calculations may provide an analysis perturbation that results in a significantly improved forecast, the analysis error estimate is accurate only when the forecast error is known exactly (i.e., perfect model experiments), regardless of the sign of the dissipation. These results hold for both global and regional forecast errors.
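The contrast between the full-inverse and pseudoinverse corrections can be sketched in a few lines of NumPy. Everything below is illustrative: a small random nonsingular matrix stands in for the tangent propagator, and the truncation `k` stands in for the paper's subset of leading singular vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the forward tangent propagator M (nonsingular),
# mapping initial analysis errors to forecast errors: e_f = M @ e_a.
n = 20
M = rng.standard_normal((n, n)) + 2.0 * np.eye(n)

U, s, Vt = np.linalg.svd(M)  # M = U @ diag(s) @ Vt

e_a_true = rng.standard_normal(n)  # "true" initial analysis error
e_f = M @ e_a_true                 # forecast error (known exactly here)

# (i) full inverse: recovers the analysis error exactly
e_full = np.linalg.solve(M, e_f)

# (ii) pseudoinverse built from the k leading singular vectors:
# keeps only the fastest-growing directions of the estimate
k = 5
e_pseudo = Vt[:k].T @ ((U[:, :k].T @ e_f) / s[:k])

assert np.allclose(e_full, e_a_true)
# the pseudoinverse estimate equals the projection of the true error
# onto the leading k right (initial-time) singular vectors
proj = Vt[:k].T @ (Vt[:k] @ e_a_true)
assert np.allclose(e_pseudo, proj)
```

When `e_f` is contaminated with noise, the `1/s` factors attached to the trailing singular values amplify that noise in the full-inverse estimate, while the truncated sum avoids them entirely; this is the mechanism behind the pseudoinverse's relative insensitivity to forecast-error uncertainty noted in the abstract.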
Abstract
The sensitivity of the atmospheric general circulation model of the Navy Operational Global Atmospheric Prediction System to a parameterization of convective triggering by atmospheric boundary layer thermals is investigated. The study focuses on the western Pacific warm pool region and examines the results of seasonal integrations of the model for the winter of 1987/88. A parameterization for thermal triggering of deep convection is presented that is based on a classification of the unstable boundary layer. Surface-based deep convection is allowed only for boundary layer regimes associated with the presence of thermals. The regime classification is expressed in terms of a Richardson number that reflects the relative significance of buoyancy and shear in the boundary layer. By constraining deep convection to conditions consistent with the occurrence of thermals (high buoyancy to shear ratios), there is a significant decrease in precipitation over the southern portion of the northeast trade wind zone in the tropical Pacific and along the ITCZ. This decrease in precipitation allows for an increased flux of moisture into the region south of the equator corresponding to the warmest portion of the Pacific warm pool. Improvements in the simulated distribution of precipitation, precipitable water, and low-level winds in the tropical Pacific are demonstrated. Over the western Pacific, the transition from free convective conditions associated with thermals to forced convective conditions is found to be primarily due to variations in mixed layer wind speed. Low-level winds thus play the major role in regulating the ability of thermals to initiate deep convection. The lack of coupling with the ocean in these simulations may possibly produce a distorted picture in this regard.
Abstract
A suite of high-resolution two-dimensional ensemble simulations is used to investigate the predictability of mountain waves, wave breaking, and downslope windstorms. For relatively low hills and mountains, perturbation growth is weak and ensemble spread is small. Gravity waves and wave breaking associated with higher mountains exhibit rapid perturbation growth and large ensemble variance. Near the regime boundary between mountain waves and wave breaking, a bimodal response is apparent with large ensemble variance. Several ensemble members exhibit a trapped wave response, while others reveal a hydraulic jump and large-amplitude breaking in the stratosphere. The bimodality of the wave response brings into question the appropriateness of commonly used ensemble statistics, such as the ensemble mean, in these situations. Small uncertainties in the initial state within observational error limits result in significant ensemble spread in the strength of the downslope wind speed, wave breaking, and wave momentum flux. These results indicate that the theoretical transition across the regime boundary for gravity wave breaking can be interpreted as a finite-width or blurred transition zone from a practical predictability standpoint.
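The concern about the ensemble mean under a bimodal response can be illustrated with a toy ensemble; the regime wind speeds and member counts below are invented for the example, not taken from the simulations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical bimodal ensemble of downslope wind speeds: half the
# members in a trapped-wave regime (~10 m/s), half in a windstorm
# regime (~40 m/s).
members = np.concatenate([rng.normal(10, 1, 25),
                          rng.normal(40, 2, 25)])

mean = members.mean()  # ~25 m/s

# The ensemble mean falls in a gap between the two regimes, far from
# every member, so it characterizes neither possible outcome.
assert not np.any(np.abs(members - mean) < 5)
```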
Abstract
In this paper it is argued that ensemble prediction systems can be devised in such a way that physical parameterizations of subgrid-scale motions are utilized in a stochastic manner, rather than in a deterministic way as is typically done. This can be achieved within the context of current physical parameterization schemes in weather and climate prediction models. Parameterizations are typically used to predict the evolution of grid-mean quantities because of unresolved subgrid-scale processes. However, parameterizations can also provide estimates of higher moments that could be used to constrain the random determination of the future state of a certain variable. The general equations used to estimate the variance of a generic variable are briefly discussed, and a simplified algorithm for a stochastic moist convection parameterization is proposed as a preliminary attempt. Results from the implementation of this stochastic convection scheme in the Navy Operational Global Atmospheric Prediction System (NOGAPS) ensemble are presented. It is shown that this method is able to generate substantial tropical perturbations that grow and “migrate” to the midlatitudes as forecast time progresses while moving from the small scales where the perturbations are forced to the larger synoptic scales. This stochastic convection method is able to produce substantial ensemble spread in the Tropics when compared with results from ensembles created from initial-condition perturbations. Although smaller, there is still a sizeable impact of the stochastic convection method in terms of ensemble spread in the extratropics. Preliminary simulations with initial-condition and stochastic convection perturbations together in the same ensemble system show a promising increase in ensemble spread and a decrease in the number of outliers in the Tropics.
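The core idea — draw the tendency from a distribution whose spread comes from the parameterization's own variance estimate, rather than applying the grid mean deterministically — can be sketched as follows. The Gaussian form and the numbers are illustrative assumptions of this sketch, not the NOGAPS scheme itself.

```python
import numpy as np

def stochastic_tendency(mean_tendency, tendency_variance, rng):
    """Sample a tendency instead of applying the deterministic grid mean.

    A deterministic scheme would return mean_tendency; here each
    ensemble member draws from a distribution whose spread is set by
    the parameterization's variance estimate (a Gaussian is an
    illustrative choice).
    """
    sigma = np.sqrt(np.maximum(tendency_variance, 0.0))
    return mean_tendency + sigma * rng.standard_normal(np.shape(mean_tendency))

rng = np.random.default_rng(42)
mean = np.full(4, 2.0)  # hypothetical grid-mean convective heating tendency
var = np.full(4, 0.25)  # hypothetical subgrid variance estimate

# two ensemble members receive different realizations of the same forcing,
# which is the source of the additional ensemble spread
member_a = stochastic_tendency(mean, var, rng)
member_b = stochastic_tendency(mean, var, rng)
assert not np.allclose(member_a, member_b)
```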
Abstract
The rate at which the leading singular vectors converge toward a single pattern for increasing optimization times is examined within the context of a T21 L3 quasigeostrophic model. As expected, the final-time backward singular vectors converge toward the backward Lyapunov vector, while the initial-time forward singular vectors converge toward the forward Lyapunov vector. Although there is significant case-to-case variability, in general this convergence does not occur over timescales for which the tangent approximation is valid (i.e., less than 5 days). However, a significant portion of the leading Lyapunov vector is contained within the subspace spanned by an ensemble composed of the first 30 singular vectors optimized over 2 or 3 days. Also as expected, the final-time leading singular vectors become independent of metric as optimization time is increased. Given an initial perturbation that has a white spectrum with respect to the initial-time singular vectors, the percent of the final-time perturbation explained by the leading singular vector is significant and increases as optimization time increases. However, even for 10-day optimization times, the leading singular vector accounts for, on average, only 23% to 28% of the total evolved global perturbation variance depending on the metric and trajectory.
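The convergence of final-time singular vectors toward the backward Lyapunov vector can be demonstrated with a fixed nonnormal propagator. For an autonomous linear system the backward Lyapunov vector is simply the leading eigenvector of the one-step map; the 2×2 matrix below is a toy, not the T21 L3 model.

```python
import numpy as np

# Fixed, nonnormal one-step "tangent propagator". Because A is upper
# triangular with leading eigenvalue 1.2, its leading eigenvector (and
# hence the backward Lyapunov vector) is e1 = (1, 0).
A = np.array([[1.2, 3.0],
              [0.0, 1.0]])
lyap = np.array([1.0, 0.0])

def leading_final_sv(T):
    """Final-time leading singular vector of the T-step propagator A**T."""
    U, _, _ = np.linalg.svd(np.linalg.matrix_power(A, T))
    return U[:, 0]

# |cos(angle)| between the final-time leading singular vector and the
# Lyapunov vector approaches 1 as the optimization time T increases
align = {T: abs(leading_final_sv(T) @ lyap) for T in (1, 5, 20)}
assert align[1] < align[20]
assert align[20] > 0.999
```

At T = 1 the leading singular vector is pulled off the eigenvector by the nonnormal (shear) term; lengthening the optimization window lets the dominant eigendirection win out, mirroring the convergence described in the abstract.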
Abstract
Singular vector (SV) sensitivity, calculated using the adjoint model of the U.S. Navy Operational Global Atmospheric Prediction System (NOGAPS), is used to study the dynamics associated with tropical cyclone evolution. For each model-predicted tropical cyclone, SVs are constructed that optimize perturbation energy within a 20° by 20° latitude/longitude box centered on the 48-h forecast position of the cyclone. The initial SVs indicate regions where the 2-day forecast of the storm is very sensitive to changes in the analysis. Composites of the SVs for straight-moving cyclones and non-straight-moving cyclones that occurred in the Northern Hemisphere during the summer of 2003 are examined. For both groups, the initial-time SV sensitivity exhibits a maximum within an annulus approximately 500 km from the center of the storms, in the region where the potential vorticity gradient of the vortex first changes sign. In the azimuthal direction, the composite initial-time SV maximum for the straight-moving group is located in the rear right quadrant with respect to the storm motion. The composite based on the non-straight-moving cyclones does not have a preferred quadrant in the vicinity of the storms and has larger amplitude away from the cyclones compared with the straight-moving storms, indicating more environmental influence on these storms. For both groups, the maximum initial sensitive areas are collocated with regions of flow moving toward the storm.
While the initial SV maximum is located where the potential vorticity gradient changes sign, the final SV maximum is located where the potential vorticity gradient is a maximum. Examinations of individual cases demonstrate how SV sensitivity can be used to identify specific environmental influences on the storms. The relationship between the SV sensitivity and the potential vorticity is discussed. The results support the utility of SVs in applications to phenomena beyond midlatitude baroclinic systems.
Abstract
Two versions of the Navy Operational Global Atmospheric Prediction System (NOGAPS) global ensemble, with and without a stochastic convection scheme, are compared regarding their performance in predicting the development and evolution of tropical cyclones. Forecasts of four typhoons, one tropical storm, and two selected nondeveloping tropical systems from The Observing System Research and Predictability Experiment (THORPEX) Pacific Asian Regional Campaign and Tropical Cyclone Structure 2008 (T-PARC/TCS-08) field program during August and September 2008 are evaluated. It is found that stochastic convection substantially increases the spread in ensemble storm tracks and in the vorticity and height fields in the vicinity of the storm. Stochastic convection also has an impact on the number of ensemble members predicting genesis. One day prior to the system being declared a tropical depression, on average, 31% of the ensemble members predict storm development when the ensemble includes initial perturbations only. When stochastic convection is included, this percentage increases to 50%, but the number of “false alarms” for two nondeveloping systems also increases. However, the increase in false alarms is smaller than the increase in correct development predictions, indicating that stochastic convection may have the potential for improving tropical cyclone forecasting.
Abstract
Computational models based on discrete dynamical equations are a successful way of approaching the problem of predicting or forecasting the future evolution of dynamical systems. For linear and mildly nonlinear models, the solutions of the numerical algorithms on which they are based converge to the analytic solutions of the underlying differential equations for small time steps and grid sizes. In this paper, the authors investigate the time step sensitivity of three nonlinear atmospheric models of different levels of complexity: the Lorenz equations, a quasigeostrophic (QG) model, and a global weather prediction system (NOGAPS). It is illustrated here how, for chaotic systems, numerical convergence cannot be guaranteed forever. The time of decoupling of solutions for different time steps follows a logarithmic rule (as a function of time step) similar for the three models. In regimes that are not fully chaotic, the Lorenz equations are used to illustrate how different time steps may lead to different model climates and even different regimes. A simple model of truncation error growth in chaotic systems is proposed. This model decomposes the error into its stable and unstable components and reproduces well the short- and medium-term behavior of the QG model truncation error growth, with an initial period of slow growth (a plateau) before the exponential growth phase. Experiments with NOGAPS suggest that truncation error can be a substantial component of total forecast error of the model. Ensemble simulations with NOGAPS show that using different time steps may be a simple and natural way of introducing an important component of model error in ensemble design.
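The decoupling of solutions computed with different time steps is easy to reproduce for the Lorenz equations; the RK4 integrator, the initial condition, and the particular time steps below are illustrative choices, not those of the paper.

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 equations."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_run(v0, dt, t_end):
    """Integrate with classical RK4; return the array of states."""
    v = np.array(v0, dtype=float)
    out = [v.copy()]
    for _ in range(int(round(t_end / dt))):
        k1 = lorenz(v)
        k2 = lorenz(v + 0.5 * dt * k1)
        k3 = lorenz(v + 0.5 * dt * k2)
        k4 = lorenz(v + dt * k3)
        v = v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(v.copy())
    return np.array(out)

v0 = (1.0, 1.0, 1.0)
a = rk4_run(v0, 0.01, 25.0)          # reference time step
b = rk4_run(v0, 0.005, 25.0)[::2]    # halved time step, subsampled to match

err = np.linalg.norm(a - b, axis=1)
# the two solutions agree closely at first ...
assert err[100] < 1e-2               # t = 1
# ... but decouple completely once truncation error is amplified
# by the chaotic dynamics
assert err[-300:].max() > 1.0        # late in the run
```

Repeating the experiment with successively smaller step pairs delays the decoupling only modestly, in line with the logarithmic rule described in the abstract.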
Abstract
The statistics of model temporal variability ought to be the same as those of the filtered version of reality that the model is designed to represent. Here, simple diagnostics are introduced to quantify temporal variability on different time scales and are then applied to NCEP and CMC global ensemble forecasting systems. These diagnostics enable comparison of temporal variability in forecasts with temporal variability in the initial states from which the forecasts are produced. They also allow for an examination of how day-to-day variability in the forecast model changes as forecast integration time increases. Because the error in subsequent analyses will differ, it is shown that forecast temporal variability should lie between corresponding analysis variability and analysis variability minus 2 times the analysis error variance. This expectation is not always met and possible causes are discussed. The day-to-day variability in NCEP forecasts steadily decreases at a slow rate as forecast time increases. In contrast, temporal variability increases during the first few days in the CMC control forecasts, and then levels off, consistent with a spinup of the forecasts starting from overly smoothed analyses. The diagnostics successfully reflect a reduction in the temporal variability of the CMC perturbed forecasts after a system upgrade. The diagnostics also illustrate a shift in variability maxima from storm-track regions for 1-day variability to blocking regions for 10-day variability. While these patterns are consistent with previous studies examining temporal variability on different time scales, they have the advantage of being obtainable without the need for extended (e.g., multimonth) forecast integrations.
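The expected bracketing of forecast temporal variability can be checked on synthetic data. Modeling the analyses as truth plus temporally uncorrelated error (the AR(1) truth and Gaussian error below are assumptions of this sketch, not the NCEP/CMC systems), the day-to-day analysis variability exceeds the truth's by exactly twice the analysis error variance, which is the lower bound quoted above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "truth": a red-noise (AR(1)) series standing in for a
# large-scale field at one grid point.
n = 200_000
phi, sigma_x = 0.9, 1.0
truth = np.zeros(n)
for t in range(1, n):
    truth[t] = phi * truth[t - 1] + sigma_x * rng.standard_normal()

# Analyses = truth + independent analysis error of variance sigma_e**2.
sigma_e = 0.5
analysis = truth + sigma_e * rng.standard_normal(n)

def temporal_variance(x):
    """Variance of day-to-day differences: the temporal-variability diagnostic."""
    return np.var(np.diff(x))

var_truth = temporal_variance(truth)
var_analysis = temporal_variance(analysis)

# With uncorrelated errors, Var(d analysis) = Var(d truth) + 2*sigma_e**2,
# so a model matching the filtered truth sits near the lower bound
# Var(d analysis) - 2*sigma_e**2.
lower_bound = var_analysis - 2 * sigma_e**2
assert abs(var_truth - lower_bound) < 0.05 * var_analysis
```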