Search Results

You are looking at 1 - 10 of 19 items for

  • Author or Editor: Justin McLay
  • Refine by Access: All Content
Justin G. McLay

Abstract

It is shown that sequences of lagged ensemble-derived probability forecasts can be treated as realizations of a discrete, finite-step Markov chain. A reforecast ensemble dataset is used to explore this idea for the case in which the Markov chain has 12 states and 15 steps and the probability forecasts are for the event that the 500-hPa geopotential height exceeds its climatological value at a specified point. Results suggest that the transition probabilities of the Markov chain are best modeled as first order if they are obtained from the reforecast ensemble dataset using maximum likelihood estimation. Most of the estimated first-order transition probabilities are statistically significant. Also, the transition probabilities are inhomogeneous, and all states in the chain communicate. A variety of potential decision-support applications for the Markov chain parameters are highlighted. In particular, the transition probabilities allow calculation of the conditional probability of taking protective action and of the conditional expected expense when used with static cost–loss decision models. Also, the transition probabilities facilitate optimized decisions when incorporated into dynamic decision models. Decision-model test scenarios can be obtained using cluster analysis and conditional most-likely sequences, and these scenarios reveal the key patterns traced by the Markov chain.
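
To make the estimation step concrete, here is a minimal Python sketch (not the paper’s code): the maximum likelihood estimate of a first-order transition matrix is simply a table of transition counts with normalized rows. The 12-state, 15-step configuration follows the abstract; the sequences below are synthetic placeholders, and since the paper finds the probabilities inhomogeneous, a full treatment would estimate a separate matrix for each step.

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_steps = 12, 15   # configuration described in the abstract

    # Synthetic stand-in for many 15-step sequences of lagged ensemble
    # probability forecasts already discretized into 12 states (bins).
    seqs = rng.integers(0, n_states, size=(500, n_steps))

    # Maximum likelihood estimate of a first-order transition matrix:
    # count observed state-to-state transitions, then normalize each row.
    counts = np.zeros((n_states, n_states))
    for seq in seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    rowsum = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, rowsum, out=np.zeros_like(counts), where=rowsum > 0)

    # Rows with data are valid probability distributions.
    assert np.allclose(P[rowsum[:, 0] > 0].sum(axis=1), 1.0)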

Full access
Justin G. McLay

Abstract

Monte Carlo simulation of sequences of lagged ensemble probability forecasts is undertaken using a Markov transition law estimated from a reforecast ensemble. A simple three-state, three-action dynamic decision model is then applied to the Monte Carlo sequence realizations using a basket of cost functions, and the resulting expense incurred by the decision model is conditioned upon the structure of the sequence realizations. Findings show that the greatest average expense is incurred by “sneak” and “volatile” sequence structures, which are characterized by large and rapid increases in event probability at short lag times. These findings give a simple quantitative illustration of the adage that large run-to-run variability of forecasts can be troublesome to a decision maker. The experiments also demonstrate how even small improvements in the amount of advance warning of an event can translate into a substantial reduction in decision expense. In general, the conditioned decision expense is found to be sensitive to sequence structure for a given cost function, to the parameters of a given cost function, and to the choice of cost function itself.
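
A toy illustration of the setup, with invented numbers throughout: sequences are drawn from an assumed three-state transition law, and a simple static cost–loss rule (protect whenever the forecast event probability exceeds the ratio C/L) stands in for the paper’s three-action dynamic decision model.

    import numpy as np

    rng = np.random.default_rng(1)

    # Invented 3-state transition law (rows sum to 1); states stand for
    # low / medium / high forecast probability of an adverse event.
    P = np.array([[0.80, 0.15, 0.05],
                  [0.30, 0.50, 0.20],
                  [0.10, 0.30, 0.60]])
    event_prob = np.array([0.1, 0.5, 0.9])  # event probability in each state
    C, L = 1.0, 5.0                         # protection cost vs. unprotected loss

    def simulate(n_steps=15, start=0):
        """Draw one sequence realization from the transition law."""
        seq, s = [], start
        for _ in range(n_steps):
            seq.append(s)
            s = rng.choice(3, p=P[s])
        return seq

    def expense(seq):
        """Static cost-loss rule: protect whenever the forecast event
        probability exceeds the cost-loss ratio C/L."""
        total = 0.0
        for s in seq:
            if event_prob[s] > C / L:
                total += C                       # pay to protect
            elif rng.random() < event_prob[s]:
                total += L                       # caught unprotected
        return total

    costs = [expense(simulate()) for _ in range(2000)]
    print("mean decision expense:", np.mean(costs))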

Full access
Justin G. McLay
and
Jonathan E. Martin

Abstract

Two regional local energetics composites of tropospheric-deep cyclone decay were constructed based upon 49 cyclones in the Gulf of Alaska region and 18 cyclones in the Bering Sea region whose decay was marked by rapid surface cyclolysis. Both composites indicate that surface drag is only a secondary sink of eddy kinetic energy (EKE) during the decay. This result holds even when a generous accounting is made for uncertainty in the surface drag calculation. The subordinate role of surface drag in the Gulf of Alaska region composite is particularly interesting, given that the cyclones in this composite decay in close proximity to rugged and extensive high-elevation terrain. Both composites also display two of the fundamental characteristics of the downstream development model of cyclone decay: the role of radiative dispersion as the chief sink of EKE during decay, and the occurrence of prominent downstream EKE dispersion. Furthermore, the two composites illustrate that an unusually pronounced decline in baroclinic conversion occurs simultaneously with the intense radiative dispersion. Taken together, these results suggest that the energetic decay of cyclones marked by rapid surface cyclolysis is driven from the upper troposphere, not from the surface.

Some notable differences also emerge from the two composites. Considerable downstream development occurs in the immediate vicinity of the decaying cyclone in the Bering Sea region composite, but not in the Gulf of Alaska region composite. Meanwhile, the areal extent of the downstream dispersion is greater in the Gulf of Alaska region composite. The latter circumstance suggests that decay events in the Gulf of Alaska region, while not producing significant downstream development in their near vicinity, may have important energetic implications for subsequent development farther downstream over North America. The composites also indicate that the decline of EKE in the vicinity of the decaying cyclone is more pronounced in the Gulf of Alaska region. In the Bering Sea region composite, this EKE is maintained via a persistent convergence of ageostrophic geopotential flux (AGF) that emanates from regions well south of the primary cyclone. Similar evidence for the influence of upstream disturbances on the cyclone decay does not appear in the Gulf of Alaska region composite.
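
For orientation, local energetics diagnostics of this kind are typically built on an eddy kinetic energy budget of roughly the following schematic form (a hedged sketch; exact formulations and sign conventions vary by study):

    \frac{\partial K_e}{\partial t} \;\approx\;
        \underbrace{-\,\nabla\cdot(\mathbf{v}\,K_e)}_{\text{advection}}
        \;\underbrace{-\,\nabla\cdot(\mathbf{v}_a\,\phi_e)}_{\text{AGF convergence (dispersion)}}
        \;\underbrace{-\,\omega_e\,\alpha_e}_{\text{baroclinic conversion}}
        \;+\;D

where D lumps friction (including surface drag) with residual effects. In these terms, the composites’ central result is that the ageostrophic geopotential flux term, not D, dominates the EKE sink during decay.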

Full access
Justin G. McLay
and
Elizabeth Satterfield

Abstract

A forecast “bust” or “dropout” can be defined as an intermittent but significant loss of model forecast performance. Deterministic forecast dropouts are typically defined in terms of the 500-hPa geopotential height (Φ500) anomaly correlation coefficient (ACC) in the Northern Hemisphere (NH) dropping below a predefined threshold. This study first presents a multimodel comparison of dropouts in the Navy Global Environmental Model (NAVGEM) deterministic forecast with the ensemble control members from the Environment and Climate Change Canada (ECCC) Global Ensemble Prediction System (GEPS) and the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS). Then, the relationship between dropouts and large-scale pattern variability is investigated, focusing on the temporal variability and correlation of flow indices surrounding dropout events. Finally, three severe dropout events are examined from an ensemble perspective. The main findings of this work are the following: 1) forecast dropouts exhibit some correspondence across models; 2) although forecast dropouts do not have a single cause, the most severe dropouts in NAVGEM can be linked to specific behavior of the large-scale flow indices, that is, they tend to follow periods of rapidly escalating volatility of the flow indices, and they tend to occur during intervals when the Arctic Oscillation (AO) and Pacific–North American (PNA) indices exhibit unusually strong interdependence; and 3) for the dropout events examined from an ensemble perspective, the NAVGEM ensemble spread does not provide a strong signal of elevated potential for very large forecast errors.
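
In code form, the dropout definition above amounts to thresholding an anomaly correlation. A minimal sketch with synthetic fields and an illustrative threshold (operational thresholds, regions, and latitude weighting differ by center):

    import numpy as np

    def anomaly_correlation(fcst, verif, clim, weights=None):
        """Centered anomaly correlation coefficient (ACC) between a forecast
        field and the verifying analysis, with anomalies taken relative to
        climatology (e.g., NH 500-hPa geopotential height)."""
        fa, va = fcst - clim, verif - clim
        w = np.ones_like(fa) if weights is None else weights  # e.g., cos(lat)
        fa = fa - np.average(fa, weights=w)
        va = va - np.average(va, weights=w)
        cov = np.average(fa * va, weights=w)
        den = np.sqrt(np.average(fa**2, weights=w) * np.average(va**2, weights=w))
        return cov / den

    DROPOUT_THRESHOLD = 0.85  # illustrative only; operational values vary

    rng = np.random.default_rng(2)
    clim = np.zeros(1000)                        # toy climatology
    verif = rng.standard_normal(1000)            # toy verifying analysis
    fcst = verif + 0.5 * rng.standard_normal(1000)
    acc = anomaly_correlation(fcst, verif, clim)
    print(f"ACC = {acc:.3f}; dropout: {acc < DROPOUT_THRESHOLD}")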

Free access
Qingyun Zhao
,
Tracy Haack
,
Justin McLay
, and
Carolyn Reynolds

Abstract

An ensemble forecast system has been developed at the Naval Research Laboratory to improve the analyses and forecasts of atmospheric refractivity for electromagnetic (EM) propagation, with the intention of accounting for uncertainty in model forecast error. Algorithms for a suite of ensemble statistics have been developed to analyze the probability, location, intensity, and structure of various types of ducting. Major parameters of ducting layers and their ensemble statistics are calculated from the ensemble forecasts, and their relationships to the large-scale and mesoscale environment are investigated. The Wallops Island field experiment of late April to early May 2000 is selected to evaluate the system. During the spring season this coastal region maintains a strong sea surface temperature gradient between cold shelf waters and the warm Gulf Stream, and the boundaries between land, the coastal water, and the Gulf Stream have a strong influence on marine boundary layer structures and the formation of ducting layers. Sounding profiles from the field experiment are used to further examine the structure of the ducting layers and to validate the ensemble forecast system. The study demonstrates some advantages of the ensemble system over the deterministic forecast for refractivity prediction in the boundary layer, while also revealing weaknesses of the current ensemble system to be addressed in future improvements.
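
One of the simpler ensemble ducting statistics can be sketched directly: a trapping layer exists where the vertical gradient of modified refractivity M is negative, so the ensemble probability of ducting at a level is the fraction of members with dM/dz < 0 there. The profiles below are synthetic, and the paper’s full algorithms go further (duct location, intensity, and structure):

    import numpy as np

    rng = np.random.default_rng(3)
    n_members, n_levels = 20, 60
    z = np.linspace(0.0, 600.0, n_levels)   # height (m)

    # Synthetic ensemble of modified-refractivity profiles M(z): a standard
    # background increase of ~0.118 M-units per meter plus a perturbed
    # inversion-like dip that can create a trapping layer near ~200 m.
    base = 330.0 + 0.118 * z
    h = 200.0 + 30.0 * rng.standard_normal(n_members)   # dip height per member
    d = 8.0 + 4.0 * rng.standard_normal(n_members)      # dip depth per member
    M = base[None, :] - d[:, None] * np.exp(-(((z[None, :] - h[:, None]) / 40.0) ** 2))

    # A duct (trapping layer) exists where dM/dz < 0; the ensemble ducting
    # probability at each level is the fraction of members satisfying this.
    dMdz = np.gradient(M, z, axis=1)
    duct_prob = (dMdz < 0.0).mean(axis=0)

    k = int(np.argmax(duct_prob))
    print(f"peak ducting probability {duct_prob[k]:.2f} at z = {z[k]:.0f} m")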

Full access
Carolyn A. Reynolds
,
Joao Teixeira
, and
Justin G. McLay

Abstract

The impact of stochastic convection on ensembles produced using the ensemble transform (ET) initial perturbation scheme is examined. This note compares the behavior of ensemble forecasts based only on initial ET perturbations with that of ensemble forecasts based on the ET initial perturbations plus stochastic convection. Although stochastic convection occurs only after the forecast integrations have started, it is shown to induce changes in the initial perturbations as well. This is because the ET is a “cycling” scheme, in which previous short-term forecasts are used to produce the initial perturbations for the current forecast. The stochastic convection scheme induces rapid perturbation growth in regions where convection is active, primarily in the tropics. When combined with the ET scheme, this results in larger initial perturbation variance in the tropics and, because of a global constraint on total initial perturbation variance, smaller initial perturbation variance in the extratropics. Thus, the inclusion of stochastic convection helps to mitigate a problem found in the practical implementation of the ET, namely, too little initial variance in the tropics and too much in the extratropics. Various skill scores show that stochastic convection improves ensemble performance in the tropics, with impacts in the extratropics ranging from neutral to modestly positive. Experiments that use the initial perturbations from the control ensemble but include the stochastic convection scheme in the forecast integrations indicate that the improved performance of the stochastic convection ensemble at early forecast times is due both to “indirect” changes in the initial perturbations and to “direct” changes in the forecast. At later forecast times, most of the improvement appears attainable through stochastic convection alone.
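
The global-constraint mechanism can be seen in a toy variance budget: if the total initial perturbation variance is held fixed, any extra variance contributed by the tropics (here, via stochastic convection acting on the cycled short-term forecasts) must come at the expense of the extratropics. All numbers are invented:

    # Toy two-band variance budget illustrating the ET's global constraint.
    var_tropics, var_extratropics = 1.0, 4.0   # invented control partition
    total = var_tropics + var_extratropics     # fixed global target

    # Stochastic convection inflates short-forecast perturbation growth in
    # the tropics before the next ET cycle rescales to the same global total.
    grown_tropics = 2.5 * var_tropics
    grown_extratropics = 1.0 * var_extratropics

    scale = total / (grown_tropics + grown_extratropics)
    print(scale * grown_tropics, scale * grown_extratropics)
    # -> the tropics end up with a larger share, the extratropics a smaller one.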

Full access
Justin G. McLay
,
Craig H. Bishop
, and
Carolyn A. Reynolds

Abstract

The ensemble transform (ET) scheme changes forecast perturbations into analysis perturbations whose amplitudes and directions are consistent with a user-provided estimate of analysis error covariance. A practical demonstration of the ET scheme was undertaken using Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System (NAVDAS) analysis error variance estimates and the Navy Operational Global Atmospheric Prediction System (NOGAPS) numerical weather prediction (NWP) model. The ET scheme produced forecast ensembles that were comparable to or better than those produced by the Fleet Numerical Meteorology and Oceanography Center (FNMOC) bred-growing modes (BGM) scheme in a variety of measures. Also, the demonstration showed that the introduction of stochastic perturbations into the ET forecast ensembles led to a substantial improvement in the agreement between the ET and NAVDAS analysis error variances. This finding is strong evidence that even a small ET ensemble is capable of good agreement between the ET and NAVDAS analysis error variances, provided that NWP model deficiencies are accounted for. Last, because the NAVDAS analysis error covariance estimate is diagonal and hence ignores multivariate correlations, it was of interest to examine the spatial correlation of the ET analysis perturbations. Tests showed that the ET analysis perturbations exhibited statistically significant, realistic multivariate correlations.
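
A minimal sketch of the core transform, assuming a diagonal analysis error covariance as in the NAVDAS estimates used here: solve the ensemble-space eigenproblem and rescale the forecast perturbations so that their covariance is consistent with the analysis error variances. Cycling, stochastic perturbations, and the bookkeeping of an operational system are omitted:

    import numpy as np

    rng = np.random.default_rng(4)
    n, K = 200, 8                          # toy state dimension, ensemble size
    Xf = rng.standard_normal((n, K))
    Xf -= Xf.mean(axis=1, keepdims=True)   # centered forecast perturbations
    a_var = 0.1 + rng.random(n)            # diagonal analysis error variances (A)

    # Ensemble-space eigenproblem: Xf^T A^{-1} Xf / (K-1) = C diag(g) C^T.
    S = (Xf.T * (1.0 / a_var)) @ Xf / (K - 1)
    g, C = np.linalg.eigh(S)

    # One eigenvalue is ~0 because centered perturbations sum to zero;
    # transform only the well-conditioned directions.
    keep = g > 1e-10 * g.max()
    T = C[:, keep] * g[keep] ** -0.5       # transform matrix
    Xa = Xf @ T                            # analysis perturbations

    # Check: the analysis perturbations have unit covariance in the A^{-1}
    # norm, i.e., amplitudes consistent with the analysis error estimate.
    check = (Xa.T * (1.0 / a_var)) @ Xa / (K - 1)
    print(np.allclose(check, np.eye(int(keep.sum())), atol=1e-8))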

Full access
Daniel Hodyss
,
Justin G. McLay
,
Jon Moskaitis
, and
Efren A. Serra

Abstract

Stochastic parameterization has become commonplace in numerical weather prediction (NWP) models used for probabilistic prediction. Here a specific stochastic parameterization is related to the theory of stochastic differential equations and shown to be strongly affected by the choice of stochastic calculus. From an NWP perspective the focus is on ameliorating a common trait of the ensemble distributions of tropical cyclone (TC) tracks (or positions), namely, that they generally contain a bias and an underestimate of the variance. With this trait in mind, the authors present a stochastic track variance inflation parameterization. This parameterization makes use of a properly constructed stochastic advection term that follows a TC and induces its position to undergo Brownian motion. A central characteristic of Brownian motion is that its variance increases with time, which allows for an effective inflation of an ensemble’s TC track variance. Using this stochastic parameterization, the authors compare the behavior of TCs under the stochastic calculi of Itô and Stratonovich within an operational NWP model. The central difference between these two perspectives, as it pertains to TCs, is shown to be properly predicted by the stochastic calculus and the Itô correction. In the cases presented here these differences manifest as overly intense TCs, which, depending on the strength of the forcing, could lead to problems with numerical stability and physical realism.
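
The calculus dependence can be demonstrated on a scalar toy problem rather than a TC track: for multiplicative noise dX = σ(X) dW, the Itô and Stratonovich interpretations differ by the Itô correction ½σ(∂σ/∂x), and numerical schemes built on each diverge accordingly. A hedged sketch with illustrative parameters, not the paper’s parameterization:

    import numpy as np

    rng = np.random.default_rng(5)
    a, dt, n_steps, n_paths = 0.5, 1e-3, 2000, 20000

    def sigma(x):
        """Multiplicative noise amplitude sigma(X) = a*X."""
        return a * x

    x_ito = np.ones(n_paths)
    x_str = np.ones(n_paths)
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        # Euler-Maruyama evaluates sigma at the step start (Ito).
        x_ito = x_ito + sigma(x_ito) * dW
        # Heun-type midpoint evaluation of sigma (Stratonovich).
        pred = x_str + sigma(x_str) * dW
        x_str = x_str + 0.5 * (sigma(x_str) + sigma(pred)) * dW

    # The calculi differ by the Ito correction 0.5*sigma*(dsigma/dx): the
    # Ito mean stays near 1, the Stratonovich mean grows like exp(a^2 t / 2).
    t = n_steps * dt
    print(x_ito.mean(), x_str.mean(), np.exp(0.5 * a**2 * t))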

Full access
Daniel Hodyss
,
Elizabeth Satterfield
,
Justin McLay
,
Thomas M. Hamill
, and
Michael Scheuerer

Abstract

Ensemble postprocessing is frequently applied to correct biases and deficiencies in the spread of ensemble forecasts. Methods involving weighted, regression-corrected forecasts address the typical biases and underdispersion of ensembles through a regression correction of ensemble members, followed by the generation of a probability density function (PDF) from the weighted sum of kernels fit around each corrected member. The weighting step accounts for the situation where the ensemble is constructed from different model forecasts or generated in some way that creates ensemble members that do not represent equally likely states. In the present work, it is shown that weighted, regression-corrected forecasts can overweight climatology when the regression-based correction is performed before the members are weighted. This overweighting of climatology increases the mean-squared error of the mean of the predicted PDF. The effect is illustrated in a simulation study and a real-data study, where the reference is generated through a direct application of Bayes’s rule. The real-data example compares a particular method referred to as Bayesian model averaging (BMA) with a direct application of Bayes’s rule for ocean wave heights using U.S. Navy and National Weather Service global deterministic forecasts. The direct application of Bayes’s rule is shown not to overweight climatology and may be a low-cost replacement for the generally more expensive weighted, regression-correction methods.
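
The weighted, regression-corrected construction has a compact generic form: correct each member as a_k + b_k f_k, then form the predictive PDF as the w_k-weighted sum of kernels centered on the corrected members. A toy sketch with invented, pre-fitted parameters (in practice they are estimated from training data, e.g., by EM in BMA):

    import numpy as np

    # Toy two-member "multimodel" wave-height forecasts (m).
    f = np.array([2.3, 2.9])

    # Invented, pre-fitted parameters: per-member regression corrections
    # a_k + b_k * f_k, member weights w_k, and a common kernel spread.
    a = np.array([0.10, -0.20])
    b = np.array([0.95, 1.05])
    w = np.array([0.60, 0.40])
    sigma = 0.4

    def predictive_pdf(y):
        """Weighted sum of Gaussian kernels centered on the
        regression-corrected members."""
        mu = a + b * f
        k = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2)
        return (k / (sigma * np.sqrt(2.0 * np.pi))) @ w

    y = np.linspace(0.0, 5.0, 501)
    pdf = predictive_pdf(y)
    print("predictive mean:", float(w @ (a + b * f)))  # mixture mean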

Full access
Qingyun Zhao
,
Qin Xu
,
Yi Jin
,
Justin McLay
, and
Carolyn Reynolds

Abstract

The time-expanded sampling (TES) method, designed to improve the effectiveness and efficiency of ensemble-based data assimilation and the subsequent forecast with a reduced ensemble size, is tested with conventional and satellite data for operational applications constrained by computational resources. The test uses the recently developed ensemble Kalman filter (EnKF) at the Naval Research Laboratory (NRL) for mesoscale data assimilation with the U.S. Navy’s mesoscale numerical weather prediction model. Experiments are performed for a period of 6 days with a continuous 12-h update cycle. Results show remarkable improvements in both the ensemble analyses and the forecasts with TES compared to those without. The improvements in the EnKF analyses from TES are very similar across the model’s three nested grids of 45-, 15-, and 5-km grid spacing. This study demonstrates the usefulness of the TES method for ensemble-based data assimilation when the ensemble size cannot be made sufficiently large because of operational constraints, such as the need for time-critical environmental assessment or limited computational resources.
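
The TES idea reduces to augmenting the ensemble with time-shifted model states. A toy sketch assuming symmetric offsets and equal weighting of the shifted members (the paper’s sampling interval and weighting may differ):

    import numpy as np

    rng = np.random.default_rng(7)
    n_state, n_members = 100, 8

    # Toy model states at the analysis time t0 and at +/- one sampling
    # offset (in practice, model output shortly before and after t0).
    ens_minus = rng.standard_normal((n_state, n_members))
    ens_t0 = rng.standard_normal((n_state, n_members))
    ens_plus = rng.standard_normal((n_state, n_members))

    # Time-expanded sampling: treat the time-shifted states as extra
    # members, enlarging the sample without additional model runs.
    ens_tes = np.concatenate([ens_minus, ens_t0, ens_plus], axis=1)
    print(ens_tes.shape)   # (100, 24): 3x the members

    # The enlarged sample feeds the EnKF background covariance estimate.
    Xp = ens_tes - ens_tes.mean(axis=1, keepdims=True)
    B = Xp @ Xp.T / (ens_tes.shape[1] - 1)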

Full access