Search Results

Showing 1–10 of 29 items for:

  • Author or Editor: Sue Ellen Haupt
Grant Branstator and Sue Ellen Haupt

Abstract

A linear empirical model of barotropic atmospheric dynamics is constructed in which the streamfunction tendency field is optimally predicted using the concurrent streamfunction state as a predictor. The prediction equations are those resulting from performing a linear regression between tendency and state vectors. Based on the formal analogy between this model and the linear nondivergent barotropic vorticity equation, this empirical model is applied to problems normally addressed with a conventional model based on physical principles. It is found to qualitatively represent the horizontal dispersion of energy and to skillfully predict how a general circulation model will respond to steady tropical heat sources. Analysis of model solutions indicates that the empirical model’s dynamics include processes that are not represented by conventional nondivergent linear models. Most significantly, the influence of internally generated midlatitude divergence anomalies and of anomalous vorticity fluxes by high-frequency transients associated with low-frequency anomalies are automatically incorporated into the empirical model. The results suggest the utility of empirical models of atmospheric dynamics in situations where estimates of the response to external forcing are needed or as a standard of comparison in efforts to make models based on physical principles more complete.
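The regression step that defines such an empirical operator can be sketched in a few lines (a toy illustration with synthetic data; the dimensions, noise level, and variable names are assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the streamfunction state: a "true" linear operator
# maps state to tendency, plus noise representing unresolved processes.
n_modes, n_samples = 5, 2000
L_true = 0.1 * rng.standard_normal((n_modes, n_modes))
states = rng.standard_normal((n_modes, n_samples))
tendencies = L_true @ states + 0.01 * rng.standard_normal((n_modes, n_samples))

# Linear regression of tendency on concurrent state:
# L_emp = C_ts @ inv(C_ss), built from cross- and auto-covariance matrices.
C_ts = tendencies @ states.T / n_samples
C_ss = states @ states.T / n_samples
L_emp = C_ts @ np.linalg.inv(C_ss)

# The steady response to a constant forcing f solves L_emp @ x + f = 0,
# the analog of computing the response to a steady tropical heat source.
f = rng.standard_normal(n_modes)
x_steady = -np.linalg.solve(L_emp, f)
```

With enough samples the regression recovers the underlying operator, which is what lets the empirical model stand in for a dynamically derived one.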

Full access
Sue Ellen Haupt, George S. Young, and Christopher T. Allen

Abstract

A methodology for characterizing emission sources is presented that couples a dispersion and transport model with a pollution receptor model. This coupling allows the use of the backward (receptor) model to calibrate the forward (dispersion) model, potentially across a wide range of meteorological conditions. Moreover, by using a receptor model one can calibrate from observations taken in a multisource setting. This approach offers practical advantages over calibrating via single-source artificial release experiments. A genetic algorithm is used to optimize the source calibration factors that couple the two models. The ability of the genetic algorithm to correctly couple these two models is demonstrated for two separate source–receptor configurations using synthetic meteorological and receptor data. The calibration factors underlying the synthetic data are successfully reconstructed by this optimization process. A Monte Carlo technique is used to compute error bounds for the resulting estimates of the calibration factors. By creating synthetic data with random noise, it is possible to quantify the robustness of the model's results in the face of variability. When white noise is incorporated into the synthetic pollutant signal at the receptors, the genetic algorithm is still able to compute the calibration factors of the coupled model up to a signal-to-noise ratio of about 2. Beyond that level of noise, the average of many coupled model optimization runs still provides a reasonable estimate of the calibration factor until the noise is an order of magnitude greater than the signal. The calibration factor linking the dispersion to the receptor model provides an estimate of the uncertainty in the combined monitoring and modeling process. This approach acknowledges the mismatch between ensemble-average dispersion modeling and the single realization of the atmosphere captured by monitored data.
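The optimization step can be illustrated with a deliberately small genetic algorithm (the transfer matrix `A`, the population size, and the operator choices below are assumptions for the sketch, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 2 sources, 6 receptors. A stands in for the
# dispersion model's source-to-receptor transfer, and the "observations"
# are synthetic, noise-free receptor readings.
A = rng.uniform(0.1, 1.0, size=(6, 2))
true_factors = np.array([1.5, 0.7])
observed = A @ true_factors

def fitness(pop):
    # negative sum of squared receptor residuals for each candidate
    return -(((pop @ A.T) - observed) ** 2).sum(axis=1)

pop = rng.uniform(0.0, 3.0, size=(40, 2))      # candidate calibration factors
for _ in range(200):
    f = fitness(pop)
    # tournament selection: keep the fitter of two random candidates
    i, j = rng.integers(40, size=(2, 40))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # blend crossover between shifted parent pairs
    alpha = rng.uniform(size=(40, 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, then elitism: slot 0 keeps the best so far
    children += rng.normal(0.0, 0.05, size=children.shape)
    children[0] = pop[np.argmax(f)]
    pop = children

best = pop[np.argmax(fitness(pop))]
```

The GA needs only a fitness function relating candidate factors to receptor readings, which is what makes it suitable for coupling a forward dispersion model to backward receptor data.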

Full access
Sue Ellen Haupt, James C. McWilliams, and Joseph J. Tribbia

Abstract

Modons in shear flow are computed as equilibrium solutions of the equivalent barotropic vorticity equation using a numerical Newton–Kantorovich iterative technique with double Fourier spectral expansion. The model is given a first guess of an exact prototype modon with a small shear flow imposed, then iterated to an equilibrium solution. Continuation (small-step extrapolation of the shear amplitude) is used to produce examples of modons embedded in moderate amplitude background shear flows. It is found that in the presence of symmetric shear, the modon is strengthened relative to the prototype. The best-fit phase speed for this case is significantly greater than the Doppler-shifted speed. Nonsymmetric shear strengthens the poles selectively: positive shear strengthens the low while weakening the high. The diagnosed functional relationship between the streamfunction in the traveling reference frame and the vorticity appears linear for all types of shear studied. The modons in symmetric shear are stable within time integrations, at least for small to moderate shear amplitude. Antisymmetric shear appears to trigger a tilting instability of the stationary state; yet coherence of the modon is maintained. This study strengthens the plausibility of using modons as a model of coherent structures in geophysical flow.
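The Newton–Kantorovich-with-continuation strategy can be shown on a scalar stand-in (the cubic below is merely a placeholder for the spectral residual of the vorticity equation; the step count and tolerances are illustrative assumptions):

```python
import numpy as np

def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Newton iteration for a scalar equation F(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Continuation in the parameter s: at each small step in s, the previous
# equilibrium serves as the first guess, just as the modon solutions are
# continued in shear amplitude starting from the exact prototype.
x = 1.0                      # known exact solution of x**3 - x = 0 at s = 0
for s in np.linspace(0.0, 1.0, 21)[1:]:
    x = newton(lambda t: t**3 - t - s, lambda t: 3 * t**2 - 1, x)
```

Small continuation steps keep each first guess inside Newton's basin of attraction, which is why the method can follow the equilibrium branch to moderate shear amplitudes.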

Full access
Steven J. Greybush, Sue Ellen Haupt, and George S. Young

Abstract

Previous methods for creating consensus forecasts weight individual ensemble members based upon their relative performance over the previous N days, implicitly making a short-term persistence assumption about the underlying flow regime. A postprocessing scheme in which model performance is linked to underlying weather regimes could improve the skill of deterministic ensemble model consensus forecasts. Here, principal component analysis of several synoptic- and mesoscale fields from the North American Regional Reanalysis dataset provides an objective means for characterizing atmospheric regimes. Clustering techniques, including K-means and a genetic algorithm, are developed that use the resulting principal components to distinguish among the weather regimes. This pilot study creates a weighted consensus from 48-h surface temperature predictions produced by the University of Washington Mesoscale Ensemble, a varied-model (differing physics and parameterization schemes) multianalysis ensemble with eight members. Different optimal weights are generated for each weather regime. A second regime-dependent consensus technique uses linear regression to predict the relative performance of the ensemble members based upon the principal components. Consensus forecasts obtained by the regime-dependent schemes are compared using cross validation with traditional N-day ensemble consensus forecasts for four locations in the Pacific Northwest, and show improvement over methods that rely on the short-term persistence assumption.
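A stripped-down version of the PC-plus-clustering step might look like this (synthetic two-regime data; the field dimension, number of regimes, and k-means details are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "days" drawn from two regimes with different mean fields
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)) + 3.0,
               rng.normal(0.0, 1.0, (100, 20)) - 3.0])

# Principal components of the anomaly matrix via SVD
anom = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = anom @ Vt[:2].T                      # leading two PCs per day

# Plain k-means (k = 2) on the PC coordinates
centers = pcs[[0, -1]]                     # start from two far-apart days
for _ in range(50):
    labels = np.argmin(((pcs[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([pcs[labels == k].mean(axis=0) for k in range(2)])
```

Each regime label would then get its own set of ensemble-member weights, fit only on days assigned to that regime.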

Full access
Christopher T. Allen, Sue Ellen Haupt, and George S. Young

Abstract

This paper extends the approach of coupling a forward-looking dispersion model with a backward model using a genetic algorithm (GA) by incorporating a more sophisticated dispersion model [the Second-Order Closure Integrated Puff (SCIPUFF) model] into a GA-coupled system. This coupled system is validated with synthetic and field experiment data to demonstrate the potential applicability of the coupled model to emission source characterization. The coupled model incorporating SCIPUFF is first validated with synthetic data produced by SCIPUFF to isolate issues related directly to SCIPUFF’s use in the coupled model. The coupled model is successful in characterizing sources even with a moderate amount of white noise introduced into the data. The similarity to corresponding results from previous studies using a more basic model suggests that the GA’s performance is not sensitive to the dispersion model used. The coupled model is then tested using data from the Dipole Pride 26 field tests to determine its ability to characterize actual pollutant measurements despite the stochastic scatter inherent in turbulent dispersion. Sensitivity studies are run on various input parameters to gain insight used to produce a multistage process capable of a higher-quality source characterization than that produced by a single pass. Overall, the coupled model performed well in identifying approximate locations, times, and amounts of pollutant emissions. These model runs demonstrate the coupled model’s potential application to source characterization for real-world problems.

Full access
David John Gagne II, Sue Ellen Haupt, Douglas W. Nychka, and Gregory Thompson

Abstract

Deep learning models, such as convolutional neural networks, utilize multiple specialized layers to encode spatial patterns at different scales. In this study, deep learning models are compared with standard machine learning approaches on the task of predicting the probability of severe hail based on upper-air dynamic and thermodynamic fields from a convection-allowing numerical weather prediction model. The data for this study come from patches surrounding storms identified in NCAR convection-allowing ensemble runs from 3 May to 3 June 2016. The machine learning models are trained to predict whether the simulated surface hail size from the Thompson hail size diagnostic exceeds 25 mm over the hour following storm detection. A convolutional neural network is compared with logistic regressions using input variables derived from either the spatial means of each field or principal component analysis. The convolutional neural network statistically significantly outperforms all other methods in terms of Brier skill score and area under the receiver operating characteristic curve. Interpretation of the convolutional neural network through feature importance and feature optimization reveals that the network synthesized information about the environment and storm morphology that is consistent with our understanding of hail growth, including large lapse rates and a wind shear profile that favors wide updrafts. Different neurons in the network also record different storm modes, and the magnitude of the output of those neurons is used to analyze the spatiotemporal distributions of different storm modes in the NCAR ensemble.
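Of the verification measures mentioned, the Brier skill score is simple enough to sketch directly (a generic implementation of the standard definition, not code from the study):

```python
import numpy as np

def brier_skill_score(prob, obs, climo=None):
    """Brier skill score of probabilistic forecasts against a
    climatological reference; prob are event probabilities (here, the
    chance of hail exceeding 25 mm), obs are 0/1 outcomes."""
    prob = np.asarray(prob, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bs = np.mean((prob - obs) ** 2)              # Brier score of the forecast
    ref = obs.mean() if climo is None else climo
    bs_ref = np.mean((ref - obs) ** 2)           # Brier score of climatology
    return 1.0 - bs / bs_ref

# A perfect forecast scores 1; the climatological forecast scores 0.
obs = np.array([1, 0, 0, 1, 0, 0, 0, 1])
print(brier_skill_score(obs, obs))                     # 1.0
print(brier_skill_score(np.full(8, obs.mean()), obs))  # 0.0
```

Positive values mean the forecast beats climatology, which is the sense in which the convolutional network "outperforms" the logistic-regression baselines.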

Full access
Sue Ellen Haupt, Robert M. Rauber, Bruce Carmichael, Jason C. Knievel, and James L. Cogan

Abstract

The field of atmospheric science has been enhanced by its long-standing collaboration with entities with specific needs. This chapter and the two subsequent ones describe how applications have worked to advance the science at the same time that the science has served the needs of society. This chapter briefly reviews the synergy between these applications and the advancement of the science. It specifically describes progress in weather modification, aviation weather, and applications for security. Each of these applications has resulted in enhanced understanding of the physics and dynamics of the atmosphere, new and improved observing equipment, better models, and a push for greater computing power.

Full access
Sue Ellen Haupt, Jeffrey Copeland, William Y. Y. Cheng, Yongxin Zhang, Caspar Ammann, and Patrick Sullivan

Abstract

The National Center for Atmospheric Research and the National Renewable Energy Laboratory (NREL) collaborated to develop a method to assess the interannual variability of wind and solar power over the contiguous United States under current and projected future climate conditions, for use with NREL’s Regional Energy Deployment System (ReEDS) model. The team leveraged a reanalysis-derived database to estimate the wind and solar power resources and their interannual variability under current climate conditions (1985–2005). Then, a projected future climate database for the time range of 2040–69 was derived on the basis of the North American Regional Climate Change Assessment Program (NARCCAP) regional climate model (RCM) simulations driven by free-running atmosphere–ocean general circulation models. To compare current and future climate variability, the team developed a baseline by decomposing the current climate reanalysis database into self-organizing maps (SOMs) to determine the predominant modes of variability. The current climate patterns found were compared with those of a NARCCAP-based future climate scenario, and the CRCM–CCSM combination was chosen to describe the future climate scenario. Data from the future climate scenario were projected onto the Climate Four Dimensional Data Assimilation reanalysis SOMs. The projected future climate database was then created by resampling the reanalysis on the basis of the frequency of occurrence of the future SOM patterns, adjusting for the differences in magnitude of the wind speed or solar irradiance between the current and future climate conditions. Comparison of the changes in the frequency of occurrence of the SOM modes between current and future climate conditions indicates that the annual mean wind speed and solar irradiance could be expected to change by up to 10% (increasing or decreasing regionally).
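The frequency-of-occurrence resampling step can be sketched with a small stand-in (the number of SOM nodes, the labels, and the future frequencies below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in: 4 SOM nodes; each current-climate day carries the
# label of its best-matching node, and each node has a frequency of
# occurrence under the future-climate scenario.
node_of_day = rng.integers(0, 4, size=1000)          # current-climate labels
future_freq = np.array([0.40, 0.25, 0.20, 0.15])     # future node frequencies

# Resample current-climate days so node frequencies match the future ones:
# first draw a node according to future_freq, then draw a day from that node.
days_by_node = [np.flatnonzero(node_of_day == k) for k in range(4)]
drawn_nodes = rng.choice(4, size=5000, p=future_freq)
resampled = np.array([rng.choice(days_by_node[k]) for k in drawn_nodes])

# Empirical node frequencies of the resampled record
freq = np.bincount(node_of_day[resampled], minlength=4) / len(resampled)
```

Because every resampled day is a real reanalysis day, the resampled record keeps physically consistent weather while its regime statistics follow the future scenario; the magnitude adjustment described in the abstract would be applied on top of this.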

Full access
Walter C. Kolczynski Jr., David R. Stauffer, Sue Ellen Haupt, Naomi S. Altman, and Aijun Deng

Abstract

The uncertainty in meteorological predictions is of interest for applications ranging from economic to recreational to public safety. One common method to estimate uncertainty is by using meteorological ensembles. These ensembles provide an easily quantifiable measure of the uncertainty in the forecast in the form of the ensemble variance. However, ensemble variance may not accurately reflect the actual uncertainty, so any measure of uncertainty derived from the ensemble should be calibrated to provide a more reliable estimate of the actual uncertainty in the forecast. A previous study introduced the linear variance calibration (LVC) as a simple method to determine the relationship between ensemble variance and error variance and demonstrated this technique on real ensemble data. The LVC parameters (the slopes and y intercepts), however, are generally different from their ideal values.

The present study uses a stochastic model to examine the LVC in a controlled setting. The stochastic model is capable of simulating underdispersive and overdispersive ensembles as well as perfectly reliable ensembles. Because the underlying relationship is specified, LVC results can be compared to theoretical values of the slope and y intercept. Results indicate that all types of ensembles produce calibration slopes that are smaller than their theoretical values for ensemble sizes less than several hundred members, with corresponding y intercepts greater than their theoretical values. This indicates that all ensembles, even otherwise perfect ensembles, should be calibrated if the ensemble size is less than several hundred.

In addition, it is shown that an adjustment factor can be computed for inadequate ensemble size. This adjustment factor is independent of the stochastic model and is applicable to any linear regression of error variance on ensemble variance. When applied to experiments using the stochastic model, the adjustment produces LVC parameters near their theoretical values for all ensemble sizes. Although the adjustment is unnecessary when applying LVC, it allows for a more accurate assessment of the reliability of ensembles, and a fair comparison of the reliability for differently sized ensembles.
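The central behavior, that even a reliable ensemble regresses to a slope below the theoretical value, is easy to reproduce with a small stochastic sketch (the distributions and ensemble size below are invented, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(4)

n_cases, n_members = 20000, 10

# Perfectly reliable synthetic ensemble: for each case the truth and the
# members are drawn from the same distribution, whose spread varies by case.
sigma = rng.uniform(0.5, 2.0, size=n_cases)
truth = sigma * rng.standard_normal(n_cases)
members = sigma[:, None] * rng.standard_normal((n_cases, n_members))

ens_var = members.var(axis=1, ddof=1)              # ensemble variance
sq_err = (members.mean(axis=1) - truth) ** 2       # squared error of the mean

# Linear variance calibration: regress squared error on ensemble variance
slope, intercept = np.polyfit(ens_var, sq_err, 1)
```

Even though this ensemble is reliable by construction, the fitted slope falls well below one with a positive intercept, because the ensemble variance is itself a noisy estimate of the true variance; the noise shrinks as the member count grows, consistent with the several-hundred-member threshold in the abstract.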

Full access
Jared A. Lee, Walter C. Kolczynski, Tyler C. McCandless, and Sue Ellen Haupt

Abstract

Ensembles of numerical weather prediction (NWP) model predictions are used for a variety of forecasting applications. Such ensembles quantify the uncertainty of the prediction because the spread in the ensemble predictions is correlated to forecast uncertainty. For atmospheric transport and dispersion and wind energy applications in particular, the NWP ensemble spread should accurately represent uncertainty in the low-level mean wind. To adequately sample the probability density function (PDF) of the forecast atmospheric state, it is necessary to account for several sources of uncertainty. Limited computational resources constrain the size of ensembles, so choices must be made about which members to include. No known objective methodology exists to guide users in choosing which combinations of physics parameterizations to include in an NWP ensemble, however. This study presents such a methodology.

The authors build an NWP ensemble using the Advanced Research Weather Research and Forecasting Model (ARW-WRF). This 24-member ensemble varies physics parameterizations for 18 randomly selected 48-h forecast periods in boreal summer 2009. Verification focuses on 2-m temperature and 10-m wind components at forecast lead times from 12 to 48 h. Various statistical guidance methods are employed for down-selection, calibration, and verification of the ensemble forecasts. The ensemble down-selection is accomplished with principal component analysis. The ensemble PDF is then statistically dressed, or calibrated, using Bayesian model averaging. The postprocessing techniques presented here result in a recommended down-selected ensemble that is about half the size of the original ensemble yet produces similar forecast performance, and still includes critical diversity in several types of physics schemes.
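One way to picture a PCA-based down-selection is the toy below (not the paper's procedure: the member redundancy, group sizes, and keep-one-per-component rule are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented redundancy: members 0-2 share one forecast signal and members
# 3-5 share another (stronger) one, so two members span most of the spread.
base = rng.standard_normal((2, 500))
forecasts = np.vstack([base[0] + 0.1 * rng.standard_normal((3, 500)),
                       2.0 * base[1] + 0.1 * rng.standard_normal((3, 500))])

# PCA of the member-by-case anomaly matrix: each member's loading on the
# leading components reveals which redundant group it belongs to.
anom = forecasts - forecasts.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
loadings = U[:, :2] * S[:2]

# Keep one member per leading component: the one loading most strongly on it
selected = sorted({int(np.argmax(np.abs(loadings[:, k]))) for k in range(2)})
```

The selected subset retains the distinct modes of ensemble spread while discarding near-duplicates, mirroring the goal of halving the ensemble without losing physics diversity.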

Full access