Search Results

You are looking at 1–10 of 21 items for:

  • Author or Editor: Brian Ancell
  • All content
Brian C. Ancell

Abstract

Mesoscale atmospheric data assimilation is becoming an integral part of numerical weather prediction. Modern computational resources now allow assimilation and subsequent forecasting experiments ranging from resolutions of tens of kilometers over regional domains to smaller grids that employ storm-scale assimilation. To assess the value of the high-resolution capabilities involved with assimilation and forecasting at different scales, analyses and forecasts must be carefully evaluated to understand 1) whether analysis benefits gained at finer scales persist into the forecast relative to downscaled runs begun from lower-resolution analyses, 2) how the positive analysis effects of bias removal evolve into the forecast, and 3) how digital filter initialization affects analyses and forecasts. This study applies a 36- and 4-km ensemble Kalman filter over 112 assimilation cycles to address these important issues, which could all be relevant to a variety of short-term, high-resolution, real-time forecasting applications.

It is found that with regard to surface wind and temperature, analysis improvements gained at higher resolution persist throughout the 12-h forecast window relative to downscaled, high-resolution forecasts begun from analyses on the coarser grid. Aloft, however, no forecast improvements were found with the high-resolution analysis/forecast runs. Surface wind and temperature bias removal, while clearly improving surface analyses, degraded surface forecasts and showed little forecast influence aloft. Digital filter initialization degraded temperature analyses with or without bias removal, degraded wind analyses when bias removal was used, but improved wind analyses when bias removal was absent. No forecast improvements were found with digital filter initialization. The consequences of these results with regard to operational assimilation/forecasting systems on nested grids are discussed.
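As a rough illustration only (the study's exact bias-removal scheme is not described in the abstract), a running-mean surface bias correction applied at observation sites might look like the following Python sketch; the array names and averaging window are hypothetical:

    import numpy as np

    def remove_surface_bias(background_at_obs, obs, bias_history, window=20):
        """Subtract a running-mean (background minus observation) bias estimate."""
        bias_history.append(background_at_obs - obs)     # this cycle's departures
        recent = np.array(bias_history[-window:])        # last `window` assimilation cycles
        bias_estimate = recent.mean(axis=0)              # mean departure at each site
        return background_at_obs - bias_estimate, bias_history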

Full access
Brian C. Ancell

Abstract

Ensemble forecasting is becoming an increasingly important aspect of numerical weather prediction. As ensemble perturbation evolution becomes increasingly nonlinear with forecast time, the ensemble mean can diverge from the model attractor on which ensemble members are constrained. In turn, the ensemble mean can become increasingly unrealistic, and although statistically best on average, it can provide poor forecast guidance for specific high-impact events. This study uses an ensemble Kalman filter to investigate this behavior at the synoptic scale for landfalling midlatitude cyclones. This work also aims to understand the best way to select “best members” closest to the mean that both behave realistically and possess the statistically beneficial qualities of the mean. It is found that substantial nonlinearity emerges within forecast times of a day, which roughly agrees with previous research addressing synoptic-scale nonlinearity more generally. The evolving nonlinearity results in unrealistic behavior of the ensemble mean that significantly underestimates precipitation and wind speeds associated with the cyclones. Choosing a single ensemble member closest to the ensemble mean over the entire forecast window provides forecasts that are unable to produce the relatively small errors of the ensemble mean. However, since different ensemble members are closest to the ensemble mean at different forecast times, the best forecast is composed of different ensemble members throughout the forecast window. The benefits and limitations of applying this methodology to improve forecasts of synoptic-scale high-impact weather events are discussed.
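For reference, a minimal Python sketch of the closest-to-mean member selection described above, with assumed array shapes and an RMS distance metric (not necessarily the norm or fields used in the study):

    import numpy as np

    def closest_member_per_time(forecasts):
        """forecasts: array of shape (n_members, n_times, n_points)."""
        mean = forecasts.mean(axis=0)                          # ensemble mean at each time
        rms = np.sqrt(((forecasts - mean) ** 2).mean(axis=2))  # distance of each member, (n_members, n_times)
        return rms.argmin(axis=0)                              # index of the closest member at each forecast time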

Full access
Brian C. Ancell

Abstract

Ensemble sensitivity can reveal features in the flow at early forecast times that are relevant to the predictability of chosen high-impact forecast aspects (e.g., heavy precipitation) later in time. In an operational environment, it thus might be possible to choose ensemble subsets with improved predictability over the full ensemble if members with the smallest errors in regions of large ensemble sensitivity can be identified. Since numerous observations become available hourly, such a technique is feasible and could be executed well before the next assimilation/extended forecast cycle, potentially adding valuable lead time to forecasts of high-impact weather events. Here, a sensitivity-based technique that chooses subsets of forecasts initialized from an 80-member ensemble Kalman filter (EnKF) is tested by ranking 6-h errors in sensitive regions to improve 24-h forecasts of landfalling midlatitude cyclones on the west coast of North America. The technique is first tested within an idealized framework with one of the ensemble members serving as truth. Subsequent experiments are performed in more realistic scenarios with an independent truth run, observation error added, and sparser observations. Results show the technique can indeed produce ensemble subsets that are improved relative to the full ensemble for 24-h forecasts of landfalling cyclones. Forecast errors are found to be smallest when the largest 70% of ensemble sensitivity magnitudes are used with subsets of 5–30 members, as well as when only the cases with the largest forecast spread are considered. Finally, this study presents considerations for extending this technique into fully realistic situations with regard to additional high-impact events.
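A minimal Python sketch of this kind of sensitivity-based subsetting, assuming the ensemble sensitivity field and 6-h member errors are already available (the 70% magnitude threshold follows the abstract; the summed-error ranking metric and array shapes are assumptions):

    import numpy as np

    def select_subset(sensitivity, member_errors_6h, subset_size=10, keep_frac=0.7):
        """sensitivity: (n_points,); member_errors_6h: (n_members, n_points)."""
        mag = np.abs(sensitivity)
        cutoff = np.quantile(mag, 1.0 - keep_frac)                    # keep the largest 70% of magnitudes
        sensitive = mag >= cutoff
        scores = np.abs(member_errors_6h[:, sensitive]).sum(axis=1)   # summed 6-h error in the sensitive region
        return np.argsort(scores)[:subset_size]                       # indices of the best-ranked members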

Full access
Brian Ancell and Gregory J. Hakim

Abstract

The sensitivity of numerical weather forecasts to small changes in initial conditions is estimated using ensemble samples of analysis and forecast errors. Ensemble sensitivity is defined here by linear regression of analysis errors onto a given forecast metric. It is shown that ensemble sensitivity is proportional to the projection of the analysis-error covariance onto the adjoint-sensitivity field. Furthermore, the ensemble-sensitivity approach proposed here involves a small calculation that is easy to implement. Ensemble- and adjoint-based sensitivity fields are compared for a representative wintertime flow pattern near the west coast of North America for a 90-member ensemble of independent initial conditions derived from an ensemble Kalman filter. The forecast metric is taken for simplicity to be the 24-h forecast of sea level pressure at a single point in western Washington State. Results show that adjoint and ensemble sensitivities are very different in terms of location, scale, and magnitude. Adjoint-sensitivity fields reveal mesoscale lower-tropospheric structures that tilt strongly upshear, whereas ensemble-sensitivity fields emphasize synoptic-scale features that tilt modestly throughout the troposphere and are associated with significant weather features at the initial time. Optimal locations for targeting can easily be determined from ensemble sensitivity, and results indicate that the primary targeting locations lie away from the regions of greatest adjoint and ensemble sensitivity. It is shown that this method of targeting is similar to previous ensemble-based methods that estimate forecast-error variance reduction, but easily allows for the application of statistical confidence measures to deal with sampling error.
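The regression definition above amounts, at each analysis point, to the covariance of that point with the forecast metric divided by the point's variance; a minimal Python sketch with assumed array shapes:

    import numpy as np

    def ensemble_sensitivity(analysis_field, J):
        """analysis_field: (n_members, n_points); J: (n_members,) scalar forecast metric."""
        x = analysis_field - analysis_field.mean(axis=0)        # analysis perturbations about the mean
        dJ = J - J.mean()                                        # forecast-metric perturbations
        cov_xJ = (x * dJ[:, None]).sum(axis=0) / (len(J) - 1)    # covariance of each point with J
        return cov_xJ / x.var(axis=0, ddof=1)                    # regression slope (sensitivity) at each point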

Full access
Lynn A. McMurdie and Brian Ancell

Abstract

The predictability of North Pacific cyclones can vary widely, from highly accurate prediction of storm intensity and location to forecast position errors of hundreds of kilometers and central pressure errors of tens of hectopascals. In this study, a Weather Research and Forecasting Model (WRF) ensemble Kalman filter is used to investigate predictability of landfalling cyclones on the west coast of North America over two winter seasons (2008/09 and 2009/10). Predictability is defined as the ensemble spread of cyclone central pressure at the final forecast time (24 h), where large spread indicates low predictability. Both ensemble spread and ensemble initial-condition sensitivity are examined for a wide variety of cyclones that occurred during the two seasons. Storms that are deepening and track from the southwest exhibit the largest ensemble initial-condition sensitivity and highest ensemble spread compared to decaying storms and storms that track from other directions. Storms that end south of 40°N, typically slow-moving storms from the northwest, exhibit higher predictability regardless of whether they are deepening or decaying. Cyclones with large ensemble spread and low sensitivity are mature cyclones whose low predictability likely results from large initial-condition spread instead of large perturbation growth. These results highlight particular synoptic situations and cyclone characteristics that are associated with low predictability and can potentially be used to improve forecasts through improved observational coverage.
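A minimal sketch of the spread-based predictability measure, assuming the cyclone central pressure at 24 h has already been extracted from each member (the values below are purely illustrative):

    import numpy as np

    central_pressure_24h = np.array([968.2, 971.5, 966.8, 974.0, 969.9])  # hPa, illustrative member values
    spread = central_pressure_24h.std(ddof=1)   # larger spread -> lower predictability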

Full access
Austin Coleman and Brian Ancell

Abstract

Ensemble sensitivity analysis (ESA) is a useful and computationally inexpensive tool for analyzing how features in the flow at early forecast times affect relevant forecast aspects later in the forecast period. Given the many observations measured between model initialization times that remain unused, ensemble sensitivity may be used to increase predictability and forecast accuracy through an objective ensemble subsetting technique. This technique identifies ensemble members with the smallest errors in regions of high sensitivity to produce a smaller, more accurate ensemble subset. Ensemble subsets can significantly reduce synoptic-scale forecast errors, but applying this strategy to convective-scale forecasts presents additional challenges. Objective verification of the sensitivity-based ensemble subsetting technique is conducted for ensemble forecasts of 2–5-km updraft helicity (UH) and simulated reflectivity. Many degrees of freedom are varied to identify the lead times, subset sizes, forecast thresholds, and atmospheric predictors that provide the most forecast benefit. Subsets vastly reduce error of UH forecasts in an idealized framework but tend to degrade fractions skill scores and reliability in a real-world framework. Results reveal that this discrepancy results from verifying probabilistic UH forecasts with storm-report-based observations, which effectively dampens technique performance. The potential of ensemble subsetting and likely other postprocessing techniques is limited by tuning UH forecasts to predict severe reports. Additional diagnostic ideas to improve postprocessing tool optimization for convection-allowing models are discussed.
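For context, a minimal Python sketch of a fractions skill score of the kind used in this verification, with an assumed neighborhood size and a uniform-filter implementation (not necessarily the study's exact configuration):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fss(forecast, observed, threshold, neighborhood=5):
        """forecast, observed: 2D fields; returns the FSS for one exceedance threshold."""
        pf = uniform_filter((forecast >= threshold).astype(float), size=neighborhood)  # forecast fractions
        po = uniform_filter((observed >= threshold).astype(float), size=neighborhood)  # observed fractions
        fbs = np.mean((pf - po) ** 2)                       # fractions Brier score
        fbs_worst = np.mean(pf ** 2) + np.mean(po ** 2)     # no-overlap reference
        return 1.0 - fbs / fbs_worst if fbs_worst > 0 else np.nan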

Restricted access
Nicholas H. Smith and Brian C. Ancell

Abstract

This work investigates the sensitivity of wind speed forecasts during wind ramp events to parameters within a numerical weather prediction model boundary layer physics scheme. In a novel way, it explores how these sensitivities vary across 1) ensemble members with different initial conditions, 2) different times during the events, 3) different types of ramp-causing events, and 4) different horizontal grid spacing. Previous research finds that a small number of parameters in the surface layer and boundary layer schemes are responsible for the majority of the forecast uncertainty. In this study, the values of parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) boundary layer scheme and the MM5 surface layer scheme of the Weather Research and Forecasting (WRF) Model are perturbed in a systematic way to evaluate parametric sensitivity for two types of specific ramp-causing phenomena: marine pushes and stable mix-out events. This work is part of the Department of Energy’s Second Wind Forecast Improvement Project (WFIP2). A major finding of this study is that there are large differences in parametric sensitivity between members of the same initial condition ensemble for all cases. These variations in sensitivity are the result of differences in the atmospheric state within the initial condition ensemble, and the parametric sensitivity changes over the course of each forecast. Finally, parametric sensitivity changes between event types and with model resolution. These conclusions are particularly relevant for future sensitivity studies and efforts at model tuning.
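A minimal sketch of a one-at-a-time parameter perturbation design of the sort described above; the parameter names and ranges are hypothetical placeholders, not the actual MYNN or MM5 surface-layer variables:

    # Build a control run plus perturbed runs, changing one parameter at a time.
    defaults = {"mixing_length_coeff": 1.0, "roughness_factor": 1.0}      # hypothetical defaults
    ranges = {"mixing_length_coeff": [0.8, 1.2], "roughness_factor": [0.5, 2.0]}

    experiments = [dict(defaults)]               # control run with default values
    for name, values in ranges.items():
        for v in values:
            run = dict(defaults)
            run[name] = v                        # perturb a single parameter
            experiments.append(run)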

Free access
Michael A. Hollan and Brian C. Ancell

Abstract

The use of ensembles in numerical weather prediction models is becoming an increasingly effective method of forecasting. Many studies have shown that using the mean of an ensemble as a deterministic solution produces the most accurate forecasts. However, the mean will eventually lose its usefulness as a deterministic forecast in the presence of nonlinearity. At synoptic scales, this appears to occur between 12- and 24-h forecast time, and on storm scales it may occur significantly faster due to stronger nonlinearity. When this does occur, the question then becomes the following: Should the mean still be adhered to, or would a different approach produce better results? This paper will investigate the usefulness of the mean within a WRF Model utilizing an ensemble Kalman filter for severe convective events.

To determine when the mean becomes unrealistic, the divergence of the mean of the ensemble (“mean”) and a deterministic forecast initialized from a set of mean initial conditions (“control”) are examined. It is found that significant divergence between the mean and control emerges no later than 6 h into a convective event. The mean and control are each compared to observations, with the control being more accurate for nearly all forecasts studied. For the case where the mean provides a better forecast than the control, an approach is offered to identify the member or group of members that is closest to the mean. Such a forecast will contain forecast errors similar to those of the mean but, unlike the mean, will lie on an actual forecast trajectory.
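A minimal Python sketch of the mean-versus-control divergence check, with assumed array shapes and an RMS difference metric (the study's exact fields and norm are not specified here):

    import numpy as np

    def mean_control_divergence(members, control):
        """members: (n_members, n_times, n_points); control: (n_times, n_points)."""
        ens_mean = members.mean(axis=0)                              # ensemble-mean forecast
        return np.sqrt(((ens_mean - control) ** 2).mean(axis=1))     # RMS mean-control difference per lead time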

Full access
Nicholas H. Smith and Brian C. Ancell

Abstract

Wind ramps present a significant challenge to the wind energy industry and are a source of inefficiency for wind farm owners and power grid operators. One approach to investigating wind ramp predictability is ensemble sensitivity analysis (ESA), which relates a scalar response function to an atmospheric variable at an earlier time. Applying ESA to wind ramps is challenging because the transient nature of the events makes it difficult to capture the ramp with a traditional response function that is fixed in space and time. This study introduces four response functions that are allowed to vary in space and time in order to identify key features of the wind ramp, such as the timing of the ramp and the largest horizontal extent of the ramp. Comparing these event-based response functions to a traditional response function reveals key differences in the sensitivity, which indicates that different aspects of the wind ramp event are sensitive to different atmospheric features. The use of multiple response functions is shown to provide a more complete understanding of the ramp event when compared to using only a traditional response function. Observation targeting is addressed by manipulating the ESA fields of six synoptically driven wind ramp events, with results showing that the horizontal location of the optimal target region varies widely between cases and that a single observation location would likely not benefit every case. These results indicate that a dynamic observing system would be preferable to a fixed observation for improving wind ramp forecasts.
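As one hypothetical illustration of an event-based response function that follows the ramp rather than a fixed box in space and time (the paper's four response functions are not reproduced here), the response could be taken as the largest hourly increase in area-averaged hub-height wind for each member:

    import numpy as np

    def ramp_response(wind_speed):
        """wind_speed: (n_times, n_points) hub-height wind for one ensemble member."""
        area_mean = wind_speed.mean(axis=1)          # area-averaged wind at each output time
        ramp_rate = np.diff(area_mean)               # change between consecutive times
        t_ramp = int(ramp_rate.argmax()) + 1         # time at which the largest ramp-up ends
        return area_mean[t_ramp], t_ramp             # response value and the time it occurs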

Full access
Brian C. Ancell and Clifford F. Mass

Abstract

This paper investigates how adjoint sensitivities vary as horizontal and vertical resolution are changed. The fifth-generation Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model (MM5) and its adjoint are used with consistent physics to generate adjoint sensitivities over a 24-h period. The sensitivities are generated with respect to a response function defined as the lowest-sigma-level perturbation pressure over a region of northwestern Oregon. It is found that the scale, magnitude, and structure of sensitivity with respect to initial temperature vary significantly as grid spacing is decreased from 216 to 24 km. As found in other adjoint studies at relatively coarse resolution, low-level, upshear-tilted, subsynoptic-scale sensitivities were apparent, with the wavelike sensitivity pattern decreasing significantly in scale and spatial extent with increased horizontal resolution. It is also found that perturbation growth rates depend on horizontal resolution, with the adjoint sensitivities predicting larger changes in the response function with increased horizontal resolution. Relatively little change in sensitivity structure and growth rates occurred when the vertical resolution was varied from 10 to 50 vertical levels. It is shown that a majority of the predicted change in the response function comes from the very small proportion of the domain occupied by sensitive regions. Last, the accuracy of the tangent linear approximation is examined, and it is found that for perturbations made in sensitive regions, the tangent linear approximation degrades at finer grid spacing. The implications of these results are discussed for methodologies utilizing adjoint sensitivities, such as four-dimensional variational data assimilation and targeted observation strategies.
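For reference, the adjoint prediction of a response-function change is the first-order estimate dJ ≈ Σ_i (∂J/∂x_i) δx_i, which is what the tangent linear check above compares against the nonlinear model; a minimal Python sketch with illustrative array names:

    import numpy as np

    def predicted_response_change(adjoint_sensitivity, perturbation):
        """First-order estimate of dJ from the sensitivity and an initial-condition perturbation."""
        return np.dot(adjoint_sensitivity.ravel(), perturbation.ravel())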

Full access