Hidden Error Variance Theory. Part II: An Instrument That Reveals Hidden Error Variance Distributions from Ensemble Forecasts and Observations

Craig H. Bishop, Naval Research Laboratory, Monterey, California
Elizabeth A. Satterfield, National Research Council, Monterey, California
Kevin T. Shanley, Department of Mechanical Engineering, Clarkson University, Potsdam, New York

Abstract

In Part I of this study, a model of the distribution of true error variances given an ensemble variance is shown to be defined by six parameters that also determine the optimal weights for the static and flow-dependent parts of hybrid error variance models. Two of the six parameters (the climatological mean of forecast error variance and the climatological minimum of ensemble variance) are straightforward to estimate. The other four parameters are (i) the variance of the climatological distribution of the true conditional error variances, (ii) the climatological minimum of the true conditional error variance, (iii) the relative variance of the distribution of ensemble variances given a true conditional error variance, and (iv) the parameter that defines the mean response of the ensemble variances to changes in the true error variance. These parameters are hidden because they are defined in terms of condition-dependent forecast error variance, which is unobservable if the condition is not sufficiently repeatable. Here, a set of equations that enable these hidden parameters to be accurately estimated from a long time series of (observation minus forecast, ensemble variance) data pairs is presented. The accuracy of the equations is demonstrated in tests using data from long data assimilation cycles with differing model error variance parameters as well as synthetically generated data. This newfound ability to estimate these hidden parameters provides new tools for assessing the quality of ensemble forecasts, tuning hybrid error variance models, and postprocessing ensemble forecasts.
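To illustrate the kind of input the instrument requires, the sketch below shows how the two straightforward parameters named above (the climatological mean of forecast error variance and the climatological minimum of ensemble variance) might be estimated from an archive of (observation minus forecast, ensemble variance) pairs. It is a minimal illustration under assumptions not stated in the abstract (unbiased forecasts and a known observation error variance that is uncorrelated with forecast error, with the sample minimum used as a proxy for the climatological minimum); the paper's own estimation equations for the four hidden parameters are given in the full text and are not reproduced here.

```python
import numpy as np


def estimate_simple_parameters(innovations, ensemble_variances, obs_error_variance):
    """Estimate the two 'straightforward' parameters from a time series of
    (observation-minus-forecast, ensemble variance) pairs.

    Assumptions of this sketch (not statements of the paper's method):
    unbiased forecasts, observation errors uncorrelated with forecast errors,
    and a known observation error variance.
    """
    innovations = np.asarray(innovations, dtype=float)
    ensemble_variances = np.asarray(ensemble_variances, dtype=float)

    # Climatological mean of forecast error variance: under the stated
    # assumptions, E[(o - f)^2] equals the mean forecast error variance plus
    # the observation error variance, so subtract the latter.
    mean_forecast_error_variance = np.mean(innovations ** 2) - obs_error_variance

    # Climatological minimum of ensemble variance: the smallest ensemble
    # variance in the archive serves as a simple sample-based proxy.
    min_ensemble_variance = np.min(ensemble_variances)

    return mean_forecast_error_variance, min_ensemble_variance


# Usage with synthetic (o - f, ensemble variance) pairs.
rng = np.random.default_rng(0)
ens_var = rng.gamma(shape=2.0, scale=0.5, size=100_000)   # flow-dependent ensemble variances
true_var = 0.25 + ens_var                                  # hypothetical true error variances
obs_err_var = 0.1
innovations = rng.normal(0.0, np.sqrt(true_var + obs_err_var))  # o - f draws
print(estimate_simple_parameters(innovations, ens_var, obs_err_var))
```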

Corresponding author address: Craig H. Bishop, Marine Meteorology Division, Naval Research Laboratory, 7 Grace Hopper Ave., Stop 2, Bldg. 702, Room 212, Monterey, CA 93943-5502. E-mail: bishop@nrlmry.navy.mil

References
• Atger, F., 1999: The skill of ensemble prediction systems. Mon. Wea. Rev., 127, 1941–1953.

• Bishop, C. H., and E. A. Satterfield, 2013: Hidden error variance theory. Part I: Exposition and analytic model. Mon. Wea. Rev., 141, 1454–1468.

• Bishop, C. H., B. J. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Mon. Wea. Rev., 129, 420–436.

• Brier, G. W., 1950: Verification of forecasts expressed in terms of probability. Mon. Wea. Rev., 78, 1–3.

• Casati, B., and Coauthors, 2008: Forecast verification: Current status and future directions. Meteor. Appl., 15, 3–18.

• DelSole, T., 2004: Predictability and information theory. Part I: Measures of predictability. J. Atmos. Sci., 61, 2425–2440.

• Fortin, V., A.-C. Favre, and M. Saïd, 2006: Probabilistic forecasting from ensemble prediction systems: Improving upon the best-member method by using a different weight and dressing kernel for each member. Quart. J. Roy. Meteor. Soc., 132, 1349–1369.

• Gneiting, T., F. Balabdaoui, and A. E. Raftery, 2007: Probabilistic forecasts, calibration and sharpness. J. Roy. Stat. Soc., 69B, 243–268.

• Hamill, T. M., 2001: Interpretation of rank histograms for verifying ensemble forecasts. Mon. Wea. Rev., 129, 550–560.

• Hamill, T. M., and C. Snyder, 2000: A hybrid ensemble Kalman filter–3D variational analysis scheme. Mon. Wea. Rev., 128, 2905–2919.

• Hersbach, H., 2000: Decomposition of the continuous ranked probability score for ensemble prediction systems. Wea. Forecasting, 15, 559–570.

• Houtekamer, P. L., L. Lefaivre, J. Derome, H. Ritchie, and H. L. Mitchell, 1996: A system simulation approach to ensemble prediction. Mon. Wea. Rev., 124, 1225–1242.

• Kleeman, R., 2002: Measuring dynamical prediction utility using relative entropy. J. Atmos. Sci., 59, 2057–2072.

• Lorenz, E. N., 1996: Predictability—A problem solved. Proc. Predictability, Reading, United Kingdom, ECMWF.

• Lorenz, E. N., 2005: Designing chaotic models. J. Atmos. Sci., 62, 1574–1587.

• Mason, S. J., and N. E. Graham, 2002: Areas beneath the relative operating characteristics (ROC) and relative operating levels (ROL) curves: Statistical significance and interpretation. Quart. J. Roy. Meteor. Soc., 128, 2145–2166.

• Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF Ensemble Prediction System: Methodology and validation. Quart. J. Roy. Meteor. Soc., 122, 73–119.

• Raftery, A. E., T. Gneiting, F. Balabdaoui, and M. Polakowski, 2005: Using Bayesian model averaging to calibrate forecast ensembles. Mon. Wea. Rev., 133, 1155–1174.

• Roulston, M. S., and L. A. Smith, 2002: Evaluating probabilistic forecasts using information theory. Mon. Wea. Rev., 130, 1653–1660.

• Roulston, M. S., and L. A. Smith, 2003: Combining dynamical and statistical ensembles. Tellus, 55A, 16–30.

• Toth, Z., and E. Kalnay, 1993: Ensemble forecasting at NMC: The generation of perturbations. Bull. Amer. Meteor. Soc., 74, 2317–2330.

• Vrugt, J. A., M. P. Clark, C. G. H. Diks, Q. Duan, and B. A. Robinson, 2006: Multi-objective calibration of forecast ensembles using Bayesian model averaging. Geophys. Res. Lett., 33, L19817, doi:10.1029/2006GL027126.

• Wang, X., and C. H. Bishop, 2005: Improvement of ensemble reliability with a new dressing kernel. Quart. J. Roy. Meteor. Soc., 131, 965–986.

• Wilks, D. S., 2001: A skill score based on economic value for probability forecasts. Meteor. Appl., 8, 209–219.

• Wilks, D. S., and T. M. Hamill, 2007: Comparison of ensemble-MOS methods using GFS reforecasts. Mon. Wea. Rev., 135, 2379–2390.

• Wilson, L. J., S. Beauregard, A. E. Raftery, and R. Verret, 2007: Calibrated surface temperature forecasts from the Canadian Ensemble Prediction System using Bayesian model averaging. Mon. Wea. Rev., 135, 1364–1385.