Search Results

Showing 1–4 of 4 items for Author or Editor: Huug van den Dool in Monthly Weather Review
Wilbur Y. Chen and Huug van den Dool

Abstract

Teleconnection patterns have been extensively investigated, mostly with linear analysis tools. The lesser-known asymmetries between the positive and negative phases of prominent teleconnections are explored here, and substantial disparities between opposite phases are found. The Pacific–North American (PNA) pattern exhibits a large difference in the structure and statistical significance of its downstream action center, showing a large impact either over the southern third of the United States or over the western North Atlantic Ocean. The North Atlantic–based patterns display significant impacts over the North Atlantic for large positive anomalies and even larger impacts over the European sector for large negative anomalies.

The monthly variance is distributed nearly evenly over the entire North Atlantic basin, and a teleconnection pattern based in different regions of the basin is known to assume different structures and time variations. The extent of statistical significance is investigated for three typical North Atlantic–associated patterns based separately in the eastern (EATL), western (WATL), and southern (SATL) regions of the North Atlantic. The EATL teleconnection pattern is similar to the classical North Atlantic Oscillation (NAO). The WATL pattern, however, is more similar to the Arctic (Annular) Oscillation (AO). The sensitivity of a North Atlantic–based teleconnection to a slight shift in base point can be fairly large: the pattern can be an AO or an NAO, each with a distinct significance structure. Other discernible features are also presented.
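Teleconnection patterns of this kind are commonly derived as one-point correlation maps: the anomaly time series at a chosen base point is correlated with the series at every other grid point, and phase asymmetry can be probed by compositing strongly positive and strongly negative months separately. A minimal sketch on synthetic data (the array shapes and base-point location are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly 500-hPa height anomalies: (months, nlat, nlon) -- illustrative only.
z = rng.standard_normal((480, 20, 36))

def teleconnection_map(anom, i, j):
    """One-point correlation map: correlate the base-point anomaly series
    with the anomaly series at every grid point."""
    base = anom[:, i, j]
    bstd = (base - base.mean()) / base.std()
    astd = (anom - anom.mean(axis=0)) / anom.std(axis=0)
    return np.tensordot(bstd, astd, axes=(0, 0)) / len(base)

corr = teleconnection_map(z, 10, 5)      # hypothetical base point
assert corr.shape == (20, 36)
assert abs(corr[10, 5] - 1.0) < 1e-9     # correlation with itself is 1

# Phase asymmetry: composite months with a strongly positive vs. strongly
# negative base-point index; for a purely linear pattern, pos would mirror -neg.
idx = (z[:, 10, 5] - z[:, 10, 5].mean()) / z[:, 10, 5].std()
pos = z[idx > 1.0].mean(axis=0)
neg = z[idx < -1.0].mean(axis=0)
```

Shifting the base point (i, j) between the EATL, WATL, and SATL regions is what changes which pattern the map picks out.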

Wilbur Y. Chen and Huug M. van den Dool

Abstract

A series of 90-day integrations by a low-resolution version (T40) of the National Meteorological Center's global spectral model was analyzed for its performance as well as its low-frequency variability behavior. In particular, 5-day mean 500-mb forecasts with leads up to 88 days were examined and compared with observations. The forecast mean height decreased rapidly as the forecast lead increased. A severe negative bias of the mean height in the Tropics was caused by a negative temperature bias and a drop in surface pressure of about 2 mb. The forecast variability also dropped rapidly, to a minimum of about 75% of the atmospheric standard deviation, before stabilizing around day 18. The model could not maintain large anomalous flows from the atmospheric initial conditions; it was, however, quite capable of generating and maintaining large anomalies after drifting to its own climatology and temporal variability.
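Drift and variance loss of this kind are typically quantified by averaging forecast-minus-observation differences and standard-deviation ratios over many cases as a function of lead. A toy sketch with synthetic data (the decay rate and bias magnitude are invented for illustration, not the DERF90 values):

```python
import numpy as np

rng = np.random.default_rng(3)
ncases, nlead = 40, 18
# Synthetic 5-day-mean height anomalies (assumed toy data): the "forecast"
# drifts toward a biased model climatology and loses variance with lead.
obs = rng.standard_normal((ncases, nlead))
decay = np.exp(-np.arange(nlead) / 6.0)       # signal retained at each lead
drift = -0.4 * (1.0 - decay)                  # growing negative mean bias
fcst = drift + (0.75 + 0.25 * decay) * obs    # variability shrinks toward 75%

bias_by_lead = (fcst - obs).mean(axis=0)            # systematic drift vs. lead
sd_ratio = fcst.std(axis=0) / obs.std(axis=0)       # fraction of atmospheric sd kept
assert bias_by_lead[-1] < bias_by_lead[0]           # bias grows more negative
assert sd_ratio[-1] < sd_ratio[0]                   # variability is lost with lead
```

Plotting `bias_by_lead` and `sd_ratio` against lead is the standard way to decide when a model has settled onto its own climatology.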

At extended ranges, the model showed better skill over the North Pacific than over the North Atlantic as the season advanced into the colder period of the DERF90 (dynamical extended-range forecasts 1990) experiments. The model also displayed a dependence on circulation regimes, although in general the skill fluctuated widely from day to day. Blocking flows in the forecasts were found to systematically retrogress from the North Atlantic to the Baffin Island area. Reducing the model's systematic errors, including its drift, therefore appears essential for achieving a higher level of forecast performance. However, no generalization can be made, because a low-resolution model was used and the experiments covered a rather short time span, from only 3 May to 6 December 1990.

Jeffrey L. Anderson and Huug M. van den Dool

Abstract

The skill of a set of extended-range dynamical forecasts made with a modern numerical forecast model is examined. A forecast is said to be skillful if it produces a high-quality forecast by correctly modeling some aspects of the dynamics of the real atmosphere; high-quality forecasts may also occur by chance. The dangers of drawing conclusions about model skill from the verification of a single long-range forecast are pointed out by examples of apparently high-“skill” verifications between extended-range forecasts and observed fields from entirely different years.

To avoid these problems, the entire distribution of forecast quality for a large set of forecasts as a function of lead time is examined. A set of control forecasts that clearly have no skill is presented. The quality distribution for the extended-range forecasts is compared to the distributions of quality for the no-skill control forecast set.

The extended-range forecast quality distributions are found to be essentially indistinguishable from those for the no-skill control at leads somewhat greater than 12 days. A search for individual forecasts with a “return of skill” at extended ranges is also made. Although it is possible to find individual forecasts that have a return of quality, a comparison to the no-skill controls demonstrates that these return of skill forecasts occur only as often as is expected by chance.

David A. Unger, Huug van den Dool, Edward O’Lenic, and Dan Collins

Abstract

A regression model was developed for use with ensemble forecasts. Ensemble members are assumed to represent a set of equally likely solutions, one of which will best fit the observation. If standard linear regression assumptions apply to the best member, then a regression relationship can be derived between the full ensemble and the observation without explicitly identifying the best member for each case. The ensemble regression equation is equivalent to linear regression between the ensemble mean and the observation, but is applied to each member of the ensemble. The “best member” error variance is defined in terms of the correlation between the ensemble mean and the observations, their respective variances, and the ensemble spread. A probability density function representing the ensemble prediction is obtained from the normalized sum of the best-member error distribution applied to the regression forecast from each ensemble member. Ensemble regression was applied to National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) historical forecasts of seasonal mean Niño-3.4 SSTs for the years 1981–2005. The skill of the ensemble regression was about the same as that of linear regression on the ensemble mean when measured by the continuous ranked probability score (CRPS), and both methods produced reliable probabilities. The CFS spread appears slightly too high for its skill, and the CRPS of the CFS predictions can be slightly improved by reducing the ensemble spread to about 0.8 of its original value prior to regression calibration.
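A simplified sketch of the idea (not the authors' exact formulation; the synthetic data, the variance decomposition, and the floor on the kernel variance are assumptions for illustration): fit the regression between the ensemble mean and the observation, apply it to each member, and build the forecast PDF as a normalized sum of Gaussian kernels whose width plays the role of the best-member error variance.

```python
import numpy as np

rng = np.random.default_rng(2)
ncases, nmem = 200, 15
truth = rng.standard_normal(ncases)
# Toy ensemble: shared forecast error plus member-to-member spread (synthetic).
shared = 0.8 * rng.standard_normal(ncases)
ens = truth[:, None] + shared[:, None] + 0.3 * rng.standard_normal((ncases, nmem))

emean = ens.mean(axis=1)
a, b = np.polyfit(emean, truth, 1)   # regression of observation on ensemble mean...
reg_members = a * ens + b            # ...applied to each member ("ensemble regression")

# Stand-in for the best-member kernel variance: the observed variance left
# unexplained by the ensemble mean, minus the variance already carried by the
# spread of the regressed members; floored at an arbitrary minimum.
r = np.corrcoef(emean, truth)[0, 1]
resid_var = truth.var() * (1.0 - r**2)
spread_var = (a**2) * ens.var(axis=1).mean()
kernel_var = max(resid_var - spread_var, 0.05)

def forecast_pdf(x, members, var):
    """Normalized sum of Gaussian kernels centered on the regressed members."""
    k = np.exp(-0.5 * (x[:, None] - members[None, :]) ** 2 / var)
    return k.mean(axis=1) / np.sqrt(2.0 * np.pi * var)

x = np.linspace(-8.0, 8.0, 2001)
pdf = forecast_pdf(x, reg_members[0], kernel_var)
assert abs(pdf.sum() * (x[1] - x[0]) - 1.0) < 1e-2   # integrates to ~1
```

Subtracting the spread term before widening the kernels is what lets an overdispersive ensemble (like the CFS spread noted above) end up with a narrower, better-calibrated forecast PDF.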
