Search Results

You are looking at items 31–40 of 48 for Author or Editor: Huug van den Dool
Li-Chuan Chen, Huug van den Dool, Emily Becker, and Qin Zhang

Abstract

In this study, precipitation and temperature forecasts during El Niño–Southern Oscillation (ENSO) events are examined in six models in the North American Multimodel Ensemble (NMME), including the CFSv2, CanCM3, CanCM4, the Forecast-Oriented Low Ocean Resolution (FLOR) version of GFDL CM2.5, GEOS-5, and CCSM4 models, by comparing the model-based ENSO composites to the observed. The composite analysis is conducted using the 1982–2010 hindcasts for each of the six models with selected ENSO episodes based on the seasonal oceanic Niño index just prior to the date the forecasts were initiated. Two types of composites are constructed over the North American continent: one based on mean precipitation and temperature anomalies and the other based on their probability of occurrence in a tercile-based system. The composites apply to monthly mean conditions in November, December, January, February, and March as well as to the 5-month aggregates representing the winter conditions. For anomaly composites, the anomaly correlation coefficient and root-mean-square error against the observed composites are used for the evaluation. For probability composites, a new probability anomaly correlation measure and a root-mean probability score are developed for the assessment. All NMME models predict ENSO precipitation patterns well during wintertime; however, some models have large discrepancies between the model temperature composites and the observed. The fidelity is greater for the multimodel ensemble as well as for the 5-month aggregates. February tends to have higher scores than other winter months. For anomaly composites, most models perform slightly better in predicting El Niño patterns than La Niña patterns. For probability composites, all models perform better in predicting ENSO precipitation patterns than temperature patterns.
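
To make the tercile-based probability composites concrete, here is a minimal Python sketch (all array names, sizes, and thresholds are hypothetical, not the NMME processing code): it counts the fraction of ensemble members falling in each climatological tercile and compares the resulting probability anomalies to an observed composite with an ordinary anomaly correlation.

import numpy as np

def tercile_probabilities(members, lower, upper):
    # Fraction of ensemble members in the below-, near-, and above-normal
    # climatological terciles at each grid point.
    # members: (n_members, n_points); lower/upper: (n_points,) tercile bounds.
    below = (members < lower).mean(axis=0)
    above = (members > upper).mean(axis=0)
    near = 1.0 - below - above
    return np.stack([below, near, above])

def anomaly_correlation(fcst, obs):
    # Centered pattern correlation between two 1-D anomaly fields.
    fa, oa = fcst - fcst.mean(), obs - obs.mean()
    return float(fa @ oa / np.sqrt((fa @ fa) * (oa @ oa)))

# Toy example with made-up numbers.
rng = np.random.default_rng(0)
members = rng.normal(size=(20, 500))              # 20 members, 500 grid points
lower, upper = np.full(500, -0.43), np.full(500, 0.43)
probs = tercile_probabilities(members, lower, upper)
prob_anom = probs - 1.0 / 3.0                     # departure from the climatological 1/3
obs_composite = rng.normal(size=500)              # placeholder observed anomaly composite
print(anomaly_correlation(prob_anom[2], obs_composite))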

Full access
Emily J. Becker, Huug van den Dool, and Malaquias Peña
Full access
Yun Fan, Huug M. van den Dool, and Wanru Wu

Abstract

Several land surface datasets, such as the observed Illinois soil moisture dataset; three retrospective offline run datasets from the Noah land surface model (LSM), Variable Infiltration Capacity (VIC) LSM, and Climate Prediction Center leaky bucket soil model; and three reanalysis datasets (North American Regional Reanalysis, NCEP/Department of Energy Global Reanalysis, and 40-yr ECMWF Re-Analysis), are used to study the spatial and temporal variability of soil moisture and its response to the major components of land surface hydrologic cycles: precipitation, evaporation, and runoff. Detailed analysis was performed on the evolution of the soil moisture vertical profile. Over Illinois, model simulations are compared to observations, but for the United States as a whole some impressions can be gained by comparing the multiple soil moisture–precipitation–evaporation–runoff datasets to one another. The magnitudes and partitioning of major land surface water balance components on seasonal–interannual time scales have been explored. It appears that evaporation has the most prominent annual cycle but its interannual variability is relatively small. For other water balance components, such as precipitation, runoff, and surface water storage change, the amplitudes of their annual cycles and interannual variations are comparable. This study indicates that all models have a certain capability to reproduce observed soil moisture variability on seasonal–interannual time scales, but offline runs are decidedly better than reanalyses (in terms of validation against observations) and more highly correlated to one another (in terms of intercomparison) in general. However, noticeable differences are also observed, such as the degree of simulated drought severity and the locations affected—this is due to the uncertainty in model physics, input forcing, and mode of running (interactive or offline), which continue to be major issues for land surface modeling.
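
The partitioning described above rests on the simple land surface water balance, change in storage equals precipitation minus evaporation minus runoff. A minimal sketch with entirely made-up monthly values (not any of the datasets listed) is:

import numpy as np

# Hypothetical monthly means for one grid cell, in mm per month.
precip = np.array([60, 55, 80, 95, 110, 100, 90, 85, 70, 65, 60, 58], float)
evap   = np.array([15, 20, 40, 70,  95, 110, 115, 100, 70, 40, 20, 15], float)
runoff = np.array([25, 30, 35, 30,  20,  10,   5,   5, 10, 15, 20, 25], float)

# Land surface water balance: change in soil water storage dS = P - E - R.
storage_change = precip - evap - runoff
print("monthly dS (mm):", storage_change)
print("annual residual (mm):", storage_change.sum())  # near zero if the components balance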

Full access
Emily J. Becker, Huug van den Dool, and Malaquias Peña

Abstract

Forecasts for extremes in short-term climate (monthly means) are examined to understand the current prediction capability and potential predictability. This study focuses on 2-m surface temperature and precipitation extremes over North and South America, and sea surface temperature extremes in the Niño-3.4 and Atlantic hurricane main development regions, using the Climate Forecast System (CFS) global climate model, for the period of 1982–2010. The primary skill measures employed are the anomaly correlation (AC) and root-mean-square error (RMSE). The success rate of forecasts is also assessed using contingency tables.

The AC, a signal-to-noise skill measure, is routinely higher for extremes in short-term climate than when all forecasts are considered. While the RMSE for extremes also rises, especially when skill is inherently low, it is found that the signal rises faster than the noise. Permutation tests confirm that this is not simply an effect of reduced sample size. Both 2-m temperature and precipitation forecasts have higher anomaly correlations in the area of South America than North America; credible skill in precipitation is very low over South America and absent over North America, even for extremes. Anomaly correlations for SST are very high in the Niño-3.4 region, especially for extremes, and moderate to high in the Atlantic hurricane main development region. Prediction skill for forecast extremes is similar to skill for observed extremes. Assessment of the potential predictability under perfect-model assumptions shows that predictability and prediction skill have very similar space–time dependence. While prediction skill is higher in CFS version 2 than in CFS version 1, the potential predictability is not.
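
The permutation test mentioned above can be illustrated with a short sketch (hypothetical data, not the CFS verification code): the anomaly correlation over the extreme forecasts is compared with the distribution of anomaly correlations over many random subsamples of the same size.

import numpy as np

def anomaly_correlation(f, o):
    fa, oa = f - f.mean(), o - o.mean()
    return float(fa @ oa / np.sqrt((fa @ fa) * (oa @ oa)))

def extreme_ac_permutation_test(fcst, obs, quantile=0.8, n_perm=5000, seed=1):
    # AC over forecast extremes versus ACs over random subsets of the same size.
    rng = np.random.default_rng(seed)
    extreme = np.abs(fcst) >= np.quantile(np.abs(fcst), quantile)
    ac_extreme = anomaly_correlation(fcst[extreme], obs[extreme])
    n = int(extreme.sum())
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.choice(fcst.size, size=n, replace=False)
        null[i] = anomaly_correlation(fcst[idx], obs[idx])
    p_value = float(np.mean(null >= ac_extreme))   # chance a random subset does as well
    return ac_extreme, p_value

# Toy usage: correlated forecast/observation pairs.
rng = np.random.default_rng(0)
obs = rng.normal(size=300)
fcst = 0.5 * obs + rng.normal(scale=0.9, size=300)
print(extreme_ac_permutation_test(fcst, obs))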

Full access
Huug van den Dool, Emily Becker, Li-Chuan Chen, and Qin Zhang

Abstract

An ordinary regression of predicted versus observed probabilities is presented as a direct and simple procedure for minimizing the Brier score (BS) and improving the attributes diagram. The main example applies to seasonal prediction of extratropical sea surface temperature by a global coupled numerical model. In connection with this calibration procedure, the probability anomaly correlation (PAC) is developed. This emphasizes the exact analogy of PAC and minimizing BS to the widely used anomaly correlation (AC) and minimizing mean squared error in physical units.
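
A minimal sketch of the idea follows (hypothetical variable names and a tercile event with climatology 1/3; the paper's exact regression setup may differ): regress observed event indicators on forecast probabilities, both expressed as anomalies from climatology, and measure their correspondence with a probability anomaly correlation.

import numpy as np

def calibrate_probabilities(p_fcst, event_obs, clim=1.0 / 3.0):
    # Least-squares fit of observed 0/1 event indicators to forecast
    # probabilities, both as anomalies from climatology; among forecasts of
    # the form clim + b * anomaly this minimizes the Brier score on the
    # training sample.
    x = p_fcst - clim
    y = event_obs.astype(float) - clim
    slope = float((x * y).sum() / (x * x).sum())
    return np.clip(clim + slope * x, 0.0, 1.0), slope

def probability_anomaly_correlation(p_fcst, event_obs, clim=1.0 / 3.0):
    x = p_fcst - clim
    y = event_obs.astype(float) - clim
    return float((x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum()))

def brier_score(p, event_obs):
    return float(np.mean((p - event_obs) ** 2))

# Toy usage with made-up forecasts of an above-normal tercile event.
rng = np.random.default_rng(0)
p_raw = np.clip(rng.normal(1.0 / 3.0, 0.2, size=200), 0.0, 1.0)
events = rng.random(200) < p_raw                   # synthetic, roughly reliable events
p_cal, slope = calibrate_probabilities(p_raw, events)
print(brier_score(p_raw, events), brier_score(p_cal, events),
      probability_anomaly_correlation(p_raw, events))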

Full access
David A. Unger, Huug van den Dool, Edward O’Lenic, and Dan Collins

Abstract

A regression model was developed for use with ensemble forecasts. Ensemble members are assumed to represent a set of equally likely solutions, one of which will best fit the observation. If standard linear regression assumptions apply to the best member, then a regression relationship can be derived between the full ensemble and the observation without explicitly identifying the best member for each case. The ensemble regression equation is equivalent to linear regression between the ensemble mean and the observation, but is applied to each member of the ensemble. The “best member” error variance is defined in terms of the correlation between the ensemble mean and the observations, their respective variances, and the ensemble spread. A probability density function representing the ensemble prediction is obtained from the normalized sum of the best-member error distribution applied to the regression forecast from each ensemble member. Ensemble regression was applied to National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) forecasts of seasonal mean Niño-3.4 SSTs on historical forecasts for the years 1981–2005. The skill of the ensemble regression was about the same as that of the linear regression on the ensemble mean when measured by the continuous ranked probability score (CRPS), and both methods produced reliable probabilities. The CFS spread appears slightly too high for its skill, and the CRPS of the CFS predictions can be slightly improved by reducing its ensemble spread to about 0.8 of its original value prior to regression calibration.
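
As a rough sketch of the ensemble regression idea (hypothetical arrays; the paper's exact expression for the best-member error variance differs in detail): fit a regression of the observation on the ensemble mean over the hindcasts, apply that same equation to every member of a new ensemble, and build the forecast PDF as a normalized sum of Gaussian kernels centered on the regressed members.

import numpy as np

def ensemble_regression(ens_train, obs_train, ens_new):
    # ens_train: (n_cases, n_members) hindcast ensembles; obs_train: (n_cases,).
    # Fit obs = a + b * (ensemble mean), then apply a + b * member to each
    # member of the new ensemble and estimate a common kernel width.
    xm = ens_train.mean(axis=1)
    b = ((xm - xm.mean()) * (obs_train - obs_train.mean())).mean() / xm.var()
    a = obs_train.mean() - b * xm.mean()
    centers = a + b * np.asarray(ens_new)
    resid_var = np.var(obs_train - (a + b * xm))        # error variance of the mean regression
    spread_var = b ** 2 * ens_train.var(axis=1).mean()  # regressed ensemble spread
    kernel_var = max(resid_var - spread_var, 1e-6)      # crude "best member" error variance
    return centers, float(np.sqrt(kernel_var))

def ensemble_pdf(x, centers, sigma):
    # Normalized sum of Gaussian best-member kernels around the regressed members.
    k = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    return k.sum(axis=1) / (centers.size * sigma * np.sqrt(2.0 * np.pi))

# Toy usage with synthetic hindcasts.
rng = np.random.default_rng(0)
signal = rng.normal(size=30)
obs_train = signal + rng.normal(scale=0.8, size=30)
ens_train = signal[:, None] + rng.normal(scale=0.5, size=(30, 15))
centers, sigma = ensemble_regression(ens_train, obs_train, ens_train[-1])
x = np.linspace(-5.0, 5.0, 401)
pdf = ensemble_pdf(x, centers, sigma)
print(sigma, float(pdf.sum() * (x[1] - x[0])))          # kernel width and total probability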

Full access
Huug M. van den Dool, Edward A. O'Lenic, and William H. Klein

Abstract

A time series of 43 years of observed monthly mean air temperature at 109 sites in the 48 contiguous United States is compared to monthly mean air temperature specified from hemispheric gridded 700-mb heights. Because both upper-air and surface data have problems that may limit their use in climate change studies, this comparison could be considered a mutual consistency check. Cooling (by about 0.5°C) from 1951 to about 1970 and subsequent warming (also by 0.5°C) that continues through the present are found in both datasets, indicating that these interdecadal changes are probably real.

In the last several years the specified temperatures were often colder than those observed. This prompted an investigation of whether the “residual” (specified minus observed) has recently been large (and negative) compared to the earlier part of the record. It was found that for the same 700-mb height field, surface temperatures were almost a degree Celsius warmer in the last few years than they were in the early 1950s, but considering the variability of the residuals over the 1950–92 period, the recent cold residuals may not yet be strikingly unusual.

By comparing the full set of 109 stations to a “clean” subset of 24, the impact of common problems in surface data (station relocation, urbanization, etc.) was found to be quite small. The rather favorable comparison of observed surface temperatures and specified surface temperatures (which suffer from upper-air analysis/observation changes over the years) indicates that their respective data problems do not appear to invalidate their use in studies of interdecadal temperature change.
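
A toy sketch of the residual bookkeeping described above (all numbers synthetic, chosen only to mimic a specified-minus-observed drift):

import numpy as np

# Synthetic annual mean temperatures (degrees C) for 1950-1992 at one station.
years = np.arange(1950, 1993)
rng = np.random.default_rng(2)
observed = 11.0 + 0.01 * (years - 1950) + rng.normal(0.0, 0.6, years.size)
specified = observed - 0.02 * (years - 1950) + rng.normal(0.0, 0.4, years.size)

residual = specified - observed                     # "specified minus observed"
early = residual[years <= 1954].mean()
late = residual[years >= 1988].mean()
print(f"mean residual 1950-54: {early:+.2f} C, 1988-92: {late:+.2f} C")
print(f"residual standard deviation, 1950-92: {residual.std(ddof=1):.2f} C")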

Full access
Anthony G. Barnston, Michael K. Tippett, Huug M. van den Dool, and David A. Unger

Abstract

Since 2002, the International Research Institute for Climate and Society, later in partnership with the Climate Prediction Center, has issued an ENSO prediction product informally called the ENSO prediction plume. Here, measures to improve the reliability and usability of this product are investigated, including bias and amplitude corrections, the multimodel ensembling method, formulation of a probability distribution, and the format of the issued product. Analyses using a subset of the current set of plume models demonstrate the necessity to correct individual models for mean bias and, less urgently, also for amplitude bias, before combining their predictions. The individual ensemble members of all models are weighted equally in combining them to form a multimodel ensemble mean forecast, because apparent model skill differences, when not extreme, are indistinguishable from sampling error when based on a sample of 30 cases or less. This option results in models with larger ensemble numbers being weighted relatively more heavily. Last, a decision is made to use the historical hindcast skill to determine the forecast uncertainty distribution rather than the models’ ensemble spreads, as the spreads may not always reproduce the skill-based uncertainty closely enough to create a probabilistically reliable uncertainty distribution. Thus, the individual model ensemble members are used only for forming the models’ ensemble means and the multimodel forecast mean. In other situations, the multimodel member spread may be used directly. The study also leads to some new formats in which to more effectively show both the mean ENSO prediction and its probability distribution.
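
A schematic of the combination steps described above (hypothetical function and variable names; the operational plume processing is more involved): correct each model against its own hindcasts, pool all members with equal weight, and attach a Gaussian uncertainty whose width comes from historical hindcast skill rather than from the multimodel spread.

import numpy as np

def correct_model(fcst_members, hindcast_means, matching_obs):
    # Remove a model's mean bias and rescale its amplitude using its own
    # hindcast ensemble means and the matching observations (1-D arrays of past cases).
    anom = fcst_members - hindcast_means.mean()                  # mean-bias removal
    anom = anom * (matching_obs.std() / hindcast_means.std())    # amplitude correction
    return matching_obs.mean() + anom

def multimodel_plume(corrected_members_per_model, hindcast_rmse):
    # Equal weight per member (so models with more members count more); the
    # forecast uncertainty is N(mean, hindcast_rmse**2), i.e. skill based,
    # not the multimodel member spread.
    members = np.concatenate(corrected_members_per_model)
    return float(members.mean()), float(hindcast_rmse)

# Toy usage with two made-up models.
rng = np.random.default_rng(0)
obs_hist = rng.normal(0.0, 1.0, 30)
modelA_hist = 0.8 * obs_hist + 0.5 + rng.normal(0.0, 0.4, 30)    # biased, damped model
modelB_hist = 1.2 * obs_hist - 0.3 + rng.normal(0.0, 0.5, 30)
fcstA = correct_model(np.array([1.1, 1.3, 0.9, 1.2]), modelA_hist, obs_hist)
fcstB = correct_model(np.array([0.6, 0.8, 1.0, 0.7, 0.9, 1.1]), modelB_hist, obs_hist)
print(multimodel_plume([fcstA, fcstB], hindcast_rmse=0.6))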

Full access
Jeffrey Anderson, Huug van den Dool, Anthony Barnston, Wilbur Chen, William Stern, and Jeffrey Ploshay

Abstract

A statistical model and extended ensemble integrations of two atmospheric general circulation models (GCMs) are used to simulate the extratropical atmospheric response to forcing by observed SSTs for the years 1980 through 1988. The simulations are compared to observations using the anomaly correlation and root-mean-square error of the 700-hPa height field over a region encompassing the extratropical North Pacific Ocean and most of North America. On average, the statistical model is found to produce considerably better simulations than either numerical model, even when simple statistical corrections are used to remove systematic errors from the numerical model simulations. In the mean, the simulation skill is low, but there are some individual seasons for which all three models produce simulations with good skill.

An approximate upper bound to the simulation skill that could be expected from a GCM ensemble, if the model's response to SST forcing is assumed to be perfect, is computed. This perfect model predictability allows one to make some rough extrapolations about the skill that could be expected if one could greatly improve the mean response of the GCMs without significantly impacting the variance of the ensemble. These perfect model predictability skills are better than the statistical model simulations during the summer, but for the winter, present-day statistical forecasts already have skill that is as high as the upper bound for the GCMs. Simultaneous improvements to the GCM mean response and reduction in the GCM ensemble variance would be required for these GCMs to do significantly better than the statistical model in winter. This does not preclude the possibility that, as is presently the case, a statistical blend of GCM and statistical predictions could produce a simulation better than either alone.

Because of the primitive state of coupled ocean–atmosphere GCMs, the vast majority of seasonal predictions currently produced by GCMs are performed using a two-tiered approach in which SSTs are first predicted and then used to force an atmospheric model; this motivates the examination of the simulation problem. However, it is straightforward to use the statistical model to produce true forecasts by changing its predictors from simultaneous to precursor SSTs. An examination of the decrease in skill of the statistical model when changed from simulation to prediction mode is extrapolated to draw conclusions about the skill to be expected from good coupled GCM predictions.
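
The perfect-model upper bound mentioned above is often estimated along the following lines (a sketch under that common convention, not necessarily the exact procedure used here): each ensemble member is treated in turn as the verifying "truth" and the mean of the remaining members is scored against it.

import numpy as np

def perfect_model_ac(ensemble):
    # ensemble: (n_members, n_points) anomaly fields for one season.
    # Treat each member as truth, verify the mean of the others against it,
    # and average the resulting anomaly correlations.
    n_members = ensemble.shape[0]
    acs = []
    for i in range(n_members):
        truth = ensemble[i]
        fcst = np.delete(ensemble, i, axis=0).mean(axis=0)
        fa, ta = fcst - fcst.mean(), truth - truth.mean()
        acs.append(float(fa @ ta / np.sqrt((fa @ fa) * (ta @ ta))))
    return float(np.mean(acs))

# Toy usage: members share a forced signal plus independent noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=400)
members = signal + rng.normal(scale=1.5, size=(10, 400))
print(perfect_model_ac(members))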

Full access
Yun Fan, Vladimir Krasnopolsky, Huug van den Dool, Chung-Yu Wu, and Jon Gottschalck

Abstract

Forecast skill from dynamical forecast models decreases quickly with projection time due to various errors. Therefore, postprocessing methods, from simple bias correction methods to more complicated multiple linear regression–based model output statistics, are used to improve raw model forecasts. Usually, these methods show clear forecast improvement over the raw model forecasts, especially for short-range weather forecasts. However, linear approaches have limitations because the relationship between predictands and predictors may be nonlinear. This is even truer for extended range forecasts, such as week-3–4 forecasts. In this study, neural network techniques are used to seek or model the relationships between a set of predictors and predictands, and eventually to improve week-3–4 precipitation and 2-m temperature forecasts made by the NOAA/NCEP Climate Forecast System. Benefitting from advances in machine learning techniques in recent years, more flexible and capable machine learning algorithms and availability of big datasets enable us not only to explore nonlinear features or relationships within a given large dataset, but also to extract more sophisticated pattern relationships and covariabilities hidden within the multidimensional predictors and predictands. Then these more sophisticated relationships and high-level statistical information are used to correct the model week-3–4 precipitation and 2-m temperature forecasts. The results show that to some extent neural network techniques can significantly improve the week-3–4 forecast accuracy and greatly increase the efficiency over the traditional multiple linear regression methods.
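
As a minimal illustration of this kind of nonlinear postprocessing (synthetic data and an off-the-shelf scikit-learn network, not the system described in the paper):

import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic training set: the predictors stand in for raw week-3-4 model output
# (and any other inputs); the predictand is the observed week-3-4 anomaly.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))
y_train = 0.6 * X_train[:, 0] - 0.3 * X_train[:, 1] ** 2 + rng.normal(0.0, 0.5, 1000)

# A small feed-forward network learns the (possibly nonlinear) correction
# that a multiple linear regression could only approximate.
nn = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
nn.fit(X_train, y_train)

X_new = rng.normal(size=(5, 8))      # new raw forecasts to be corrected
print(nn.predict(X_new))             # calibrated week-3-4 anomalies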

Restricted access