Search Results

You are looking at items 21–28 of 28 for Author or Editor: H. M. van den Dool
Y. Fan, H. M. Van den Dool, D. Lohmann, and K. Mitchell

Abstract

Land surface variables, such as soil moisture, are among the most important components of memory for the climate system. An accurate, long time series of land surface data is very important for real-time drought monitoring, for understanding land surface–atmosphere interaction, and for improving weather and climate prediction. Thus, the ultimate goal of the present work is to produce a long-term “land reanalysis” with 1) retrospective and 2) real-time update components that are both generated in a manner that remains temporally homogeneous throughout the record. As the first step toward this goal, the retrospective component is reported here. Specifically, a 51-yr (1948–98) set of hourly land surface meteorological forcing is produced and used to execute the Noah land surface model, all on the 1/8° grid of the North American Land Data Assimilation System (NLDAS). The surface forcing includes air temperature, air humidity, surface pressure, wind speed, and surface downward shortwave and longwave radiation, all derived from the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) Global Reanalysis. Additionally, a newly improved precipitation analysis is used to provide realistic hourly precipitation forcing on the NLDAS grid. Some unique procedures are described and applied to yield retroactive forcing that is temporally homogeneous over the 51 yr at the NLDAS spatial and temporal resolution, including a terrain height adjustment that accounts for the terrain differences between the global reanalysis and the NLDAS. The land model parameters and fixed fields are derived from existing high-resolution datasets of vegetation, soil, and orography.
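The terrain height adjustment described above can be sketched with a standard lapse-rate and hydrostatic correction from the coarse reanalysis terrain to the fine NLDAS terrain. This is a textbook illustration, not the exact NLDAS procedure; the 6.5 K per km lapse rate, the constants, and the function name are assumptions.

```python
import numpy as np

# Standard-atmosphere constants (assumed; not taken from the paper).
LAPSE = 6.5e-3   # K per m, standard lapse rate
G = 9.80665      # gravitational acceleration, m s-2
RD = 287.05      # gas constant for dry air, J kg-1 K-1

def adjust_to_terrain(t_src, p_src, z_src, z_dst):
    """Adjust near-surface temperature (K) and surface pressure (Pa) from
    the source-grid terrain height z_src to the target-grid height z_dst (m),
    using a fixed lapse rate and the hydrostatic relation."""
    dz = z_dst - z_src
    t_dst = t_src - LAPSE * dz                     # lapse-rate temperature change
    t_mean = 0.5 * (t_src + t_dst)                 # layer-mean temperature
    p_dst = p_src * np.exp(-G * dz / (RD * t_mean))  # hydrostatic pressure change
    return t_dst, p_dst
```

Raising the terrain by 1 km cools the adjusted temperature by 6.5 K and lowers the pressure by roughly 11%.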
The land reanalysis output from the Noah land surface model consists of eight energy balance components and skin temperature, which are output at 3-hourly intervals, and 15 other variables (i.e., water balance components, surface state variables, etc.), which are output at daily intervals for the period of 1 January 1948 through 31 December 1998.

Using soil moisture observations throughout Illinois over 1984–98 as validation, an improvement in the simulated soil moisture (of the Noah model versus a forerunner leaky bucket model) is illustrated in terms of an improved annual cycle (much better phasing) and somewhat higher anomaly correlation for the anomalies, especially in central and southern Illinois. Nonetheless, considerable room for model improvement remains. For example, the simulated anomalies are overly uniform in the vertical compared to the observations, and some likely routes for model improvement in this aspect are proposed.
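The anomaly-correlation validation described above amounts to removing each series’ mean annual cycle and correlating the residuals. The following is a minimal sketch with a hypothetical helper name; the paper’s exact processing is not reproduced.

```python
import numpy as np

def anomaly_correlation(sim, obs, months):
    """Correlation between simulated and observed anomalies, where the
    anomaly at each time is the departure from that calendar month's mean.
    sim, obs: 1-D series; months: calendar month (1-12) of each entry."""
    sim = np.asarray(sim, float)
    obs = np.asarray(obs, float)
    months = np.asarray(months)
    sim_a = sim.copy()
    obs_a = obs.copy()
    for m in np.unique(months):
        idx = months == m
        sim_a[idx] -= sim[idx].mean()   # remove simulated annual cycle
        obs_a[idx] -= obs[idx].mean()   # remove observed annual cycle
    return np.corrcoef(sim_a, obs_a)[0, 1]
```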

H. M. van den Dool, S. Saha, and Å. Johansson

Abstract

A new variant is proposed for calculating functions empirically and orthogonally from a given space–time dataset. The method is rooted in multiple linear regression and yields solutions that are orthogonal in one direction, either space or time. In the normal setup, one searches for that point in space, the base point (predictor), which, by linear regression, explains the most variance at all other points (predictands) combined. The first spatial pattern is the set of regression coefficients between the base point and all other points, and the first time series is taken to be the time series of the raw data at the base point. The original dataset is then reduced; that is, what has been accounted for by the first mode is subtracted out. The procedure is repeated exactly as before for the second, third, etc., modes. These new functions are named empirical orthogonal teleconnections (EOTs), to emphasize their similarity to both teleconnections and (biorthogonal) empirical orthogonal functions (EOFs). One has to choose the orthogonal direction for EOTs. In the normal space–time setup described above, picking successive base points in space, the time series are orthogonal. One can reverse the roles of time and space: in this case one picks base points in time, and the spatial maps will be orthogonal. If the dataset contains biorthogonal modes, the EOTs are the same for both setups and are equal to the EOFs. When applied to four commonly used datasets, the procedure is found to work well in terms of explained variance (EV) and in terms of extracting familiar patterns. In all examples the EV for EOTs is only slightly less than the optimum obtained by EOFs. A numerical recipe is given to calculate EOFs, starting from EOTs as an initial guess. When subjected to cross validation, the EOTs fare well in terms of explained variance on independent data (as well as EOFs do).
The EOT procedure can be implemented very easily and has, for some (but not all) applications, advantages over EOFs. These novelties, advantages, and applications include the following. 1) One can pick certain modes (or base points) first; the order of the EOTs is free, and there is a near-infinite set of EOTs. 2) EOTs are linked to specific points in space or moments in time. 3) When linked to the flow at specific moments in time, the EOT modes have undeniable physical reality. 4) When linked to the flow at specific moments in time, EOTs appear to be building blocks for empirical forecast methods because one can naturally access the time derivative. 5) When linked to specific points in space, one has a rational basis for defining strategically chosen points such that an analysis of the whole domain would benefit maximally from observations at these locations.
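The normal space–time setup described above (base-point search, regression pattern, reduction, repeat) can be sketched directly. This is a minimal illustration of the stated procedure, not the authors’ code; the explained-variance measure is the sum of squared covariances normalized by the base-point variance.

```python
import numpy as np

def eot(X, nmodes=2):
    """Empirical orthogonal teleconnections, 'normal' setup: base points in
    space, time series orthogonal. X: (ntime, nspace) anomaly array."""
    X = np.array(X, float)
    patterns, series, bases = [], [], []
    for _ in range(nmodes):
        var = (X ** 2).sum(axis=0)              # sum of squares at each point
        cov = X.T @ X                           # cov[s, j] = sum_t X[t,s] X[t,j]
        with np.errstate(divide="ignore", invalid="ignore"):
            ev = np.where(var > 0, (cov ** 2).sum(axis=1) / var, 0.0)
        s = int(np.argmax(ev))                  # base point explaining most variance
        t = X[:, s].copy()                      # time series = raw data at base point
        beta = X.T @ t / (t @ t)                # regression pattern onto all points
        X = X - np.outer(t, beta)               # reduce: subtract what this mode explains
        patterns.append(beta)
        series.append(t)
        bases.append(s)
    return np.array(patterns), np.array(series), bases
```

Because each mode’s contribution is removed by regression, successive time series are mutually orthogonal, and each spatial pattern equals 1 at its own base point.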

C. J. Kok, J. D. Opsteegh, and H. M. van den Dool

Abstract

Using a two-level, linear, steady-state model, we diagnose the 40-day mean response of a GCM to a tropical sea surface temperature (SST) anomaly. The time-mean anomalies produced by the GCM are simulated as the linear response to the anomalous hemispheric distributions of latent heating, sensible heating, and transient eddy forcing. The anomalous effect of mountains, caused by anomalies in the zonal mean surface wind, is also taken into account. All anomalies are defined as the difference between the perturbation and control runs. For our analysis, we have taken the tropical Atlantic SST anomaly experiment performed by Rowntree.

We have compared the linear model's response in temperature at 600 mb and winds at 400 mb with the same anomalous quantities produced by the GCM. The similarity between the time-mean anomalies of the GCM experiment and the linear model's response is very high: the pattern correlation coefficients are between 0.6 and 0.7 in the region between 30°N and 60°N. The response to each of the anomalous forcings separately is positively correlated with the GCM anomaly pattern. The amplitude of the response to anomalous forcing by transient eddies is a factor of two to three larger than that of the responses to anomalous sensible and latent heating. The anomalous effect of the orography is negligible.

Although the experiment was intended as a tropical SST anomaly GCM experiment, the difference between the control and perturbation runs does not seem to be directly related to tropical heating near the SST anomaly. Instead, most of the forcing of anomalies in the midlatitudes took place in the midlatitudes themselves; in particular, the remote effects of forcing by tropical latent heat sources were minor.
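A pattern correlation of the kind quoted above can be computed as a centered, area-weighted correlation between two gridded fields. The cos(latitude) weighting is an assumption for illustration; the abstract does not state the weighting actually used.

```python
import numpy as np

def pattern_correlation(a, b, lats):
    """Centered, cos(latitude)-weighted pattern correlation between two
    2-D fields of shape (nlat, nlon). lats: latitudes in degrees."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    # Area weights: cosine of latitude, broadcast across longitudes.
    w = np.cos(np.deg2rad(np.asarray(lats, float)))[:, None] * np.ones_like(a)
    wa = a - np.average(a, weights=w)   # remove weighted mean (centering)
    wb = b - np.average(b, weights=w)
    num = np.average(wa * wb, weights=w)
    den = np.sqrt(np.average(wa ** 2, weights=w) * np.average(wb ** 2, weights=w))
    return num / den
```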

J. L. Nap, H. M. Van den Dool, and J. Oerlemans

Abstract

Monthly forecasts of temperature, rainfall, and sunshine have been verified over the period 1970–79. The predictions were based on seven different schemes: five refer to De Bilt (the Netherlands), one to southeast England, and one to the Federal Republic of Germany. The results are not very encouraging for any of the methods; the skill is negligible, except for a few schemes that predicted the monthly mean temperature ∼10% better than climatology.
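Skill relative to climatology, as in the ∼10% figure above, is often expressed as the percent reduction in mean-squared error over a climatological forecast. The abstract does not specify which measure was used, so the following is an illustrative stand-in.

```python
import numpy as np

def skill_vs_climatology(forecast, observed):
    """Percent reduction in mean-squared error relative to forecasting the
    climatological (long-term mean) value every time."""
    f = np.asarray(forecast, float)
    o = np.asarray(observed, float)
    mse_f = np.mean((f - o) ** 2)
    mse_c = np.mean((o.mean() - o) ** 2)   # climatology forecast = observed mean
    return 100.0 * (1.0 - mse_f / mse_c)
```

A perfect forecast scores 100, a climatology forecast scores 0, and a forecast worse than climatology scores negative.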

H. M. Van den Dool, Peitao Peng, Åke Johansson, Muthuvel Chelliah, Amir Shabbar, and Suranjana Saha

Abstract

The question of the impact of the Atlantic on North American (NA) seasonal prediction skill and predictability is examined. Basic material is collected from the literature, a review of seasonal forecast procedures in Canada and the United States, and some fresh calculations using the NCEP–NCAR reanalysis data.

The general impression is one of low predictability (due to the Atlantic) for seasonal mean surface temperature and precipitation over NA. Predictability may be slightly better in the Caribbean and the (sub)tropical Americas, even for precipitation. The North Atlantic Oscillation (NAO) is widely seen as the agent making the Atlantic influence felt in NA. While the NAO is well established in most months, its prediction skill is limited. Year-round evidence is also found for an equatorially displaced version of the NAO (named ED_NAO) that carries a good fraction of the variance.

In general the predictability from the Pacific is thought to dominate over that from the Atlantic sector, which explains the minimal number of reported Atmospheric Model Intercomparison Project (AMIP) runs that explore Atlantic-only impacts. Caveats are noted as to the question of the influence of a single predictor in a nonlinear environment with many predictors. Skill of a new one-tier global coupled atmosphere–ocean model system at NCEP is reviewed; limited skill is found in midlatitudes and there is modest predictability to look forward to.

There are several signs of enthusiasm in the community about using “trends” (low-frequency variations): (a) seasonal forecast tools include persistence of the last 10 years’ averaged anomaly (relative to the official 30-yr climatology), (b) hurricane forecasts are based largely on recognizing a global multidecadal mode (which is similar to an Atlantic trend mode in SST), and (c) two recent papers, one empirical and one modeling, give equal roles to the (North) Pacific and the Atlantic in “explaining” variations in drought frequency over NA on 20-yr or longer time scales during the twentieth century.
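Forecast tool (a), persistence of the last 10 years’ averaged anomaly relative to a fixed 30-yr climatology, can be sketched as follows; the function name and input layout are illustrative, not from the paper.

```python
import numpy as np

def ten_year_persistence_anomaly(series, clim30):
    """Forecast anomaly as the mean of the last 10 entries of `series`
    (seasonal means, most recent last) minus the 30-yr climatology value."""
    last10 = np.asarray(series, float)[-10:]
    return last10.mean() - clim30
```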

Huug M. van den Dool, Edward A. O'Lenic, and William H. Klein

Abstract

A time series of 43 years of observed monthly mean air temperature at 109 sites in the 48 contiguous United States is compared to monthly mean air temperature specified from hemispheric gridded 700-mb heights. Because both upper-air and surface data have problems that may limit their use in climate change studies, this comparison could be considered a mutual consistency check. Cooling (by about 0.5°C) from 1951 to about 1970 and subsequent warming (also by 0.5°C) that continues through the present are found in both datasets, indicating that these interdecadal changes are probably real.

In the last several years the specified temperatures were often colder than those observed. This prompted an investigation of whether the “residual” (specified minus observed) has recently been large (and negative) compared to the earlier part of the record. It was found that, for the same 700-mb height field, surface temperatures were almost a degree Celsius warmer in the last few years than they were in the early 1950s; however, considering the variability of the residuals over the 1950–92 period, the recent cold residuals may not yet be strikingly unusual.

By comparing the full set of 109 stations to a “clean” subset of 24, the impact of common problems in surface data (station relocation, urbanization, etc.) was found to be quite small. The rather favorable comparison of observed surface temperatures and specified surface temperatures (which suffer from upper-air analysis/observation changes over the years) indicates that their respective data problems do not appear to invalidate their use in studies of interdecadal temperature change.
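Specifying surface temperature from gridded 700-mb heights can be illustrated with a generic multiple linear regression fit by least squares. This is a stand-in for illustration only; the paper’s actual specification equations and predictor screening are not reproduced here.

```python
import numpy as np

def specify_temperature(heights, temps, heights_new):
    """Fit temps ~ intercept + heights by least squares, then specify
    temperatures for new height predictors.
    heights: (ntime, npred); temps: (ntime,); heights_new: (nnew, npred)."""
    H = np.column_stack([np.ones(len(heights)), np.asarray(heights, float)])
    coef, *_ = np.linalg.lstsq(H, np.asarray(temps, float), rcond=None)
    Hn = np.column_stack([np.ones(len(heights_new)), np.asarray(heights_new, float)])
    return Hn @ coef
```

The “residual” discussed above would then be the specified value minus the observed temperature.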

Ming Cai, Chul-Su Shin, H. M. van den Dool, Wanqiu Wang, S. Saha, and A. Kumar

Abstract

This paper analyzes long-term surface air temperature trends in a 25-yr (1982–2006) dataset of retrospective seasonal climate predictions made by the NCEP Climate Forecast System (CFS), a model that has its atmospheric greenhouse gases fixed at the 1988 concentration level. Although the CFS seasonal forecasts tend to follow the observed interannual variability very closely, there is a noticeable time-dependent discrepancy between the CFS forecasts and observations, with a warm model bias before 1988 and a cold bias afterward, except for a short-lived warm bias during 1992–94. The trend from warm to cold biases is likely caused by not including the observed increase in anthropogenic greenhouse gases in the CFS, whereas the warm bias in 1992–94 reflects the absence of the anomalous aerosols released by the 1991 Mount Pinatubo eruption. Skill analysis of the CFS seasonal climate predictions with and without the warming trend suggests that the 1997–98 El Niño event contributes significantly to the record-breaking global warmth in 1998, whereas the record-breaking warm decade since 2000 is mainly due to the effects of the increased greenhouse gases. Implications for operational seasonal prediction are also discussed.
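The time-dependent discrepancy described above is simply the yearly mean of forecast minus observation. A minimal illustrative sketch, not the authors’ diagnostic code:

```python
import numpy as np

def yearly_bias(forecast, observed, years):
    """Mean (forecast - observed) for each distinct year.
    Returns the sorted unique years and the corresponding mean biases."""
    f = np.asarray(forecast, float)
    o = np.asarray(observed, float)
    y = np.asarray(years)
    uy = np.unique(y)
    return uy, np.array([(f[y == yy] - o[y == yy]).mean() for yy in uy])
```

A positive value in one year and negative in another is exactly the warm-then-cold bias pattern the abstract attributes to the fixed greenhouse gas concentration.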

S. Saha, S. Nadiga, C. Thiaw, J. Wang, W. Wang, Q. Zhang, H. M. Van den Dool, H.-L. Pan, S. Moorthi, D. Behringer, D. Stokes, M. Peña, S. Lord, G. White, W. Ebisuzaki, P. Peng, and P. Xie

Abstract

The Climate Forecast System (CFS), the fully coupled ocean–land–atmosphere dynamical seasonal prediction system, which became operational at NCEP in August 2004, is described and evaluated in this paper. The CFS provides important advances in operational seasonal prediction on a number of fronts. For the first time in the history of U.S. operational seasonal prediction, a dynamical modeling system has demonstrated a level of skill in forecasting U.S. surface temperature and precipitation that is comparable to the skill of the statistical methods used by the NCEP Climate Prediction Center (CPC). This represents a significant improvement over the previous dynamical modeling system used at NCEP. Furthermore, the skill provided by the CFS spatially and temporally complements the skill provided by the statistical tools. The availability of a dynamical modeling tool with demonstrated skill should result in overall improvement in the operational seasonal forecasts produced by CPC.

The atmospheric component of the CFS is a lower-resolution version of the Global Forecast System (GFS) that was the operational global weather prediction model at NCEP during 2003. The ocean component is the GFDL Modular Ocean Model version 3 (MOM3). There are several important improvements inherent in the new CFS relative to the previous dynamical forecast system. These include (i) the atmosphere–ocean coupling spans almost all of the globe (as opposed to the tropical Pacific only); (ii) the CFS is a fully coupled modeling system with no flux correction (as opposed to the previous uncoupled “tier-2” system, which employed multiple bias and flux corrections); and (iii) a set of fully coupled retrospective forecasts covering a 24-yr period (1981–2004), with 15 forecasts per calendar month out to nine months into the future, has been produced with the CFS.

These 24 years of fully coupled retrospective forecasts are of paramount importance to the proper calibration (bias correction) of subsequent operational seasonal forecasts. They provide a meaningful a priori estimate of model skill that is critical in determining the utility of the real-time dynamical forecast in the operational framework. The retrospective dataset also provides a wealth of information for researchers to study interactive atmosphere–land–ocean processes.
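The calibration role of the retrospective forecasts can be sketched as removing a hindcast-derived model bias from a real-time forecast. This is a minimal illustration under the assumption that the bias is estimated, per lead time and calendar month, as the mean hindcast minus the observed climatology; the operational calibration is more elaborate.

```python
import numpy as np

def bias_correct(realtime_fcst, hindcasts, obs_clim):
    """Remove the model's hindcast bias from a real-time forecast value.
    hindcasts: retrospective forecasts valid for the same calendar month
    and lead time; obs_clim: observed climatology for that month."""
    model_clim = np.mean(np.asarray(hindcasts, float))
    bias = model_clim - obs_clim           # systematic model drift at this lead
    return realtime_fcst - bias
```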
