Search Results

You are looking at 1–10 of 13 items for:

  • Author or Editor: H. M. van den Dool
  • Monthly Weather Review
H. M. van den Dool

Abstract

We address whether there are pairs of instantaneous 500-mb flow patterns that are, relative to the climatology, as much as possible each other's opposite (which we term “mirror images” or “antilogs”), and we investigate whether such flows are followed by opposite 12-h time tendencies.

Over eastern North America, for almost all wintertime flow patterns in a 15-yr dataset, it is almost as easy to find an antilog as an analog. Exceptions are very deep lows, for which no mirror-imaged highs exist. In addition, antilogs make for 12-h height forecasts at a skill level almost as good as that of forecasts based on analogs.

Therefore the multivariate height distribution (i.e., the distribution of flow patterns) is almost symmetric, and mirror imaging an observed flow is likely to yield a physically plausible pattern, although one not necessarily observed so far. Note that time tendencies tend to be opposite for opposite initial conditions over short periods of time, even though the perturbations (full anomalies) are not small. An explanation of the latter is sought by running a global barotropic model from both regular and mirror-imaged initial conditions. Out to 12 h the tendency of the midlatitude streamfunction is primarily determined by the linear part of the absolute vorticity advection. On small scales (i.e., the vorticity field), however, forecasts deteriorate after about 6 h when the nonlinear term is either omitted (the linear run) or represented incorrectly (the mirror-imaged run).
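
To make the antilog search concrete, here is a minimal Python sketch (not the authors' code) of how one might pick the best analog and the best antilog of a base anomaly map from a historical library by rms distance. The array shapes, the synthetic data, and the use of rms distance as the closeness measure are illustrative assumptions.

```python
import numpy as np

def best_matches(base_anom, library_anoms):
    """Return library indices of the best analog and best antilog.

    base_anom     : 1-D array, height anomaly of the base map (gpm)
    library_anoms : 2-D array (n_maps, n_points) of historical anomalies
    """
    # rms distance to each historical anomaly map (analog search)
    analog_dist = np.sqrt(((library_anoms - base_anom) ** 2).mean(axis=1))
    # rms distance to the sign-reversed base map (antilog search)
    antilog_dist = np.sqrt(((library_anoms + base_anom) ** 2).mean(axis=1))
    return int(analog_dist.argmin()), int(antilog_dist.argmin())

# Hypothetical example: 5000 synthetic maps on a 100-point local grid
rng = np.random.default_rng(0)
library = rng.normal(scale=80.0, size=(5000, 100))   # anomalies in gpm
base = rng.normal(scale=80.0, size=100)
i_analog, i_antilog = best_matches(base, library)
```

A 12-h antilog forecast would then add the sign-reversed 12-h tendency observed after map i_antilog to the base map.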

Full access
H. M. van den Dool

Abstract

In the literature, the use of analogues for short-range weather forecasting has practically been discarded, because no good matches for today's extratropical large-scale flow patterns can be found in a 30-year data library. We propose here a limited-area approach to analogue forecasting (AF). In order to make a 12-hour AF valid at a target point, we require analogy in initial states only over a circle with a radius of about 900 km. On a limited area there are usually several good analogues, sometimes agreeing to within observational error. Different historical analogues may be used at different target points.

The usefulness of the limited-area approach is first demonstrated with some examples. We then present verification statistics of 3000 12-hour 500-mb height point forecasts in the Northern Hemisphere winter at 38°N, 80°W (over West Virginia, U.S.A.). In order to beat persistence at 12 hours at this point, we need an analogue that differs by about 40 geopotential meters or less from the base case. This requirement is met almost all of the time using a 15-year dataset for analogue searching. We find a few percent of the analogue pairs to be within observational error. In the mean over the 3000 cases, the initial discrepancy is 33 gpm. When averaging over the first five analogues, 12-hour AF over the eastern United States can be characterized by a 52 gpm rms error and a 0.95 (0.77) anomaly (tendency) correlation. The forecasts have the correct amplitude, i.e., no damping, in spite of the averaging over five individual forecasts. We then show an example of a 500-mb height forecast map on a (roughly) 2000 × 2000 km area over the eastern part of North America. Although different analogues were used to arrive at the 12-hour forecast at each of the 25 gridpoints, the resulting map looks meteorological and the forecast is moderately successful. A verification of a large set of 12-hour forecast maps shows that the height gradients are indeed forecast with some skill. We then proceed to make 24-hour point forecasts by finding historical limited-area matches to the 12-hour forecast maps. This second time step indicates that the AF process holds up, with forecast accuracy increasing its margin over persistence.
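
A minimal sketch, under assumed array layouts, of how a 12-hour point forecast could be built by averaging the first five limited-area analogues. The helper name, the convention that column 0 of each window holds the target point, and the rms closeness measure are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def analog_point_forecast(base_window, library_windows, library_future_values,
                          base_value, n_analogs=5):
    """12-hour forecast at a target point from limited-area analogues.

    base_window           : 1-D array, current heights on the ~900-km circle
    library_windows       : 2-D array (n_cases, n_points), historical windows
    library_future_values : 1-D array (n_cases,), height at the target point
                            12 hours after each historical case
    base_value            : current height at the target point
    """
    # rank historical cases by rms distance over the limited area
    rms = np.sqrt(((library_windows - base_window) ** 2).mean(axis=1))
    best = np.argsort(rms)[:n_analogs]
    # assume column 0 of each window is the target point itself
    library_values = library_windows[best, 0]
    tendencies = library_future_values[best] - library_values
    # add the mean observed 12-hour tendency of the analogues to the base value
    return base_value + tendencies.mean()
```

Because several good analogues are usually available, the same machinery yields an ensemble whose spread can be examined case by case, in the spirit of the second paragraph below.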

Two applications are discussed. Comparing initial and 12-hour forecast error tells us something about “error growth” and predictability at that spot according to a perfect model. Given that there are usually several good analogues, Monte Carlo experiments and probabilistic forecasts naturally suggest themselves. In particular we find the spread of analogue forecasts to be an eminent predictor of forecast skill.

Refinements, applications, and the extension to longer-range forecasts are discussed in the final section.

Full access
H. M. van den Dool

Abstract

A study of long records of monthly mean air temperature (MMAT) for many stations in the Netherlands indicates that the atmosphere's response to surface boundary forcing is often of a very simple local nature. In the Dutch area, the atmosphere seems to respond to a sea surface temperature (SST) anomaly in the North Sea with an air temperature anomaly of the same sign. Because of the abrupt change in lower boundary forcing near the coastline, very small spatial scales are introduced in air temperature anomalies at long time scales. Over the sea, MMAT anomalies have much larger time scales than over the land; a similar increase in time scale can be found in the delay of the climatologically normal temperature with respect to the solar forcing. When extended to the United States, the study showed very similar results; that is, MMAT anomalies live longest in areas where the air temperature response is slowest to the annual cycle in incoming radiation. Apart from boundary forcing by the oceans, the Gulf of Mexico, and the Great Lakes, there is some evidence of forcing by snow cover in the Northeast.

Since surface boundary forcing by SST anomalies can be quite persistent, MMAT anomalies are more predictable over the sea and in the coastal zone than in the interior of big land masses. This explains why Madden and Shea (1978) found that the potentially predictable part of the interannual variance in MMAT is largest in predominantly coastal areas, California in particular. A sizable fraction of the potential predictability in these areas can be realized with such simple tools as linear regression onto antecedent MMAT.
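
As a rough illustration of the kind of simple tool mentioned above, the sketch below fits a lag-1 linear regression of a station's MMAT anomaly on the antecedent month's anomaly. The function name and data layout are assumptions for illustration.

```python
import numpy as np

def lag1_regression_forecast(anomalies):
    """Predict next month's MMAT anomaly from this month's by linear regression.

    anomalies : 1-D array of monthly mean air temperature anomalies in time order
    Returns (slope, intercept, forecast for the month after the last one).
    """
    x, y = anomalies[:-1], anomalies[1:]        # antecedent vs. following month
    slope, intercept = np.polyfit(x, y, 1)      # least-squares fit
    return slope, intercept, slope * anomalies[-1] + intercept
```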

Full access
H. M. van den Dool

Abstract

The level of month-to-month persistence of anomalies in the monthly mean atmospheric circulation was determined from a 29-year data set of Northern Hemisphere analyses of 500 mb height, surface pressure and 500–1000 mb thickness. A well-defined annual march is found, with greatest persistence from January to February and from July to August. The minima occur in spring and fall. Expressed as a linear correlation coefficient, the largest persistence amounts to no more than 0.3.
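
A minimal sketch of how such month-to-month persistence could be computed as a lag-1 linear correlation for each calendar-month pair (January-to-February, and so on). The array layout, years by calendar months, is an assumption for illustration.

```python
import numpy as np

def monthly_persistence(anom):
    """Month-to-month persistence of monthly mean anomalies.

    anom : 2-D array (n_years, 12) of monthly mean anomalies
    Returns 12 lag-1 correlations: element 0 is Jan->Feb, element 11 is
    Dec->Jan of the following year.
    """
    r = np.empty(12)
    for m in range(12):
        if m < 11:
            x, y = anom[:, m], anom[:, m + 1]
        else:                                  # December pairs with next January
            x, y = anom[:-1, 11], anom[1:, 0]
        r[m] = np.corrcoef(x, y)[0, 1]
    return r
```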

A qualitative explanation for the double peak in the annual march was sought in linear theory. The response of a stationary linear atmospheric model to the forcing of an anomalous heat source depends on the properties of the basic state around which the model is linearized. Model runs were made with climatological mean basic states corresponding to all 12 calendar months. In all runs the forcing was kept the same. As climatology changes least from January to February and from July to August, the model response to the constant forcing is then almost 100% persistent. The persistence is low from April to May and from October to November because in these months the basic state changes rather drastically.

Although the maxima in persistence on the monthly time scale in the observed circulation are indeed found in summer and winter, the level of persistence is far below 100%. This can be interpreted as observational evidence of the very large forcing of the time-mean atmosphere by high-frequency transient eddies. The forcing associated with long-lived anomalies in external factors (oceans, snow, etc.) seems to control only a small part of the observed anomalies in the atmospheric circulation.

Full access
J. Qin
and
H. M. van den Dool

Abstract

This paper presents a study of simple and inexpensive techniques for extending forecasts from NMC's Medium-Range Forecast (MRF) model. Three control forecasts are tested to make 1-day extensions of 500-mb height fields initiated from the MRF at days 0–9: persistence (PER), a divergent anomaly vorticity advection model (dAVA), and the empirical wave propagation (EWP) method.

First, the traditional 1–10-day global forecasts made by the MRF and the three controls from a common set of 361 initial conditions are discussed. Taking this as a basis, 1-day extension control forecasts starting from MRF predictions over four successive winters are examined next. Experiments show that, regardless of the presence or absence of systematic error in the MRF model output, there exists a point (T0 = n) in the forecast after which the 1-day extension of the day-n MRF out to day n + 1 by a control forecast is as good as or better than the continued integration of the full-blown MRF model. In particular, the EWP provides a 1-day extension that beats the MRF most consistently after about 6 days in the Northern Hemisphere. Decomposition of the forecasts in terms of zonal harmonics further indicates that the skill improvement over the MRF is primarily in the long waves, but contributions from shorter waves are not negligible.
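
The empirical wave propagation idea can be illustrated with a short sketch that advances each zonal harmonic of a latitude-circle height field by an assumed climatological phase speed. The function name, the per-wavenumber speed array, and the single-latitude framing are simplifications for illustration, not the EWP implementation used in the paper.

```python
import numpy as np

def ewp_extension(heights, phase_speed_deg_per_day, hours=24.0):
    """Advance a latitude-circle height field by rotating each zonal harmonic.

    heights                 : 1-D array of 500-mb heights around a latitude circle
    phase_speed_deg_per_day : 1-D array with one eastward phase speed (degrees
                              longitude per day) for each zonal wavenumber
                              0 .. len(heights)//2 (placeholder climatology)
    Returns the field `hours` later, each harmonic shifted by its own speed.
    """
    n = heights.size
    coeffs = np.fft.rfft(heights)                   # zonal harmonics 0..n/2
    k = np.arange(coeffs.size)                      # zonal wavenumber
    shift_rad = np.deg2rad(phase_speed_deg_per_day * hours / 24.0)
    coeffs = coeffs * np.exp(-1j * k * shift_rad)   # eastward translation of each wave
    return np.fft.irfft(coeffs, n)
```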

Efforts have been made to understand the mechanisms by which simple methods are superior to complicated models for low-frequency prediction at extended range. It seems that at least two simplifications made in one or all of the control forecasts are crucial in outperforming the MRF beyond day 6. The first is well known: the contaminating effects of synoptic-scale baroclinic eddies have been filtered out in the simple models considered. More generally, the nonlinear terms (whether barotropic or baroclinic) contribute to skill deterioration beyond day 6. The second is the explicit elimination of the divergence process in the control forecasts, as the MRF model may contain significant errors in forecasting the divergence.

Full access
H. M. van den Dool
and
J. Qin

Abstract

A method is proposed for time interpolation based on the empirically determined climatological speed of large-scale atmospheric waves. When tested on a 7-yr dataset, the method is found to be easy to use, to have good accuracy, and, in fact, to be considerably more accurate than the much-used linear time interpolation. The gain in accuracy is particularly large for mobile synoptic waves. Several applications of a time-continuous description of the atmosphere are discussed.
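
A minimal sketch, under the same single-latitude simplification used above, of wave-speed-based time interpolation between two analyses 12 h apart, with plain linear interpolation noted for comparison. The phase-speed array and the forward/backward blending are illustrative assumptions.

```python
import numpy as np

def wave_interp(field_t0, field_t1, frac, phase_speed_deg_per_day, dt_hours=12.0):
    """Interpolate along a latitude circle between two analyses dt_hours apart.

    frac is the interpolation fraction (0 = first analysis, 1 = second).
    The earlier map is moved forward by frac of its climatological phase shift,
    the later map backward by (1 - frac), and the two are blended linearly.
    """
    def shift(field, hours):
        c = np.fft.rfft(field)
        k = np.arange(c.size)
        delta = np.deg2rad(phase_speed_deg_per_day * hours / 24.0)
        return np.fft.irfft(c * np.exp(-1j * k * delta), field.size)

    forward = shift(field_t0, frac * dt_hours)            # earlier map moved forward
    backward = shift(field_t1, -(1.0 - frac) * dt_hours)  # later map moved backward
    return (1.0 - frac) * forward + frac * backward

# For comparison, plain linear interpolation would be:
#   (1.0 - frac) * field_t0 + frac * field_t1
```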

Full access
Suranjana Saha
and
H. M. van den Dool

Abstract

An objective and practical limit of predictability for NWP models is proposed. The time T0 is said to be the limit of predictability if the model forecast beyond T0 has no extra skill over persisting the T0 forecast. The "skill" is measured here in terms of standard rms and anomaly correlation scores. For the NMC medium-range forecast model, T0 is found to be 5–6 days for 250-, 500- and 1000-mb height forecasts for the period 5 May–25 July 1987. T0 can also be interpreted as the time at which there is no longer skill in the prediction of the time derivative of the quantity under consideration.
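
A minimal sketch of how T0 could be diagnosed for a single verification case: find the first lead at which persisting the day-n forecast verifies at day n + 1 at least as well, by anomaly correlation, as the model's own day-(n + 1) forecast. In practice the scores would be averaged over many cases; the function names and array layout are assumptions.

```python
import numpy as np

def anomaly_correlation(fcst_anom, obs_anom):
    """Pattern anomaly correlation between a forecast and the verifying analysis."""
    return (fcst_anom * obs_anom).sum() / np.sqrt(
        (fcst_anom ** 2).sum() * (obs_anom ** 2).sum())

def rmse(fcst, obs):
    """Root-mean-square error."""
    return np.sqrt(((fcst - obs) ** 2).mean())

def limit_of_predictability(fcst_anoms, obs_anoms):
    """Smallest lead T0 (days) at which persisting the T0-day forecast for one
    more day verifies at least as well as the model's own (T0 + 1)-day forecast.

    fcst_anoms : array (n_leads, n_points), forecast anomalies for leads 1..n_leads
    obs_anoms  : array (n_leads, n_points), verifying anomalies at the same times
    """
    for n in range(fcst_anoms.shape[0] - 1):
        ac_model = anomaly_correlation(fcst_anoms[n + 1], obs_anoms[n + 1])
        ac_persist = anomaly_correlation(fcst_anoms[n], obs_anoms[n + 1])
        if ac_persist >= ac_model:
            return n + 1          # lead (days) of the forecast being persisted
    return None                   # no such lead within the forecast range
```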

Full access
H. M. van den Dool
and
Suranjana Saha

Abstract

A method is proposed to calculate measures of forecast skill for high, medium and low temporal frequency variations in the atmosphere. This method is applied to a series of 128 consecutive 1- to 10-day forecasts produced at NMC with their operational global medium-range forecast model during 1 May–5 September 1988. It is found that over this period more than 50% of the variance in observed 500-mb height fields is found at periods of 18 days or longer. The intuitive notion that the predictability time of a phenomenon should be proportional to its lifetime is found to be qualitatively correct; i.e., the low frequencies are predicted (at a given skill level) over a longer time than the high frequencies. However, the current prediction skill in the low frequencies is far below its potential if one assumes that for any frequency the predictability time scale ought to equal the lifetime scale. In the high frequencies, in contrast, the current prediction skill has already reached its potential; i.e., cyclones are being predicted over a time comparable to their lifetime of 3 to 4 days. We offer some speculations as to why the low-frequency variations in the atmosphere are so poorly predicted by our current state-of-the-art models. The conclusions are tested, and found to hold up, on a more recent dataset covering 10 December 1988–16 April 1989.
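
A minimal sketch of one way to score forecasts by temporal frequency band: filter forecast and observed time series at a point into a chosen period band with an FFT and correlate the filtered series. The band limits, sampling interval, and function names are illustrative assumptions, not the paper's exact filtering method.

```python
import numpy as np

def bandpass(series, dt_days, period_lo, period_hi):
    """Keep only Fourier components with periods between period_lo and period_hi days."""
    c = np.fft.rfft(series)
    freq = np.fft.rfftfreq(series.size, d=dt_days)          # cycles per day
    keep = (freq > 0) & (freq >= 1.0 / period_hi) & (freq <= 1.0 / period_lo)
    c[~keep] = 0.0
    return np.fft.irfft(c, series.size)

def band_skill(fcst, obs, dt_days, period_lo, period_hi):
    """Correlation between forecast and observed variations within one band."""
    f = bandpass(fcst, dt_days, period_lo, period_hi)
    o = bandpass(obs, dt_days, period_lo, period_hi)
    return np.corrcoef(f, o)[0, 1]

# e.g. low frequencies:  band_skill(fcst, obs, 1.0, 18.0, 60.0)
#      high frequencies: band_skill(fcst, obs, 1.0,  2.0,  6.0)
```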

Full access
H. M. van den Dool
and
J. L. Nap

Abstract

Using a predetermined statistical scheme, forecasts are made of the daily air temperature (AT) at San Diego, starting from local antecedent information concerning AT and sea surface temperature (ST) only. These forecasts are verified by calculating skill scores (S) over 1948–79. In this maritime area such simple schemes turn out to have high S for lead times up to a month and small but positive S out to a year. Differences in S among the various one-predictor schemes (STAT, STST, ATAT, ATST) are discussed; STST is far superior to any of the other three. For most schemes S is low in late summer; this is attributed to the shallowness of the ocean's mixed layer in that season. The effects of time averaging the predictor and/or predictand are discussed for the STAT scheme. For long enough lead times, averaging appears to improve forecast skill. The localness of the prognostic information carried by ST is investigated by comparing S for San Diego and an inland station (Escondido). At a forecast lead time of three days, S decreases by 50% over a distance of 25 km. Further analysis shows that this decay is primarily caused by a decrease in skill of daily maximum temperature forecasts.
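
A minimal sketch of a one-predictor scheme of this kind: a least-squares fit of the predictand on the antecedent predictor at a given lead, together with a skill score relative to climatology. The reduction-of-variance form of S and the series names (st_series, at_series) are assumptions; the abstract does not state the exact definition of S.

```python
import numpy as np

def fit_one_predictor_scheme(predictor, predictand, lead_days):
    """Fit predictand(t + lead) = a * predictor(t) + b by least squares."""
    x, y = predictor[:-lead_days], predictand[lead_days:]
    a, b = np.polyfit(x, y, 1)
    return a, b

def skill_score(forecasts, observations):
    """Skill relative to a climatological-mean forecast
    (reduction-of-variance form, assumed here)."""
    mse_fcst = ((forecasts - observations) ** 2).mean()
    mse_clim = ((observations - observations.mean()) ** 2).mean()
    return 1.0 - mse_fcst / mse_clim

# e.g. an STAT-like scheme, with ST predicting AT at a 3-day lead (hypothetical data):
#   a, b = fit_one_predictor_scheme(st_series, at_series, lead_days=3)
```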

In view of the similarity of the present results to those obtained at the Dutch coast, we conclude that local information about the state of the surface probably has enough prognostic potential to be incorporated into existing operational schemes for short- and long-range air temperature forecasts near oceans and lakes.

Full access
H. M. van den Dool
,
W. H. Klein
, and
J. E. Walsh

Abstract

Eighty years of monthly mean station temperatures are used to evaluate the persistence of monthly air temperature anomalies over the United States. The geographical and seasonal dependence of the monthly persistence is described in terms of the day-to-day persistence of temperature anomalies, the influence of the large-scale atmospheric circulation, and inferred associations with the slowly varying properties of the earth's surface.

The monthly persistence is generally smallest in the continental interior and largest in coastal regions. The seasonality of this spatial pattern is quite small, although the continental interior is characterized by a summer maximum. For the country as a whole, persistence is highest (0.30) in winter and summer and least (0.15) in fall and spring. For both raw and detrended data, the anomaly pattern correlations at lags of two and three months are much larger than would be expected from a first-order Markov process.
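
The first-order Markov benchmark can be made explicit with a short sketch: for an AR(1) (red noise) process the lag-k correlation equals the lag-1 correlation raised to the power k, which is the value against which the observed lag-2 and lag-3 correlations are compared. Function name and data layout are assumptions.

```python
import numpy as np

def lag_correlations(anom, max_lag=3):
    """Observed lag correlations of a monthly anomaly series vs. the AR(1) expectation.

    anom : 1-D array of monthly anomalies in time order
    Returns (observed, markov), where markov[k-1] = r1 ** k is the value a
    first-order Markov process with the same lag-1 correlation would give at lag k.
    """
    observed = np.array([np.corrcoef(anom[:-k], anom[k:])[0, 1]
                         for k in range(1, max_lag + 1)])
    markov = observed[0] ** np.arange(1, max_lag + 1)
    return observed, markov
```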

The pattern of persistences computed using day-to-day autocorrelations shows that the presence of nearby bodies of water increases the month-to-month persistence over that to be expected from daily weather fluctuations. This finding is consistent with the results derived from an intuitive energy balance model in which the soil (or ocean) surface layers and the atmospheric boundary layer respond to prescribed daily fluctuations in the free atmosphere.

Local surface influences are also implied by the fact that the 700-mb circulation-derived anomalies of monthly temperature have fewer spatial degrees of freedom than do the actual anomalies. While the large-scale circulation accounts for about half of the winter temperature persistence, small-scale effects, as well as the effects of the antecedent month's circulation, contribute substantially to the persistence of summer temperatures.

Full access