## 1. Motivation

Most crop models and hydrologic response models ingest a realization (or a distribution of realizations) of daily weather produced by weather generators. To be immediately useful in these applications, seasonal climate forecasts issued for 3-month periods and larger regions need to be spatially downscaled, temporally disaggregated, and transformed into information suitable for weather generators.

Given a record of monthly precipitation over the 30-yr normal period for the location or area of interest, the spatial downscaling is a straightforward mapping of the forecast shift in the odds (Schneider and Garbrecht 2002). However, the subsequent disaggregation of the overlapping 3-month forecasts into a sequence of 1-month forecasts is a more subtle problem, with two different aspects. First, there is reasonable concern that any skillful forecast signal present at the 3-month scale will be diluted below acceptable levels at the 1-month scale. This aspect is not addressed in this paper; we assume that users will make a separate determination as to whether the forecasts are sufficiently skillful for their application. Second, the obvious mathematical approach to the problem of 3- to 1-month disaggregation is unsuccessful: direct linear algebraic inversions were developed and tested by Wilks (2000) but resulted in wildly oscillating sequences of 1-month forecasts. This paper describes a heuristic method to disaggregate the National Oceanic and Atmospheric Administration/Climate Prediction Center (NOAA/CPC) 3-month probability of exceedance seasonal precipitation forecasts into a sequence of 1-month forecast departures from the 30-yr averages for each month. The method is part of an approach to using the seasonal forecasts in weather generators that differs from the approach of Wilks (2002) in that it begins from the probability of exceedance forecasts rather than the categorical “tercile” forecasts.

## 2. Disaggregation method

There are many possible heuristic approaches to this type of disaggregation, limited only by one’s imagination. Deciding between different methods requires the development of evaluation metrics, which in turn are based on the intended use of the disaggregated forecasts. This means that the “best” method will depend on the application, and that no single technique will “fit all.” The method and metrics presented here are offered as one example that is currently being employed in agriculture and water resource applications research.

The forecasts of interest are the NOAA/CPC probability of exceedance forecasts (Barnston et al. 2000; information available online at http://www.cpc.ncep.noaa.gov/pacdir/NFORdir/HOME3.html), which specify both a smoothed 30-yr normal exceedance distribution and a forecast distribution (Fig. 1). A suite of forecasts is issued every month for 102 forecast divisions [composed of one or more NOAA/National Climatic Data Center (NCDC) climate divisions] covering the contiguous United States (Fig. 2). We expect the majority of the reliable forecast information to be in the first-order statistics (mean or median), with less reliability in the higher-order statistics (variance, skewness). Visually, this translates into more confidence in the location of the center third of the distribution, with less confidence in the slope and shape. The quantity to be disaggregated is the departure of the forecast mean from the normal mean, expressed in inches of total precipitation. At this time we are not disaggregating any information relative to the higher-order statistics of the forecast. Issues related to the definition of precipitation variance, conditional probabilities, correlated temperature, and solar radiation sequences, as well as other aspects of daily weather generation, will be addressed elsewhere.

Exploratory analyses identified several possible approaches, including one technique using equally weighted averages of three independent estimates, one from each of the 3-month forecasts that included the month in question (Schneider and Garbrecht 2003). We have modified that method by adding weights from the historical contribution of monthly precipitation to each 3-month period.

Let the months covered by a suite of forecasts be indexed by *i*, ranging from 1 to 15. For example, if the forecasts are issued in December covering the next 15 months, then *F*_{1,2,3} is the first 3-month forecast departure in the sequence, or the January–February–March departure; *F*_{2,3,4} is the second in the period, or February–March–April. Similarly, *M*_{i} is the 30-yr mean for month *i*; *M*_{i−1,i,i+1} is the 3-month mean centered on month *i*. Then *F*_{i} is the disaggregated forecast 1-month precipitation departure. The first month’s departure (*F*_{1}) is inferred from the three-category (tercile) monthly outlooks [see Briggs and Wilks (1996) for one approach], which we expect to be a more accurate representation of the CPC forecast intent than an individual estimate from *F*_{1,2,3} similar to Eq. (4). Then the remaining months are calculated as

$$F_2 = \frac{1}{2}\left(\frac{M_2}{M_{1,2,3}}F_{1,2,3} + \frac{M_2}{M_{2,3,4}}F_{2,3,4}\right), \quad (1)$$

$$F_i = \frac{1}{3}\left(\frac{M_i}{M_{i-2,i-1,i}}F_{i-2,i-1,i} + \frac{M_i}{M_{i-1,i,i+1}}F_{i-1,i,i+1} + \frac{M_i}{M_{i,i+1,i+2}}F_{i,i+1,i+2}\right), \quad i = 3, \ldots, 13, \quad (2)$$

and

$$F_{14} = \frac{1}{2}\left(\frac{M_{14}}{M_{12,13,14}}F_{12,13,14} + \frac{M_{14}}{M_{13,14,15}}F_{13,14,15}\right), \quad (3)$$

$$F_{15} = \frac{M_{15}}{M_{13,14,15}}F_{13,14,15}. \quad (4)$$

Note that while the weights for the individual months in any 3-month period sum to one, e.g.,

$$\frac{M_1}{M_1+M_2+M_3} + \frac{M_2}{M_1+M_2+M_3} + \frac{M_3}{M_1+M_2+M_3} = 1,$$

the weights in this method are not required to sum to unity. Also, the 30-yr period used to calculate the means should be that associated with the forecasts (currently 1971–2000).
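The weighting scheme can be sketched in Python. This is a minimal sketch under our own assumptions, not the authors' code: arrays are 0-based, `F3[k]` holds the 3-month departure for months `k+1` through `k+3`, and the first month is left undefined because the paper infers it from the tercile outlooks instead.

```python
import numpy as np

def disaggregate(F3, M):
    """Disaggregate overlapping 3-month forecast departures into
    1-month departures using climatological weights.

    F3 : array of 13 overlapping 3-month departures (inches);
         F3[0] corresponds to F_{1,2,3}.
    M  : array of 15 thirty-year monthly mean totals (inches).
    Returns F, the 15 monthly departures; F[0] is NaN because the
    first month is taken from the tercile outlooks, not this method.
    """
    F = np.full(15, np.nan)
    for i in range(1, 15):           # months 2..15 (0-based index i)
        ests = []
        for k in (i - 2, i - 1, i):  # 3-month forecasts containing month i
            if 0 <= k < 13:
                w = M[i] / M[k:k + 3].sum()  # climatological weight
                ests.append(w * F3[k])
        F[i] = np.mean(ests)         # equally weighted average of estimates
    return F
```

With a flat climatology every weight is 1/3, so a constant 3-month departure of 3 in. yields a constant 1-month departure of 1 in., which is a quick sanity check on the implementation.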

We use four evaluation metrics. The first pair determines how much of the test signal can be recovered at the 1-month time scale. The second pair checks whether the total precipitation in the disaggregated monthly values is conserved over seasons and one year; we do not want a method that exaggerates departures more than is implied in the forecasts. While the forecasts are distributions describing a range of possible outcomes, we chose actual monthly precipitation data at the forecast division scale from 1971 to 2000 for the evaluation. We use observed monthly precipitation data rather than the forecast means or medians for two reasons: to increase the length of the test record (30 yr rather than 9), and to provide a rigorous test of the method using the wider variability of the actual precipitation. Forecast departures from means or medians vary smoothly month to month and are usually small (or zero) in magnitude, while monthly precipitation departures are highly variable in magnitude, with frequent reversals of sign.

The monthly data were summed into overlapping 3-month periods to use as an analog for the 3-month forecasts. Thirty-year means were calculated for all months and 3-month periods, and the 3-month forecast analogs were expressed as departures from average. Disaggregation results are compared to the original monthly precipitation departures. To simplify the presentation, the *F _{i}* values used in the metrics shown below were calculated for the 3d to 13th months using Eq. (2).
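The construction of the 3-month analogs from a monthly record can be sketched as follows. The helper below is our own illustration (hypothetical name and layout, assuming the record begins in January and spans whole years), not the authors' processing code.

```python
import numpy as np

def three_month_analogs(P, clim_years=30):
    """Build 3-month forecast analogs from a monthly precipitation record.

    P : 1-D array of monthly totals (inches), length >= 12 * clim_years,
        starting in January.
    Returns (dep3, M): dep3[t] is the departure of the overlapping
    3-month total beginning at month t from its 30-yr mean, and M[m]
    is the 30-yr mean for calendar month m (0 = Jan).
    """
    P = np.asarray(P, float)
    # 30-yr means for each calendar month
    M = P[:12 * clim_years].reshape(clim_years, 12).mean(axis=0)
    # overlapping 3-month totals (JFM, FMA, MAM, ...)
    tot3 = P[:-2] + P[1:-1] + P[2:]
    # matching 3-month climatological means
    M3 = np.array([M[t % 12] + M[(t + 1) % 12] + M[(t + 2) % 12]
                   for t in range(len(tot3))])
    return tot3 - M3, M
```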

Figure 3 shows example time series of the disaggregation results for three different climates. The mean annual precipitation totals during 1971–2000 for the forecast divisions in central Oklahoma, coastal Washington, and southeastern Arizona were 35.1, 78.6, and 15.6 in., respectively. The disaggregated monthly departures are effectively a low-pass-filtered reproduction of the original data, following the pattern of sustained departures (3 or more months) but usually underestimating the magnitudes of the largest individual departures.

The first metric is the root-mean-square error (rmse) between the underlying 1-month data and the disaggregated results; values are listed by forecast division in Table 1, with a map showing the regional distribution of rmse in Fig. 4. Taken over all forecast divisions, the average rmse is 0.94 in., and the values tend to scale linearly with annual precipitation (i.e., wetter climates have larger rmse’s). This average is about 39% of the 30-yr monthly average precipitation for the contiguous United States. Table 1 also includes the ratio of the rmse to the standard deviation of all months in the test period for each forecast division, with the corresponding map shown in Fig. 5. This ratio compares method error to the variability of the test data. The unitless ratio ranges from about 0.4 to 0.75, with an average over all forecast divisions of 0.58. This value does not follow mean precipitation in the simple regional manner that rmse does, and in some regions (e.g., the Pacific Northwest) it is inversely related.
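The first metric pair reduces to a few lines; the sketch below (function name is ours) computes the rmse and its ratio to the standard deviation of the test series.

```python
import numpy as np

def rmse_metrics(actual, disagg):
    """First metric pair: rmse between the actual and disaggregated
    monthly departures, and the ratio of that rmse to the standard
    deviation of the actual series."""
    actual = np.asarray(actual, float)
    disagg = np.asarray(disagg, float)
    rmse = np.sqrt(np.mean((actual - disagg) ** 2))
    return rmse, rmse / np.std(actual)
```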

The second metric is the linear regression between actual and disaggregated monthly values for each forecast division. The slopes of the regressions are typically about 0.4, with *R* values close to 0.75. Figures 6a, 7a and 8a show example scatterplots of the monthly data for the three forecast divisions shown in Fig. 3. These values are smaller than ideal and may be lower than one might require for a particular application; we will return to this point later.

For applications in agriculture and water resource management, producing accurate net precipitation departures over quarters, growing seasons, or years may be as important as, or more important than, computing the magnitudes of individual months correctly. The third and fourth metrics (checking whether total forecast precipitation is conserved over longer periods) are scatterplots at the quarterly and annual scales. The quarterly plots are shown in Figs. 6b, 7b and 8b; the annual plots in Figs. 6c, 7c and 8c. As one would expect, the magnitudes match better when departures are summed over periods as long as or longer than the 3-month analog test signal. The slopes of the regressions at the quarterly scale are typically larger than 0.7, with *R* values of approximately 0.94. At the annual scale, the slopes are greater than 0.9, and *R* values larger than 0.97 are common. Of note is the fact that the regression slopes never exceed unity; precipitation anomalies are not being unintentionally created by this method. We consider this attribute to be particularly important in our water resource applications.

For some applications, it may be preferable that the disaggregated *quarterly* sums linearly regress with a slope of 1 against the actual data. This is easily done by adding a multiplier to adjust the method toward the desired result, such as

$$F_i' = A\,F_i, \quad (5)$$

where *A* is the inverse of the slope from the linear regression of the quarterly sums for each forecast division. Averaged over all forecast divisions, the adjustment *A* needed to maximize the fit at the quarterly time scale is 1.36 for these test data. The slope and intercept of the regressions will be increased by the same multiplier (note that the annual sums of departures are slightly overestimated as a result of this quarterly adjustment), but the regression coefficients do not change. The rmse values for the monthly disaggregated data actually decrease slightly (see Table 2), and the magnitudes of the quarterly sums are the best possible match given this heuristic method. The adjusted time series are shown in Fig. 9.
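The quarterly adjustment can be sketched as follows. The helper is our own hypothetical illustration (it assumes the series starts on a quarter boundary and uses nonoverlapping quarters), not the authors' code.

```python
import numpy as np

def quarterly_adjust(F, actual):
    """Compute the adjustment multiplier A (inverse of the regression
    slope of disaggregated quarterly sums against actual quarterly
    sums) and return (A, A * F) so the adjusted quarterly sums
    regress with a slope of 1 against the actual data."""
    n = (len(F) // 3) * 3                     # trim to whole quarters
    q_dis = np.asarray(F[:n], float).reshape(-1, 3).sum(axis=1)
    q_act = np.asarray(actual[:n], float).reshape(-1, 3).sum(axis=1)
    slope = np.polyfit(q_act, q_dis, 1)[0]    # regression slope
    A = 1.0 / slope
    return A, A * np.asarray(F, float)
```

For example, if the disaggregated departures were uniformly half the actual departures, the regression slope would be 0.5 and the method would return A = 2.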

## 3. Conclusions

Until the long-range forecasts are issued in monthly increments, many application efforts will require a method to disaggregate the 3-month forecasts. Among the myriad possible options, the simple heuristic method presented here can produce reasonable sequences of forecast departures from monthly mean precipitation. The method’s performance is somewhat weak at the monthly scale but quite respectable at the quarterly and annual scales using the four metrics employed here. It performs equally well across widely different precipitation regimes and does a reasonable job of reproducing the sudden onset of strong seasonal variations such as the southwest U.S. monsoon. All that is needed is a 30-yr record of monthly precipitation for the location or area of interest, which is already required for spatial downscaling of the forecasts, so application should be fairly straightforward. Using a multiplier, the method can also be easily adjusted to optimize its utility for a specific application at time scales shorter than a year.

## REFERENCES

Barnston, A. G., Y. He, and D. A. Unger, 2000: A forecast product that maximizes utility for state-of-the-art seasonal climate prediction. *Bull. Amer. Meteor. Soc.*, **81**, 1271–1279.

Briggs, W. M., and D. S. Wilks, 1996: Estimating monthly and seasonal distributions of temperature and precipitation using the new CPC long-range forecasts. *J. Climate*, **9**, 818–826.

Schneider, J. M., and J. D. Garbrecht, 2002: A blueprint for the use of NOAA/CPC precipitation climate forecasts in agricultural applications. Preprints, *Third Symp. on Environmental Applications*, Orlando, FL, Amer. Meteor. Soc., J71–J77.

Schneider, J. M., and J. D. Garbrecht, 2003: Temporal disaggregation of probabilistic seasonal climate forecasts. Preprints, *14th Conf. on Global Change and Climate Variations*, Long Beach, CA, Amer. Meteor. Soc., CD-ROM, 5.4.

Wilks, D. S., 2000: On interpretation of probabilistic climate forecasts. *J. Climate*, **13**, 1965–1971.

Wilks, D. S., 2002: Realizations of daily weather in forecast seasonal climate. *J. Hydrometeor.*, **3**, 195–207.

Table 1. The rmse results for all forecast divisions (FDs) for 1971–2000; units are in. of precipitation. Rmse/*σ* is the ratio of the rmse to the standard deviation of the monthly precipitation (calculated across all months in the 30-yr study period).

Table 2. Comparison of rmse statistics for monthly disaggregations for the unadjusted and adjusted cases.