Initial Validation of the Global Precipitation Climatology Project Monthly Rainfall over the United States

Witold F. Krajewski, Iowa Institute of Hydraulic Research, University of Iowa, Iowa City, Iowa
Grzegorz J. Ciach, Institute of Meteorology and Water Management, Warsaw, Poland
Jeffrey R. McCollum, NOAA/NESDIS Office of Research and Applications, Camp Springs, Maryland
Ciprian Bacotiu, Iowa Institute of Hydraulic Research, University of Iowa, Iowa City, Iowa

Abstract

The Global Precipitation Climatology Project (GPCP) established a multiyear global dataset of satellite-based estimates of monthly rainfall accumulations averaged over a grid of 2.5° × 2.5° geographical boxes. This paper describes an attempt to quantify the error variance of these estimates at selected reference sites. Fourteen reference sites were selected over the United States at the GPCP grid locations where high-density rain gauge networks and high-quality data are available. A rigorous methodology for estimating the error statistics of the reference sites was applied. A method of separating the reference error variance from the observed mean square difference between the reference and the GPCP products was proposed and discussed. As a result, estimates of the error variance of the GPCP products were obtained. Two kinds of GPCP products were evaluated: 1) satellite-only products, and 2) merged products that incorporate some rain gauge data that were available to the project. The error analysis results show that the merged product is characterized by smaller errors, in terms of both the bias and the random component. The bias is, on average, 0.88 for the merged product and 0.70 for the satellite-only product. The average random component is 21% for the merged product and 79% for the satellite-only product. The random error is worse in the winter than in the summer. The error estimates agree well with their counterparts produced by the GPCP.

Corresponding author address: Dr. Witold Krajewski, Iowa Institute of Hydraulic Research, University of Iowa, Iowa City, IA 52242.

Introduction

The Global Precipitation Climatology Project (GPCP) is an international effort to establish an ongoing dataset of monthly rainfall estimates for the entire globe (Huffman et al. 1997; Arkin and Xie 1994). The project started in 1986 and will extend to 2005. The data, prepared on a 2.5° × 2.5° latitude–longitude grid, can be used for validation of Global Climate Models (GCMs), as well as in many other studies of global and regional atmospheric circulation and other applications (Arkin and Ardanuy 1989). The primary GPCP product is a multisensor estimate of monthly rainfall based on observations from the geostationary and polar-orbiting satellites and rain gauge networks. For the dataset to be of high value in climate studies, it needs to be validated against an independent reference standard.

The objective of this paper is to document efforts to validate the GPCP estimates against such a standard, using rain gauge data not used in the development of the global estimates. For the validation, we focus our attention on several sites selected over the United States for the years 1987–96. In sections 2 and 3 below, we describe how we selected these sites and devised the validation methodologies.

In addition to the precipitation accumulation estimates, the GPCP dataset provides global maps of the corresponding uncertainty, given in the form of error variance. These were estimated by the GPCP using the heuristic approach proposed by Huffman (1997), which accounts for space–time sampling uncertainty of the various sensors. In our paper, error is defined as the difference between the unknown true monthly rainfall accumulation averaged over a given 2.5° × 2.5° grid box and the GPCP estimate. Validation is understood as description of the error probability distribution function or its characteristics.

Validation on a month-by-month and grid box-by-grid box basis was not feasible with the data available for this project, and thus, some pooling of data from different months was necessary. We assumed error stationarity in time, but only within a certain season, and from year to year for the same season. We distinguished two seasons: warm–summer (April–September) and cold–winter (October–March). Therefore, error description for each GPCP validation site grid box was accomplished based on about 60 months of data (6 months from each of the 10 years).

The validation described herein consists of several steps. First, we selected the validation sites. As part of the GPCP, validation data are collected at the Surface Reference Data Center (SRDC). [The SRDC was recently relocated from the National Climatic Data Center (NCDC) in Asheville, North Carolina, to the University of Oklahoma Environmental Verification and Analysis Center (EVAC) in Norman, Oklahoma.] In principle, the set of validation sites should represent the variety of climatological conditions met throughout the world. In practice, however, these SRDC sites represent opportunistic data collection campaigns relying on the willingness of various countries with dense networks to provide good quality data. For this reason, the datasets collected at the SRDC are not homogeneous in quantity and quality. We selected sites over the United States based on certain quantitative criteria (which we describe later) without much regard to climatic differences.

The second step in our methodology was to estimate precipitation for the validation site and to characterize its uncertainty in a way consistent with the validation objectives. Thus, we estimated monthly rainfall for the validation site for each month of the GPCP period using a simple average of the rain gauge data from the gauges located inside of the grid box. The corresponding uncertainty was estimated using a modification of the method proposed by Morrissey et al. (1995).

The final step of our validation was to separate the error of the reference estimate from the difference between the GPCP and the reference estimates. We did this using a methodology similar to those proposed earlier by Barnston (1991) and Krajewski (1993). Ciach and Krajewski (1999) comprehensively discussed a similar approach for the determination of radar-rainfall error variance. As a result we obtained estimates of the error variance of the GPCP monthly estimates of precipitation. We then compared these estimates with the corresponding values provided by Huffman’s (1997) methodology.

Validation site selection

We designed the criteria for selecting the validation sites to ensure high quality in the reference (SRDC) product. To determine which GPCP grid boxes should be used, we analyzed the information contained in the NCDC Cooperative Observer Climatology Data. We began with the Cooperative Station History file (A. McNab 1998, personal communication). That file documents the entire history of each gauge, including information on gauge location, instrument type, missing data statistics, etc. We selected the GPCP 2.5° × 2.5° boxes that satisfied the following criteria.

  1. The box must include at least 25 gauges during the entire GPCP period.

  2. The box must include at least 20 gauges within the GPCP box enlarged by 20% during the recent 30-yr period.

  3. Each gauge must have remained in the same position over a period of at least 30 yr (from 1967 to 1996), which includes the GPCP period.

  4. There must be no gap in data for any given station during the GPCP period (1987–96); however, a 1-yr gap is allowed during the earlier period (1967–86).

These criteria are stringent, although arbitrary, and are based on the following considerations. From the studies conducted by Rudolf et al. (1994), Morrissey et al. (1995), and McCollum and Krajewski (1998), the required number of 25 gauges ensures low error in calculating the reference values based on the arithmetic average. The required duration of a 30-yr period ensures that the correlation function required for the uncertainty estimation will be calculated based on an adequate sample size. The sampling distribution of the correlation coefficient (Stuart and Ord 1994) is shown graphically in Fig. 1. It is apparent that for the sampling error to be much smaller, the record of data would have to be much longer than 30 yr. Such a requirement, in addition to being difficult to satisfy at many locations, would bring up questions of long-term temporal stationarity, instrument standards, etc. To avoid them, we decided to require the sample size to be at least 29 yr (allowing for 1 yr, i.e., a total of 12 monthly values, to be missing).
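To illustrate the sampling-error argument behind Fig. 1, the following minimal sketch computes the approximate width of the 95% confidence interval for a sample correlation coefficient using the Fisher (1921) z-transform; the correlation value and the sample sizes are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def corr_ci_width(r, m, conf=0.95):
    """Approximate width of the confidence interval for a sample correlation
    coefficient r computed from m pairs [Fisher (1921) z-transform]."""
    q = norm.ppf(0.5 + conf / 2.0)        # ~1.96 for a 95% interval
    z = np.arctanh(r)                     # Fisher z-transform of r
    half = q / np.sqrt(m - 3)             # quantile times the standard error of z
    return np.tanh(z + half) - np.tanh(z - half)

# Interval width for r = 0.5 at several record lengths (number of monthly values)
for m in (30, 60, 120, 360):
    print(f"M = {m:3d}: width = {corr_ci_width(0.5, m):.2f}")
```

Because the interval width shrinks only with the square root of the record length, a much longer record than 30 yr would be needed to reduce the sampling error substantially, which is the trade-off behind the 30-yr requirement.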

Surprisingly, few sites in the United States climatological network archived by the NCDC satisfied these rigorous criteria. A preliminary selection of sites was determined based on the “open” and “close” dates in the NCDC station information file. However, many stations in this subset had to be culled because of substantial amounts of missing data that occurred when the stations were supposedly open. Consequently, the above criteria had to be somewhat compromised so that at least 14 sites could be used (Table 1; see also Fig. 2). This site selection does, in fact, represent a wide range of the climatic and topographic conditions found in the United States. The difficulties of estimating rainfall in coastal areas and in the presence of snow will be illustrated in section 4.

For the purpose of this paper, we distinguish four “primary” sites with the remaining 10 being designated “secondary” sites. The only reason for this distinction is that we developed and tested our methodology at the primary sites (this will be discussed in more detail) and applied it to the secondary sites.

We corrected all the rain gauge data for the wind effect. Groisman and Legates (1994) claim that the U.S. rain gauge network significantly underestimates precipitation accumulation. We applied the same correction as the one used in the GPCP, for consistency reasons only. We believe, based on the simulation study of Habib et al. (1999), that monthly correction procedures like the one developed by Groisman and Legates (1994) overestimate the actual errors; when estimated correctly, the systematic underestimation due to the wind effect should not exceed 2%–4% for summer precipitation. Applying a correction formula that is based only on the rainfall amount, season, and geographical location introduces a random error into the “corrected” amounts. It is hoped that this error is small and does not affect the results presented in this study.

Estimation of the reference data uncertainty

The problem of uncertainty quantification for mean areal rainfall estimates obtained from a number of rain gauges has long been studied in the literature (see, e.g., Zawadzki 1973; Bras and Rodriguez-Iturbe 1976; Tabios and Salas 1985; Morrissey et al. 1995). Using a statistical approach, which is consistent with the goals of our validation, leads to error variance estimates that depend on the spatial correlation structure of the rainfall field. Perhaps the most complete error variance estimation methodology has been proposed by Morrissey et al. (1995). Their approach accounts for the effects of the average network density (or the number of rain gauges), the network distribution in space (possible effects of clustering), and the correlation function of the rainfall process.

Assume that a validation site contains n gauges within a GPCP-size box. Denoting the monthly rainfall measured at the ith gauge as R_i, the reference rainfall for the kth box and month t, R̂(k, t), can be calculated as

\hat{R}(k,t) = \frac{1}{n} \sum_{i=1}^{n} R_i(k,t). \qquad (1)

The error variance associated with the above estimate can be expressed in terms of the variance reduction factor (VRF) F_k:

\sigma^2_{\hat{R}}(k,t) = \sigma^2_{R}(k,t)\, F_k, \qquad (2)
where

F_k = \frac{1}{n^2} \sum_{i=1}^{K} \sum_{j=1}^{K} \delta(i)\,\delta(j)\,\rho(d_{i,j})
 \;-\; \frac{2}{nK} \sum_{i=1}^{K} \sum_{j=1}^{K} \delta(i)\,\rho(d_{i,j})
 \;+\; \frac{1}{K}
 \;+\; \frac{1}{K^2} \sum_{i=1}^{K} \sum_{j \ne i}^{K} \rho(d_{i,j}), \qquad (3)

with σ²_R(k, t) being the rainfall variance and σ²_{R̂}(k, t) the estimated mean square error. Equation (3) applies to a particular site and season, but for notational clarity we omitted this dependence. Briefly, the derivation of (2) is based on a square-grid approximation of the spatial integrals of the covariance function. The sampling domain is divided into K grid boxes, over which the rainfall is estimated from the arithmetic mean of the n gauges. The delta function, δ(i), denotes whether box i contains a rain gauge [δ(i) = 1 if it does and δ(i) = 0 if it does not]. The rainfall correlation function is ρ(d_{i,j}), where d_{i,j} is the distance between box i and box j.

The first term in (3) accounts for the spatial correlation among the n gauges. Given a typical geophysical correlation structure where the correlation decreases with distance, the closer the gauges are to each other the larger the effect this term has on the mean-squared error. The second term represents the average spatial correlation between each rain gauge location and all points within the averaging area. The centering of a gauge network within the averaging area decreases the magnitude of this term and results in a decrease in the mean-squared error. The third term represents the corrective factor of the effect the grid system exerts on the standard error, and for a large number of grid points this term can be neglected. The fourth term is the average spatial correlation among all K points, both with and without rain gauges, within the averaging domain with respect to the average spatial correlation function. Thus, it is the spatial arrangement of the network within the averaging area, and correlation distance of the rainfall process, that control the magnitude of the mean-squared error for a fixed number of rain gauges.
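As a rough illustration of how (1)–(3) can be evaluated, the following Python sketch discretizes the box and computes the VRF for an exponential-type correlation model. The box size, grid resolution, gauge layout, and correlation parameters are all illustrative choices of ours, not the values used in the study.

```python
import numpy as np

def vrf(gauge_xy, box_km=250.0, N=100, d0=150.0, r0=1.0, g0=1.0):
    """Discrete approximation of the variance reduction factor F_k in (3)
    for the arithmetic mean of n gauges over a square box.

    gauge_xy   : (n, 2) array of gauge coordinates (km)
    box_km     : box side length; ~250 km stands in for a 2.5 deg box
    N          : the box is discretized into K = N * N grid cells
    d0, r0, g0 : parameters of the correlation model (4)
    """
    def rho(d):
        # correlation model (4); rho(0) = 1 regardless of the nugget
        return np.where(d == 0.0, 1.0, r0 * np.exp(-(d / d0) ** g0))

    def pdist(a, b):
        return np.hypot(a[:, None, 0] - b[None, :, 0],
                        a[:, None, 1] - b[None, :, 1])

    # centers of the K = N * N grid cells
    c = (np.arange(N) + 0.5) * box_km / N
    gx, gy = np.meshgrid(c, c)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    n, K = len(gauge_xy), len(grid)

    gauge_gauge = rho(pdist(gauge_xy, gauge_xy)).sum() / n**2        # first term of (3)
    gauge_area = 2.0 * rho(pdist(gauge_xy, grid)).sum() / (n * K)    # second term of (3)
    area_area = 0.0                                                  # third + fourth terms of (3)
    for i0 in range(0, K, 1000):                                     # chunked to limit memory use
        area_area += rho(pdist(grid[i0:i0 + 1000], grid)).sum()
    area_area /= K**2

    return gauge_gauge - gauge_area + area_area

# Illustrative use: 25 gauges placed at random in the box
rng = np.random.default_rng(0)
gauges = rng.uniform(0.0, 250.0, size=(25, 2))
print(vrf(gauges))   # reference error variance = rainfall variance * VRF, per (2)
```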

Convergence studies of error formula

For the GPCP validation sites, the network configuration is fixed, but different from site to site. The number of rain gauges also varies from site to site, as does the monthly rainfall correlation distance. Thus, the validation results should be expected to vary across the sites.

Based on formula (3), the variance reduction factor also depends on the number of grid divisions K. To establish the appropriate value of K we conducted a numerical experiment. Because the role of K is to provide accurate discrete approximations of the field covariance integrals, we studied the effect of K on the error of (2) for a wide range of covariance function shapes for four network configurations. The four network configurations correspond to the primary GPCP validation sites and are shown in Fig. 3. The general form of the normalized covariance model is
\rho(d) = r_0 \exp\left[-(d/d_0)^{g_0}\right], \qquad (4)
where d is the separation distance, and r0, d0, and g0 are the parameters. The parameter r0 (⩽1) determines the near-origin variability (the so-called “nugget effect”; Journel and Huijbregts 1978), d0 is the correlation (or e-folding) distance, and g0 is the shape parameter. When r0 = 1 and d = 0, the covariance reduces to the variance of the variable considered. The term (1 − r0) can also be interpreted as the normalized variance of the observational (measurement) error. For g0 > 1, the covariance decreases slowly near the origin, and for g0 approaching 0, the model degenerates to the pure nugget effect (the covariance drops quickly and remains independent of the separation distance). If r0 = 1 and g0 = 1, the model reduces to an exponential covariance.

The convergence of the VRF error with increasing K is slowest for the pure nugget effect. For an exponential covariance with the d0 parameter on the order of the integration domain size, the convergence is rather fast. Figure 4 shows the convergence of the relative VRF error for several covariance models as a function of the domain discretization grid for the network configurations of the four primary sites (Fig. 3). In the plots, N² = K, where K is the grid size that appears in (3). When N > 50, the error is under 1%, regardless of the model. We also observe that the network configuration at the four sites plays little role in the convergence. In fact, for the typical covariance of monthly rainfall, a grid size of 50 × 50 (i.e., N = 50) results in an error of less than 0.1%. Nevertheless, for all the remaining calculations herein, a grid size of N = 100 is used.
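A convergence check of this kind can be mimicked with the vrf() sketch from the previous section (a hypothetical helper of ours, not the study's code) by comparing coarse-grid VRF values against a fine-grid reference; the gauge layout and correlation distance below are again illustrative.

```python
import numpy as np

# Requires the vrf() sketch defined in the previous section.
rng = np.random.default_rng(0)
gauges = rng.uniform(0.0, 250.0, size=(25, 2))     # illustrative gauge layout

reference = vrf(gauges, N=100, d0=150.0)           # fine-grid reference value
for N in (10, 20, 50):
    rel_err = abs(vrf(gauges, N=N, d0=150.0) - reference) / reference
    print(f"N = {N:3d}: relative VRF error = {100.0 * rel_err:.2f}%")
```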

Sensitivity to the covariance model selection

In this section we investigate the sensitivity of the VRF with respect to the covariance model type and parameters. The objective here is to determine how important it is to put effort into the covariance model identification and parameter estimation. In Fig. 5 we show the VRF for several covariance models and network configurations. From the plots, it is clear that the VRF’s sensitivity to the model selection depends on the correlation distance. The factor is most sensitive for distances of around 100 km. For longer distances the sensitivity is less pronounced, but the relative errors can still be significant. The sensitivity is smallest for the exponential model, and it decreases as the nugget effect increases (i.e., as r0 decreases).

We conclude that it is important to estimate the covariance model as well as possible (within the limits of the sampling and other uncertainties of the available data).

Covariance model identification and parameter estimation

The covariance model identification and its subsequent parameter estimation were based on the empirical correlations calculated for each pair of rain gauges for the particular validation site. As noted earlier, the correlations were calculated based on the 30 yr of historical data and carried out for two seasons: warm–summer (April–September) and cold–winter (October–March). This season definition is somewhat arbitrary in that we did not study (for lack of adequate data) the month-to-month variability of the correlation function. Thus, the empirical correlations were computed for each month within the given season based on sample sizes of about 30 months. For each pair of rain gauges we calculated the correlation coefficient and plotted it against the station separation distance. This procedure implies that our definition of the stochastic process of monthly rainfall assumes stationarity in time (within a season and from year to year for the same season).

We performed a limited data quality control in conjunction with the analysis of empirical correlations, applying a two-stage procedure to fit a correlation (covariance) model. First, we fit a one-parameter model (with g0 = 1 and r0 = 1) to all the points plotted in Fig. 6, as indicated by the thin continuous line. Around the best-fit correlations, obtained using the least squares criterion, we calculated a 95% confidence interval of the sampling distribution using the approximation formulae of Fisher (1921). This range is shown in Fig. 6 (the region inside the thin broken lines). Next, we rejected all points that fell outside of this interval and repeated the model fit using the remaining points. This second fit, now using a two-parameter model (with g0 = 1), is shown in Fig. 6 as the thick continuous line. As is clear from the plots, this quality-control procedure does not remove many points and has a rather small effect on the final model. It is also clear that the spatial correlation has a definite exponential character, except for the summer at the California site, which appears very “noisy” and has no clear pattern in its correlation. Still, we used an exponential model there.

Before we settled on the two-parameter exponential model, we also investigated the potential benefit (in terms of a reduced least squares fit statistic) of using the three-parameter model (4). We concluded that the benefit was negligible and decided that the two-parameter model was a better, more robust choice. Table 2 summarizes the model parameters and the fit statistics (percent of the variance explained, R²) for the validation sites. For reference, we also include the rmse for the one-parameter model (r0 = 1). The two-parameter model systematically offers a better fit, although by a small amount.
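A minimal sketch of this two-stage procedure is given below. It assumes that dist_km and corr are NumPy arrays of interstation distances and empirical correlation coefficients for one site and season; the sample size m, starting values, and parameter bounds are illustrative choices of ours.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fisher_band(r_model, m, conf=0.95):
    """Approximate confidence band around model correlations r_model for a
    sample of size m, using the Fisher (1921) z-transform."""
    q = norm.ppf(0.5 + conf / 2.0)
    z = np.arctanh(np.clip(r_model, -0.999, 0.999))
    half = q / np.sqrt(m - 3)
    return np.tanh(z - half), np.tanh(z + half)

def fit_correlation_model(dist_km, corr, m=30):
    """Two-stage fit: (1) one-parameter exponential (r0 = 1, g0 = 1),
    (2) rejection of points outside the 95% Fisher band,
    (3) two-parameter refit rho(d) = r0 * exp(-d / d0)."""
    one_par = lambda d, d0: np.exp(-d / d0)
    (d0_first,), _ = curve_fit(one_par, dist_km, corr, p0=[150.0])

    lo, hi = fisher_band(one_par(dist_km, d0_first), m)
    keep = (corr >= lo) & (corr <= hi)               # screen out inconsistent pairs

    two_par = lambda d, r0, d0: r0 * np.exp(-d / d0)
    (r0, d0), _ = curve_fit(two_par, dist_km[keep], corr[keep],
                            p0=[0.95, d0_first], bounds=([0.0, 1.0], [1.0, 2000.0]))
    return r0, d0, keep
```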

Discussion of results

Error separation method

Once the correlation model is determined, the remaining calculations of the Error Separation Method (ESM), as described by Ciach and Krajewski (1999), are straightforward. Let us write the variance of the GPCP–SRDC difference as follows:

\mathrm{Var}(\mathrm{GPCP}-\mathrm{SRDC}) = \mathrm{Var}(\mathrm{GPCP}-R_T) + \mathrm{Var}(\mathrm{SRDC}-R_T) - 2\,\mathrm{Cov}(\mathrm{GPCP}-R_T,\,\mathrm{SRDC}-R_T), \qquad (5)
where RT is the true monthly rainfall averaged over the GPCP box and the GPCP and SRDC are the GPCP and the SRDC rainfall estimates, respectively. If the error covariance term can be neglected, based on independence of the satellite and surface-based errors, the variance of the GPCP error can be calculated as, simply,
\mathrm{Var}(\mathrm{GPCP}-R_T) = \mathrm{Var}(\mathrm{GPCP}-\mathrm{SRDC}) - \mathrm{Var}(\mathrm{SRDC}-R_T). \qquad (6)
Thus, the applicability of the method hinges on the error independence assumption. Although we cannot present rigorous evidence that this is the case, a plausible argument is based on two observations. First, the physical bases of the ground and space sensors are completely different. Second, the SRDC error for a particular month would change if we changed the network configuration, whereas the GPCP error would remain unaffected. Still, there is a possibility that the errors of the combined satellite–gauge products are not entirely independent of the errors of the gauge-based validation estimates, because of their link through the rainfall process (spatial dependence). To mitigate this adverse effect we made sure that no gauges used in the GPCP products are used in our validation.
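A minimal sketch of the error separation in (6) follows; it assumes the multiplicative bias is removed before the mean-square difference is formed (as described below), and the function and variable names are ours, not part of the GPCP processing.

```python
import numpy as np

def esm(gpcp, srdc, var_ref):
    """Error Separation Method sketch based on (6).

    gpcp, srdc : arrays of monthly estimates for one box and one season (mm/day)
    var_ref    : error variance of the surface reference, from (2)
    Returns the multiplicative bias and the estimated GPCP error variance.
    """
    gpcp, srdc = np.asarray(gpcp, float), np.asarray(srdc, float)
    bias = gpcp.mean() / srdc.mean()              # multiplicative bias of the GPCP product
    msd = np.mean((gpcp / bias - srdc) ** 2)      # bias-adjusted mean-square difference
    var_gpcp = msd - var_ref                      # Eq. (6): subtract the reference error variance
    if var_gpcp <= 0.0:                           # reference error exceeds the difference:
        return bias, float("nan")                 # ESM not applicable (cf. the Rhode Island case)
    return bias, var_gpcp
```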

Comparison results

Before we discuss the results of the ESM calculations, let us analyze a comparison between the GPCP products and the surface reference. In Fig. 7 we evaluate the final GPCP product, which is the global rainfall map of satellite/gauge (SG) estimates. It includes merged rain gauge data from a small number of rain gauges within the validation sites (see Table 1). Figure 8 shows a similar comparison but for the satellite-only GPCP product (in the GPCP nomenclature this is the multisatellite (MS) product). In both figures, the horizontal lines around the data points denote one standard deviation error range as estimated using the methodology described above. The vertical bars denote a similar statistic, but were determined using the GPCP procedure (Huffman 1997). That procedure is less rigorous than the methodology described herein, although it attempts to take into account both the rain gauge (spatial) and the satellite (temporal) sampling errors. We observe that in general the SRDC error bars are smaller (as expected) and that for certain validation boxes significant multiplicative bias exists in the GPCP estimates.

Error separation results

The main objective of this paper is to calculate the GPCP product error variance from independent observations. These independent estimates of the GPCP error variance can be compared with the above GPCP error estimates to assess the statistical consistency of the GPCP’s procedure. To complete our error estimation using the ESM, we calculated the mean-square difference between the monthly rain gauge rainfall, taken as the average of all the rain gauge data for the given month, and the GPCP monthly rainfall products. As in (6), we subtracted the error variance of the surface-based estimates, using formula (2) and the space–time variance of the monthly rainfall calculated based on 60 months (10 yr and one season), from the bias-adjusted mean-square difference of the GPCP and the SRDC estimates. The results are summarized in Fig. 9 for both the GPCP SG and the MS products. The figure shows the fraction of the SRDC error variance in the GPCP–SRDC mean-square difference for the summer and the winter season. The arrangement of the bars (from left to right) is west to east.

There are several interesting features in Fig. 9. For example, the contribution of the rain gauge networks to the GPCP–SRDC variance ratio is much smaller for the MS estimates than for the SG estimates, because the reference error is the same in both cases but the MS error is much larger than the SG error. As an example of regional differences, consider the three California sites and the two Midwestern sites. The California rain gauge networks contribute about 30% (between 13% and 51%) to the GPCP–SRDC variance ratio in the warm–summer season. For the two southern plains sites this contribution is only about (slightly less than) 10%. This is due, in part, to the more uniform networks at the Midwestern sites. However, the higher California contribution could also reflect a greater uncertainty of the surface reference error itself, because of the difficulty in fitting the correlation function to the northern California summer data (Fig. 6). Another interesting case is the Rhode Island site in the Northeast. Less than 50% of its area is land, and one could argue that it should not even have been selected as a validation site, because its network distribution is very nonuniform and leads to a high value of the VRF. This is confirmed by the fact that the site contribution is around 10% (the highest) for the MS products and off the scale of Fig. 9 for the SG products. In both seasons, the surface reference estimate error was higher than the calculated mean-square difference between the SRDC and the GPCP SG estimates, preventing application of the ESM.

Figure 10 shows a similar arrangement as Fig. 9 for the multiplicative bias, and Figs. 11 and 12 have the same arrangement for the GPCP standard error expressed in absolute and relative terms, respectively. On average, the bias is 0.83 and 0.92 for summer and winter, respectively, for the SG product. It is 0.95 and 0.45 for summer and winter, respectively, for the MS product. The random error is 18% and 23% for the summer and winter, respectively, for the merged product, and 67% and 90% for the satellite-only product.

There is significant geographical and seasonal variability in the error statistics of Figs. 10–12. The largest bias errors of the SG estimates occur at the three California sites in both summer and winter, where the SG estimates are approximately one-half the magnitude of the SRDC estimates. The MS estimates, however, overestimate the SRDC during summer (including a bias of 2.3 for the southern California site, which is off the scale of Fig. 10) and significantly underestimate it during winter. There is even greater underestimation in the northeastern grid boxes during winter. The gauge adjustment to the MS estimates nevertheless improves the bias error significantly in most regions for both seasons, except for the California sites, where the effect is smaller.

In terms of standard (random) error [the square root of the error variance calculated in (6)], the differences between grid boxes depend on whether the error is expressed in actual units (mm day−1) or as a percentage of the SRDC mean. There was little rainfall in the California grid boxes during the summer months, so the standard error expressed as a percentage becomes much larger than that of the other boxes, which have more SRDC rainfall. The largest standard error of any site for either season is for the New Hampshire validation site, and some of the other northeastern U.S. sites also have high standard errors for winter. These sites also have the worst performance in terms of winter bias (Fig. 10), implying that, although the gauge-adjustment technique may reduce the overall bias error relative to the other sites, the random error of the SG product will still be high. This may be because the satellite–gauge merging procedure [the inverse variance method, or static estimation; see Schweppe (1973)] is based on the assumption that the individual sensor estimates are unbiased. These quantitative evaluations of satellite error agree with previous, more qualitative assessments based on numerous satellite-method intercomparison studies (PIP-1 1994), which have shown that satellite methods do not perform well in nonconvective precipitation regimes, such as those found during winter over the United States, as shown here. Also, satellite retrievals over snow-covered surfaces do not work very well.
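For reference, inverse-variance (static estimation) weighting of two unbiased estimates works as in the small sketch below; the numbers are made up, and this is only the generic scheme, not the actual GPCP merging algorithm.

```python
def inverse_variance_merge(estimates, variances):
    """Combine unbiased estimates by weighting each with the inverse of its error variance."""
    weights = [1.0 / v for v in variances]
    merged = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    return merged, 1.0 / sum(weights)             # merged value and its error variance

# Illustrative: a satellite estimate (3.0 mm/day, error variance 1.0) merged with
# a gauge estimate (2.4 mm/day, error variance 0.25); the gauge dominates the result.
print(inverse_variance_merge([3.0, 2.4], [1.0, 0.25]))
```

If one of the inputs is in fact biased, the weighted combination inherits a fraction of that bias, which is consistent with the behavior noted above for the northeastern sites in winter.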

Last, in Fig. 13 we present the GPCP standard error comparison based on the ESM and Huffman’s (1997) methodologies. The two approaches agree quite well. This is an important finding of our study, because it should give confidence to those users of the GPCP products who would like to make use of the provided uncertainty information.

Conclusions

Above we presented a rigorous quantitative validation study of the GPCP global monthly rainfall estimates based on high-quality surface observations. Our study clearly demonstrates the benefit of merging the rain gauge estimates into the final GPCP product. Even the small amount of rain gauge data available throughout the world for the GPCP purposes leads to a significant improvement of the final estimates. The study contributes little to evaluating the quality of oceanic rainfall estimates.

An important finding of the study is the statistical consistency of the GPCP error estimates. For both the merged and satellite-only products, the SRDC estimates obtained using our rigorous methodology agree rather well with the error estimates obtained using the GPCP methodology. Still, the study can be improved in several respects. We discuss them below.

First of all, we believe that there are other sites around the world, and even within the United States, that have higher rain gauge network density and would result in even smaller surface reference errors. Some of these sites were already used in previous GPCP studies (Rudolf et al. 1994), and we plan to apply the above analysis to their data. However, as our study illustrates, criteria for using data in climatological studies are very demanding and require detailed exploration of the station information files and data inventories. Many higher-density networks in the United States are either seasonal (e.g., agriculture, forest fire protection), are event oriented (i.e., report only significant amounts of precipitation), have short periods of record (less than 30 yr), or cover less than a 2.5° × 2.5° area. Also, many countries restrict access to their climatological databases, and in many places (including the United States) numerous agencies operate their own networks and maintain their own archives. Thus, finding good validation sites and getting access to the data and their documentation remain an important challenge for the research community. The GPCP has a mechanism to deal with this issue through its SRDC component, and efforts continue within the SRDC to secure more validation data.

We made two important assumptions in our analysis. First, we assumed independence of the satellite and surface-based errors. This assumption seems plausible (in particular for the satellite-only product); on the other hand it would be very difficult if not impossible to verify this empirically. Perhaps a validation site with a large number of rain gauges (a few hundred) available for the duration of the GPCP period would allow an appropriate resampling study. Simulation using physics-based models is less attractive here, because it necessitates numerous assumptions, some of which may affect the outcome of the experiment of interest.

The second assumption is that of time and space stationarity. We assumed temporal stationarity out of the need to increase the sample size. Estimation of the spatial correlation function from a single realization (i.e., for each month separately) using only 30 or so gauges is not very reliable (see Krajewski and Duffy 1988). Assuming that the same stochastic process generated monthly rainfall for the same month over the years allows an effective increase in sample size and estimation of the spatial correlation function based on hundreds of points, even for a moderately sized rain gauge network. Still, the quantitative impact of this assumption on the estimated GPCP error needs to be examined further. Once again, validation sites with over 100 rain gauges could be very useful here.

We also assumed spatial homogeneity because of the scarcity of the data within the validation boxes. A linear trend in the mean does not affect our analysis. To determine the existence of second-order heterogeneity caused by, for example, orography, much more data would be necessary. For instance, the primary validation site in California (Fig. 3) should be examined for the possibility of spatial heterogeneity, because it is likely that many more rain gauges are located there than the ones used in our study.

Last, let us also discuss a possible improvement to the data quality control procedure. Even though the data used in our study passed several quality checks at the NCDC, the estimation of spatial correlation function presents an opportunity to detect bad rain gauges. The procedure applied herein focused on the correlations inconsistent with the sampling theory. An alternative view could be employed, focusing on the rain gauges that lead to inconsistent correlations. Data from the gauges producing such correlations would be eliminated entirely from the calculations. We think, however, that switching to this procedure would not significantly affect the major findings of this study.

Acknowledgments

We would like to thank numerous GPCP researchers for many fruitful discussions over the years. In particular, we acknowledge Bob Adler, Phil Arkin, Sam Benedict, Ralph Ferraro, Arnie Gruber, George Huffman, John Janowiak, Alan McNab, Mark Morrissey, Grant Petty, Bruno Rudolf and Ping Ping Xie. We extend special thanks to Alan McNab for his help with processing the NCDC rain gauge data. The research was supported by the NOAA Office of Global Programs through Grant NA57WHO517. We also thank Diana Thrift for her help with editorial work on the manuscript and Paul Ludington for his assistance with the graphics.

REFERENCES

  • Arkin, P. A., and P. E. Ardanuy, 1989: Estimating climate-scale precipitation from space: A review. J. Climate,2, 1229–1238.

  • Arkin, P. A., and P. Xie, 1994: The Global Precipitation Climatology Project: First Algorithm Intercomparison Project. Bull. Amer. Meteor. Soc.,75, 401–419.

  • Barnston, A. G., 1991: An empirical method of estimating raingage and radar rainfall measurement bias and resolution. J. Appl. Meteor.,30, 282–296.

  • Bras, R. L., and I. Rodriguez-Iturbe, 1976: Evaluation of mean square error involved in approximating the areal average of a rainfall event by a discrete summation. Water Resour. Res.,12, 181–184.

  • Ciach, J. G., and W. F. Krajewski, 1999: On the estimation of radar rainfall error variance. Adv. Water Resour.,22, 585–595.

  • Fisher, R. A., 1921: On the probable error of a coefficient of correlation deduced from a small sample. Metron,1 (4), 3–32.

  • Groisman, P. Y., and D. R. Legates, 1994: The accuracy of United States precipitation data. Bull. Amer. Meteor. Soc.,75, 215–227.

  • Habib, E., W. F. Krajewski, V. Nespor, and A. Kruger, 1999: Numerical simulation studies of raingage data correction due to wind effect. J. Geophys. Res.,104, 19 723–19 734.

  • Huffman, G. J., 1997: Estimates of root-mean-square random error for finite samples of estimated precipitation. J. Appl. Meteor.,36, 1191–1201.

  • Huffman, G. J., and Coauthors, 1997: The Global Precipitation Climatology Project (GPCP) combined precipitation dataset. Bull. Amer. Meteor. Soc.,78, 5–20.

  • Journel, A. G., and Ch. J. Huijbregts, 1978: Mining Geostatistics. Academic Press, 600 pp.

  • Krajewski, W. F., 1993: Global estimation of rainfall: Certain methodological issues. World at Risk: Global Climate Change and Natural Hazards, R. Bras, Ed., American Institute of Physics, 180–192.

  • Krajewski, W. F., and C. J. Duffy, 1988: Estimation of homogeneous isotropic random fields structure: A simulation study. Comp. Geosci.,14 (1), 113–122.

  • McCollum, J. R., and W. F. Krajewski, 1998: Uncertainty of monthly rainfall estimates from rain gauges in the Global Precipitation Climatology Project. Water Resour. Res.,34 (10), 2647–2654.

  • Morrissey, M. L., J. A. Maliekal, J. S. Greene, and J. Wang, 1995: The uncertainty in simple spatial averages using raingage networks. Water Resour. Res.,31 (8), 2011–2017.

  • PIP-1, 1994: The First WetNet Precipitation Intercomparison Project (special issue). Remote Sens. Rev.,11, 373 pp.

  • Rudolf, B., H. Hauschild, W. Rueth, and U. Schneider, 1994: Terrestrial precipitation analysis: Operational method and required density of point measurements. Global Precipitation and Climate Change, M. Desbois and F. Desalmand, Eds., Springer Verlag, 173–186.

  • Schweppe, F. C., 1973: Uncertain Dynamic Systems. Prentice-Hall, 563 pp.

  • Stuart, A., and J. K. Ord, 1994: Kendall’s Advanced Theory of Statistics. Vol. 1, Distribution Theory, 6th ed., Edward Arnold, 676 pp.

  • Tabios, G., III, and T. D. Salas, 1985: A comparison analysis of techniques for spatial interpolation of precipitation. Water Resour. Bull.,21 (3), 365–380.

  • Zawadzki, I., 1973: Errors and fluctuations of raingage estimates of areal rainfall. J. Hydrol.,18, 243–255.

Fig. 1. Width of the 95% confidence interval for the sample correlation coefficient [based on the sampling distribution approximation proposed by Fisher (1921)]. Here, M is the sample size.

Fig. 2. Schematic location of the selected validation sites. The dark-framed sites are the primary sites discussed here in more detail.

Fig. 3. Rain gauge network configurations for the four primary sites. Squares denote gauges used in covariance calculations only.

Fig. 4. Relative error of the VRF of the Morrissey et al. (1995) formula for different correlation models for the four primary validation sites.

Fig. 5. VRF of the Morrissey et al. (1995) formula for different correlation models.

Fig. 6. Empirical correlations and the fitted spatial correlation functions (continuous lines). The broken lines define the 95% confidence interval of the sampling uncertainty [based on the approximation proposed by Fisher (1921)]. See text for more details.

Fig. 7. Comparison of the GPCP SG product and the surface reference estimates for the four primary sites. Each point represents a month. The vertical bars represent the standard error of the product as estimated by the Huffman (1997) method. The horizontal bars represent the standard error of the surface reference estimates.

Fig. 8. Comparison of the GPCP MS product and the surface reference estimates for the four primary sites. Each point represents a month. The vertical bars represent the standard error of the product as estimated by the Huffman (1997) method. The horizontal bars represent the standard error of the surface reference estimates.

Fig. 9. Relative contribution of the SRDC error variance estimates to the mean-square GPCP − SRDC difference for the SG product (upper panels) and the MS product (lower panels) over the 14 validation sites. The arrangement of the sites (from top to bottom) is west to east (see also Table 1 for the site codes).

Fig. 10. Bias of the GPCP estimates for the SG product (upper panels) and the MS product (lower panels) over the 14 validation sites. The arrangement of the sites (from top to bottom) is west to east (see also Table 1 for the site codes).

Fig. 11. Standard error of the GPCP estimates for the SG product (upper panels) and the MS product (lower panels) over the 14 validation sites. The arrangement of the sites (from top to bottom) is west to east (see also Table 1 for the site codes).

Fig. 12. Relative standard error of the GPCP estimates for the SG product (upper panels) and the MS product (lower panels) over the 14 validation sites. The arrangement of the sites (from top to bottom) is west to east (see also Table 1 for the site codes).

Fig. 13. Standard error comparison of the ESM and the GPCP’s own estimates. The left panel is for the SG product, and the right panel is for the MS product. The dark dots are summer estimates, and the light dots are winter estimates.

Table 1. Locations and main characteristics of the selected validation sites. The primary sites are boldfaced.

Table 2. Summary of the fitted spatial correlation models. The prefixes S and W in the Box ID column stand for summer and winter, respectively. The primary sites are boldfaced. Here, R² is the percent of the variance explained by the model.