## 1. Introduction

As new weather observing platforms are deployed, such as the national WSR-88D network (Crum and Alberty 1993; Klazura and Imy 1993) and the Oklahoma Mesonet (Brock et al. 1995), more refined and precise real-time environmental information becomes available for research and operational endeavors (Morris and Janish 1996). These new data sources are revealing weather phenomena that open new avenues for mesoscale research into the physical mechanisms responsible for their occurrence. Traditional scientific methods and techniques developed with low-resolution data will not suffice to explain and numerically reproduce these newly observed, smaller-scale phenomena. Hence, there is a need for better schemes to analyze the new high-resolution observations; this need is critical to furthering operational capabilities.

One of the most important variables in hydrology and meteorology is the precipitation field, which spans many scales and is produced by weather conditions ranging from purely convective to large-scale forcing. The calibration of any hydrologic model depends on accurate precipitation data, and the results of atmospheric simulations and forecasts performed with numerical models are improved by the assimilation of the observed precipitation field (Wang and Warner 1988). Additionally, accurate precipitation measurements can be used to verify model performance, since precipitation is one of the most difficult variables to simulate and forecast.

The hourly stage III rainfall analysis produced by the Arkansas–Red Basin River Forecast Center (ABRFC) is a mosaic of digital stage II data (hourly precipitation computed on a grid of approximately 4 km × 4 km) from each Weather Surveillance Radar-1988 Doppler (WSR-88D) within the ABRFC area of responsibility. In this study, the stage III estimates are combined with rainfall measured by Mesonet rain gauges through a statistical objective analysis (SOA) scheme (Pereira Fo. et al. 1996) to improve the accuracy of hourly rainfall analyses. A subset grid of the Hydrologic Rainfall Analysis Project (HRAP) is specified to generate SOA products for the Lake Altus area (Fig. 1). Rain gauges from the Oklahoma Mesonet are crucial to the reanalysis because of the network's relatively high spatial density. Consequently, a comparative study among the resulting analyses is performed to determine whether significant improvements and/or discrepancies are obtained.

The hourly rainfall accumulation estimates from the Next-Generation Radar (WSR-88D) Precipitation Processing Subsystem (PPS) have been shown to have systematic biases. Smith et al. (1996) found that for paired gauge–radar observations, underestimation of rainfall occurs at most sites, relative to rain gauge observations. Underestimation is most pronounced at close and far ranges from the radar, but it occurs over all ranges across the southern Great Plains. Baeck et al. (1997) concluded that warm-season peaks in mean rainfall around 100 km from the radar are strongly associated with amplification of reflectivity just below the 0°C isotherm (melting layer bright band) in the second tilt with use of biscan maximization. Thus, the scanning strategy for the PPS is a dominant factor in the range-dependent bias exhibited by WSR-88D rainfall products. Yet, the current stage II precipitation-processing methodology computes a mean field bias using a Kalman filter technique and available rain gauge observations, and applies this mean bias adjustment over the entire radar field (Seo and Johnson 1995; Fulton et al. 1995).

Shedd and Smith (1991) confirmed that, for areas of overlapping radar coverage, the mosaicked stage III value for each HRAP grid cell will consist of the average of all nonzero precipitation accumulations. Intuitively, one can infer that the simple averaging technique is inappropriate to mosaic (or merge) the individual radar stage II digital precipitation products because observational errors vary greatly from one WSR-88D system to the next. In addition, because of the earth’s curvature, differences in range from the radars to the overlapping coverage areas will result in different degrees of radar beamfilling and overshooting of rainfall.

While it has been suggested that the SOA scheme be used to analyze the stage I (hourly precipitation in a polar coordinate format) or stage II hourly digital precipitation (HDP) arrays, this has not been done because archived products are not available for all the WSR-88D radars that cover southwestern Oklahoma (mainly the Frederick radar, which is a Department of Defense facility). Also, stage III HDP products are readily available from the ABRFC. Because doubts exist concerning the accuracy of WSR-88D level III (stages II and III) hourly rainfall products, verification of the impact of current precipitation processing methodologies on stage III results is deemed necessary.

Oklahoma was selected as the study area because the terrain is relatively flat, the Oklahoma Mesonet provides real-time environmental monitoring, and there is ready access to stage III data. The SOA statistics are developed for the Great Plains, specifically for Oklahoma rainfall systems. Further statistics can be developed to apply the SOA scheme elsewhere.

## 2. Methodology

Neither weather radar estimates nor rain gauge measurements of precipitation are perfectly accurate. Measurements made by rain gauges underestimate rainfall rates, especially at both ends of the rainfall spectrum (Legates and DeLiberty 1993). Weather radars sample a variable volume of the atmosphere to determine areal estimates of rainfall. These estimates are affected by several common sources of observational errors: electronic calibration, the *Z–R* relationship, rainfall attenuation, range effect, bright band, anomalous propagation, ground clutter, and the sampling strategy used by the radar (Austin 1987).

The impact of calibration errors can be minimized by following strict guidelines for equipment maintenance and data quality assurance procedures, such as those used in the mesonet and in the WSR-88D system. The goal of this work is to minimize further the *analysis errors* by integrating WSR-88D rainfall estimates and mesonet rain gauge measurements of rainfall through a statistical objective analysis scheme. In this technique, the analyzed rainfall at each grid point is obtained by adding together the radar rainfall estimate at the grid point (background) and the sum of weighted *differences* between nearby rain gauge measurements (observations) and the respective radar rainfall estimates. The error variance is defined as the sum of squared differences between the observed (or estimated) value and the true value (which is unknown since it cannot be measured with absolute perfection). Thus, analysis weights are derived from the error variance of radar rainfall estimates (background-error variance). Hence, the statistical properties of precipitating systems are incorporated into the analyses such that the resulting *analysis-error* variance is less than the minimum *observation-error* variance.

### a. The statistical objective analysis scheme

The analyzed precipitation at each grid point is given by

*P*_{a}(*x*_{i}, *y*_{i}) = *P*_{r}(*x*_{i}, *y*_{i}) + Σ^{K}_{k=1} *w*_{ik}[*P*_{g}(*x*_{k}, *y*_{k}) − *P*_{r}(*x*_{k}, *y*_{k})],   (1)

where

- *P*_{a}(*x*_{i}, *y*_{i}) = analyzed precipitation (mm) at grid point *i*,
- *P*_{r}(*x*_{i}, *y*_{i}) = radar estimate of the precipitation (mm) at grid point *i*,
- *P*_{g}(*x*_{k}, *y*_{k}) = rain gauge precipitation measurement (mm) at station point *k*,
- *P*_{r}(*x*_{k}, *y*_{k}) = radar estimate of the precipitation (mm) at station point *k*,
- *w*_{ik} = a posteriori weight (not yet specified),
- *K* = number of rain gauges, and
- (*x, y*) = coordinates (km).

Defining *t*(*x*_{i}, *y*_{i}) and *t*(*x*_{k}, *y*_{k}) as the true value of *P* at grid and station points, respectively, *t*(*x*_{i}, *y*_{i}) is subtracted from both sides of Eq. (1), which is rewritten in a simplified form as

*a*_{i} − *t*_{i} = (*b*_{i} − *t*_{i}) + Σ^{K}_{k=1} *w*_{ik}[(*o*_{k} − *t*_{k}) − (*b*_{k} − *t*_{k})],   (2)

where *a*_{i} = *P*_{a}(*x*_{i}, *y*_{i}), *b*_{i} = *P*_{r}(*x*_{i}, *y*_{i}), *o*_{k} = *P*_{g}(*x*_{k}, *y*_{k}), *b*_{k} = *P*_{r}(*x*_{k}, *y*_{k}), *t*_{i} = *t*(*x*_{i}, *y*_{i}), and *t*_{k} = *t*(*x*_{k}, *y*_{k}).

The following can be defined: *a*_{i} − *t*_{i} = analysis error, *b*_{i} − *t*_{i} = background error, *a*_{i} − *b*_{i} = analysis increment or correction, and *o*_{k} − *b*_{k} = observation increment.
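These increments combine into a single gridpoint update: the background value plus weighted observation increments. A minimal sketch in Python, with illustrative numbers and the weights assumed to be already available (deriving them is the subject of the equations that follow):

```python
# Minimal sketch of the SOA gridpoint update: analyzed value = radar
# background plus weighted observation increments (gauge measurement minus
# collocated radar estimate). Weights and rainfall values are illustrative.

def soa_analysis(b_i, obs, bkg_at_stations, weights):
    """b_i: radar estimate at the grid point (mm); obs[k]: gauge measurement
    at station k (mm); bkg_at_stations[k]: radar estimate at station k (mm);
    weights[k]: a posteriori weight w_ik."""
    increments = [o - b for o, b in zip(obs, bkg_at_stations)]
    return b_i + sum(w * d for w, d in zip(weights, increments))

# Radar underestimates at this grid point; three nearby gauges pull it up.
a = soa_analysis(4.0, obs=[6.0, 5.5, 7.0],
                 bkg_at_stations=[4.2, 4.8, 5.0],
                 weights=[0.4, 0.3, 0.2])   # -> 5.33 mm
```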

The quantities *b*_{i}, *b*_{k}, and *o*_{k} are assumed to be unbiased, or the biases are assumed to have been removed. In other words, the expected value of the errors is assumed to be equal to zero:

〈*b*_{i} − *t*_{i}〉 = 〈*b*_{k} − *t*_{k}〉 = 〈*o*_{k} − *t*_{k}〉 = 0,   (3)

where 〈*c*〉 = ∫^{∞}_{−∞} *c* *p*(*c*) *dc* is the expectation operator, and *p*(*c*) = probability density function of *c.*

This assumption, which implies that the analysis values also are unbiased, might not be satisfied since tipping bucket gauges underestimate rainfall, especially at low and high rainfall rates (Legates and DeLiberty 1993), and wind and wetting losses can also contribute to an underestimate of rainfall (Groisman and Legates 1994). Further, radar errors are spatially dependent. Moreover, other sources of error—such as the ones due to an improper *Z–R* relationship, the bright band, and overshooting (Smith et al. 1996)—can be larger than the error produced by the radar itself. Thus, analysis results can be biased if Eq. (3) is not satisfied.

The expected analysis-error variance is defined as *E*^{2}_{a} = 〈(*a*_{i} − *t*_{i})^{2}〉. Substituting Eq. (2) into this definition and expanding yields Eq. (5), whose terms contain the error covariances between stations *k* and *l* (distance-dependent term), respectively. The expected analysis-error variance is minimized by differentiating Eq. (5) with respect to each of the weights *w*_{ik} and setting the derivatives to zero. With the weights so obtained, *P*_{a}(*x*_{i}, *y*_{i}) is a minimum variance estimate of *P*_{t}(*x*_{i}, *y*_{i}), the true value of *P* at grid point *i.* When all terms in Eq. (10) are estimated accurately, this scheme is termed optimal interpolation.

In Eqs. (11) and (12),

- *ρ*_{kl} = background error cross correlation between stations *k* and *l*;
- *ρ*_{ki} = background error cross correlation between grid point *i* and station *k*;
- *ε*^{2}_{k} = 〈(*o*_{k} − *t*_{k})^{2}〉/〈(*b*_{k} − *t*_{k})^{2}〉, the normalized observation error; and
- *W*_{l} = a posteriori weight.
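Under the standard optimal-interpolation formulation consistent with these quantities, the weights *W*_{l} solve a linear system built from the background error correlations and the normalized observation error. The sketch below assumes that form (correlation matrix plus the observation-error term on the diagonal, right-hand side given by the grid-to-station correlations); the numbers are invented for illustration and are not from the paper:

```python
import numpy as np

# Assumed optimal-interpolation form: (rho_kl + eps2 * I) W = rho_ki.
# Correlations and eps2 below are illustrative.

rho_kl = np.array([[1.0, 0.6, 0.4],
                   [0.6, 1.0, 0.5],
                   [0.4, 0.5, 1.0]])  # background error cross correlations among stations
rho_ki = np.array([0.7, 0.5, 0.3])    # correlations between grid point i and each station
eps2 = 0.1                            # normalized observation-error variance

W = np.linalg.solve(rho_kl + eps2 * np.eye(3), rho_ki)  # a posteriori weights

# With optimal weights, the normalized expected analysis-error variance
# (NEXERVA) at grid point i reduces to 1 - W . rho_ki: nearer (better
# correlated) gauges give a smaller value.
nexerva = 1.0 - W @ rho_ki
```

Note how the closest, best-correlated station receives most of the weight, echoing the statement that only nearby gauges matter for each grid point.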

The normalized expected analysis error variance (NEXERVA) is the ratio between the expected analysis-error variance [left-hand side of Eq. (5)] and the background-error variance [first term of the right-hand side of Eq. (5)]. It can be used to determine the spatial distribution of the analysis-error variance and the overall error reduction for a given analysis area. In general, the closer the observation grid points are to the analysis grid point, the smaller the value of *E*^{2}_{a}.

Equations (11) and (12) are normalized by the background-error covariance matrix [the first term of the right-hand side of Eq. (5)]. This matrix is the most important component of the SOA scheme; the exactness of the analysis depends heavily on this component. Failure to consider the impact of the background error will adversely affect the interpolation errors.

The background error cross correlation *ρ*_{kl} involves the radar rainfall estimates *P*^{k(l)}_{r} at station *k*(*l*). Since the true rainfall accumulation *P*^{k(l)}_{t} is unknown, *ρ*_{kl} is estimated in the climatological sense (Creutin and Obled 1982) by replacing *P*_{t} with the long-term precipitation mean. In this work, the long-term precipitation mean is computed from 250 one-hour precipitation estimates.
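The climatological estimation amounts to computing sample correlations of radar accumulations at pairs of points over many hours. A sketch with synthetic series standing in for the archived hourly estimates:

```python
import random

# Sketch of the climatological estimate of rho_kl: the sample correlation of
# hourly accumulations at two points over many hours, with the true field
# replaced by the long-term mean. Synthetic series stand in for the 250
# one-hour estimates used in the paper.

random.seed(1)
signal = [random.random() for _ in range(250)]       # shared rainfall signal
p_k = [s + 0.3 * random.random() for s in signal]    # accumulations at point k
p_l = [s + 0.3 * random.random() for s in signal]    # accumulations at nearby point l

def sample_corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

rho_kl = sample_corr(p_k, p_l)   # high: nearby points share most of the signal
```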

This simple scheme maximizes the extractable precipitation signal in the data and minimizes the observational errors to produce an analysis that has smaller total errors than would be in a univariate analysis of either radar or rain gauge data. The primary advantages of the analysis scheme are as follows.

- The expected analysis-error variance is minimized and known.
- The SOA technique uses the statistical properties of observed rainfall fields.
- Only rain gauges nearby the analysis location are used for gridpoint interpolation.
- The technique is straightforward and related to the statistical properties (i.e., the spatial covariance) of actual rainfall systems.

Until recently, computer power and extensive datasets were the primary factors that limited the operational use of the SOA scheme. With the advent of the WSR-88D system and faster, larger computers, these limitations are now much less significant.

### b. Background error correlation function

The background error cross correlations [Eq. (13)] are estimated using level II reflectivities (i.e., in azimuth–range format) from the WSR-88D radar at Twin Lakes, Oklahoma. Reflectivity levels are transformed to rainfall rates (*Z* = 300*R*^{1.4}) at 2-km altitude and integrated to produce rainfall accumulations with 2 km × 2 km resolution (Pereira Fo. and Crawford 1995). The background error cross correlation in Fig. 2 was calculated using detailed reflectivity information from many Oklahoma weather systems.
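The quoted transform, *Z* = 300*R*^{1.4}, can be inverted to turn a reflectivity measurement into a rain rate. A sketch (reflectivity given in dBZ, the usual radar unit; operational processing adds quality control and capping not shown here):

```python
# Inverting the Z-R relationship used above, Z = 300 R^1.4, to convert a
# reflectivity factor in dBZ to a rain rate in mm per hour.

def dbz_to_rainrate(dbz, a=300.0, b=1.4):
    z = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity Z (mm^6 m^-3)
    return (z / a) ** (1.0 / b)   # invert Z = a * R**b

rate = dbz_to_rainrate(40.0)      # moderate convective echo, roughly 12 mm/h
```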

## 3. Datasets

### a. The stage III hourly precipitation product

This product is available on an approximately 4 km × 4 km polar stereographic grid used by HRAP. The HRAP grid for the ABRFC area of responsibility (Fig. 3) has 335 grid points in the east–west direction and 159 grid points in the north–south direction. Stage III data on the HRAP grid for the ABRFC are a mosaic of 17 individual WSR-88D stage II data analyses. Currently, a simple linear-averaging method is used to estimate surface rainfall accumulations where WSR-88Ds overlap in coverage.

The area of southwest Oklahoma where the SOA technique is used to combine stage III analyses with mesonet rain gauge observations is indicated in Fig. 1. This particular area has surveillance coverage from WSR-88Ds at Frederick, Oklahoma; Twin Lakes, Oklahoma; Amarillo, Texas; and Lubbock, Texas. All significant rainfall events that occurred over the Lake Altus area between June 1995 and July 1996 are selected for further study. A total of 185 hourly precipitation maps are used in the reanalysis.

### b. Mesonet rain gauges

The Oklahoma Mesonet has 114 tipping bucket rain gauges that measure the rainfall accumulation at 5-min time steps. Ten rain gauges are available in the Lake Altus area (Fig. 1). The corresponding 185 h of hourly rainfall accumulation for mesonet gauges are used to reanalyze stage III analyses and to produce a mesonet-only analysis. Morrissey et al. (1995) analyzed the observation errors introduced by random, uniform, clustered, and linear rain gauge networks on simple spatial averages. They determined that uniform networks produced a minimum in the analysis error variance. Their results also indicate that rain gauges in the mesonet have an error variance similar to a uniform network. However, that does not imply that the mesonet rain gauges have a uniform distribution or that they are error free; rather, they resemble a uniform network that statistically produces a minimum areal error when used to estimate basin rainfall. Note that *minimum* error should not be confused with *small* error. For simplicity, the rain gauges are assumed to measure the rainfall correctly in this study. Moreover, comprehensive statistics on errors produced by mesonet rain gauges are not available.

## 4. Results and discussion

### a. Normalized expected analysis-error variance (NEXERVA)

With the SOA, it is possible to determine the NEXERVA [Eq. (12)] at each grid point before the actual analysis is performed. For the Lake Altus area (27 × 36 HRAP pixels with a size of 4 km × 4 km), the spatial distribution of NEXERVA reveals concentric circles around each mesonet rain gauge (Fig. 4). Local minima represent the quality or accuracy of a rainfall analysis that is exactly collocated with a mesonet rain gauge. Three rain gauges are used to determine the accuracy of an analysis for each grid point. The fact that NEXERVA approaches zero at each rain gauge site means that actual observation errors in the gauge measurements have not been incorporated into the analysis methodology. If the observation errors had been included in Eq. (12), isolines of NEXERVA would still be concentric (since an isotropic cross-correlation function is used), but minimum values at the rain gauge sites would not approach zero and it would be a measure of the gauge’s ability to sample precipitation accurately. Areas with large values of NEXERVA in Fig. 4 coincide with analysis locations that are well removed from rain gauge locations. The areal mean NEXERVA for the Lake Altus area is 0.55, reflecting the fact that the analysis error variance is reduced by 45% in relationship to the background (WSR-88D) error variance. A further reduction of NEXERVA can be accomplished by either increasing the number of gauges used in the analysis or by increasing the rainfall accumulation time interval.

Rain gauges have two major problems: lack of spatial representativeness and undercatchment due to the wind effect. The mesonet gauges have been carefully designed to reduce the wind effect. Even so, the spatial representativeness of mesonet observations can still be a problem, especially for convective systems, since the average distance between rain gauges is about 30 km. The SOA scheme takes advantage of the strengths in both systems (gauges determine the mean value of precipitation field more accurately than does a radar while weather radars map out spatial details more accurately) to improve surface estimates of rainfall. The ultimate goal is to combine these two sources of data in such a way as to reduce analysis errors. This does not mean that a final SOA will have eliminated errors or necessarily made them small.

The rain gauge density and their distribution in a network significantly affect the accuracy of the final rainfall analysis. For example, notice in Fig. 4 that concentric isolines vary in radius, which is a function of the spatial distribution of rain gauges and their distance from other nearby rain gauges. For example, the isoline of 0.6 (Fig. 4) for the rain gauge at Hollis has a radius of 36 km (note: all rain gauges are identified in Fig. 1). In comparison, this same isoline has a radius of 28 km for the rain gauge at Altus. This situation results from the fact that the mesonet rain gauge at Hollis is isolated (i.e., it is in the southwest corner of the mesonet), whereas the gauge at Altus is clustered with two other nearby rain gauges. From a statistical point of view, the clustering of rain gauges reduces the independent information that each rain gauge brings to the final analysis. While clustered gauges tend to increase the expected analysis-error variance, their participation in SOA brings more accuracy to the final analysis since gauges are very sparse when compared to radar information.

Therefore, the normalized expected analysis-error variance provides additional information about the quality of an analysis before it is performed. The results reveal that analysis accuracy is highly inhomogeneous in space. It also illustrates that the mean “bias” approach to adjust hourly rainfall accumulations of the WSR-88D’s stage I and II data is inappropriate because that technique introduces analysis errors. As a result, the quality of the final analysis, namely, the stage III analysis, is reduced.

### b. Comparison of original and reanalysis stage III

All stage III data available from the ABRFC along with concurrent mesonet gauge records are used to generate the statistical objective analysis of stage III combined with mesonet data (i.e., stage III is reanalyzed via SOA). Mean areal rainfall accumulations for SOA and the “raw” stage III (STP) in the Lake Altus area are determined for each hour and plotted as a time series (Fig. 5). In most instances, the stage III product yielded lower areal-mean values of precipitation than did the SOA. Differences between the raw STP and the SOA tended to be large when areal-mean rainfall values are large.

The average areal-mean rainfall accumulations for SOA and STP and their respective variances are used to determine whether these mean values (from the 185 map times) are significantly different. A statistical *t* test at the 5% significance level indicated that the average areal-mean rainfall accumulations for STP and SOA are not significantly different for the 185 hours studied. Nevertheless, individual hourly differences can be significant (Fig. 5).
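The significance test described here is a two-sample *t* test on the areal-mean series. A sketch with synthetic series standing in for the 185 hourly map times (the numbers are illustrative, not the paper's data):

```python
import math
import random

# Two-sample t test on areal-mean accumulations. Only the test mechanics
# follow the text; the series are synthetic stand-ins.

random.seed(7)
stp = [abs(random.gauss(2.0, 1.0)) for _ in range(185)]   # raw stage III areal means (mm)
soa = [s + random.gauss(0.05, 0.2) for s in stp]          # reanalysis: slightly wetter

def t_statistic(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((a - mx) ** 2 for a in x) / (nx - 1)
    vy = sum((b - my) ** 2 for b in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

t = t_statistic(stp, soa)
# |t| below the 5% two-sided critical value (about 1.97 for samples this
# large) means the two averages are not significantly different, which is
# the paper's finding for the real series.
```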

Some of the most significant differences are illustrated by the STP and SOA rainfall fields plotted at hourly intervals in Fig. 5. The fine details in rainfall structures observed in the stage III analysis from the ABRFC are preserved by the SOA. Radar estimates of rainfall can have large errors in the mean value of a pattern, but most users believe the WSR-88D provides a “good” estimate of the spatial variability of the rainfall rate field. Adjustments by the reanalysis technique are more significant when STP areal-mean rainfall is less than that of the SOA analysis. This suggests that the stage III rainfall maps underestimate rainfall accumulations. For example, the stage III rainfall analysis for the hour ending at 0500 UTC on 15 June 1996 (peak B in Fig. 5) underestimated the accumulation by a factor of 4 when compared to the SOA analysis. On this day, the stage III processing did not incorporate the Cheyenne mesonet rain gauge (which recorded 250 mm of rain) into the analysis. As a result, the stage III mosaicking technique contributed to large differences between the actual rainfall and the stage III estimate of rainfall.

Perhaps the most severe deficiency in the current technique to produce a stage III analysis occurs at the fringes of coverage by several radars. Analysis grid points located at the maximum range of the WSR-88D will often reveal spurious rainfall gradients, as suggested in Fig. 6a (the event at point A in Fig. 5). Unfortunately, these spurious rainfall gradients also are evident in the SOA reanalysis, since no mesonet rain gauges are located near (<25 km) the analysis area in question. Hence, once stage III processing introduces a fictitious gradient, the false gradient cannot be removed unless nearby rain gauges are available to correct the analysis.

The transition zones, where two or more WSR-88Ds provide overlapping coverage, represent areas where spurious rainfall gradients are generated by the current analysis methodology. In fact, these spurious gradients are present in most cases, though they are not always as clearly defined as in Fig. 6. Even so, large analysis errors should be expected in transition zones (more shown below) as long as the current analysis methodology remains in use.

Rainfall-accumulation maps (summed over the 185 individual hours of rainfall) for the stage III and the SOA analyses, along with the difference field between these two analyses, are shown in Fig. 7. The difference field reveals that the accumulated stage III rainfall is nearly 40% less than that accumulated in the SOA reanalysis.

When one considers the size of the drainage area of the Lake Altus basin (6511 km^{2}) and its reservoir storage capacity (165.8 × 10^{6} m^{3}), the difference in rainfall accumulation between a stage III analysis and an SOA reanalysis could represent a large enough volume of water such that the difference would be equivalent to at least *three full reservoirs.* This means that hydrologic simulations performed with stage III rainfall versus SOA rainfall likely would produce completely different hydrographs. How these forecast hydrographs will verify against observations in future events is a very important issue that deserves an early and more in-depth investigation. Thus, it is best to use SOA prior to stage III processing. Furthermore, with today’s faster and larger computers, the processing overhead for applying the SOA scheme operationally is not a limiting factor. However, theoretically sound estimates of the background-error covariances across the WSR-88D network are required.
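The volume comparison can be checked with simple arithmetic: a basin-averaged rainfall-depth difference converts to a water volume over the drainage area, which is then expressed in reservoir capacities. The 76-mm depth below is a hypothetical value chosen for illustration; the paper quotes only the basin area, the storage capacity, and the roughly 40% accumulation deficit:

```python
# Arithmetic behind the reservoir comparison: rainfall-depth difference over
# the basin -> water volume -> number of full reservoirs. The 76 mm depth is
# hypothetical; area and capacity are the paper's figures.

basin_area_m2 = 6511 * 1.0e6      # 6511 km^2 drainage area
capacity_m3 = 165.8e6             # reservoir storage capacity (m^3)

depth_mm = 76.0                   # hypothetical accumulated rainfall difference
volume_m3 = basin_area_m2 * depth_mm / 1000.0   # mm -> m
reservoirs = volume_m3 / capacity_m3            # roughly three full reservoirs
```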

Whenever the radar-range effect likely has an impact on the radar-derived precipitation (e.g., overshooting tops of echoes), the simple averaging method used to mosaic digital images from various WSR-88Ds *will always* produce an underestimate of rainfall accumulation in overlapping areas. On the other hand, stage III (also II and I) errors may increase as less and less of the lower portion of the precipitating cloud (e.g., warm rain clouds) is sampled when radars are installed close to mountain areas. The error caused by the range effect, and amplified by the simple averaging method, can be illustrated by a simple experiment, which is described in the following section.

### c. Illustration of radar-range effect problem

Assume that one unit of rainfall occurs everywhere within the Lake Altus area. In this situation, WSR-88D estimates of rainfall (from Twin Lakes, Frederick, Lubbock, and Amarillo) are distance dependent by the range effect as portrayed in Fig. 8. For example, the hypothetical rainfall field would be underestimated by the WSR-88D at Frederick; at far ranges, this underestimate could be 30% or more. When the same range effect is applied to all four WSR-88Ds that provide surveillance in southwest Oklahoma, a simple average of rainfall estimates to determine a mosaic of individual rainfall fields produces a much degraded and highly artificial rainfall field (Fig. 9). Notice that areas with spuriously large rainfall gradients coincide with similar gradients in Fig. 6a. An inspection of Fig. 9 and a comparison with Fig. 5 make it clear where the transition zones of overlapping radar surveillance are located. This simple experiment reveals how analysis errors can increase due to inappropriate processing of the data.

In the simple averaging method, the mosaic rainfall accumulation *P*_{m}(*i, j*) is the average of the estimates *P*_{k}(*i, j*) from the *k* = 1, . . . , *N* radars that observe the grid point:

*P*_{m}(*i, j*) = (1/*N*) Σ^{N}_{k=1} *P*_{k}(*i, j*),   (14)

where

- *P*_{m}(*i, j*) = mosaic rainfall accumulation at grid point (*i, j*),
- *P*_{k}(*i, j*) = rainfall accumulation from the *k*th WSR-88D at grid point (*i, j*), and
- *N* = number of WSR-88Ds providing surveillance at grid point (*i, j*).

A mosaic of rainfall accumulations also can be produced using a weighted average,

*P*_{m}(*i, j*) = Σ^{N}_{k=1} *w*_{k}(*i, j*) *P*_{k}(*i, j*),   (15)

with weights determined by the estimated errors, where

- *E*_{k}(*i, j*) = *P*_{k}(*i, j*) − *P*_{a}(*i, j*), the estimated error (mm) of the *k*th WSR-88D at grid point (*i, j*), and
- *P*_{a}(*i, j*) = analyzed rainfall accumulation (mm) at grid point (*i, j*).
The simple averaging method is a particular case of Eq. (15) whereby the estimated error is considered to be independent of distance and is constant for all WSR-88Ds (an unlikely scenario). In these three simulations, the true rainfall field is uniform and equal to one unit of rainfall. Because of the range effect, the mosaic methods alter the cumulative frequency distributions from a delta function to the distributions shown in Fig. 10. Thus, the simple averaging technique in processing stage III produces a significant underestimation of the basin rainfall for an entire River Forecast Center area.
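The experiment can be reproduced in miniature: a uniform one-unit rainfall field, a range-dependent underestimate for each radar, and the two mosaicking rules. The decay model, ranges, and inverse-error weights below are illustrative assumptions, not the paper's exact curves:

```python
# Miniature range-effect experiment: truth is one unit of rain everywhere,
# each radar underestimates more at longer range, and the mosaic is formed by
# the simple average of Eq. (14) or by an inverse-error weighted average in
# the spirit of Eq. (15).

def radar_estimate(truth, range_km, loss_per_km=0.002):
    """Linear range-dependent underestimate: about 30% low at 150 km."""
    return truth * max(0.0, 1.0 - loss_per_km * range_km)

truth = 1.0
ranges_km = [40.0, 110.0, 150.0, 190.0]          # four radars observe the point
est = [radar_estimate(truth, r) for r in ranges_km]

# Eq. (14): every radar counts equally, so distant radars drag the mosaic down.
simple = sum(est) / len(est)

# Inverse-error weighting: radars with small estimated error dominate.
err = [abs(e - truth) + 1e-6 for e in est]       # estimated error of each radar
w = [1.0 / x for x in err]
weighted = sum(wi * ei for wi, ei in zip(w, est)) / sum(w)
# weighted lies closer to the true one unit than the simple average does
```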

The point of this study is further illustrated using an event that occurred on 7 October 1996 (Fig. 11). Note the differences in WSR-88D reflectivity measurements between Frederick and Twin Lakes. Major discrepancies in the shape and intensity of storm reflectivities are apparent, even though the measurements were made at the same elevation angle and just 1 min apart. Because of these differences, it is apparent that the processing of hourly rainfall accumulations in stages I–II–III can alter the final analysis unfavorably when the processing system introduces its own set of errors.

In this event, a frequency distribution of the hourly rainfall accumulation (using radar pixel estimates) for the stage III and the adjusted stage III rainfall maps indicate (not shown) that stage III accumulations less than 2.5 mm are shifted by the SOA reanalysis toward higher values that ranged between 5.0 and 15.0 mm. This shift in rainfall frequency increases the accumulation variance of the SOA (not shown) in relationship to the variance in the original STP field.

A time series of the mean absolute error between the STP (stage III) and the SOA reanalysis reveals that the analysis quality of stage III rainfall improved slightly during 1995–96 (not shown). However, two major error spikes occurred in 1996. In both instances, mesonet rain gauges likely were not included in the stage III analysis because of large differences between the raw WSR-88D estimates and the rain gauge measurements. In both cases, mesonet personnel judged the rain gauge readings to be correct. Thus, the STP analyses from the ABRFC seem to reflect strongly the original radar estimates. As it turned out, both precipitating systems were located within the area of overlapping radar surveillance.

## 5. Conclusions

This study has revealed significant problems with the stage III processing system. However, these problems are reduced when a reanalysis of the stage III estimates is performed using statistical objective analysis. Even though the analysis errors seemingly are small, the rainfall differences between the STP and SOA hourly analyses amount to 40% more rainfall that likely occurred in overlapping areas of WSR-88D surveillance. While the SOA scheme improves the stage III rainfall estimates, it is not able to remove the fictitious sharp gradients in rainfall that are produced by the current stage III processing at the fringes of the radar umbrellas, where rain gauge observations are unavailable. Consequently, it is best to use the SOA scheme in stage II precipitation processing. Clearly, the use of mesonet rain gauges in the Lake Altus area contributes to improvements in the final analyses. Acceptable results also are obtained with mesonet-only analyses when the rainfall systems are large and widespread, though these latter analyses tend to overestimate the areal-mean rainfall.

Therefore, this comparative study has shown that stage III processing can be significantly improved by a reanalysis that uses the SOA technique along with mesonet rain gauges. Even so, additional work is required to verify the hydrologic impact of both analysis systems. Changes to the current stage III processing system can eliminate some of the observed range effects and mosaicking errors. Yet it is clear that the simple averaging method is inappropriate for producing mosaics of stage II rainfall fields. Since other sources of error exist in radar rainfall estimates, it is better to mosaic the radar estimates using the weighted-average method. For instance, bright band or hail contamination will produce an overestimate of the rainfall accumulation when the maximum-value method is used to produce mosaic images.

Based on this study, the hydrologic area surrounding Lake Altus is best served through use of Frederick WSR-88D rainfall accumulations that are combined with mesonet rain gauges to generate hourly SOA rainfall accumulations. Thus, it is worthwhile to reanalyze stage III data as provided by the ABRFC. On the other hand, it remains necessary to verify the impact of using STP and SOA rainfall analyses on the hydrologic modeling for the Lake Altus area.

Calibrated hydrologic models for the Lake Altus watershed can simulate hydrographs based on STP and SOA rainfall accumulations. The simulated hydrographs can be verified against available flow measurements to determine the impact of the rainfall analysis. Consequently, hydrologic models can be used as a forecast tool to improve the operation of the Lake Altus reservoir. Moreover, an increase in the lead time of a critical hydrologic forecast can be achieved using short-term rainfall forecasts accomplished by extrapolation of the radar rainfall rate and by numeric modeling (Warner et al. 1995; Wang and Warner 1988). The result would be a complete hydrometeorologic forecast system.

This research has been sponsored by the Oklahoma Water Resources Board through an Interagency Contract 960716 with the University of Oklahoma, with funding provided by the U.S. Bureau of Reclamation and by the Oklahoma Climatological Survey. Partial support also was provided by UCAR/COMET under Grant NA37RJ0203. The authors would like to thank the anonymous reviewers for the many comments, suggestions, and corrections that improved this manuscript substantially.

## REFERENCES

Austin, P. M., 1987: Relation between measured radar reflectivity and surface rainfall. *Mon. Wea. Rev.,* **115,** 1053–1070.

Baeck, M. L., J. A. Smith, and M. Steiner, 1997: Sampling features of NEXRAD precipitation estimates. Preprints, *13th Conf. on Hydrology,* Long Beach, CA, Amer. Meteor. Soc., J119–J122.

Bhagarva, M., and M. Danard, 1994: Application of optimum interpolation to the analysis of precipitation in complex terrain. *J. Appl. Meteor.,* **33,** 508–518.

Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts, 1995: The Oklahoma Mesonet: A technical overview. *J. Atmos. Oceanic Technol.,* **12,** 5–19.

Crawford, K. C., 1979: Considerations for the design of a hydrologic data network using multivariate sensors. *Water Resour. Res.,* **15,** 1752–1762.

Creutin, J. D., and C. Obled, 1982: Objective analysis and mapping techniques for rainfall fields: An objective comparison. *Water Resour. Res.,* **18,** 413–431.

Crum, T. D., and R. L. Alberty, 1993: The WSR-88D and the WSR-88D Operational Support Facility. *Bull. Amer. Meteor. Soc.,* **74,** 1669–1687.

Daley, R., 1991: *Atmospheric Data Analysis.* Cambridge University Press, 457 pp.

Fulton, F., D.-J. Seo, J. Breidenbach, and E. Johnson, 1995: Performance testing of the WSR-88D precipitation adjustment algorithm. *Proc. Third Int. Symp. on Hydrological Applications of Weather Radar,* São Paulo, Brazil, ABRH/IAHR, 109–117.

Gandin, L. S., 1963: *Objective Analysis of Meteorological Fields.* Translated by R. Harding, Israel Program for Scientific Translation, 242 pp.

Groisman, P. Y., and D. R. Legates, 1994: The accuracy of the United States precipitation data. *Bull. Amer. Meteor. Soc.,* **75,** 215–227.

Klazura, G. E., and D. A. Imy, 1993: A description of the initial set of analysis products available from the NEXRAD WSR-88D system. *Bull. Amer. Meteor. Soc.,* **74,** 1293–1311.

Krajewski, W. F., 1987: Cokriging radar-rainfall and rain gauge. *J. Geophys. Res.,* **92,** 9571–9580.

Legates, D. R., and T. L. DeLiberty, 1993: Measurement biases in the United States rain gauge network. *Water Resour. Bull.,* **29,** 855–861.

Morris, D. A., and P. R. Janish, 1996: The utility of mesoscale versus synoptic scale surface observations during the Lahoma hail and windstorm of 17 August 1994. Preprints, *18th Conf. on Severe Local Storms,* San Francisco, CA, Amer. Meteor. Soc., 60–64.

Morrissey, M. L., J. A. Maliekal, J. S. Greene, and J. Wang, 1995: The uncertainty of simple spatial averages using rain gauge networks. *Water Resour. Res.,* **31,** 2011–2017.

Pereira Fo., A. J., and K. C. Crawford, 1995: Integrating WSR-88D estimates and Oklahoma Mesonet measurements of rainfall accumulation: A statistical approach. Preprints, *27th Conf. on Radar Meteorology,* Vail, CO, Amer. Meteor. Soc., 240–242.

——, ——, and C. L. Hartzell, 1996: Statistical objective analysis scheme for improving WSR-88D rainfall estimates. Bureau of Reclamation, Rep. R-96-08, 2 Vols., Denver, CO, 98 pp. [Available from the National Technical Information Service, Operations Division, 5285 Port Royal Road, Springfield, VA 22161.]

Seo, D.-J., and E. R. Johnson, 1995: The WSR-88D precipitation processing subsystem: An overview and performance evaluation. *Proc. Third Int. Symp. on Hydrological Applications of Weather Radars,* São Paulo, Brazil, ABRH/IAHR, 222–231.

Shedd, R. C., and J. A. Smith, 1991: Interactive precipitation processing for the modernized National Weather Service. Preprints, *Seventh Int. Conf. on Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology,* New Orleans, LA, Amer. Meteor. Soc., 320–323.

Smith, A. J., D.-J. Seo, M. L. Baeck, and M. D. Hudlow, 1996: An intercomparison study of NEXRAD precipitation estimates. *Water Resour. Res.,* **32,** 2035–2045.

Wang, W., and T. Warner, 1988: Use of four-dimensional data assimilation by Newtonian relaxation and latent-heat forcing to improve a mesoscale-model precipitation forecast: A case study. *Mon. Wea. Rev.,* **116,** 2593–2613.

Warner, T. T., and Coauthors, 1995: Development and testing of a precipitation detection, nowcasting and prediction system at the National Center for Atmospheric Research. *Proc. Third Int. Symp. on Hydrological Applications of Weather Radars,* São Paulo, Brazil, ABRH/IAHR, 497–506.

WMO, 1970: The planning of meteorological station networks. WMO Tech. Note 111, 35 pp.