1. Introduction
Accurate measurements of precipitation are of great importance to hydrologic prediction. For predictions over large watersheds, gridded precipitation fields based on gauge data are one option (e.g., Daly et al. 1994, 2008; Thornton et al. 1997; Cosgrove et al. 2003; Hamlet and Lettenmaier 2005; Livneh et al. 2015; Newman et al. 2015), but they have limitations, including spatial heterogeneity in geographical coverage of gauges and a lack of gauges in remote regions and the developing world. Satellite-based precipitation products offer an alternative and have been the subject of accelerated development in recent decades, motivated in part by the launch of the U.S.–Japan Tropical Rainfall Measuring Mission (TRMM) in 1997 (Kummerow et al. 1998), and its successor, the Global Precipitation Measurement (GPM) mission, in 2014 (Hou et al. 2014).
Over the years, numerous studies have been performed to evaluate satellite-based precipitation products through comparisons with radar rainfall estimates (e.g., Stampoulis et al. 2013; Gebregiorgis et al. 2017), gauge observations (e.g., Mei et al. 2014; Prat and Nelson 2015; Miao et al. 2015), and merged radar and gauge rainfall estimates such as the National Centers for Environmental Prediction (NCEP) Stage IV (Lin and Mitchell 2005) products (e.g., Gourley et al. 2010; Mehran and AghaKouchak 2014). Radar precipitation estimates are subject to errors from, for example, radar calibration, beam blockage, range effects, and other causes (Hunter 1996). However, the main drawback in mountainous regions is a lack of low-level coverage due to terrain blockage (Maddox et al. 2002). As noted above, gauge-based precipitation products may not be reliable in areas with sparse gauge distributions (Henn et al. 2015, 2018). Given that gauges provide accurate point rainfall measurements while radar provides rainfall estimates over a much larger area, a great deal of effort has been devoted to development of techniques that merge the two data sources, for example, mean field bias (Berndt et al. 2014; Nikolopoulos et al. 2015), probability density function matching (Nikolopoulos et al. 2015; Hasan et al. 2016), and geostatistical approaches including kriging, cokriging, and kriging with external drift (KED; Velasco-Forero et al. 2009; Berndt et al. 2014; Rabiei and Haberlandt 2015). Although integration of radar and gauge data exploits both of their strengths, its performance may not be ideal in mountainous areas due to terrain blockage.
The most common techniques to infer spatial precipitation distributions from gauge observations over complex terrain include interpolation using fixed orographic precipitation gradients (OPGs; e.g., as used by Livneh et al. 2014) and interpolation based on residuals from climatological maps of precipitation normals such as the Parameter-Elevation Regressions on Independent Slopes Model (PRISM; Daly et al. 1994, 2008). Although PRISM generally captures spatial heterogeneity better than OPGs, its accuracy depends on the quality and spatial density of gauges used in the regressions, especially those at high elevations. Henn et al. (2015) estimated basin-mean precipitation by optimizing a precipitation multiplier and OPGs based on streamflow observations via Bayesian inference for several mountain basins with drainage areas ranging from 60 to 1181 km2 in the Sierra Nevada of California. Their inferred annual basin-mean precipitation differed by up to 30% from precipitation derived from PRISM for these relatively small basins. In some of the basins, they found that the PRISM-derived precipitation was too low given the regional climate. Another consideration is that most extreme precipitation events in the western United States are orographically enhanced, and many precipitation events are associated with atmospheric rivers (ARs) accompanied by strong low-level winds (e.g., Zhu and Newell 1994; Ralph et al. 2006; Dettinger et al. 2011; Neiman et al. 2011). These conditions lead to a nonstationary spatial distribution of mountain precipitation from event to event. For example, Lundquist et al. (2010) compared PRISM with the OPG method in the northern Sierra Nevada range. They found that both methods failed to capture spatial precipitation patterns adequately during years dominated by AR-induced precipitation events, when the OPG was strongly influenced by the height and strength of barrier jets.
Over the last decade, several studies have been performed that use remotely sensed snow data to help infer the spatial distribution of precipitation. Durand et al. (2008) and Girotto et al. (2014a,b) used satellite-derived snow-covered area (SCA) data, after conversion to snow water equivalent (SWE) using a snow depletion curve, to update the precipitation disaggregation weights in a land surface model via smoothing methods. Livneh et al. (2014) used the seasonal peak SWE, reconstructed from satellite SCA during the ablation season, as the climatology to infer the spatial distribution of cold season precipitation in several mountain basins tributary to the upper Colorado River. Satellite-based SCA, however, usually is highly uncertain in areas with dense forest canopy (e.g., Maurer et al. 2003). The NASA JPL Airborne Snow Observatory (ASO) products (Painter et al. 2016) have much higher spatial resolution and higher accuracy than previously available remote sensing methods (Lettenmaier 2017). Henn et al. (2016) used streamflow observations, ASO snow data, and gauge precipitation to infer basin-mean precipitation by estimating the OPG in a hydrology model in the upper Tuolumne River basin in the Sierra Nevada of California via a Bayesian parameter inference method. They focused on basin-mean precipitation instead of the spatial distribution of precipitation.
Launched on 27 February 2014, GPM is a joint NASA–JAXA (U.S.–Japan) satellite mission intended to provide the next generation of precipitation estimates globally. A major advance in GPM relative to its predecessor TRMM is that its orbit allows observations of AR events, most of which have landfalls too far north to be tracked by TRMM. The Olympic Mountains Experiment (OLYMPEX) was a GPM ground validation study that took place on the Olympic Peninsula of Washington during winter 2015/16 (Houze et al. 2017). One goal of the OLYMPEX campaign was to better assess precipitation products based on GPM and other satellites, especially in a cold season environment where orographic factors exert strong controls on precipitation. The observational resources deployed included rain gauges, radars, aircraft, and other meteorological sensors (Houze et al. 2017).
Orographic precipitation in the Olympics has been examined previously by a number of modeling studies (e.g., Barros and Lettenmaier 1993; Colle and Mass 1996; Leung and Qian 2003). The fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5) has been run at resolutions of 4 km [recently increased to 1.33 km using the Weather Research and Forecasting (WRF) Model] over the Pacific Northwest since 1997 (Mass et al. 2003). Anders et al. (2007) and Minder et al. (2008) evaluated the performance of these products over the Olympics. They found that the model simulated the windward ridge–valley pattern of orographic precipitation well at seasonal time scales, but there were major errors for individual events. They attributed this to inaccurate initial and boundary conditions, which were difficult to improve because many of the heavy precipitation events were associated with ARs originating far from the midlatitudes. Furthermore, numerical weather prediction (NWP) models generally are better at simulating synoptically forced rainfall, but are less suited to simulating convective rainfall events and snowfall (Ebert et al. 2007; Minder et al. 2008).
Here, we attempt to develop the best available precipitation product for the evaluation of GPM-based precipitation products such as NASA’s Integrated Multi-satellitE Retrievals for GPM (IMERG) product (Huffman et al. 2015) over the OLYMPEX domain, which for our purposes we define as the Olympic Peninsula plus the Chehalis River basin (see Fig. 1). The availability of multiple observational resources from OLYMPEX allows us to address in detail the following question: What is the ability of IMERG products to estimate precipitation rates in cold seasons over complex terrain, and in particular, over the OLYMPEX domain?
To answer this question, we derive a daily precipitation product at 1/32° spatial resolution from October 2015 to April 2016. We use different strategies to estimate precipitation for low and high elevations. For the former, where radar precipitation estimates are of better quality and rain gauges are relatively abundant, we estimate precipitation by merging radar and gauge precipitation products.
In contrast to the lowlands, much of the interior of the Olympic Mountains is above 500 m elevation with substantial winter snow accumulations, but few precipitation observations. For these areas where radar coverage is restricted by terrain blockage, we infer precipitation using the Variable Infiltration Capacity (VIC) land surface model (Liang et al. 1994) driven by gridded observations and backward adjusted using SWE estimated from snow depth measurements produced by two flights of ASO (Painter et al. 2016). We then merge the SWE-based winter precipitation estimate for the high-elevation areas with the radar–gauge product for the lowlands and evaluate IMERG precipitation over our entire domain.
Our product is based on a combination of observations of various types and modeling and is intended to provide a best estimate of precipitation over the OLYMPEX domain. It can be used by algorithm developers to meet the primary objective of OLYMPEX, which is ground validation of GPM products. In developing our product, though, we have made use of ASO snow products for the first time in a maritime mountain environment with dense forest canopies and have integrated (low elevation) precipitation data from a number of sources, including different gauge networks and NWS precipitation radars.
2. Study region and data
The Olympic Peninsula is situated in the northwest corner of Washington State, bounded by the Pacific Ocean to the west, the Strait of Juan de Fuca to the north, and Puget Sound to the east (see Fig. 1). Elevations range from sea level to 2427 m at the top of Mt. Olympus in the interior of the Peninsula. Precipitation in this area is winter dominant, with over 80% of the annual total (on average over our domain) falling between October and April. The southwestern and western slopes of the Olympic Mountains are covered by dense temperate rain forest and receive plentiful winter precipitation due to orographic enhancement of moist air accompanied by strong southwesterly winds. Annual precipitation here ranges from about 2500 mm in the lowlands to over 5000 mm at the higher mountain elevations. In contrast, the northeastern side of the mountains receives much less precipitation due to the rain shadow effect, with annual precipitation as low as 400 mm in some parts of our domain.
The study region is within the range of two NOAA WSR-88D radars. The Langley Hill site on the Washington Coast provides coverage for much of the west side of our domain, although Langley Hill radar is blocked by terrain over most of the interior of the Olympic Mountains (see Fig. 1). A small portion of the northeastern part of our domain is within the range of the Camano Island WSR-88D site in the northern Puget Sound.
The radar product we used is the radar quantitative precipitation estimation (QPE) product (denoted as Q3GC) from NOAA’s National Severe Storms Laboratory (NSSL) Multi-Radar Multi-Sensor (MRMS) system (Zhang et al. 2011, 2016). This product is bias corrected using NOAA Hydrometeorological Automated Data System (HADS) gauges. It is a national mosaic product, so both the Langley Hill and Camano Island radar sites are incorporated. NSSL also generates a Mountain Mapper QPE (Q3MM) product, using the PRISM monthly climatology as a background precipitation distribution map to interpolate HADS precipitation, for areas with poor radar quality (Zhang et al. 2016). We combined these two products as a baseline into which we incorporated additional (i.e., other than HADS) gauge observations that were available to us during the OLYMPEX experiment period.
In addition to the HADS gauges embedded in the NSSL Q3GC and Q3MM products, we used gauge precipitation from NOAA’s Cooperative Observer Program (COOP) network; the Community Collaborative Rain, Hail and Snow Network (CoCoRaHS); Remote Automatic Weather Stations (RAWS); the Automated Surface Observing System (ASOS); SNOTEL; and gauges installed during the OLYMPEX period (denoted as “OLYMPEX gauges”), mainly in the Quinault and Chehalis River basins (see Fig. 1 for locations). There is some overlap between HADS and the RAWS network; during gauge selection, we checked the gauge list and eliminated stations already included in the HADS network.
Most stations are located at elevations lower than about 500 m. Much of the interior Olympic Mountains lies above 500 m elevation with substantial winter snow cover, but with few precipitation measurements [four Natural Resources Conservation Service (NRCS) SNOTEL sites measure SWE using snow pillows, in addition to precipitation and temperature]. We obtained snow depth maps at 3-m resolution for the interior of the Olympic Peninsula from two ASO flights on 8–9 February 2016 and 29–30 March 2016. In addition, we obtained snow depth and density measurements on 8 February 2016 and 7 April 2016 from nine snow monitoring sites established for the OLYMPEX campaign (for details, see Currier et al. 2017). COOP, ASOS, and SNOTEL stations report daily minimum and maximum temperature in addition to precipitation. SNOTEL temperature data were biased warm at cold temperatures, and the biases were corrected based on Currier et al. (2017). In addition, we used hourly temperature data from 26 sites, which had HOBO U23 Pro v2 temperature/relative humidity sensors installed during the OLYMPEX period (see Fig. 2). Wind speed data, required as a VIC forcing, were obtained from the NRCS Waterhole SNOTEL site, the only one of the four SNOTEL sites at which wind measurements were available.
3. Methods
a. Precipitation estimation at lower elevations
As noted above, precipitation radar suffers from severe terrain obstruction over most of the Olympic Mountains. NSSL has developed a mosaic Radar Quality Index (RQI) product to indicate the potential uncertainties of radar precipitation related to beam obstruction due to terrain blockage and intersection with the melting layer. Its value ranges from 0 (high uncertainty) to 1 (low uncertainty). To obtain realistic initial precipitation fields over the entire domain, we first replaced Q3GC pixels with RQI lower than 0.85 (which essentially delineated the mountain area at elevations higher than 500 m) with NSSL’s Q3MM. For better spatial continuity, we rescaled Q3GC pixels with RQI between 0.85 and 0.90 by linearly interpolating between the seasonal means of Q3GC and Q3MM.
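The RQI-based replacement and blending can be expressed compactly. The following is a minimal sketch assuming the daily Q3GC and Q3MM grids, the RQI grid, and the two seasonal-mean grids are available as NumPy arrays; the exact rescaling applied in the transitional RQI range admits more than one reading, and the function and variable names here are ours, not part of the MRMS system.

```python
import numpy as np

def blend_q3gc_q3mm(q3gc, q3mm, rqi, q3gc_seasonal, q3mm_seasonal,
                    low=0.85, high=0.90):
    """Replace poor-quality radar pixels with Q3MM and blend transitional ones.

    q3gc, q3mm                   : daily precipitation grids (mm)
    rqi                          : Radar Quality Index grid (0 = high uncertainty,
                                   1 = low uncertainty)
    q3gc_seasonal, q3mm_seasonal : seasonal-mean precipitation grids (mm)
    low, high                    : RQI thresholds (0.85 and 0.90 in the text)
    """
    merged = q3gc.copy()

    # Pixels with poor radar quality take the Mountain Mapper (Q3MM) value.
    poor = rqi < low
    merged[poor] = q3mm[poor]

    # Transitional pixels: rescale Q3GC so that its seasonal mean grades
    # linearly from the Q3MM seasonal mean (at RQI = 0.85) to the Q3GC
    # seasonal mean (at RQI = 0.90), for spatial continuity.
    trans = (rqi >= low) & (rqi < high)
    w = (rqi[trans] - low) / (high - low)
    target_mean = w * q3gc_seasonal[trans] + (1.0 - w) * q3mm_seasonal[trans]
    scale = np.where(q3gc_seasonal[trans] > 0,
                     target_mean / np.maximum(q3gc_seasonal[trans], 1e-6), 1.0)
    merged[trans] = q3gc[trans] * scale
    return merged
```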
Once we obtained a (subjectively determined) plausible initial precipitation field, we augmented the merged product with additional gauges in the region. We incorporated gauges that produced useful data during at least 50% of the period from 1 October 2015 to 30 April 2016. A total of 120 gauges met this criterion, including 7 COOP gauges, 77 CoCoRaHS gauges, 1 RAWS gauge, 10 ASOS gauges, 21 OLYMPEX gauges, and 4 SNOTEL gauges (see Fig. 1). Most of these gauges are below 500 m elevation. We excluded the HADS gauges because they had already been incorporated into the Q3GC and Q3MM products. We considered daily precipitation as occurring between 0000 and 2400 Pacific standard time (PST; UTC − 0800). For COOP and CoCoRaHS gauges with only daily records, we apportioned the daily totals to the 0000–2400 PST day according to their observation times, using hourly Q3GC as the temporal weighting. The rest of the gauges had hourly records, so they were directly accumulated to daily in PST. OLYMPEX rain gauges were quality controlled by the NASA OLYMPEX group. Because these NASA gauges were dual-platform tipping buckets, we used the average of each pair of gauges if they both had no quality flags for malfunctioning or possible ice/snow and if their daily precipitation differed by less than 10% for days when precipitation exceeded 10 mm; otherwise, the precipitation data were marked as invalid. We performed a simple quality control (QC) for all gauges to screen for outliers, including improbable zero values and unusually high daily values, by comparing each gauge with the neighboring four gauges. In particular, for any day, if the precipitation at the target gauge was zero but its surrounding four gauges all showed significant precipitation, the target gauge was flagged as missing. If precipitation at a gauge exceeded double the maximum of its four surrounding gauges, it was likewise flagged as missing.
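The neighbor-based outlier screen admits a simple illustration. The sketch below assumes the four nearest gauges have already been identified for the target gauge; the threshold defining "significant precipitation" at the neighbors is an assumed value, since the text does not specify one.

```python
import numpy as np

def qc_against_neighbors(target, neighbors, significant=5.0):
    """Flag suspect daily values at one gauge by comparison with its neighbors.

    target      : 1-D array of daily precipitation at the gauge (mm)
    neighbors   : 2-D array (4, ndays) of daily precipitation at the four
                  nearest gauges (mm)
    significant : neighbor precipitation (mm) treated as "significant";
                  an assumed value, not stated in the text
    Returns a copy of `target` with suspect days set to NaN.
    """
    checked = target.astype(float).copy()
    neighbor_min = np.nanmin(neighbors, axis=0)
    neighbor_max = np.nanmax(neighbors, axis=0)

    # Improbable zeros: no rain at the target while all four neighbors
    # report significant precipitation.
    suspect_zero = (checked == 0) & (neighbor_min > significant)

    # Unusually high values: the target exceeds double the neighbor maximum.
    suspect_high = checked > 2.0 * neighbor_max

    checked[suspect_zero | suspect_high] = np.nan
    return checked
```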
We used the conditional merging (CM) method of Sinclair and Pegram (2005) to integrate the radar and gauge rainfall estimates. This geostatistical approach maintains the mean field characteristics from the gauges while preserving the spatial rainfall pattern from radar and has been found to be computationally efficient and robust (Berndt et al. 2014). We used the CM method because it outperformed KED in two recent studies (Berndt et al. 2014; Rabiei and Haberlandt 2015). The first step in CM is to interpolate the gauge observations, and the radar estimates sampled at gauge locations, to the grid points. For this purpose, we used the synagraphic mapping system (SYMAP) algorithm (Shepard 1984). We then added the deviation of the observed radar field from the interpolated radar values to the rain gauge interpolation field at each grid point. Because of the limited number of stations, we assessed the accuracy of the merging method by systematically removing individual stations one at a time and evaluating the merged product at the removed station’s grid cell against that station.
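A minimal sketch of conditional merging is given below, with a simple inverse-distance-weighting interpolator standing in for SYMAP; it illustrates the structure of the method rather than reproducing our implementation, and all names are illustrative.

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0):
    """Simple inverse-distance interpolation standing in for SYMAP."""
    d = np.hypot(xy_grid[:, None, 0] - xy_obs[None, :, 0],
                 xy_grid[:, None, 1] - xy_obs[None, :, 1])
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)

def conditional_merge(xy_gauges, gauge_precip, radar_at_gauges,
                      xy_grid, radar_field):
    """Conditional merging (Sinclair and Pegram 2005), in outline.

    1. Interpolate the gauge observations to the grid.
    2. Interpolate the radar values sampled at gauge locations to the grid.
    3. Add the deviation of the full radar field from (2) to (1), so the
       result honors the gauges while keeping the radar spatial pattern.
    """
    gauge_field = idw(xy_gauges, gauge_precip, xy_grid)
    radar_from_gauges = idw(xy_gauges, radar_at_gauges, xy_grid)
    merged = gauge_field + (radar_field - radar_from_gauges)
    return np.maximum(merged, 0.0)  # precipitation cannot be negative
```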
b. Precipitation estimation at high elevations
1) Gridded temperature
We selected 335 grid cells with long-term average 1 April SWE over 10 mm as our modeling domain. We used an interpolation method described by Molotch (2009) to grid temperature. There were 4 SNOTEL sites and 16 HOBO sites at elevations higher than 650 m in our domain (see Fig. 2). We first estimated the average daily maximum and minimum temperature at each grid cell within the modeling domain by linear regression between the observations at these 20 sites and their elevations. At each station, we calculated residuals by subtracting the predicted average daily values from the daily observations. These residuals were interpolated to the model grid by SYMAP. We obtained gridded temperature by adding the gridded residuals to the predicted average temperature at each grid cell. We assessed the accuracy of this interpolation technique by removing individual stations one at a time (see Fig. S1 in the online supplemental material for results), similar to the method described in section 3a.
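The temperature gridding reduces to an elevation regression plus residual interpolation, as sketched below. The sketch assumes a SYMAP-style interpolator is supplied as a callable (`interp_residuals`); all names are illustrative.

```python
import numpy as np

def grid_temperature(station_elev, station_temp, grid_elev, interp_residuals):
    """Elevation regression plus residual interpolation, after Molotch (2009).

    station_elev     : (nsta,) station elevations (m)
    station_temp     : (nsta, ndays) daily max or min temperature (deg C)
    grid_elev        : (ncell,) grid-cell elevations (m)
    interp_residuals : callable mapping station residuals (nsta,) to the
                       grid (ncell,), e.g. a SYMAP-style interpolator
    """
    # Regress the period-average station temperature against elevation.
    station_mean = station_temp.mean(axis=1)
    slope, intercept = np.polyfit(station_elev, station_mean, 1)

    # Predicted period-average temperature at stations and grid cells.
    pred_station = slope * station_elev + intercept
    pred_grid = slope * grid_elev + intercept

    ndays = station_temp.shape[1]
    gridded = np.empty((grid_elev.size, ndays))
    for day in range(ndays):
        # Daily residuals (observation minus predicted average), interpolated
        # to the grid and added back to the predicted average.
        residuals = station_temp[:, day] - pred_station
        gridded[:, day] = pred_grid + interp_residuals(residuals)
    return gridded
```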
2) ASO SWE maps
Because of the relatively dense forest canopy over much of the OLYMPEX domain, ASO lidar ground point densities under canopy in this area (especially at intermediate elevations) were lower than at other ASO study sites, which increased the vegetation-induced errors in snow depth retrievals (K. Bormann et al. 2018, unpublished manuscript). The ground point densities decreased nonlinearly with increased canopy density as well as vegetation heights, with great reductions occurring after tree heights exceeded 5 m (K. Bormann et al. 2018, unpublished manuscript). However, the ASO team is working on algorithm adjustments to mitigate these impacts to the degree possible.
We evaluated the possible underestimation of SWE under canopy. We first calibrated VIC snow roughness lengths and rain–snow transition temperatures at SNOTEL sites [see section 3b(3) for details]. We designated “open grid cells” as grid cells that are free of vegetation. We identified the open grid cells at the 3-m ASO scale and adjusted precipitation and temperature lapse rates to force the VIC SWE to match that inferred from ASO. We then took essentially the same precipitation and temperature and forced the VIC snow model for forested pixels with the Storck (1999) parameterization and compared the inferred undercanopy SWE with the VIC estimates. If they did not match to within a tolerance that we specified (see SM3 in the supplemental material), we adjusted (mostly increased) the undercanopy SWE using a Bayesian adjustment procedure.
We implemented the undercanopy SWE adjustment procedure as follows. We first used an index station method (motivated by the large number of 3-m grid cells) to generate a 3-m snow density map in order to convert ASO snow depth to SWE. Within each 1/32° grid, we first divided the 3-m snow depth data into multiple categories by elevation bands with an interval of 50 m and into slope bands with an interval of 15°. We calculated the distribution of snow depths in open areas within each category and selected some index pseudostations from each 0.1 quantile for modeling (about 300 points within each 1/32° grid). For each index station, we ran the VIC snow model with precipitation and temperature forcings lapsed by elevation, and we adjusted the precipitation lapse rate and temperature lapse rates to force the simulation to match the observed snow depth values, from which we obtained snow densities at the index pseudostations. We bias corrected simulated densities by developing a linear relationship between snow depth and density simulation errors at 13 observation sites (see Fig. S2) using in situ snow density measurements taken on 8 February 2016 and 7 April 2016. Since ASO surveys were conducted on 9 February 2016 and 29 March 2016, the density change from 29 March 2016 to 7 April 2016 was subtracted from the in situ measurements on 7 April 2016 before its use, by taking the difference in snow density observations at the nearest SNOTEL sites between these two dates.
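The selection of index pseudostations within a 1/32° cell amounts to binning the open-area 3-m pixels by elevation and slope and sampling the snow depth distribution within each bin, roughly as sketched below; the decile targets reflect our reading of "each 0.1 quantile," and the names are illustrative.

```python
import numpy as np

def select_index_stations(depth, elev, slope, is_open,
                          elev_band=50.0, slope_band=15.0):
    """Pick index pseudostations for one 1/32-degree cell.

    depth, elev, slope : 1-D arrays for the 3-m pixels inside the cell
    is_open            : boolean mask of open (vegetation-free) pixels
    Returns indices of pixels that sample the open-area snow depth
    distribution within each elevation/slope category.
    """
    idx_open = np.flatnonzero(is_open & np.isfinite(depth))
    e_bin = np.floor(elev[idx_open] / elev_band).astype(int)
    s_bin = np.floor(slope[idx_open] / slope_band).astype(int)
    deciles = np.arange(0.05, 1.0, 0.1)

    chosen = []
    for key in set(zip(e_bin, s_bin)):
        members = idx_open[(e_bin == key[0]) & (s_bin == key[1])]
        # Pick the pixel closest to each decile of the open-area snow depth
        # distribution in this elevation/slope category.
        targets = np.quantile(depth[members], deciles)
        for t in targets:
            chosen.append(members[np.argmin(np.abs(depth[members] - t))])
    return np.unique(chosen)
```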
With this bias-corrected snow density map, we converted ASO snow depth maps to ASO SWE maps. Given the changes of snow density in the bias correction, we ran the VIC snow model for each index pseudostation and slightly adjusted the precipitation and temperature lapse rates that we had previously obtained to force the simulations to match the ASO SWE values. After our simulations matched the ASO SWE at index stations in the open areas, we reran the model with the canopy interception parameter obtained from field experiments reported by Storck (1999).
Here we assumed that the distribution of SWE in a vegetated grid cell was well represented by the SWE in the surrounding (within a distance of 30 m) vegetated grid cells. For the vegetated pixels, if the ASO SWE fell in the lower 5% of the distribution of the surrounding VIC estimates within a 30-m distance, we adjusted the ASO SWE using a Bayesian conditional probability approach similar to that of Coccia et al. (2015). For a 3-m undercanopy grid cell that we deemed to require adjustment, we estimated its rank distribution (after normal quantile transformation) conditioned on the VIC undercanopy simulations of its surrounding grid cells within a 30-m distance. We randomly drew a value from the rank distribution and then mapped it back to its original ASO distribution, which was used to update the suspect grid cell. We conducted a sensitivity test on the threshold value, and the results are shown in section SM3 of the supplemental material.
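A heavily simplified sketch of this conditional adjustment follows. It approximates the Coccia et al. (2015)-style conditioning by a linear regression in normal-score space between the surrounding ASO and VIC values; it illustrates the idea rather than the exact procedure, and the function and threshold names are ours.

```python
import numpy as np
from scipy import stats

def adjust_undercanopy_swe(aso_swe, vic_swe, neighbor_aso, neighbor_vic,
                           rng, low_pct=5.0):
    """Adjust one suspect under-canopy 3-m cell given its 30-m neighborhood.

    aso_swe      : ASO SWE at the suspect cell
    vic_swe      : VIC-simulated under-canopy SWE at the same cell
    neighbor_aso : ASO SWE at vegetated neighbors within 30 m
    neighbor_vic : VIC SWE at the same neighbors
    rng          : numpy random Generator, e.g. np.random.default_rng(0)
    """
    # Only adjust cells whose ASO SWE falls in the lower tail (5% here)
    # of the surrounding VIC distribution.
    if aso_swe > np.percentile(neighbor_vic, low_pct):
        return aso_swe

    # Normal-quantile transform (rank -> standard normal score).
    def nqt(x):
        ranks = stats.rankdata(x) / (len(x) + 1.0)
        return stats.norm.ppf(ranks)

    z_aso, z_vic = nqt(neighbor_aso), nqt(neighbor_vic)

    # Conditional distribution of transformed ASO SWE given transformed VIC
    # SWE, approximated by linear regression in normal-score space.
    slope, intercept, _, _, _ = stats.linregress(z_vic, z_aso)
    p_vic = np.clip(stats.percentileofscore(neighbor_vic, vic_swe) / 100.0,
                    0.01, 0.99)
    cond_mean = intercept + slope * stats.norm.ppf(p_vic)
    cond_std = max(np.std(z_aso - (intercept + slope * z_vic)), 1e-3)

    # Draw a normal score and map it back to the neighborhood ASO distribution.
    z_draw = rng.normal(cond_mean, cond_std)
    return float(np.quantile(neighbor_aso, stats.norm.cdf(z_draw)))
```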
After we adjusted the ASO 3-m SWE maps, we aggregated them to 1/32° spatial resolution. About one-third of the grid cells in our domain had no ASO observation on 8 February 2016 (the flight was curtailed over part of the domain when a brief weather window closed; see Fig. S4). We filled in these grid cells based on a linear regression between ASO estimates for cells observed on both 8 February and 29 March, which had a correlation of 0.93.
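The gap filling itself is a single linear regression between the two survey dates, for example (illustrative names; the slope and intercept are fit to the jointly observed cells):

```python
import numpy as np

def fill_missing_feb_swe(swe_feb, swe_mar):
    """Fill 1/32-degree cells missing the 8 Feb survey from the 29 Mar survey.

    swe_feb, swe_mar : 1-D arrays of SWE (m); NaN where 8 Feb has no coverage
    """
    observed = np.isfinite(swe_feb) & np.isfinite(swe_mar)
    # Regress 8 Feb SWE on 29 Mar SWE over jointly observed cells
    # (the text reports a correlation of 0.93 between the two dates).
    slope, intercept = np.polyfit(swe_mar[observed], swe_feb[observed], 1)
    filled = swe_feb.copy()
    missing = ~np.isfinite(swe_feb) & np.isfinite(swe_mar)
    filled[missing] = np.maximum(slope * swe_mar[missing] + intercept, 0.0)
    return filled
```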
3) Precipitation multiplier
With an initial precipitation field, a gridded temperature field at higher elevations, and observed SWE maps on 8 February and 29 March 2016, we were able to infer precipitation using the VIC model driven by observed forcings and adjusted by ASO SWE. We first calibrated VIC snow model parameters at the SNOTEL grids. SNOTEL sites not only measure snow variables, but also provide precipitation and temperature observations. We used the forcings (precipitation and temperature) from SNOTEL observations to calibrate the rain–snow temperature thresholds and snow roughness lengths in VIC, which are sensitive parameters as suggested by Andreadis et al. (2009), by minimizing the SWE simulation error at the four SNOTEL sites. Currier et al. (2017) note that these sites were extremely sensitive to the longwave parameterization, so we calibrated the VIC longwave algorithm based on observations at the Snoqualmie Pass, Washington, energy balance tower (approximately 100 km from the OLYMPEX domain; Wayand et al. 2015). In addition, we calibrated the maximum interception capacity by comparing simulated SWE averaged over the 1/32° grid cell with ASO SWE, after the simulated SWE in the corresponding snowband had been matched to the SNOTEL SWE.
We used a moving window of 3 × 3 grid cells at 1/32° resolution to search for optimal precipitation factors for the two periods ending on the 8 February and 29 March 2016 ASO survey dates. We varied the precipitation multiplier between 0.1 and 2.0 at an interval of 0.01 to force the simulated SWE to match the ASO SWE. These searches were conducted separately for the two periods, given that the spatial patterns of precipitation varied with time. We used the best result from the first period as the initial condition for the second period.
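Conceptually, the multiplier search is a one-dimensional brute-force optimization within each 3 × 3 window, as in the sketch below; `run_snow_model` stands in for a VIC snow-model run with all non-precipitation forcings held fixed and is an assumed interface rather than actual VIC code.

```python
import numpy as np

def find_precip_multiplier(run_snow_model, precip, aso_swe,
                           factors=np.arange(0.1, 2.01, 0.01)):
    """Brute-force search for the multiplier that best matches ASO SWE.

    run_snow_model : callable(precip_scaled) -> simulated SWE on the ASO
                     survey date, averaged over the 3 x 3 window
    precip         : initial precipitation series for the window (mm)
    aso_swe        : ASO SWE averaged over the window on the survey date (m)
    """
    best_factor, best_err = 1.0, np.inf
    for f in factors:
        err = abs(run_snow_model(f * precip) - aso_swe)
        if err < best_err:
            best_factor, best_err = f, err
    return best_factor
```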
4. Results
a. Precipitation estimation at low elevations
Our radar and gauge merged precipitation map is shown in Fig. 3b. The merged product maintains the mean field characteristics from the gauges. Because we included more gauges in the mountain areas than does the Q3MM product, precipitation in these areas differs from Q3MM (it is mostly increased). Figure 4 shows the assessment of the radar and station precipitation merging technique. Most of the predicted precipitation at station locations matches the observations relatively well, with correlations mostly higher than 0.9 and normalized RMSE (NRMSE) mostly smaller than 0.8 over the period from 1 October 2015 to 30 April 2016. Predicted precipitation at elevations higher than 500 m, however, shows greater differences from the (relatively few) stations, with correlations ranging from 0.86 to 0.92 and NRMSE ranging from 0.51 to 0.86.
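For reference, the correlation and NRMSE statistics can be computed as in the sketch below; the normalization of the RMSE (here by the mean observed daily precipitation) is an assumption, since the text does not state it explicitly.

```python
import numpy as np

def eval_metrics(pred, obs):
    """Correlation and normalized RMSE for one gauge's daily series.

    NRMSE is taken here as the RMSE divided by the mean observed daily
    precipitation (an assumed normalization).
    """
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    valid = np.isfinite(pred) & np.isfinite(obs)
    pred, obs = pred[valid], obs[valid]
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    corr = np.corrcoef(pred, obs)[0, 1]
    return corr, rmse / obs.mean()
```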
b. Precipitation estimation at high elevations
The results of the undercanopy SWE adjustment averaged over 1/32° grid cells are shown in Fig. S3 and indicate increases of up to 0.23 m in some grid cells (the effects of the adjustment generally are much larger at the 3-m ASO pixel scale). We aggregated the 3-m ASO SWE maps to 1/32° spatial resolution in order to adjust the gridded precipitation patterns (see Fig. S5). Before the precipitation pattern adjustment, we calibrated the VIC snow model (snow roughness and rain–snow transition temperature) at the four SNOTEL grids using SNOTEL observed precipitation; the calibration results are shown in Fig. 5. We compared VIC SWE (with canopy, whose percentage is based on a land cover map and percent tree canopy map from the National Land Cover Database) with ASO 1/32° SWE on the two ASO survey dates, and SNOTEL observations with VIC SWE in the corresponding snowbands with canopy removed; both comparisons matched relatively well.
Figure 6 shows the precipitation adjustment factor results. For the first period ending on 8 February 2016, adjustments based on ASO SWE generally decreased precipitation, with factors ranging from 0.49 to 1.33. The mean precipitation factor averaged across the modeling grids was 0.86. For the second period, the adjustment factors increased along the southwestern and western slopes, ranging from 0.38 to 1.75 over all modeling grids, with a mean of 1.23. The accumulated precipitation averaged over the mountain areas during the period up to the first ASO flight (from 1 October 2015 to 8 February 2016) was 2116 mm and 2482 mm with and without adjustment, respectively; for the second period (from 9 February 2016 to 30 April 2016), mountain precipitation was 1291 mm with and 1034 mm without adjustment. These are averages; differences at individual grid cells were considerably larger in many cases. Overall, the effect of utilizing the ASO data on the precipitation estimation in the mountain area of the OLYMPEX domain was substantial.
We evaluated the adjusted precipitation factors at 11 snow depth sites (see Fig. 7). The simulations matched ASO snow depth measurements reasonably well at 1/32° spatial resolution. A few sites (such as Mount Hopper and Mount Seattle East and West) show greater bias on 8 February 2016, possibly due to errors in density estimation and the limited spatial representativeness of the in situ snow sites. For example, Mount Hopper had a snowdrift, but the snow pole was outside of the snowdrift. We also compared simulated time series of snow depth in the corresponding snowbands with snow pole measurements (see Fig. 7). Some of the snow poles bent after 23 December 2015 and therefore had greater uncertainty (Currier et al. 2017). The simulations matched the pole observations plausibly well but missed the late December storm, and the model results generally have smaller temporal variations than the observations, possibly due to the VIC new snow density algorithm.
c. Evaluation of the IMERG product
As described in section 1, our ultimate goal was to compare IMERG precipitation with our “best estimate” precipitation fields over the OLYMPEX domain. To do so, we aggregated our final daily precipitation map (Fig. 6f) from 1/32° to the IMERG 0.1° spatial resolution. We compared spatial patterns of the IMERG satellite-only product (version 04A), as well as the corresponding Japanese algorithm Global Satellite Mapping of Precipitation (GSMaP; Okamoto et al. 2005; Kubota et al. 2007; Aonashi et al. 2009; Ushio et al. 2009) satellite-only product (version 04B), with our reconstructed product on a monthly basis over winter 2015/16 (Fig. 8). As the figure shows, over the northern part of the OLYMPEX domain, IMERG misses many of the orographic precipitation patterns that are evident in our composite product. GSMaP performs better than IMERG in capturing orographic precipitation patterns in most of the months, but with a shift toward the west. We divided our study domain into six subregions according to their climatology and precipitation patterns (see Fig. 9). We evaluated storm interarrival times for hourly IMERG and GSMaP data, relative to hourly Q3GC (with temporal information mainly from radar) at lower elevations and Q3MM (with temporal information from HADS gauges) at higher elevations (see Fig. 10). For this purpose, hourly precipitation over a subdomain was treated as zero if it was below 0.1 mm. Both IMERG and GSMaP capture the temporal frequency of storms relatively well in most subregions except for region IV(b) (eastern Olympic Mountains interior), which may be partly due to the small number of gauges over this area incorporated in Q3MM.
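The interarrival-time calculation can be expressed compactly. The sketch below applies the 0.1 mm threshold noted above and defines interarrival time as the gap between onsets of successive wet spells, which is one plausible reading; names are illustrative.

```python
import numpy as np

def storm_interarrival_hours(hourly_precip, wet_threshold=0.1):
    """Hours between onsets of successive wet spells in a subdomain series.

    hourly_precip : 1-D array of subdomain-mean hourly precipitation (mm);
                    hours below `wet_threshold` (0.1 mm in the text) are dry.
    """
    wet = np.asarray(hourly_precip, float) >= wet_threshold
    previous = np.concatenate(([False], wet[:-1]))
    # Onset of a wet spell: a wet hour preceded by a dry hour.
    onsets = np.flatnonzero(wet & ~previous)
    return np.diff(onsets)
```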
In addition, we compared the three products on daily and seasonal scales (Figs. 11 and 12). GSMaP shows a slightly better match with our composite product than IMERG on daily and seasonal scales. The underestimation by both IMERG and GSMaP is larger in mountainous region IV, especially in the mountainous interior of the OLYMPEX domain, reaching 63% and 59%, respectively, in region IV(c) on a seasonal basis. For the low-elevation areas, IMERG and GSMaP errors are smaller, with the smallest errors, on a seasonal percentage basis, in inland region III, where there is less winter precipitation than in coastal region I. On a monthly percentage basis, IMERG has smaller underestimation errors in December and January. IMERG and GSMaP underestimate precipitation over the entire domain from 1 October 2015 to 30 April 2016 by 41% and 28%, respectively, compared to our composite product. The underestimation is more pronounced at high elevations (region IV), with percentages of 57% and 48%, respectively.
5. Discussion
Previous studies have shown that satellite-based precipitation products such as CMORPH, PERSIANN, and TRMM 3B42 tend to underestimate high precipitation rates over the contiguous United States (e.g., Stampoulis et al. 2013; Mei et al. 2014). Underestimation usually occurs in winter over mountainous regions (e.g., Tian et al. 2009). AghaKouchak et al. (2012) showed that satellite precipitation products had higher systematic errors in winter than in summer, and the errors seemed to be proportional to rain rate. Winter precipitation over the OLYMPEX domain is primarily orographically influenced stratiform (Zagrodnik et al. 2018). Most satellite-based precipitation estimates (including the IMERG component algorithms) are based on a combination of passive microwave (PMW) and infrared (IR) radiometers, and some (including IMERG) also use the GPM dual-frequency precipitation radar as a calibrator. IR schemes do not perform well for stratiform clouds (Lettenmaier et al. 2015). On the other hand, PMW retrieval techniques rely mostly on indirect-scattering-based schemes and tend to underestimate precipitation generated in shallow orographic systems due to weak ice scattering signatures (Shige et al. 2013). PMW sensors also are unable to reliably measure frozen precipitation over snow- or ice-covered areas (Nasrollahi 2015). This is a problem expected to affect all satellite precipitation products, not just IMERG. High-resolution numerical modeling could potentially help to adjust satellite-based precipitation products (see, e.g., Nikolopoulos et al. 2015). For example, Currier et al. (2017) found that high-resolution WRF precipitation simulations were relatively unbiased (over a set of sites) in the interior of the Olympics. However, we preferred not to incorporate NWP or other atmospheric model results in our estimates, so that they are at least approximately independent of this class of models and can be used for model diagnosis, in addition to the evaluation of satellite-based precipitation estimates, which is our primary focus.
Our precipitation estimates are affected by errors in both the observations and (for the interior of the domain) the model parameters we used in our snowpack reconstructions, though those errors should be mitigated by the adjustment procedure discussed in section 3b(2). The estimates are also strongly affected by the estimates of temperature lapse rates that are input to our model snowpack reconstructions. Radiosonde data from Quillayute (UIL; at the northwestern extremity of our domain, on the Washington coast) provide vertical profile temperature measurements. Temperature lapse rates, however, can have great spatial variability over complex terrain. For example, Minder et al. (2010) found that temperature lapse rates exhibit substantial differences on the windward and leeward sides of the Cascade Mountains. Therefore, rather than using lapse rates from UIL soundings, we estimated grid temperature values by interpolating residuals from the 20 local temperature stations. By taking one station out at a time, we evaluated the accuracy of our temperature interpolation method; the RMSE averaged across all stations was 0.95°C for the entire study period. We note that lapse rate errors affect our reconstruction of SWE, and in turn winter precipitation, in the high-elevation interior of our domain. They have little effect over the much larger low-elevation portion and generally do not affect our conclusions regarding IMERG’s underestimation of precipitation, which is pervasive across our entire domain.
Temperature during winter 2015/16 (from 1 November to 31 March) was close to the climatological normal, but winter precipitation at three SNOTEL sites with records longer than 20 years was 30% higher than normal. An elevation-based precipitation regression model such as PRISM might represent mean spatial precipitation fields well, but has not always performed well during anomalous years (Livneh et al. 2014; Lundquist et al. 2010, 2015). Currier et al. (2017) suggested that PRISM shifts estimates of total annual precipitation too far west of the Olympic crest due to sparse gauge distribution over the mountainous interior of the Olympics and the smoothing method PRISM uses on the DEM to derive topographic facets, which our study suggests as well (see Fig. 3a). The SNOTEL Buckinghorse station installed in 2007 was not included in PRISM, so when we augmented the Q3MM at higher elevations with additional gauges, the gridded precipitation in the vicinity of this station was amplified considerably (see Fig. 3b). The adjustment based on ASO SWE, however, shifted precipitation toward the Olympic crest (see Figs. 6e,f). Our incorporation of snow data may induce some artifacts into our precipitation inversions due to wind-related drifting, snow creep, or avalanching, although these effects are likely minor aside from the highest elevations given the dense forest cover over much of the domain. Other OLYMPEX instruments might be included to further improve our estimation. For example, the Micro Rain Radars (Houze et al. 2017) provide vertical profile measurements, which could improve the drop size parameterizations used in the precipitation radar estimates, although these improvements likely would be mostly limited to low elevations, which are already better represented by gauges than high elevations.
We evaluated our precipitation estimates by systematically removing one station at a time and found that 80% of the gauges have RMSE less than 4 mm day−1, with NRMSEs ranging from 0.11 to 0.83 over the entire OLYMPEX period. All gauges have correlations greater than 0.85 in the one-at-a-time removal estimates. Most of the gauges at lower elevations with larger errors are located in the eastern and northern parts of the domain, where gauges are relatively dense and are surrounded by gauges from different sources. In contrast, OLYMPEX gauges along the orographic windward slope to the west of the Olympic crest and SNOTEL gauges at higher elevations show much greater bias than those at lower elevations, with RMSEs ranging from 5.6 to 15 mm day−1, NRMSEs ranging from 0.45 to 0.86, and correlations ranging from 0.86 to 0.95. The larger errors for the mountain gauges can be attributed to greater spatial heterogeneity of precipitation, sparse gauges, and inaccuracy of the PRISM climatology incorporated into Q3MM. Errors in the time series of the initial precipitation fields at higher elevations can also propagate into our precipitation adjustment factors.
6. Conclusions
We evaluated the recently released IMERG version 04A satellite-only precipitation product and its Japanese counterpart GSMaP version 04B over a domain that is characterized by winter-dominant precipitation with large orographic enhancements. Despite improvements in IMERG relative to TRMM-based products, satellite precipitation products retain large biases over complex terrain and for frozen precipitation, as indicated by previous studies (e.g., Chen and Li 2016; Tang et al. 2016; Kim et al. 2017), and these tendencies are confirmed here.
Because of known issues in IMERG, the higher elevations of the Olympics are particularly challenging. Using a combination of radar and gauge precipitation, and adjustments in the sparsely observed high-elevation interior of the domain based on ASO-based SWE estimates, we obtained a daily precipitation product for the OLYMPEX domain. Our results show that both IMERG and GSMaP capture storm interarrival time well; their major issue is with precipitation magnitude.
IMERG substantially underestimates precipitation throughout our evaluation period (from 1 October 2015 to 30 April 2016), with ratios of IMERG to our product for each month of 0.50, 0.51, 0.84, 0.82, 0.49, 0.29, and 0.36 averaged over our domain. GSMaP outperforms IMERG in most months, with ratios of GSMaP to our product for each month of 0.81, 0.82, 0.87, 0.77, 0.50, 0.49, and 0.62. The ratios of IMERG and GSMaP to our composite product are 0.59 and 0.72 over the entire domain and time period from 1 October 2015 to 30 April 2016.
The best match of IMERG and GSMaP with our product is in the relatively inland subdomain III, where there is less precipitation, with ratios of 0.73 and 1.00. For the coastal subdomain I, the ratios are 0.65 and 0.74. The underestimation is more severe at high elevations, with seasonal ratios of IMERG to our product of 0.41, 0.53, and 0.37 for subdomains IV(a)–(c) separately, and with ratios of GSMaP to our product of 0.52, 0.61, and 0.41 for these regions.
Acknowledgments
This work was funded in part through NASA Grant NNX13AH74G to the University of California, Los Angeles. The authors gratefully acknowledge Kathryn Bormann at JPL for providing ASO data; Heather Moser Grams at NSSL for providing MRMS data; Bart Nijssen, Diana Gergel, and Joe Zagrodnik at the University of Washington for providing OLYMPEX precipitation gauge data and gauge QC documentation; and Walter Petersen at Marshall Space Flight Center for his advice and comments on this work. Part of this work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. We particularly acknowledge support from the NASA Terrestrial Hydrology program and the NASA Applied Sciences program.
REFERENCES
AghaKouchak, A., A. Mehran, H. Norouzi, and A. Behrangi, 2012: Systematic and random error components in satellite precipitation data sets. Geophys. Res. Lett., 39, L09406, https://doi.org/10.1029/2012GL051592.
Anders, A., G. Roe, D. Durran, and J. Minder, 2007: Small-scale spatial gradients in climatological precipitation on the Olympic peninsula. J. Hydrometeor., 8, 1068–1081, https://doi.org/10.1175/JHM610.1.
Andreadis, K., P. Storck, and D. Lettenmaier, 2009: Modeling snow accumulation and ablation processes in forested environments. Water Resour. Res., 45, W05429, https://doi.org/10.1029/2008WR007042.
Aonashi, K., and Coauthors, 2009: GSMaP passive microwave precipitation retrieval algorithm: Algorithm description and validation. J. Meteor. Soc. Japan, 87A, 119–136, https://doi.org/10.2151/jmsj.87A.119.
Barros, A. P., and D. P. Lettenmaier, 1993: Dynamic modeling of the spatial distribution of precipitation in remote mountainous areas. Mon. Wea. Rev., 121, 1195–1214, https://doi.org/10.1175/1520-0493(1993)121<1195:DMOTSD>2.0.CO;2.
Berndt, C., E. Rabiei, and U. Haberlandt, 2014: Geostatistical merging of rain gauge and radar data for high temporal resolutions and various station density scenarios. J. Hydrol., 508, 88–101, https://doi.org/10.1016/j.jhydrol.2013.10.028.
Chen, F., and X. Li, 2016: Evaluation of IMERG and TRMM 3B43 monthly precipitation products over mainland China. Remote Sens., 8, 472, https://doi.org/10.3390/rs8060472.
Coccia, G., A. L. Siemann, M. Pan, and E. F. Wood, 2015: Creating consistent datasets by combining remotely-sensed data and land surface model estimates through Bayesian uncertainty post-processing: The case of Land Surface Temperature from HIRS. Remote Sens. Environ., 170, 290–305, https://doi.org/10.1016/j.rse.2015.09.010.
Colle, B., and C. Mass, 1996: An observational and modeling study of the interaction of low-level southwesterly flow with the Olympic Mountains during COAST IOP 4. Mon. Wea. Rev., 124, 2152–2175, https://doi.org/10.1175/1520-0493(1996)124<2152:AOAMSO>2.0.CO;2.
Cosgrove, B., and Coauthors, 2003: Real-time and retrospective forcing in the North American Land Data Assimilation System (NLDAS) project. J. Geophys. Res., 108, 8842, https://doi.org/10.1029/2002JD003118.
Currier, W. R., T. Thorson, and J. D. Lundquist, 2017: Independent evaluation of frozen precipitation from WRF and PRISM in the Olympic Mountains. J. Hydrometeor., 18, 2681–2703, https://doi.org/10.1175/JHM-D-17-0026.1.
Daly, C., R. Neilson, and D. Phillips, 1994: A statistical topographic model for mapping climatological precipitation over mountainous terrain. J. Appl. Meteor., 33, 140–158, https://doi.org/10.1175/1520-0450(1994)033<0140:ASTMFM>2.0.CO;2.
Daly, C., M. Halbleib, J. I. Smith, W. P. Gibson, M. K. Doggett, G. H. Taylor, J. Curtis, and P. P. Pasteris, 2008: Physiographically sensitive mapping of climatological temperature and precipitation across the conterminous United States. Int. J. Climatol., 28, 2031–2064, https://doi.org/10.1002/joc.1688.
Dettinger, M., F. Ralph, T. Das, P. Neiman, and D. Cayan, 2011: Atmospheric rivers, floods and the water resources of California. Water, 3, 445–478, https://doi.org/10.3390/w3020445.
Durand, M., N. P. Molotch, and S. A. Margulis, 2008: A Bayesian approach to snow water equivalent reconstruction. J. Geophys. Res., 113, D20117, https://doi.org/10.1029/2008JD009894.
Ebert, E., J. Janowiak, and C. Kidd, 2007: Comparison of near-real-time precipitation estimates from satellite observations and numerical models. Bull. Amer. Meteor. Soc., 88, 47–64, https://doi.org/10.1175/BAMS-88-1-47.
Gebregiorgis, A., P.-E. Kirstetter, Y. Hong, N. Carr, J. J. Gourley, and Y. Zheng, 2017: Understanding overland multisensor satellite precipitation error in TMPA-RT products. J. Hydrometeor., 18, 285–306, https://doi.org/10.1175/JHM-D-15-0207.1.
Girotto, M., G. Cortés, S. A. Margulis, and M. Durand, 2014a: Examining spatial and temporal variability in snow water equivalent using a 27 year reanalysis: Kern River watershed, Sierra Nevada. Water Resour. Res., 50, 6713–6734, https://doi.org/10.1002/2014WR015346.
Girotto, M., S. A. Margulis, and M. Durand, 2014b: Probabilistic SWE reanalysis as a generalization of deterministic SWE reconstruction techniques. Hydrol. Processes, 28, 3875–3895, https://doi.org/10.1002/hyp.9887.
Gourley, J., Y. Hong, Z. Flamig, L. Li, and J. Wang, 2010: Intercomparison of rainfall estimates from radar, satellite, gauge, and combinations for a season of record rainfall. J. Appl. Meteor. Climatol., 49, 437–452, https://doi.org/10.1175/2009JAMC2302.1.
Hamlet, A., and D. Lettenmaier, 2005: Production of temporally consistent gridded precipitation and temperature fields for the continental United States. J. Hydrometeor., 6, 330–336, https://doi.org/10.1175/JHM420.1.
Hasan, M., A. Sharma, F. Johnson, G. Mariethoz, and A. Seed, 2016: Merging radar and in situ rainfall measurements: An assessment of different combination algorithms. Water Resour. Res., 52, 8384–8398, https://doi.org/10.1002/2015WR018441.
Henn, B., M. P. Clark, D. Kavetski, and J. D. Lundquist, 2015: Estimating mountain basin-mean precipitation from streamflow using Bayesian inference. Water Resour. Res., 51, 8012–8033, https://doi.org/10.1002/2014WR016736.
Henn, B., M. P. Clark, D. Kavetski, B. McGurk, T. H. Painter, and J. D. Lundquist, 2016: Combining snow, streamflow, and precipitation gauge observations to infer basin-mean precipitation. Water Resour. Res., 52, 8700–8723, https://doi.org/10.1002/2015WR018564.
Henn, B., A. J. Newman, B. Livneh, C. Daly, and J. D. Lundquist, 2018: An assessment of differences in gridded precipitation datasets in complex terrain. J. Hydrol., 556, 1205–1219, https://doi.org/10.1016/j.jhydrol.2017.03.008.
Hou, A., and Coauthors, 2014: The Global Precipitation Measurement mission. Bull. Amer. Meteor. Soc., 95, 701–722, https://doi.org/10.1175/BAMS-D-13-00164.1.
Houze, R. A., Jr., and Coauthors, 2017: The Olympic Mountains Experiment (OLYMPEX). Bull. Amer. Meteor. Soc., 98, 2167–2188, https://doi.org/10.1175/BAMS-D-16-0182.1.
Huffman, G. J., D. T. Bolvin, and E. J. Nelkin, 2015: Integrated Multi-satellitE Retrievals for GPM (IMERG) technical documentation. NASA/GSFC Code 612 Tech. Doc., 48 pp., http://pmm.nasa.gov/sites/default/files/document_files/IMERG_doc.pdf.
Hunter, S. M., 1996: WSR-88D radar rainfall estimation: Capabilities, limitations and potential improvements. Natl. Wea. Dig., 20, 26–38.
Kim, K., J. Park, J. Baik, and M. Choi, 2017: Evaluation of topographical and seasonal feature using GPM IMERG and TRMM 3B42 over Far-East Asia. Atmos. Res., 187, 95–105, https://doi.org/10.1016/j.atmosres.2016.12.007.
Kubota, T., and Coauthors, 2007: Global precipitation map using satelliteborne microwave radiometers by the GSMaP project: Production and validation. IEEE Trans. Geosci. Remote Sens., 45, 2259–2275, https://doi.org/10.1109/TGRS.2007.895337.
Kummerow, C., W. Barnes, T. Kozu, J. Shiue, and J. Simpson, 1998: The Tropical Rainfall Measuring Mission (TRMM) sensor package. J. Atmos. Oceanic Technol., 15, 809–817, https://doi.org/10.1175/1520-0426(1998)015<0809:TTRMMT>2.0.CO;2.
Lettenmaier, D. P., 2017: Observational breakthroughs lead the way to improved hydrological predictions. Water Resour. Res., 53, 2591–2597, https://doi.org/10.1002/2017WR020896.
Lettenmaier, D. P., D. Alsdorf, J. Dozier, G. Huffman, M. Pan, and E. Wood, 2015: Inroads of remote sensing into hydrologic science during the WRR era. Water Resour. Res., 51, 7309–7342, https://doi.org/10.1002/2015WR017616.
Leung, L., and Y. Qian, 2003: The sensitivity of precipitation and snowpack simulations to model resolution via nesting in regions of complex terrain. J. Hydrometeor., 4, 1025–1043, https://doi.org/10.1175/1525-7541(2003)004<1025:TSOPAS>2.0.CO;2.
Liang, X., D. P. Lettenmaier, E. Wood, and S. Burges, 1994: A simple hydrologically based model of land-surface water and energy fluxes for general-circulation models. J. Geophys. Res., 99, 14 415–14 428, https://doi.org/10.1029/94JD00483.
Lin, Y., and K. E. Mitchell, 2005: The NCEP Stage II/IV hourly precipitation analyses: development and applications. 19th Conf. on Hydrology, San Diego, CA, Amer. Meteor. Soc., 1.2, https://ams.confex.com/ams/Annual2005/techprogram/paper_83847.htm.
Livneh, B., J. Deems, D. Schneider, J. Barsugli, and N. Molotch, 2014: Filling in the gaps: Inferring spatially distributed precipitation from gauge observations over complex terrain. Water Resour. Res., 50, 8589–8610, https://doi.org/10.1002/2014WR015442.
Livneh, B., T. J. Bohn, D. W. Pierce, F. Munoz-Arriola, B. Nijssen, R. Vose, D. R. Cayan, and L. Brekke, 2015: A spatially comprehensive, hydrometeorological data set for Mexico, the US, and Southern Canada 1950–2013. Sci. Data, 2, https://doi.org/10.1038/sdata.2015.42.
Lundquist, J., J. Minder, P. Neiman, and E. Sukovich, 2010: Relationships between barrier jet heights, orographic precipitation gradients, and streamflow in the Northern Sierra Nevada. J. Hydrometeor., 11, 1141–1156, https://doi.org/10.1175/2010JHM1264.1.
Lundquist, J., M. Hughes, B. Henn, E. Gutmann, B. Livneh, J. Dozier, and P. Neiman, 2015: High-elevation precipitation patterns: Using snow measurements to assess daily gridded datasets across the Sierra Nevada, California. J. Hydrometeor., 16, 1773–1792, https://doi.org/10.1175/JHM-D-15-0019.1.
Maddox, R. A., J. Zhang, J. J. Gourley, and K. W. Howard, 2002: Weather radar coverage over the contiguous United States. Wea. Forecasting, 17, 927–934, https://doi.org/10.1175/1520-0434(2002)017<0927:WRCOTC>2.0.CO;2.
Mass, C., and Coauthors, 2003: Regional environmental prediction over the Pacific Northwest. Bull. Amer. Meteor. Soc., 84, 1353–1366, https://doi.org/10.1175/BAMS-84-10-1353.
Maurer, E. P., J. D. Rhoads, R. O. Dubayah, and D. P. Lettenmaier, 2003: Evaluation of the snow-covered area data product from MODIS. Hydrol. Processes, 17, 59–71, https://doi.org/10.1002/hyp.1193.
Mehran, A., and A. AghaKouchak, 2014: Capabilities of satellite precipitation datasets to estimate heavy precipitation rates at different temporal accumulations. Hydrol. Processes, 28, 2262–2270, https://doi.org/10.1002/hyp.9779.
Mei, Y., E. Anagnostou, E. Nikolopoulos, and M. Borga, 2014: Error analysis of satellite precipitation products in mountainous basins. J. Hydrometeor., 15, 1778–1793, https://doi.org/10.1175/JHM-D-13-0194.1.
Miao, C., H. Ashouri, K. Hsu, S. Sorooshian, and Q. Duan, 2015: Evaluation of the PERSIANN-CDR daily rainfall estimates in capturing the behavior of extreme precipitation events over China. J. Hydrometeor., 16, 1387–1396, https://doi.org/10.1175/JHM-D-14-0174.1.
Minder, J., D. Durran, G. Roe, and A. Anders, 2008: The climatology of small-scale orographic precipitation over the Olympic Mountains: Patterns and processes. Quart. J. Roy. Meteor. Soc., 134, 817–839, https://doi.org/10.1002/qj.258.
Minder, J., P. Mote, and J. Lundquist, 2010: Surface temperature lapse rates over complex terrain: Lessons from the Cascade Mountains. J. Geophys. Res., 115, D14122, https://doi.org/10.1029/2009JD013493.
Molotch, N., 2009: Reconstructing snow water equivalent in the Rio Grande headwaters using remotely sensed snow cover data and a spatially distributed snowmelt model. Hydrol. Processes, 23, 1076–1089, https://doi.org/10.1002/hyp.7206.
Nasrollahi, N., 2015: False alarm in satellite precipitation data. Improving Infrared-Based Precipitation Retrieval Algorithms Using Multi-Spectral Satellite Imagery, N. Nasrollahi, Ed., Springer, 7–12.
Neiman, P., L. Schick, F. Ralph, M. Hughes, and G. Wick, 2011: Flooding in western Washington: The connection to atmospheric rivers. J. Hydrometeor., 12, 1337–1358, https://doi.org/10.1175/2011JHM1358.1.
Newman, A., and Coauthors, 2015: Gridded ensemble precipitation and temperature estimates for the contiguous United States. J. Hydrometeor., 16, 2481–2500, https://doi.org/10.1175/JHM-D-15-0026.1.
Nikolopoulos, E., N. Bartsotas, E. Anagnostou, and G. Kallos, 2015: Using high-resolution numerical weather forecasts to improve remotely sensed rainfall estimates: The case of the 2013 Colorado flash flood. J. Hydrometeor., 16, 1742–1751, https://doi.org/10.1175/JHM-D-14-0207.1.
Okamoto, K., T. Iguchi, N. Takahashi, K. Iwanami, and T. Ushio, 2005: The Global Satellite Mapping of Precipitation (GSMaP) project. Proc. 25th IEEE Int. Geoscience and Remote Sensing Symp., Seoul, South Korea, IEEE, 3414–3416, https://doi.org/10.1109/IGARSS.2005.1526575.
Painter, T., and Coauthors, 2016: The Airborne Snow Observatory: Fusion of scanning lidar, imaging spectrometer, and physically-based modeling for mapping snow water equivalent and snow albedo. Remote Sens. Environ., 184, 139–152, https://doi.org/10.1016/j.rse.2016.06.018.
Prat, O., and B. Nelson, 2015: Evaluation of precipitation estimates over CONUS derived from satellite, radar, and rain gauge data sets at daily to annual scales (2002–2012). Hydrol. Earth Syst. Sci., 19, 2037–2056, https://doi.org/10.5194/hess-19-2037-2015.
Rabiei, E., and U. Haberlandt, 2015: Applying bias correction for merging rain gauge and radar data. J. Hydrol., 522, 544–557, https://doi.org/10.1016/j.jhydrol.2015.01.020.
Ralph, F., P. Neiman, G. Wick, S. Gutman, M. Dettinger, D. Cayan, and A. White, 2006: Flooding on California’s Russian River: Role of atmospheric rivers. Geophys. Res. Lett., 33, L13801, https://doi.org/10.1029/2006GL026689.
Shepard, D. S., 1984: Spatial statistics and models. Computer Mapping: The SYMAP Interpolation Algorithm, G. L. Gaile and C. J. Willmott, Eds., D. Reidel, 133–145.
Shige, S., S. Kida, H. Ashiwake, T. Kubota, and K. Aonashi, 2013: Improvement of TMI rain retrievals in mountainous areas. J. Appl. Meteor. Climatol., 52, 242–254, https://doi.org/10.1175/JAMC-D-12-074.1.
Sinclair, S., and G. Pegram, 2005: Combining radar and rain gauge rainfall estimates using conditional merging. Atmos. Sci. Lett., 6, 19–22, https://doi.org/10.1002/asl.85.
Stampoulis, D., E. Anagnostou, and E. Nikolopoulos, 2013: Assessment of high-resolution satellite-based rainfall estimates over the Mediterranean during heavy precipitation events. J. Hydrometeor., 14, 1500–1514, https://doi.org/10.1175/JHM-D-12-0167.1.
Storck, P., 1999: Trees, snow and flooding: An investigation of forest canopy effects on snow accumulation and melt at the plot and watershed scales in the Pacific Northwest. Water Resources Series Tech. Rep. 161, 176 pp., http://hdl.handle.net/1957/5136.
Tang, G., Y. Ma, D. Long, L. Zhong, and Y. Hong, 2016: Evaluation of GPM Day-1 IMERG and TMPA Version-7 legacy products over Mainland China at multiple spatiotemporal scales. J. Hydrol., 533, 152–167, https://doi.org/10.1016/j.jhydrol.2015.12.008.
Thornton, P., S. Running, and M. White, 1997: Generating surfaces of daily meteorological variables over large regions of complex terrain. J. Hydrol., 190, 214–251, https://doi.org/10.1016/S0022-1694(96)03128-9.
Tian, Y., and Coauthors, 2009: Component analysis of errors in satellite-based precipitation estimates. J. Geophys. Res., 114, D24101, https://doi.org/10.1029/2009JD011949.
Ushio, T., and Coauthors, 2009: A Kalman filter approach to the Global Satellite Mapping of Precipitation (GSMaP) from combined passive microwave and infrared radiometric data. J. Meteor. Soc. Japan, 87A, 137–151, https://doi.org/10.2151/jmsj.87A.137.
Velasco-Forero, C., D. Sempere-Torres, E. Cassiraga, and J. Gomez-Hernandez, 2009: A non-parametric automatic blending methodology to estimate rainfall fields from rain gauge and radar data. Adv. Water Resour., 32, 986–1002, https://doi.org/10.1016/j.advwatres.2008.10.004.
Wayand, N. E., A. Massmann, C. Butler, E. Keenan, J. Stimberis, and J. D. Lundquist, 2015: A meteorological and snow observational data set from Snoqualmie Pass (921 m), Washington Cascades, U.S. Water Resour. Res., 51, 10 092–10 103, https://doi.org/10.1002/2015WR017773.
Zagrodnik, J. P., L. A. McMurdie, and R. A. Houze Jr., 2018: Stratiform precipitation processes in cyclones passing over a coastal mountain range. J. Atmos. Sci., https://doi.org/10.1175/JAS-D-17-0168.1, in press.
Zhang, J., and Coauthors, 2011: National Mosaic and Multi-Sensor QPE (NMQ) system: Description, results, and future plans. Bull. Amer. Meteor. Soc., 92, 1321–1338, https://doi.org/10.1175/2011BAMS-D-11-00047.1.
Zhang, J., and Coauthors, 2016: Multi-Radar Multi-Sensor (MRMS) quantitative precipitation estimation: Initial operating capabilities. Bull. Amer. Meteor. Soc., 97, 621–637, https://doi.org/10.1175/BAMS-D-14-00174.1.
Zhu, Y., and R. E. Newell, 1994: Atmospheric rivers and bombs. Geophys. Res. Lett., 21, 1999–2002, https://doi.org/10.1029/94GL01710.