Model Performance of Downscaling 1999–2004 Hydrometeorological Fields to the Upper Rio Grande Basin Using Different Forcing Datasets

J. Li Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, The Henry Samueli School of Engineering, University of California, Irvine, Irvine, California

X. Gao Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, The Henry Samueli School of Engineering, University of California, Irvine, Irvine, California

S. Sorooshian Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, The Henry Samueli School of Engineering, University of California, Irvine, Irvine, California


Abstract

This study downscaled more than five years (1999–2004) of hydrometeorological fields over the upper Rio Grande basin (URGB) to a 4-km resolution using a regional model [the fifth-generation Pennsylvania State University–National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5, version 3)] and two forcing datasets: the National Centers for Environmental Prediction (NCEP)–NCAR reanalysis-1 (R1) and the North American Regional Reanalysis (NARR). The long-term high-resolution simulations show detailed patterns of hydroclimatological fields that are closely tied to the characteristics of the regional terrain; the most important of these patterns are precipitation localization features caused by the complex topography. In comparison with station observations, the downscaling, whichever forcing field is used, generated more accurate surface temperature and humidity fields than the Eta Model and NARR data, although it still contained marked errors, such as a negative (positive) bias in the daily maximum (minimum) temperature and an overestimation of precipitation, especially in the cold season.

Comparing the downscaling results forced by the NARR and R1 with both the gridded and station observational data shows that, under the NARR forcing, the MM5 model produced generally better results for precipitation, temperature, and humidity than it did under the R1 forcing. These improvements were most apparent in winter and spring. During the warm season, although the use of NARR improved the precipitation estimates statistically at the regional (basin) scale, it substantially underestimated them over the southern upper Rio Grande basin, partly because the NARR forcing data exhibited warm and dry biases in the monsoon-active region during the simulation period and partly because of improper domain selection. Analyses also indicate that over mountainous regions, both the Climate Prediction Center’s (CPC’s) gridded (0.25°) precipitation and the NARR precipitation are underestimated in comparison with station gauge data.

Corresponding author address: Jialun Li, CHRS, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697-2175. Email: jialunl@uci.edu


1. Introduction

The Rio Grande flows southward almost 2000 miles from its headwaters in southern Colorado to the Gulf of Mexico, drains a basin of more than 350 000 mi2, and is the lifeblood of the semiarid region it crosses. The river supplies water for more than 3.5 million people, as well as for agricultural, recreational, hydropower, and industrial uses. Because freshwater supplies are limited and the demand for them is expanding, correctly estimating the variability of water resources in the basin is of practical importance and could have positive economic consequences. The upper Rio Grande basin (URGB) is a typical river basin of the semiarid southwestern United States. In this narrow, north–south-oriented basin (Fig. 1), elevation decreases from about 4000 m near the headwaters (∼38.5°N) to about 1200 m at the southern outlet (∼31°N), and, accordingly, annual precipitation varies from more than 130 to less than 15 cm. Modeling and analyzing the hydroclimate within such a river basin is a challenge. The complex terrain of the URGB suggests that using a regional climate model to downscale hydrometeorological fields from the coarse-scale fields provided by climate model output or reanalysis is necessary to investigate basin- or catchment-scale hydroclimate.

Many studies over the mountainous western United States, whose research domain wholly or partially covers the URGB, have used the dynamic downscaling method (e.g., Giorgi 1991; Giorgi et al. 1994; Roads et al. 1994; Anderson and Roads 2002; Anderson et al. 2004; Berbery 2001; Gochis et al. 2003; Kim et al. 2000; Leung and Qian 2003; Leung et al. 2003; Schmitz and Mullen 1996; Kanamitsu and Mo 2003; Higgins et al. 1999; Mo et al. 2005). These studies have shown the benefits of using a high spatial resolution over the region to understand how orography affects hydroclimatology. For example, using the European Centre for Medium-Range Weather Forecasts (ECMWF) 1° by 1° reanalysis data, Schmitz and Mullen (1996) showed, as Gochis et al. (2003) summarized, that the moisture flux into the southwestern United States is attributable to low-level stationary components over the Gulf of California and that larger-scale circulation is responsible for transporting moisture from the midtropospheric level. The comparatively small transient component of the moisture flux comprises a substantial portion of the total moisture flux emanating from the northern part of the Gulf of California. In analyzing the direction of integrated moisture flux over the Gulf of California, Berbery (2001) found substantial differences between the results of Schmitz and Mullen (1996) and the outcomes of the 48-km Eta Data Assimilation System (EDAS). Berbery (2001) suggested that these differences in flux fields are attributable to the models’ high-resolution representation of regional topography and are mesoscale in nature. In an investigation of the sensitivity of precipitation and snowpack simulations to model resolution, Leung and Qian (2003) concluded that because “there are no uniform improvements in climate simulations as model resolution increases, processes that are strongly forced by terrain appear to benefit more from the use of higher spatial resolution.”

Currently, although regional climate models can run at grid resolutions as fine as a few kilometers to hundreds of meters, such resolutions are used mainly for research on short-term weather and hydrologic prediction [see the review of Roebber et al. (2004); Faccani et al. (2003)]. Previous studies have indicated that fine-resolution modeling offers the potential to predict storms and streamflow over the mountainous western United States, where very high spatial resolution is needed to model physical processes, surface heterogeneity, and complex topography (Warner and Hsu 2000; Li et al. 2003a, b; Cotton et al. 2006; Saleeby et al. 2007; Westrick and Mass 2001; Westrick et al. 2002). As shown in Fig. 1, a 4-km elevation map captures much more detailed topographic features over the URGB than a 12-km map does. So far, however, few attempts have been made to employ very fine spatial resolutions for long-term hydroclimatology studies. One objective of this research was to investigate whether, or to what extent, such modeling can improve the accuracy of hydroclimate studies.

In addition to model spatial resolution, the choice of atmospheric forcing field exerts an important influence on model performance. Previous studies (Liang et al. 2001, 2004) have indicated that the use of different forcing fields can result in marked differences in model outcomes. Therefore, we separately downscale two different forcing datasets [the North American Regional Reanalysis (NARR) and the National Centers for Environmental Prediction–National Center for Atmospheric Research (NCEP–NCAR) reanalysis-1 (R1)] to the same high-resolution grid (4 km) over the URGB region and compare the major surface hydroclimatological variables of precipitation and temperature. The goal of this research is to examine the performance of the dynamic downscaling technique when applied to the URGB region’s complex terrain at very high resolution, integrated over multiple years, and using two different forcing datasets.

2. Methodology

a. Selected cases and forcing data

Water resources in the URGB come mainly from winter snowfall and summer monsoon rainfall, depending on the specific location. Thus, we selected six summer seasons and five winter seasons as test cases for the research reported in this paper. This time period included dry, normal, and wet years.

As mentioned above, the R1 data (Kalnay et al. 1996), which are available globally at approximately 2.5° spatial resolution and 6-h intervals, are widely used as forcing data to investigate regional climate variability over the southwestern United States (e.g., Liang et al. 2004; Leung et al. 2003; Gochis et al. 2003; Li et al. 2005). This research also used the R1 as the forcing field to determine whether it is reliable for downscaling to a few kilometers. More recently, a new dataset, the NARR (Mesinger et al. 2006), has become available. In comparison with the R1, the NARR is generated at higher spatial (32 km) and temporal (3-hourly) resolution and with more detailed model physics, and it shows great improvements in many aspects, such as surface wind and tropospheric state variables, especially in cold seasons (Mesinger et al. 2006). For this reason, NARR data are becoming popular as a benchmark and are now widely used (Mo et al. 2006; Nigam and Ruiz-Barradas 2006). This research also used the NARR as forcing data to determine whether the model results could be improved relative to those driven by the R1 data.
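One practical consequence of the two datasets’ temporal resolutions is how often the lateral boundary conditions are anchored in time; between analysis times the boundary values must be interpolated. The sketch below (Python) assumes simple linear interpolation in time and uses purely illustrative numbers; it is not taken from the MM5 preprocessing code.

```python
import numpy as np

def boundary_at_model_time(t_model_h: np.ndarray, t_forcing_h: np.ndarray,
                           boundary_field: np.ndarray) -> np.ndarray:
    """Linearly interpolate a lateral-boundary value in time between forcing analysis
    times; R1 supplies analyses every 6 h, NARR every 3 h (assumed linear interpolation)."""
    return np.interp(t_model_h, t_forcing_h, boundary_field)

# Illustrative numbers only: a boundary temperature (K) available every 6 h (R1-like),
# interpolated to an hourly model clock; NARR would supply twice as many anchor points.
t_r1 = np.arange(0.0, 25.0, 6.0)
temp_r1 = np.array([280.0, 284.0, 289.0, 285.0, 281.0])
print(boundary_at_model_time(np.arange(0.0, 25.0, 1.0), t_r1, temp_r1))
```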

The simulation period was from June 1999 to September 2004. The model was run in monthly segments during the nonsnowfall season and in 4-month segments (December through March of the following year) during the snowfall season without reinitialization, mainly to check the model’s snowpack performance. A sensitivity test showed that after one week of integration, modeled top-layer soil moisture from runs started on different days (30, 20, 10, 5, and 0 days earlier) converged closely, whereas third-layer soil moisture behaved similarly but maintained a drying trend (see Fig. 3 in Li et al. 2007). Thus, we started the model about 10 days before each monthly (or 4-month) simulation and discarded the first 10 days of the simulation data. This way of initializing the model may decrease the accuracy of its results in comparison with more frequent reinitialization, as indicated by Qian et al. (2003).
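A minimal sketch (Python, with synthetic values standing in for archived MM5 output) of this spin-up handling: the run starts about 10 days before the target month, and those days are discarded before any monthly statistics are computed. The variable names and dates are illustrative only.

```python
import numpy as np
import pandas as pd

# Hypothetical daily series standing in for one model run's output; in practice these
# values would be read from the archived MM5 fields.
dates = pd.date_range("1999-06-21", "1999-07-31", freq="D")
rng = np.random.default_rng(0)
topsoil_moisture = pd.Series(rng.uniform(0.10, 0.30, dates.size), index=dates)

SPINUP_DAYS = 10  # the run starts ~10 days before the target month, as described above
analysis = topsoil_moisture.iloc[SPINUP_DAYS:]   # discard the spin-up days
july_mean = analysis.loc["1999-07"].mean()       # statistics use only the retained days
print(f"July 1999 mean top-layer soil moisture: {july_mean:.3f}")
```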

b. Domain setup

Using R1 data as forcing fields, this research used four nested, two-way interacting domains (see Fig. 2). Domain one (D1), at a 108-km grid resolution, covers the entire United States, Mexico, southern Canada, Central America, and the surrounding oceans. Domain two (D2) covers the western United States and northern Mexico at a 36-km grid resolution. Domain three (D3) covers the southwestern United States, northern Mexico, southern Utah, and Colorado at a 12-km grid resolution. Domain four (D4) covers the URGB at a 4-km grid resolution. With the NARR data as forcing fields, the research used three nested, two-way interacting domains. Here, D-1, at a 36-km resolution, is the dashed box in Fig. 2, and its nested D-2 and D-3 are the same as D3 and D4 of the R1 configuration, respectively. As mentioned in the introduction, Fig. 1 shows the URGB region’s topography at the 12-km (D3 or D-2) and 4-km (D4 or D-3) spatial resolutions. The higher-resolution domain resolves finer topographic structure: in D4 (or D-3), mountains and hills are higher and valleys are deeper than at the 12-km resolution. The 4-km resolution also represents clouds better than the 12-km resolution (Cotton et al. 2006; Saleeby et al. 2007).

The outer domains’ setup reflects the fact that the NCEP reanalysis data, at a resolution of about 2.5°, cannot effectively resolve surface and low-level meteorological fields over land, whereas the NARR data provide higher-resolution and more precise surface and low-level meteorological fields (Mesinger et al. 2006).

c. Model setup

A regional model, the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5, version 3), was chosen to provide integrated physical modeling because many researchers have used it to study the regional climate over the southwestern United States. MM5 provides multiple options and schemes to represent a variety of physical processes. The most important scheme for our purposes was the convective parameterization scheme (CPS), especially for summer rainfall simulations. A previous study shows that the Grell CPS (Grell 1993) generates more reliable atmospheric fields in southern New Mexico (Warner and Hsu 2000) than does the Kain–Fritsch CPS (Kain and Fritsch 1990). Our pretest over this region shows that although performance may vary with rainfall types and model configurations, when the Grell CPS is used in the coarse domains, it generates more realistic rainfall patterns over low-elevation areas in New Mexico in the D4 (D-3) than when using the Kain–Fritsch CPS. This study used the Grell CPS in the 108-, 36-, and 12-km resolution domains (i.e., D1, D2, and D3, or D-1 and D-2). All the physics options and model parameters are listed in Table 1.

d. Observational data

As previous studies (de Ela et al. 2002; Leung and Qian 2003) have shown, even if very high spatial resolution improves weather forecasts, current methods of quantifying forecast skill and observational networks that lack sufficient spatial coverage limit the demonstration of the higher resolution’s added skill. This research used multisource data to evaluate the model’s performance, including the following:

  1. The 25-km gridded daily precipitation analysis data from the National Weather Service’s Climate Prediction Center (CPC-P; Higgins et al. 1999). These data were interpolated from daily gauge observations and cover the United States. Approximately 247 grid cells cover the upper Rio Grande basin.

  2. Surface meteorological station data. A total of 64 stations were selected, including 15 routine meteorological observation stations and 49 Snowpack Telemetry (SNOTEL) stations (http://www.wcc.nrcs.usda.gov/snow/). The locations of the 64 stations are labeled in Fig. 2.

  3. Operational NCEP Eta Model 212 grid (40 km) surface analysis data (http://dss.ucar.edu/datasets/ds609.2). Approximately 155 grid cells cover the upper Rio Grande basin. No data are available for October and November 1999. The NARR data were also used for comparison with the MM5 results.

It is important to point out the following two issues related to these datasets:

  1. Precipitation was measured at irregular and widely spaced locations and may not have captured all the precipitation, especially when it was windy and snowing (Roads et al. 1994).

  2. The URGB “area” represented in the MM5 grids, the CPC gauge analysis, and the Eta Model analysis differs slightly among the datasets, because grid resolution affects how the basin boundary is resolved; none of the gridded areas therefore corresponds exactly to the real-world basin. A sketch of this effect follows this list.
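The resolution dependence of the basin area can be made concrete with a short Python sketch. An idealized circular “basin” stands in for the real URGB boundary, which would be rasterized onto each dataset’s grid in the same way; the cell sizes and all numbers below are illustrative assumptions, not taken from the study.

```python
import numpy as np

def basin_area_km2(cell_km: float, radius_km: float = 200.0, extent_km: float = 600.0) -> float:
    """Rasterize an idealized circular 'basin' onto a grid with the given cell size and
    return (number of cells inside) x (cell area)."""
    centers = np.arange(cell_km / 2.0, extent_km, cell_km) - extent_km / 2.0
    x, y = np.meshgrid(centers, centers)
    inside = x**2 + y**2 <= radius_km**2
    return float(inside.sum()) * cell_km**2

for cell in (4.0, 25.0, 40.0):   # roughly the MM5, CPC, and Eta cell sizes
    print(f"{cell:5.1f}-km grid: basin area ~ {basin_area_km2(cell):9.0f} km^2")
# The totals differ because coarse cells cannot follow the boundary exactly,
# which is why the URGB "area" is not identical across the three datasets.
```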

3. Examination of downscaling results

In this paper, the model performance is examined only in the innermost domain (i.e., D4 and D-3 in Fig. 2), which covers the entire URGB with a 4-km grid mesh. Observational and analysis data are used to evaluate the surface atmospheric variables (with an emphasis on precipitation and temperature) driven by the NARR and R1 forcing data, respectively. To easily identify various data in the comparison, we labeled individual data with a name consisting of the data’s major characteristic, followed by the variable’s symbol. For example, NR-P represents the NARR (NR) forced precipitation (P) data. The following acronyms and terms are used:

  • NR: NARR forced downscaling results,

  • R1: NCEP–NCAR reanalysis-1 forced downscaling results,

  • CPC: CPC gridded daily observations,

  • NARR: North American Regional Reanalysis data,

  • Eta Model: Eta analysis data,

  • P: Precipitation,

  • Tmin: 2-m daily minimum temperature,

  • Tmax: 2-m daily maximum temperature,

  • T2: 2-m air temperature,

  • Q2: 2-m mixing ratio, and

  • U10 and V10: 10-m wind components.

a. Precipitation

In most dynamic downscaling studies of hydroclimate, precipitation is the main prognostic variable examined. This paper evaluates primarily (modeled) precipitation distribution using the gridded data and precipitation quantity using the station data.

1) Comparison between the gridded observation and downscaling output

Figure 3 shows maps of the mean precipitation during the simulation period: the top panels are CPC-P (0.25°), NR-P (4 km), and R1-P (4 km). The average precipitation amounts over the region (D4 or D-3) for CPC-P, NR-P, and R1-P are 27.3, 29.2, and 35.1 mm month−1, respectively. Clearly, NR-P is closer to CPC-P than R1-P is. The model adequately reproduced the precipitation distribution features shown in the CPC-P map and added clear local patterns that are strongly correlated with the 4-km elevation map in Fig. 2. Both the NR-P and R1-P maps show large amounts of precipitation over the high-elevation areas in northern New Mexico and southern Colorado. These patterns seem physically plausible. For example, over a northern part of the Sacramento Mountains (the solid circle in Fig. 3), the precipitation in the CPC-P map is less than 30 mm month−1, but it reaches 50–100 mm month−1 in the NR-P and R1-P maps. A SNOTEL station (site 1034; location 33.4°N, 105.79°W; elevation 3130 m) is located inside this mountain area, and its precipitation records for the water years of 2003 and 2004 (available for these two years only) are 64 and 75 mm month−1, respectively. These amounts are much higher than the CPC-P value, and they are within the model’s predicted range. The following section uses more station data to check the amounts of local precipitation predicted through the downscaling.

The R1-P and NR-P precipitation fields also differ from each other. In the eastern URGB (see the dashed circle in Fig. 3), R1-P shows an overestimation in comparison with CPC-P, whereas NR-P shows a slight underestimation. The bottom panel of Fig. 3 shows that the R1-P amounts are larger than the NR-P amounts over the mountains of the northern and southern URGB, as well as over the eastern URGB.

Figure 4 shows the seasonal variations in precipitation over the northern and southern parts of the URGB. Figure 4a represents the high-elevation part north of 36°N. It indicates that the seasonal precipitation variations derived from all sources show a similar pattern of two precipitation peaks: one during spring and the other during summer. The NR-P variation is closer to the CPC-P variation than the R1-P variation is. Several differences exist. First, the monthly CPC-P amounts are always less than those from the model, except in November and December when the amounts are similar; R1-P shows the largest amounts in most months, especially during the cold season. Second, the peak of monthly precipitation occurs in different months: NR-P and CPC-P both peak in March and August, whereas R1-P peaks in February and August.

Figure 4b is the same as Fig. 4a, but for the low-elevation southern part (<36°N) of the URGB. It shows that, relative to R1-P, the monthly precipitation represented by NR-P is improved in late winter and spring when compared with CPC-P. However, NR-P substantially underestimates precipitation during the monsoon months of August and September.

Figure 5 compares the modeled 2-m air temperature and low-level (surface to 3000 m above ground level) water vapor flux fields in July, August, and September (JAS) forced by NARR (left column) and by R1 (middle column); their differences are shown in the right column. Over the southern and southeastern URGB, the model forced by the NARR data generated higher surface temperatures and less water vapor flux through the boundary than the model forced by the R1 data. These features of the NARR-forced run are unfavorable for generating monsoon convection in that season. In particular, the low-level water vapor flux fields (bottom panels) show that the southeasterly moisture flux over the southern URGB was very small when the downscaling model was driven by the NARR data (smaller than when the model was driven by the R1 data), which is inconsistent with the results of many studies of the North American monsoon over the region (e.g., Schmitz and Mullen 1996; Mo et al. 2005). Given the same model physics configuration, two potential factors may be responsible for these differences in the meteorological fields: the forcing data and the locations of the two outer domains.

The results of a previous study by Mesinger et al. (2006) indicate that, in comparison with observations, NARR data represent surface and tropospheric fields more accurately than the global coarse analysis data, especially in winter. As an example, Fig. 6 compares the atmospheric fields from the forcing datasets (i.e., the NARR and R1) and the MM5 results with the El Paso, Texas, sounding observations (location 31.8°N, 106.4°W; elevation 1343 m) at 850, 700, and 500 mb during the simulation period. Figure 6a compares the mean fields among R1, NARR, the MM5 results with different forcings, and the observational data. It indicates that, averaged over the 5-yr period, the analysis fields and the MM5 results with either forcing dataset vary consistently with the El Paso (EPZ) observations, especially for temperature and relative humidity. The wind field, however, exhibits trends that differ somewhat from the observations, especially at 500 mb, although the analysis datasets and the MM5 results exhibit trends similar to each other at that level. Figure 6b shows the differences between R1 and the observations, between NARR and the observations, and between the MM5 results with different forcing data and the observations. It indicates that the NARR dataset’s air temperature, relative humidity, and wind speed are improved relative to the R1 dataset, especially in the cold season. However, Fig. 6b also demonstrates that the NARR data (but not the R1) exhibit dry and warm biases at 850 and 700 mb in the warm season. As Bright and Mullen (2002) suggest, PBL moisture in this semiarid region is important to convective precipitation during the monsoon season. Therefore, the use of biased NARR data as an initial condition in the monsoon season (the model is initialized once per month in this season) may be partly responsible for precipitation errors in the downscaling outcome. Figure 6b also shows that the MM5 results with either forcing dataset deviate more from the observations than the analysis fields do. For most months, however, the MM5 results are better when NARR rather than R1 is used as forcing data.

Figure 7 compares the mean precipitable water of the R1 and NARR data in July, August, September, and October from 1999 to 2004. The figure indicates that R1 was wetter than NARR over the Rockies, the eastern Pacific, the tropical Pacific (where the D1 southern boundary is located), and Mexico (where the D-1 southern boundary is located). The column water content distributions in Fig. 7 therefore indicate that MM5 had wetter eastern and southern boundaries when R1 was used than when NARR was used. This boundary difference between the forcing datasets may also contribute to precipitation biases over the upper Rio Grande basin during the warm and monsoon seasons.

Many previous studies (e.g., Seth and Giorgi 1998) also indicate that the location of the outer domain boundary can affect the model’s results. This paper briefly describes the differences caused by the outer domain’s location. We performed the following four tests (see Fig. 2) from July to October 1999 (a wet monsoon season): T1 (running G1 with R1), T2 (running G1 with NARR), T3 (running G2 with R1), and T4 (running G2 with NARR). Here, G1 is just slightly smaller than D1, owing to the NARR data coverage at the southwestern corner of D1, and G2 is the same as D-1. Figure 8 shows the mean precipitation differences over land among the four runs. The top panel of Fig. 8 shows the precipitation differences caused by the two forcing datasets: in comparison with NARR, using R1 as the forcing generates more precipitation over the mountainous upper Rio Grande basin but less precipitation over the southern upper Rio Grande basin, although the amounts differ somewhat between the two domain configurations. The bottom panel of Fig. 8 shows the precipitation differences caused by the location of the domain boundary when the same forcing data are used: increasing the domain size generates less precipitation over the upper Rio Grande mountain areas but more precipitation over the southern upper Rio Grande basin. Figure 8 thus indicates that, when NARR is used in the warm season, enlarging the outer domain beyond D-1 can remove more of the precipitation bias over the upper Rio Grande basin than the current model domain (i.e., D-1) setup, and that using NARR as the forcing dataset can remove certain precipitation biases over the URGB mountainous areas.

2) Comparison between station data and downscaling output

Gridded precipitation data are usually interpolated from station measurements, and current interpolation techniques introduce errors into the gridded precipitation, especially in gauge-sparse mountainous regions. Using high-resolution models to compensate for the lack of available observations over mountainous areas may provide an alternative. Here, we evaluate precipitation for this purpose by comparing the measurements at stations with the interpolated precipitation (CPC-P) or the model precipitation (NR-P, R1-P, and NARR-P) at the grid box closest to each station.
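The station-to-grid comparison relies on picking, for each gauge, the grid box whose center is closest to the station. A minimal Python sketch of that matching step is given below; the array and function names are illustrative assumptions, not the authors’ code.

```python
import numpy as np

def nearest_grid_value(field2d: np.ndarray, lat2d: np.ndarray, lon2d: np.ndarray,
                       stn_lat: float, stn_lon: float) -> float:
    """Return the value of a gridded field at the grid box whose center is closest to a
    station. lat2d/lon2d are 2-D arrays of cell-center coordinates, as for model output."""
    # At 4-32-km spacing a simple squared-degree distance, with a cos(lat) weight on
    # longitude, is sufficient to pick the nearest cell (no great-circle math needed).
    w = np.cos(np.deg2rad(stn_lat))
    d2 = (lat2d - stn_lat) ** 2 + ((lon2d - stn_lon) * w) ** 2
    j, i = np.unravel_index(np.argmin(d2), d2.shape)
    return float(field2d[j, i])

# Hypothetical usage: monthly NR-P at the grid box nearest SNOTEL site 1034
# (33.40N, 105.79W); precip_nr, lat2d, lon2d would come from the 4-km MM5 output.
# value = nearest_grid_value(precip_nr, lat2d, lon2d, 33.40, -105.79)
```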

Over the study region (D4 or D-3), we found 15 routine rain gauges and 49 SNOTEL stations with uninterrupted measurements (see the triangle locations in Fig. 2). Figure 9 shows the monthly mean precipitation for the 64 stations and the corresponding precipitation from different sources, including CPC-P (25 km), NR-P (4 km), R1-P (4 km), and NARR-P (32 km). As shown in Fig. 2, most of the stations are located in the northern part of the study region; thus, Fig. 9 mainly represents the precipitation over the mountainous areas. Because the CPC-P data have been assimilated into the NARR system (Mesinger et al. 2006), NARR-P always matches CPC-P well. The seasonal precipitation variations derived from all five sources show the two-peak pattern and are similar to each other in the transition months of May, June, July, and December. In Fig. 9, NARR-P is the lowest, R1-P is the highest, and NR-P and the station data lie in the middle and are similar to each other. In comparison with the station data, CPC-P and NARR-P have negative biases, whereas NR-P and R1-P have positive biases; the NR-P bias is much smaller than that of R1-P.

Figure 10 shows scatterplots of monthly mean precipitation comparing the station measurements with the CPC-P, NR-P, R1-P, and NARR-P estimates at the corresponding grid boxes for the 5-yr period. The seasonal and annual statistics are listed in Table 2. The bias is calculated following Giorgi et al. (1994) as
$$\mathrm{bias}=\frac{1}{N}\sum_{i=1}^{N}\left(a_{m,i}-a_{o,i}\right). \qquad (1)$$
It measures the mean deviation of the estimates a_m from their observations a_o. Other statistics, such as the mean, the correlation coefficient, and the RMSE, are defined in the usual way.
To check the statistical significance of the results, a Student’s t test was applied to the calculated correlation coefficients. Given a significance level α (0.05 in this study) and the sample size N, the threshold Student’s t value t_a is obtained from the t table. The threshold correlation coefficient r_c is then calculated as
$$r_{c}=\frac{t_{a}}{\sqrt{N-2+t_{a}^{2}}}. \qquad (2)$$
In this case, r_c equals 0.254 at a significance level of 0.05. Because all the correlation coefficients in Table 2 are greater than 0.254, the correlations are statistically significant at monthly to seasonal time scales.
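The statistics defined in Eqs. (1) and (2) can be reproduced with a short Python sketch using synthetic data; the function and variable names are illustrative and not taken from the authors’ analysis scripts.

```python
import numpy as np
from scipy import stats

def precip_stats(a_m: np.ndarray, a_o: np.ndarray, alpha: float = 0.05):
    """Bias [Eq. (1)], RMSE, correlation, and the threshold correlation r_c [Eq. (2)]
    for a set of N paired estimates a_m and observations a_o."""
    n = a_m.size
    bias = float(np.mean(a_m - a_o))                 # Eq. (1)
    rmse = float(np.sqrt(np.mean((a_m - a_o) ** 2)))
    r = float(np.corrcoef(a_m, a_o)[0, 1])
    t_a = stats.t.ppf(1.0 - alpha / 2.0, df=n - 2)   # two-sided threshold t value
    r_c = t_a / np.sqrt(n - 2 + t_a ** 2)            # Eq. (2)
    return bias, rmse, r, r_c

# Sanity check: with N = 60 monthly pairs and alpha = 0.05, r_c comes out near 0.254,
# consistent with the threshold quoted in the text.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 1.0, 60)            # synthetic monthly precipitation "observations"
est = obs + rng.normal(0.0, 0.5, 60)     # synthetic gridded estimates
print(precip_stats(est, obs))
```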

The statistics in Fig. 10 and Table 2 were calculated based on the station’s measurements. They confirm the following conclusions we made based on the comparison with the gridded precipitation (i.e., CPC-P):

  1. CPC-P and NARR-P show negative biases in every season, whereas NR-P and R1-P show positive biases in every season. Considering that the grid box sizes of CPC-P and NARR-P are much larger than those of NR-P and R1-P and that most of the selected stations are located at high elevations, it can be concluded that CPC-P and NARR-P precipitation are underestimated in relation to the “ground truth.”

  2. In terms of the absolute value of the bias, NR-P and CPC-P have the smallest values (less than 0.35 mm day−1), NARR-P is intermediate (less than 0.52 mm day−1), and R1-P has the largest values (less than 0.76 mm day−1). In March–May (MAM), September–November (SON), and December–February (DJF), but not in June–August (JJA), NR-P has smaller biases than CPC-P. Clearly, downscaling using NARR forcing improved the precipitation estimates over the mountainous area.

  3. As noted by Roads et al. (1994) and others, gauge measurements may not capture all precipitation, especially in windy and snowy conditions, so the station measurements themselves are biased low relative to the ground truth. Therefore, the positive biases of NR-P and R1-P may be somewhat smaller than they appear.

  4. Table 2 shows that the correlation coefficients between the NR-P and R1-P precipitation and the station measurements are quite high for MAM, SON, and DJF (0.74–0.85 for NR-P and 0.63–0.79 for R1-P), but not for JJA. In addition, these correlation coefficients are close to those of CPC-P and NARR-P; CPC-P is interpolated from the station measurements, and NARR-P frequently assimilates CPC-P. This result indicates that the downscaling model can, in general, produce precipitation at the right location and time at monthly to seasonal scales, except in JJA. In JJA, the model did not perform well, partly because of uncertainties in the forcing data and in the selection of the outer domain location, as discussed above.

b. Some other surface hydrometeorological variables

This section examines certain MM5 surface variables using gridded analysis data and station data. The results indicate that a high spatial resolution model has the potential to obtain much more reliable meteorological fields, such as temperature and mixing ratio, especially when using NARR forcing data.

Figure 11 illustrates the monthly variations of T2, Q2, and the 10-m wind components over the upper Rio Grande basin; the differences between the downscaling results and the Eta Model analysis data are plotted on the right side. The surface temperature (NR-T2 and R1-T2) and the mixing ratio (NR-Q2 and R1-Q2) match the Eta Model analysis well at the monthly mean scale, whereas the wind fields show larger differences between the MM5 output and the Eta Model analysis. NR-T2 had warm biases in all months except June and October. R1-T2 exhibited warm biases in winter and in June, and cold biases in spring and the monsoon season. The 2-m mixing ratio differences indicate that, in comparison with the Eta Model analysis, NR-Q2 and R1-Q2 had dry biases in summer and wet biases in the other months. The variations of the MM5-modeled T2 and Q2 caused by the different forcing datasets are consistent with the corresponding precipitation. The mean U component indicates a prevailing westerly wind over the basin; however, the westerly wind in R1-U10 was weak from late summer to the following spring, whereas NR-U10 was close to the Eta Model analysis during the simulation period. In the V component, NR-V10 (in contrast with the Eta Model data) showed a weaker southerly wind than R1-V10, which is consistent with the variations in the low-level V flux shown in Fig. 5.

We also compared T2 and Q2 at each station with the value at the closest grid point; the related statistics are listed in Table 3 (figures not shown). At the monthly scale, all modeled temperatures correspond generally well to the station measurements. The results in Table 3 indicate that NR-T2 has the smallest bias (∼0.09°C) and the highest correlation coefficient among the four gridded datasets, whereas NARR T2 exhibits a mean bias as high as 1.54°C during the simulation period. The Q2 statistics in Table 3 also indicate that NR-Q2 has the smallest bias (∼0.05 g kg−1) and the mean closest to the station data, whereas NARR Q2 has a negative bias (−0.15 g kg−1). These results indicate that, at high resolution, the model can mitigate the biases of the NARR forcing data.

Figure 12 shows scatterplots of Tmax and Tmin comparing the station data with the downscaling results (Tmax and Tmin are not archived in the Eta Model analysis data or in the NARR data); the related statistics are also given in Table 3. The results in Table 3 and Fig. 12 show that NR-Tmax is better than R1-Tmax (NR-Tmax has lower biases and higher correlation coefficients). However, whichever forcing data were used, the model generated a negative (positive) bias in the daily maximum (minimum) temperature.

The seasonal statistics for T2 and Q2 were also calculated (figures not shown). In comparison with the station observations, the different gridded datasets behave differently. For example, Eta Model T2 shows a negative bias in DJF (−0.49°C) and MAM (−0.21°C), and very small biases in JJA and SON during the simulation period. NARR T2 exhibits a positive bias in all seasons, especially in JJA and SON, when the bias reaches 2.03° and 1.77°C, respectively. The Eta Model Q2 data exhibit positive biases in all seasons except JJA, when a very small negative bias (−0.08 g kg−1) appears. In contrast, the NARR Q2 data show a negative bias in JJA (−0.93 g kg−1) and SON (−0.1 g kg−1) and a positive bias in DJF and MAM. Two points may be summarized from the seasonal statistics of the Eta Model analysis and NARR data:

  1. The NARR data’s temperature and humidity features, including the variation in the low troposphere, are partly responsible for the NR-P underestimation in the warm season and overestimation in the cold season, although the NR-P is improved in comparison with R1-P.

  2. Although the Eta Model and NARR data are output from the same assimilation system (i.e., EDAS; see Mesinger et al. 2006), the two datasets exhibit different features.

4. Discussion

This comparative study confirms that high-resolution downscaling improves the capability of hydroclimatological prediction over the mountainous URGB, but the results can be strongly affected by many factors. In addition to the downscaling errors caused by the forcing data discussed above, we briefly discuss three possible error sources that have been presented in the literature and raised by the reviewers.

First, as reported by Qian et al. (2003), when the model is reinitialized frequently, its results become more realistic. Qian et al. tested model reinitialization every 10, 30, and 90 days and found that the model performed most accurately when it was reinitialized every 10 days. We conducted similar tests: we changed the monthly runs by reinitializing the model every 10 days, and for the 4-month runs, we reinitialized the model monthly. We found that this approach helps in the cold season (e.g., in October 2000, the monthly NR-P changed from 115 to 51 mm; in January 2001, the R1-P changed from 75 to 63 mm) but not in summer (e.g., in August 2001, the precipitation changed from 22.5 mm in the monthly initialized run to 17.1 mm in the run initialized every 10 days). This result is consistent with the features of the NARR forcing fields, which are improved in the cold season but show large warm and dry biases in the warm season.

The second possible error source has been discussed by Gochis et al. (2003), Liang et al. (2004), and others who reported that CPS would affect precipitation modeling over the southwestern United States. As mentioned in section 2, we have tested the results using the Kain–Fritsch CPS instead of the Grell CPS with the NARR forcing for the month of August in 1999 and 2001. When the Kain–Fritsch CPS was used, the results for rainfall over the URGB became less accurate. For example, in August 1999, the rainfall over the URGB was 45.5 mm when the Grell CPS was used, but it was reduced to 5.9 mm with the Kain–Fritsch CPS, whereas the CPC-P value was 79.4 mm. Similar severe biases have been reported for storm simulations over southern New Mexico (Warner and Hsu 2000).

The third potential error source relates to the need to further increase the spatial resolution and improve the microphysics schemes (e.g., Saleeby et al. 2007). W. R. Cotton (2007, personal communication) notes that a 4–5-km spatial resolution is not enough to fully resolve the scale of the clouds. At such a resolution, the model can only resolve entrainment processes on scales comparable to the grid spacing; it therefore underpredicts entrainment, convection is essentially wet adiabatic, the water content is too high, and the model overpredicts precipitation. Cotton suggests that to fully resolve the cloud scale, the resolution should be about 500 m. With respect to winter storms, Cotton et al. (2006) found that current microphysics schemes have inadequate parameterizations for the types of embedded convection that occur on the southwestern slopes of the mountains. The URGB is located in the southern part of the Rockies and therefore falls within the range of Cotton et al.’s hypothesis. In their most recent paper, Saleeby et al. (2007) use a bin-emulation approach to riming for the prediction of supercooled liquid water and precipitation, instead of the bulk riming scheme used in current mesoscale models. Their case study indicates that the bin-emulation approach can alleviate the wintertime precipitation overestimation; this is a promising approach for future research.

5. Summary

The results reported here are from a downscaling study for hydroclimate prediction over the URGB, located in the mountainous semiarid southwestern United States. A novel aspect of this study is the 5-yr-long (June 1999–September 2004) integration of a regional climate (mesoscale) model at a high grid resolution of 4 km, driven separately by two forcing datasets: the NCEP–NCAR global reanalysis-1 (R1) and the North American Regional Reanalysis (NARR). The results indicate the following:

  1. By downscaling to a 4-km grid mesh, the model, especially when driven by the NARR data, demonstrates a capability of predicting precipitation localization features that are highly correlated with the URGB’s complex terrain. When checked against observational data, the predicted climatological patterns of precipitation are physically plausible, and the precipitation amounts are close to, and sometimes even more accurate than, those of the interpolated station data (CPC-P) and of low-resolution but frequently assimilated model products such as NARR-P.

  2. The quality of the forcing data plays a crucial role in the downscaling approach to modeling. Many studies (see also Fig. 5) have shown that the NARR dataset provides higher resolution (32 km) and more realistic fields than the R1 dataset, so that the predicted hydroclimate variables forced by the NARR dataset are consistently better than those forced by the R1 dataset at monthly and seasonal scales, except for the monsoon season (JAS).

  3. In JAS, the downscaling results forced by the NARR dataset are substantially degraded and are even worse than the results forced by the R1 dataset. This is partly because the NARR forcing data exhibited dry biases during the monsoon seasons of the study period and partly because the outer domain (i.e., D-1) was an inappropriate selection. Sensitivity tests indicate that, when NARR is used, enlarging D-1 to the south removes more of the bias in warm-season precipitation over the upper Rio Grande basin than does the current D-1 setup.

  4. A comparative analysis using high-elevation SNOTEL precipitation measurements indicates that the precipitation data interpolated from the station observations (CPC-P) and the low resolution but frequently assimilated model precipitation data (NARR-P) are underestimated over the mountainous northern URGB.

We believe that, with continuing improvements in prediction skill and increasing computational power, high-resolution dynamic downscaling will, in the near future, become a major technique for meeting the increasing needs of regional climate, hydrologic, and water resource applications.

Acknowledgments

The suggestions and comments from the reviewers and the editor have been extremely helpful in revising this paper. Primary support for this research was provided under the NASA EOS Interdisciplinary Research Program (NNG04GK35G and NNG-5GA20G), NASA NEWS program (NNG06GB20G), the NOAA GAPP Program (NA04OAR4310086), and the NSF-STC Program (Agreement EAR-9876800). Li would like to thank Dr. Jimmy M. Ferng and other staff at the Computer Center and Information Technology, University of Arizona, for their help and support.

REFERENCES

  • Anderson, B. T., and Roads, J. O., 2002: Regional simulation of summertime precipitation over the southwestern United States. J. Climate, 15, 3321–3342.
  • Anderson, B. T., Kanamaru, H., and Roads, J. O., 2004: The summertime atmospheric hydrologic cycle over the southwestern United States. J. Hydrometeor., 5, 679–692.
  • Berbery, E. H., 2001: Mesoscale moisture analysis of the North American monsoon. J. Climate, 14, 121–137.
  • Bright, D., and Mullen, S., 2002: The sensitivity of the numerical simulation of the southwest monsoon boundary layer to the choice of PBL turbulence parameterization in MM5. Wea. Forecasting, 17, 99–114.
  • Chen, F., and Dudhia, J., 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model implementation and sensitivity. Mon. Wea. Rev., 129, 569–585.
  • Cotton, W., McAnelly, R., Carrió, G., Mielke, P., and Hartzell, C., 2006: Simulations of snowpack augmentation in the Colorado Rocky Mountains. J. Wea. Modif., 38, 58–65.
  • de Ela, R., Laprise, R., and Denis, B., 2002: Forecasting skill limits of nested, limited-area models: A perfect-model approach. Mon. Wea. Rev., 130, 2006–2023.
  • Dudhia, J., 1989: Numerical study of convection observed during the winter monsoon experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107.
  • Faccani, C., Ferretti, R., and Visconti, G., 2003: High-resolution weather forecasting over complex orography: Sensitivity to the assimilation of conventional data. Mon. Wea. Rev., 131, 136–154.
  • Giorgi, F., 1991: Sensitivity of simulated summertime precipitation over the western United States to different physics parameterizations. Mon. Wea. Rev., 119, 2870–2888.
  • Giorgi, F., Shields Brodeur, C., and Bates, G. T., 1994: Regional climate change scenarios over the United States produced with a nested regional climate model. J. Climate, 7, 375–399.
  • Gochis, D. J., Shuttleworth, W. J., and Yang, Z.-L., 2003: Hydrometeorological response of the modeled North American monsoon to convective parameterization. J. Hydrometeor., 4, 235–250.
  • Grell, G. A., 1993: Prognostic evaluation of assumptions used by cumulus parameterizations. Mon. Wea. Rev., 121, 764–787.
  • Higgins, R. W., Chen, Y., and Douglas, A. V., 1999: Interannual variability of the North American warm season precipitation regime. J. Climate, 12, 653–680.
  • Hong, S.-Y., and Pan, H.-L., 1996: Nonlocal boundary layer vertical diffusion in a medium-range forecast model. Mon. Wea. Rev., 124, 2322–2339.
  • Kain, J. S., and Fritsch, J. M., 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci., 47, 2784–2802.
  • Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. Bull. Amer. Meteor. Soc., 77, 437–471.
  • Kanamitsu, M., and Mo, K. C., 2003: Dynamical effect of land surface processes on summer precipitation over the southwestern United States. J. Climate, 16, 496–509.
  • Kim, J., Miller, N., Farrara, J., and Hong, S.-Y., 2000: A seasonal precipitation and stream flow hindcast and prediction study in the western United States during the 1997/98 winter season using a dynamic downscaling system. J. Hydrometeor., 1, 311–329.
  • Leung, L. R., and Qian, Y., 2003: The sensitivity of precipitation and snowpack simulations to model resolution via nesting in regions of complex terrain. J. Hydrometeor., 4, 1025–1043.
  • Leung, L. R., Qian, Y., and Bian, X., 2003: Hydroclimate of the western United States based on observations and regional climate simulation of 1981–2000. Part I: Seasonal statistics. J. Climate, 16, 1892–1911.
  • Li, J., Gao, X., Maddox, R. A., Sorooshian, S., and Hsu, K., 2003a: Summer weather simulation for the semiarid lower Colorado River basin: Case tests. Mon. Wea. Rev., 131, 521–541.
  • Li, J., Maddox, R. A., Gao, X., Sorooshian, S., and Hsu, K., 2003b: A numerical investigation of storm structure and evolution during the July 1999 Las Vegas flash flood. Mon. Wea. Rev., 131, 2038–2059.
  • Li, J., Gao, X., Maddox, R. A., and Sorooshian, S., 2005: Sensitivity of North American monsoon rainfall to multisource sea surface temperatures in MM5. Mon. Wea. Rev., 133, 2922–2939.
  • Li, J., Gao, X., and Sorooshian, S., 2007: Modeling and analysis of the variability of the water cycle in the upper Rio Grande basin at high resolution. J. Hydrometeor., 8, 805–824.
  • Liang, X.-Z., Kunkel, K. E., and Samel, A. N., 2001: Development of a regional climate model for U.S. Midwest applications. Part I: Sensitivity to buffer zone treatment. J. Climate, 14, 4363–4378.
  • Liang, X.-Z., Li, L., Kunkel, K. E., Ting, M., and Wang, J. X. L., 2004: Regional climate model simulation of U.S. precipitation during 1982–2002. Part I: Annual cycle. J. Climate, 17, 3510–3529.
  • Mesinger, F., and Coauthors, 2006: North American Regional Reanalysis. Bull. Amer. Meteor. Soc., 87, 343–360.
  • Mo, K. C., Chelliah, M., Carrera, M. L., Higgins, W. R., and Ebisuzaki, W., 2005: Atmospheric moisture transport over the United States and Mexico as evaluated in the NCEP regional reanalysis. J. Hydrometeor., 6, 710–728.
  • Mo, K. C., Schemm, J. E., Kim, H., and Higgins, W. R., 2006: Influence of initial conditions on summer precipitation simulations over the United States and Mexico. J. Climate, 19, 3640–3658.
  • Nigam, S., and Ruiz-Barradas, A., 2006: Seasonal hydroclimate variability over North America in global and regional reanalysis and AMIP simulations: Varied representation. J. Climate, 19, 815–837.
  • Qian, J.-H., Seth, A., and Zebiak, S., 2003: Reinitialized versus continuous simulations for regional climate downscaling. Mon. Wea. Rev., 131, 2857–2874.
  • Roads, J. O., Chen, S.-C., Guetter, A. K., and Georgakakos, K. P., 1994: Large-scale aspects of the United States hydrologic cycle. Bull. Amer. Meteor. Soc., 75, 1589–1610.
  • Roebber, P. J., Schultz, D. M., Colle, B. A., and Stensrud, D. J., 2004: Toward improved prediction: High-resolution and ensemble modeling system in operations. Wea. Forecasting, 19, 936–949.
  • Saleeby, S., Cheng, W., and Cotton, W., 2007: New developments in the Regional Atmospheric Modeling System suitable for simulations of snowpack augmentation over complex terrain. J. Wea. Modif., 39, 37–49.
  • Schmitz, J. T., and Mullen, S. L., 1996: Water vapor transport associated with the summertime North American monsoon as depicted by ECMWF analyses. J. Climate, 9, 1621–1634.
  • Seth, A., and Giorgi, F., 1998: The effects of domain choice on summer precipitation simulation and sensitivity in a regional climate model. J. Climate, 11, 2698–2712.
  • Warner, T. T., and Hsu, H.-M., 2000: Nested-model simulation of moist convection: The impact of coarse-grid parameterized convection on fine-grid resolved convection. Mon. Wea. Rev., 128, 2211–2231.
  • Westrick, K. J., and Mass, C. F., 2001: An evaluation of a high-resolution hydrometeorological modeling system for prediction of a cool-season flood event in a coastal mountainous watershed. J. Hydrometeor., 2, 161–180.
  • Westrick, K. J., Storck, P., and Mass, C. F., 2002: Description and evaluation of a hydrometeorological forecast system for mountainous watersheds. Wea. Forecasting, 17, 250–262.

Fig. 1.
Contours of the topography over the upper Rio Grande basin: (left) 12-km resolution and (right) 4-km resolution.

Fig. 2.
Model domain setup. NCEP–NCAR reanalysis data are used with D1, D2, D3, and D4. NARR data are used with D-1, D-2, and D-3. D-2 and D3 are the same coverage, whereas D-3 and D4 are the same coverage. (left) Whole domains shown with G1 and G2. (right) Enlarged D3 (D-2) and D4 (D-3) showing the boundary of the upper Rio Grande basin (solid line). Triangles in the figure represent surface observation stations.

Fig. 3.
Precipitation distribution from June 1999 to September 2004. (top) Mean precipitation for CPC 0.25° gauge data (CPC-P), MM5 precipitation in D4 with R1-P, and MM5 precipitation D-3 with NR-P. Besides the upper Rio Grande basin boundary, the dashed circles and solid circles are explained in the text. (bottom) Difference between NR-P and R1-P.

Fig. 4.
Precipitation monthly mean (mm month−1) over the basin showing (a) over the upper part of the upper Rio Grande basin (>36°N) and (b) over the southern part of the upper Rio Grande basin (<36°N).

Fig. 5.
(top left) Modeled NR-T2, (top middle) R1-T2, and (top right) their differences for JAS from 1999 to 2004. (bottom) Same as the top row, but for low-level water vapor flux vectors. The gray shaded areas denote V-flux values.

Fig. 6.
(a) Mean temperature, relative humidity, and U and V components at EPZ from July 1999 to September 2004: observations (line), the NCEP reanalysis data (filled triangle), the NARR data (filled square), the MM5 result with NCEP reanalysis forcing (R1 run; empty triangle), and the MM5 result with NARR forcing (NARR run; empty square). (b) Same as in (a), but showing differences from the observations: R1 minus observations (filled triangle), NARR minus observations (filled square), MM5 with R1 forcing minus observations (empty triangle), and MM5 with NARR forcing minus observations (empty square).


Fig. 7. Mean precipitable water comparison between R1 and NARR for July, August, September, and October from 1999 to 2004. The NARR coverage does not reach the southwest corner of the domain (i.e., the D1 coverage in MM5).

Fig. 8. Mean precipitation differences over land between the sensitivity runs (T1, T2, T3, and T4) from July to October 1999.

Fig. 9. Monthly mean precipitation from 64 stations and from the grid point closest to each station, June 1999–September 2004.

Fig. 10. Scatterplot of monthly mean precipitation between the station measurements and the CPC-P, NR-P, R1-P, and NARR-P estimates at the corresponding grid boxes for the 5-yr study period.
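The comparisons in Figs. 9 and 10 rest on pairing each station with the closest grid box before forming monthly means. The sketch below illustrates one way such a nearest-grid-box match might be done; the array names and the simple distance metric are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def nearest_grid_series(grid_precip, grid_lat, grid_lon, stn_lat, stn_lon):
    """Precipitation time series at the grid box closest to a station.

    grid_precip: (ntime, ny, nx) monthly precipitation
    grid_lat, grid_lon: (ny, nx) grid coordinates in degrees
    """
    # Longitude differences are scaled by cos(latitude) so the two axes are
    # roughly comparable; adequate for a basin-scale nearest-neighbor search.
    dist2 = (grid_lat - stn_lat) ** 2 + (
        (grid_lon - stn_lon) * np.cos(np.deg2rad(stn_lat))) ** 2
    j, i = np.unravel_index(np.argmin(dist2), dist2.shape)
    return grid_precip[:, j, i]
```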

Fig. 11. Basin-scale comparison of surface variables between the model mean and the Eta Model analysis data, and their differences.

Fig. 12. Same as in Fig. 10, but for 2-m daily maximum and minimum temperature between the station measurements and the NR and R1 estimates.

Table 1. Model and physics parameters.

Table 2. Precipitation statistics.

Table 3. Statistics of surface meteorological fields.
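The precipitation statistics in Table 2 are defined in the main text; as a generic, hedged illustration only, bulk scores such as bias, RMSE, and correlation for matched simulated and observed series could be computed as follows (function and variable names are hypothetical).

```python
import numpy as np

def bulk_scores(sim, obs):
    """Bias, RMSE, and Pearson correlation for matched 1-D series
    (e.g., monthly precipitation at the same station/grid pairs)."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    corr = np.corrcoef(sim, obs)[0, 1]
    return bias, rmse, corr
```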