1. Introduction
Accurate projections of future regional-scale climates are needed to assess the possible societal impacts of climate change. These may include impacts on water availability, agriculture, human health, air quality, and so on. Uncertainties in projections of regional-scale climate change complicate the process of assessing societal impacts and of making policy decisions to cope with climate change. Systematic studies are needed to quantify uncertainties in regional climate changes, to identify the sources of those uncertainties, and ultimately to reduce them.
In this paper we start the process of evaluating uncertainties in future climate in the western United States, by intercomparing simulations of this region performed with four regional climate models (RCMs) nested within two different global climate models (GCMs). Our goals are to assess 1) how well the different RCM/GCM combinations simulate aspects of the present climate in this region; and 2) the intermodel range of projected regional climate responses to increased atmospheric greenhouse gases. We also assess the skill of spatiotemporal detail produced by dynamical downscaling. Because we are particularly interested in the possible impacts of climate change on water availability, we focus on meteorological variables relevant to this problem: near-surface temperatures, precipitation, and water-equivalent snow depth. We emphasize that errors in the RCM results are not necessarily due to problems in the RCM itself, but often instead reflect errors in the GCM-based lateral boundary conditions. Thus in our analyses of present-climate simulations we are not evaluating the RCMs per se but rather the coupled RCM/GCM models.
Our approach has some significant limitations. First, the RCM simulations we analyzed use different spatial resolutions, different geographical domains, different increased-greenhouse-gas scenarios for future-climate simulations, and (in some cases) different lateral boundary conditions. Thus, this is not a formal model intercomparison study, but rather an attempt to learn from available simulations. Several carefully controlled studies are underway, however: the U.S. Project to Intercompare Regional Climate Simulations (PIRCS; Takle et al. 1999); the European Prediction of Regional Scenarios and Uncertainties for Defining European Climate Change Risks and Effects (PRUDENCE; Christensen et al. 2002) project; the Canadian Climate Impacts Scenarios (CCIS; information available online at http://www.cics.uvic.ca/scenarios/database/index.cgi) project; the North American regional Climate Change Prediction Project (NARCCAP); and the Regional Model Intercomparison Project (RMIP 2003) for Asia. Second, it is important to avoid equating future-climate uncertainties with intermodel differences. There are important uncertainties (notably in future greenhouse gas levels and other climate perturbations) that are external to climate models. Also, of course, there may be important errors common to all the models we examine. For these reasons, intermodel differences in projected future climates may be smaller than actual uncertainties in future climate; indeed, the true future climate may lie outside the envelope of model projections. Finally, the approach of evaluating future-climate uncertainties by assessing intermodel differences implicitly assumes that all models are equally credible. In principle, models that do a better job of simulating the present climate should give more credible projections than other models do. However, Coquard et al. (2004), in examining simulations of western United States climate from the Coupled Model Intercomparison Project (CMIP), found that models that did relatively poorly at simulating a range of observations made future-climate projections that were statistically indistinguishable from those made by models that simulated the same observations relatively well.
Several previous studies have simulated present and/or future climates in the western United States using nested regional climate models. In fact, the first use of a nested RCM to simulate climate (Dickinson et al. 1989) focused on this region. While the large-scale climate responses to anthropogenic perturbations are strongly influenced by the driving global model, nested RCMs can be used to assess the spatiotemporal variations (e.g., seasonal dependences) of climate responses within the model domain. For example, Giorgi et al. (1994) found different regional-scale responses to increased atmospheric CO2 in a nested version of the Pennsylvania State University (PSU)–National Center for Atmospheric Research (NCAR) Mesoscale Model version 4 (MM4) regional model than in the driving global model [the NCAR Global Environmental Ecological Simulation of Interactive Systems (GENESIS)]. In some instances these differences were plausibly attributed to surface forcings that were represented more realistically in the nested model. Similarly, Kim (2001) and Kim et al. (2002) found stronger precipitation and temperature responses (increases) at higher elevations in California. These were not seen in the driving model, whose representation of topography was limited by its coarse spatial resolution. Elevation-dependent climate change signals were also found by Giorgi et al. (1997) and by Leung and Ghan (1999b), who nested the Pacific Northwest National Laboratory (PNNL) version of the fifth-generation PSU–NCAR Mesoscale Model (MM5) within the NCAR Community Climate atmospheric model (CCM3); this in turn was forced by prescribed SSTs and sea ice concentrations. Leung and Ghan found that the climate change signals in the nested model could differ significantly from those in the driving GCM. 
For example, wintertime temperature responses were greater in the nested model because of snow–albedo feedbacks, which are absent in the driving model because its coarse resolution prohibits formation of snow in the study area. Thus, again, better representation of surface forcings allowed the nested model to produce different climate responses from the driving global model.
Snyder et al. (2002) used an approach similar to that of Leung and Ghan in that they used a version of the Second-Generation Regional Climate Model (RegCM2) nested within the CCM3 atmospheric model, which in turn was forced by prescribed SSTs. They found that, in response to doubled atmospheric CO2, near-surface temperatures increased more inland and at higher elevations, and precipitation increased more in the northern part of California.
The methodological issue of assessing the added skill produced by dynamical downscaling, which we address in section 4, has also been addressed previously. Giorgi et al. (1994) showed that downscaled precipitation produced by the nested MM4 model is superior, by several objective measures of skill, to precipitation in the driving GCM (GENESIS). Kim and Lee (2003) showed that precipitation simulated by the Mesoscale Atmospheric Simulation (MAS) model driven by reanalysis generally agrees more closely with rain gauge data than the reanalysis itself does. Similar findings were obtained by Laprise et al. (1998), Christensen et al. (1998), and Leung and Ghan (1999a). Among the conclusions of Pan et al. (2001) is that, for simulated December–February (DJF) precipitation in California, errors introduced by dynamical downscaling with the RegCM2 and the combined High Resolution Limited Area Model (HIRLAM) and ECHAM regional climate model (HIRHAM) nested RCMs exceed those due to errors in lateral boundary conditions from the HadCM2 global climate model. A few studies (Bhaskaran et al. 1998; Hassell and Jones 1999) have shown that nested RCMs can produce better simulations of seasonal time-scale variability than the driving GCM. On daily time scales, nested RCMs can produce better simulations of the statistics of extreme precipitation events than the driving GCM (Christensen et al. 1998; Durman et al. 2001). For California, Kim (2004) found that increased atmospheric CO2 results in an increase in the frequency of days with greater than 0.5 mm of precipitation at all elevations; this results entirely from an increase in the frequency of days with strong precipitation events. In some regions, dramatic increases in the frequency of strong precipitation events were found.
2. Description of models, simulations, and observations
We analyzed simulations of present and future climates performed with four different RCMs. These RCM simulations were all driven by lateral boundary conditions from global ocean–atmosphere general circulation models. The simulations with the MM5 and Regional Spectral Model (RSM) nested models were both driven by results from the NCAR–Department of Energy (DOE) Parallel Climate Model (PCM); however, different PCM simulations were used for the two RCMs. The MAS and RegCM2 simulations were both driven by the same simulation performed with the HadCM2 GCM. Salient properties of the models and simulations are listed in Table 1. For all simulations, 10 yr of results were analyzed (although the MM5 simulations are longer). Each of the models is described briefly below.
The RSM was developed at the National Centers for Environmental Prediction (NCEP; Juang and Kanamitsu 1994; Juang et al. 1997) to provide a physically consistent regional model for the NCEP global model. The RSM was further modified at the Experimental Climate Prediction Center (ECPC; Roads et al. 2003) at the Scripps Institution of Oceanography, University of California, San Diego. The version of the RSM and the regional climate simulations analyzed here were previously described by Han and Roads (2004).
The PNNL regional climate model was developed based on MM5 (Grell et al. 1993). Leung et al. (2003) described the model configuration and results of a 20-yr simulation driven by the NCEP reanalysis. The simulations analyzed here were driven by one PCM control simulation and an ensemble of three PCM simulations of the future climate following a business-as-usual scenario. Leung and Qian (2003a) and Leung et al. (2004) analyzed the hydrologic impacts of climate change in the Columbia River Basin, Sacramento–San Joaquin Basin, and the Georgia Basin/Puget Sound region based on the regional simulations. In this study, we analyzed only the ensemble mean of the three PCM and regional climate simulations of the future climate, although differences among the ensemble members are quite large even when averaged over 2040–60.
The MAS model is a limited-area atmospheric model written on a sigma coordinate (Soong and Kim 1996). Atmospheric–land surface interactions are computed by the Soil–Plant–Snow (SPS) model (Mahrt and Pan 1984; Kim and Ek 1995), which is interactively coupled with the MAS. The coupled MAS–SPS model was originally developed at the Lawrence Livermore National Laboratory (LLNL) and is currently developed and used at the University of California, Los Angeles (UCLA) for regional climate and extended-range forecast studies. Earlier analyses of the regional climate change data used in this study were presented by Kim (2001, 2004) and Kim et al. (2002). In addition, the performance of the MAS model used for this study was evaluated by Kim and Lee (2003) in an 8-yr hindcast study.
The RegCM2 (Giorgi et al. 1993a, b) simulation performed by Iowa State University (ISU) computed precipitation using a simplified version (Giorgi and Shields 1999) of the Hsie et al. (1984) explicit moisture scheme and the Grell et al. (1993) convection parameterization. The model also used the Biosphere–Atmosphere Transfer Scheme (BATS) Version 1e (Dickinson et al. 1992) land surface model and the Holtslag et al. (1990) nonlocal boundary layer turbulence parameterization. Radiative transfer used the CCM2 radiation package (Briegleb 1992). Pan et al. (2001) give further details of the simulation and discuss general features of the precipitation output and its change under greenhouse warming.
We evaluate these simulations by comparing them to a range of observational data products. In general, these are gridded (i.e., spatially complete) products that have been produced by applying physically based spatial interpolation methods to sparse observations. The exception is near-surface temperature data from the Global Historical Climatology Network (GHCN), whose station data we display after averaging onto a 0.5° grid. The value we show in each 0.5° box is the mean of all stations within that box; if there are no stations in the box, that value is missing. Thus, no interpolation was performed on these data. Table 2 lists salient properties of the observational datasets used in this study.
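The box-averaging procedure described above can be sketched as follows. This is a minimal illustration, not the code actually used for the GHCN processing; the function name, grid conventions, and edge handling are our own assumptions.

```python
import numpy as np

def grid_box_average(lat, lon, values, res=0.5):
    """Average station values onto a regular lat/lon grid.
    Boxes containing no stations are left as NaN (missing);
    no spatial interpolation is performed."""
    lat = np.asarray(lat, float)
    lon = np.asarray(lon, float)
    values = np.asarray(values, float)
    lat_edges = np.arange(-90.0, 90.0 + res, res)
    lon_edges = np.arange(-180.0, 180.0 + res, res)
    # Sum of station values and station counts per grid box
    sums, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges],
                                weights=values)
    counts, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
    with np.errstate(invalid="ignore"):
        grid = sums / counts  # NaN wherever counts == 0
    return grid
```

Because empty boxes stay missing rather than being filled by interpolation, the resulting field can be compared to model output only where stations actually exist.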
3. Results
a. Present climate: Seasonal means
We start by analyzing the ability of the four RCM/GCM simulations to reproduce aspects of the present climate in the western United States. All four simulations overestimate spatially averaged monthly mean wintertime precipitation, in some months by as much as a factor of 2 (Fig. 1). This figure also suggests that these biases in the RCM results are due to similar biases in the driving GCMs. (That is, the RCMs produce too much precipitation because too much moisture enters their domains from the GCM.) This is consistent with the findings of Coquard et al. (2004), who showed that global climate models tend to overestimate precipitation in the western United States. However, the RSM, for example, overpredicts western United States precipitation when forced with lateral boundary conditions from reanalysis (Han and Roads 2004), as does RegCM2 (Pan et al. 2001; Gutowski et al. 2004); thus the tendency to overpredict western United States precipitation may be to some extent inherent in the RCMs. The MAS model, on the other hand, has little systematic bias in monthly mean precipitation when forced with reanalysis boundary conditions (Miller et al. 1999). Although the different control simulations have similar spatially averaged precipitation amounts, the RCMs differ in how they distribute wintertime precipitation spatially (Fig. 2). The too-wet bias is also apparent in Fig. 2.
The impressions gained from Fig. 2 are quantified in Fig. 3a, a Taylor diagram (Taylor 2001) evaluating monthly mean, spatially resolved precipitation in the RCMs and GCMs discussed here. This diagram compares simulated spatially resolved quantities (in this case precipitation) to gridded observations [in this case, from the Vegetation/Ecosystem Modeling and Analysis Project (VEMAP)]. Before these comparisons were made, the model results were interpolated to the grid of the observed dataset (in this case 0.5° × 0.5°). This fine grid (rather than, say, the grid of one of the GCMs) is used in order to allow us to assess to what extent dynamical downscaling adds value in the sense of enhancing the finescale solution. (This issue is discussed in section 4.) One caveat is that gridded observational data can be biased because of the uneven distribution of stations. In particular, a lack of high-elevation stations in the western United States can cause systematic errors such as underestimation of spatial variability and warm biases in near-surface air temperature (Kim and Lee 2003). Two statistics are shown, both based on climatological monthly mean values at each location. The radial coordinate represents the standard deviation of model results divided by the standard deviation of observed values. This compares the magnitude of simulated spatiotemporal variability to observed variability; it confirms that, for western United States precipitation, the MAS model has the most variability of all models considered, and more variability than observed precipitation. It might seem natural to attribute this to the model's spatial resolution, which is finer than that of the other nested models and of the VEMAP data. However, it is clear from Figs. 2 and 3 that the spatial variability in precipitation is higher in the MAS model results than in the National Oceanic and Atmospheric Administration (NOAA) observations, which use an even finer grid.
So the spatial variability in precipitation in the MAS model seems to be excessive, and indicative of a problem in the model. Figure 3a also shows that the RCMs have more spatiotemporal variability in simulated western United States precipitation than the GCMs do; it is typical for coarser-resolution models to have less variability. The angular coordinate in Fig. 3 is the correlation coefficient between model results and observations; this measures the extent to which the maxima and minima in simulated quantities occur at the correct locations and times. Figure 3a shows that RCM-simulated precipitation correlates more strongly with observed precipitation than GCM-simulated precipitation does. In the Taylor diagram the results of an ideal model would be plotted on the horizontal axis at a radial coordinate value of 1; the distance on the plot from this ideal point measures the root-mean-square error (rmse) in the model results. Thus, of all the models considered here, the MM5 model has the smallest rmse in western United States precipitation. To give a feel for the importance of observational uncertainties, we plot in Fig. 3a the NOAA observational dataset in the same manner as the models.
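The three statistics that a Taylor diagram encodes can be computed directly from paired model and observed fields. The sketch below is our own illustration following Taylor (2001); it is not taken from the analysis code behind Fig. 3.

```python
import numpy as np

def taylor_stats(model, obs):
    """Pattern statistics for a Taylor diagram (Taylor 2001):
    the correlation coefficient (angular coordinate), the normalized
    standard deviation (radial coordinate), and the centered rms
    error normalized by the observed standard deviation."""
    m = np.asarray(model, float).ravel()
    o = np.asarray(obs, float).ravel()
    m_anom = m - m.mean()
    o_anom = o - o.mean()
    corr = np.corrcoef(m, o)[0, 1]
    nsd = m_anom.std() / o_anom.std()
    crmse = np.sqrt(np.mean((m_anom - o_anom) ** 2)) / o_anom.std()
    # The law-of-cosines relation crmse**2 = nsd**2 + 1 - 2*nsd*corr
    # is what allows one point on the diagram to encode all three.
    return corr, nsd, crmse
```

A perfect simulation plots at corr = 1, nsd = 1, crmse = 0; the distance from that point on the horizontal axis is the normalized rmse quoted in the text.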
The RCM/GCM control climates also show some significant biases in spatially averaged monthly mean near-surface temperatures (Fig. 4). All the control climates are too cold in late winter and spring; the HadCM2/RegCM2 (Iowa State) simulation is also too cold in summer. The biases in the RSM and MM5 results seem to result from similar biases in the driving PCM simulation. Maps of annually averaged near-surface temperatures (Fig. 5) show that all the RCMs simulate the basic spatial pattern of near-surface temperature quite well; this is not surprising, as this pattern is strongly determined by topographic variations. As with precipitation, the MAS model shows the most spatial variability of the four RCMs in near-surface temperatures. This likely results, at least in part, from the higher spatial resolution used in this model compared to the other RCMs, which allows a more accurate representation of topography.
A Taylor diagram (Fig. 3b) shows that simulated near-surface temperatures correlate much more strongly with observed values than simulated precipitation does. As with precipitation, the MAS results have the most spatiotemporal variability of all the simulations considered here. Figure 3b also shows that three of the RCMs have higher correlation coefficients (relative to the VEMAP data) in near-surface temperatures than does the NCEP reanalysis. One of these models has a smaller rmse in near-surface temperature than does NCEP. This no doubt results from the relatively coarse spatial resolution of the reanalysis, which makes it unable to capture topographically induced variations in near-surface temperatures.
Compared to data from the National Operational Hydrologic Remote Sensing Center (NOHRSC), the PCM/RSM (ECPC) and PCM/MM5 (PNNL) simulations severely underestimate water-equivalent snow depths, or snow-water equivalent (SWE), in the western United States in every month in which snow is observed (Fig. 6). The other control simulations overestimate SWE in spring, summer, autumn, and early winter. Maps of seasonal-mean SWE for March–May (MAM; Fig. 7) confirm the biases seen in the spatially averaged SWE results. In addition, Fig. 7 shows some significant apparent errors in the simulated spatial distributions of SWE. For example, the RegCM2 model predicts too much SWE in Nevada and eastern Oregon, and too little in the Cascade Mountains of Oregon and Washington.
The snow amounts seen in the RSM and MM5 control simulations appear to be inconsistent with those simulations' biases in monthly averaged near-surface temperature and precipitation. Specifically, although these simulations underpredict SWE relative to NOHRSC in every month when snow is observed, both these simulations overestimate regionally averaged precipitation throughout the rainy season (Fig. 1), and underestimate regionally averaged near-surface temperatures from January onward (Fig. 4). This suggests that these simulations should overestimate SWE, the opposite of what we find.
To shed light on this puzzle, in Fig. 8 we show scatterplots of monthly mean near-surface temperature biases versus monthly mean precipitation biases. Here, the bias at each location is defined to be the difference between the monthly mean model result and the climatological monthly mean observed value from VEMAP. To confine the analysis to locations where snow is on the ground, we show results only for November through March, and only at locations where the observed NOHRSC SWE exceeds zero. (We interpolated all results to the VEMAP grid for this analysis; each point on the plot therefore corresponds to a 0.5° × 0.5° grid cell.) All the control simulations are predominantly biased toward being too cold and wet, which should lead to too much SWE. For example, the median bias in near-surface temperature is −3.24°C in the RSM simulation and −1.70°C in the MM5 simulation; median precipitation biases are 1.00 and 0.79 mm day−1, respectively. Thus, in most locations where snow cover is observed, these simulations are too cold and too wet. In some locations, the temperature biases in the RCM results exceed 15°C. The same scatterplot analysis using an alternative near-surface temperature dataset obtained from the Surface Water Modeling group at the University of Washington (http://www.hydro.washington.edu/Lettenmaier/gridded_data/), the development of which is described by Maurer et al. (2002), gave very similar results (not shown).
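The masking and median statistics behind this kind of scatterplot analysis amount to the following. This is a schematic reconstruction; the array names and shapes are our assumptions, not taken from the original analysis code.

```python
import numpy as np

def snow_masked_biases(model_t, obs_t, model_p, obs_p, obs_swe):
    """Median per-cell temperature and precipitation biases, restricted
    to cell/month combinations with observed snow on the ground
    (obs_swe > 0).  All fields share one grid, e.g. the 0.5 deg VEMAP
    grid, with shape (nmonths, nlat, nlon)."""
    mask = obs_swe > 0.0
    t_bias = (model_t - obs_t)[mask]
    p_bias = (model_p - obs_p)[mask]
    return np.median(t_bias), np.median(p_bias)
```

Restricting the medians to snow-covered cells is what ties the cold/wet biases directly to the locations where SWE errors can arise.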
Thus, it is not clear from the meteorology shown in Figs. 1, 4 and 8 why the RSM and MM5 control simulations should underestimate snow amounts. One possibility is that snow amounts may increase nonlinearly with surface elevation; if this is the case, then in simulations such as those analyzed here where topography is underresolved, one would expect SWE to be underestimated. Another possibility involves daily time-scale temperature and precipitation errors in these simulations. Specifically, our findings could result from positive temperature errors on days with large precipitation amounts (i.e., if the models are too warm during strong precipitation events). This has been seen in some regions of the western United States in other simulations with the MM5 model (Leung et al. 2003). Without access to daily temperature and precipitation results, however, we cannot determine if this is occurring in these RCM simulations.
Defects in representations of land surface processes can also cause large snow-accumulation biases. Leung and Qian (2003b) analyzed regional climate simulations driven by the NCEP reanalysis for the western United States and found a large negative bias in snowpack. Their analysis suggested that up to 50% of the snowpack bias was related to temperature and precipitation biases, but deficiencies in the land surface model likely accounted for a substantial part of the remainder. Similar results were seen in a recent intercomparison in which 21 land surface models were forced with observed meteorology in 18-yr simulations (Slater et al. 2001). Since snowfall was prescribed in these simulations, all intermodel differences in SWE, and, in principle, any model biases in SWE, result from inadequacies in the land surface models. This intercomparison revealed large (up to a factor of ∼4) intermodel scatter in simulated SWE. The models' biases relative to observed SWE were predominantly positive in some years and negative in others, and early season biases tended to persist throughout the snow year. Further implicating land surface model defects as a major source of bias is the fact that both the RSM and MM5 used a land surface model based on the Oregon State University (OSU) scheme, which represents the snowpack with a single layer. Clearly, defects of this sort in land surface models could be an important factor in the SWE biases of the RCMs considered here.
Finally, the apparent inconsistency between the RSM and MM5 simulations' biases in near-surface temperature and precipitation and their biases in SWE could result at least in part from the limited number of years represented in the NOHRSC snow data. Specifically, this dataset represents only 1996–2000, which may have more snow than normal in part because of the strong El Niño in 1997–98. A snow dataset including more years might result in smaller apparent model biases.
b. Present climate: Interannual variability
Interannual variations in climate in the western United States have important societal impacts. Variations in precipitation can be particularly important, resulting in stress on water infrastructure, floods, and mudslides.
The primary source of interannual variability in the study region is El Niño–Southern Oscillation (ENSO), which introduces variability primarily on time scales of 4 to 7 yr. ENSO affects spatially averaged precipitation in the study region by varying the amount of moisture advected into the region. The two PCM simulations have very different estimates of interannual variability of spatially averaged precipitation (Fig. 9a); however, the nested RCMs (MM5 and the RSM) closely reproduce the interannual variability of their respective driving models (whether or not this is close to correct). A similar situation obtains for near-surface temperature (Fig. 9b). Here, both PCM simulations overestimate winter-season interannual variability, and this problem is reproduced by the nested models. Thus, as is perhaps to be expected, dynamical downscaling seems to have little effect on the interannual variability of spatially averaged surface variables. (Effects on spatially resolved surface variables are discussed in section 4 below.)
The control climates overestimate interannual variability of monthly mean, regionally averaged precipitation in nearly every month (Fig. 9). This is perhaps to be expected given that the monthly mean precipitation itself is also too high in all the RCMs. The RCM errors in both mean precipitation and interannual variability of monthly mean precipitation are largest in the winter months, when precipitation is also largest. All the simulations successfully represent the higher interannual variability in February relative to January and March. In January and February, the two PCM simulations differ greatly from each other in interannual variability of precipitation; however, each RCM nonetheless seems to closely follow its driving GCM.
Observations show relatively little seasonal cycle in interannual variability of monthly mean, regionally averaged near-surface temperatures (Fig. 9). The PCM model and the RCMs driven by PCM, however, show more variability in winter than in summer and more variability than is observed in winter. The two RCMs that were driven by HadCM2 (RegCM2 and MAS) do better at estimating wintertime variability in time- and space-averaged near-surface temperatures.
Maps of interannual variability of seasonal-mean precipitation (Fig. 10) show that locations of high interannual variability generally coincide with locations of high seasonal-mean precipitation (Fig. 2). The RCMs generally reproduce the observed spatial pattern of high interannual variability over the mountains of California, Oregon, and Washington, and low variability over the dry regions of eastern Oregon, eastern Washington, and Nevada. The RegCM2 model, however, has too little variability in the mountains and too much in the dry regions. The RSM does not reproduce the observed high variability over the mountains of Washington and Oregon.
Maps of interannual variability of seasonal-mean near-surface temperature (Fig. 11) show that the excessive variability seen in the RSM's spatially averaged temperatures (Fig. 9b) is due primarily to excessive variability inland (in eastern Oregon and Washington, Idaho, and northern Nevada). This clearly results from excessive variability in the same locations in the driving PCM simulation.
c. Simulated responses to increased greenhouse gases
We start by examining the simulated response of regional precipitation to increased CO2. In the two RCMs driven by results from the PCM global model, the regionally averaged monthly mean response is consistent with zero in every month (Fig. 12). This reflects a similarly insignificant regional precipitation response in the PCM results. Especially in the MM5 results, it is striking how closely the RCM response follows that of PCM; this similarity includes not only the multiyear average response, but also the magnitude of interannual variability (indicated by error bars in Fig. 12). Because the comparison is made over a geographical area that is significantly smaller than the domain of the nested model, it is noteworthy that the level of agreement is as high as it is. The lack of a significant precipitation response in PCM and in the RCMs driven by PCM is consistent with the generally weak climate sensitivity of the PCM model to increased greenhouse gases (Barnett et al. 2001), and with the relatively small CO2 increases considered in these simulations (1.36× and 1.41×).
To avoid confusing a response to increased CO2 with interannual variability, we assessed the statistical significance of simulated precipitation responses relative to interannual variability at each model grid cell. This was done using a two-sided Student's t test. The RCM simulations driven by results from PCM show almost no area where the simulated precipitation response is significant at a 90% or greater confidence level (Fig. 13). This no doubt results, as noted above, from the relatively small CO2 increases used in these simulations, and the low sensitivity of the PCM model to increased atmospheric CO2. The MAS and RegCM2 simulations show statistically significant increases in precipitation in northern California, eastern Oregon, and central Idaho. The spatial pattern of simulated precipitation response is quite similar in these two models, when only regions with statistically significant responses are considered. This pattern of precipitation response is similar to that found by Snyder et al. (2002), who used the CCM3 GCM to drive RegCM2.
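The per-grid-cell significance screening described above can be sketched with `scipy.stats`. This is our illustration of the general procedure; the original computation may differ in detail (e.g., in how variances are pooled).

```python
import numpy as np
from scipy import stats

def precip_response_mask(ctrl, pert, alpha=0.10):
    """Mean precipitation response and a boolean mask of grid cells
    where the response is distinguishable from interannual variability,
    via a two-sided Student's t test at the (1 - alpha) confidence
    level.  ctrl and pert have shape (nyears, nlat, nlon)."""
    response = pert.mean(axis=0) - ctrl.mean(axis=0)
    # Two-sample t test at every grid cell simultaneously
    _, pval = stats.ttest_ind(pert, ctrl, axis=0)
    return response, pval < alpha
```

With only 10 yr per climate state and the modest CO2 increases used here, most cells fail this test, which is consistent with the largely insignificant responses in Fig. 13.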
Simulated responses of near-surface temperatures to increased CO2 show no significant seasonal cycle (Fig. 14). As expected, the larger CO2 increases in the MAS and RegCM2 simulations, combined with the greater climate sensitivity of HadCM2 relative to PCM, produce larger responses in near-surface temperatures. As with the simulated precipitation responses, it is striking how closely the spatially averaged responses in the RSM and PNNL RCMs follow those in their respective driving GCM results. Maps of annual-mean near-surface temperature responses (Fig. 15) show that the MAS and RegCM2 models produce uniformly larger responses in near-surface temperatures.
To allow easier comparison of the spatial patterns of temperature responses across the various models, we show in Fig. 16 normalized near-surface temperature responses. Here the simulated temperature response in each model has been multiplied by a scalar chosen so that the spatial mean of the normalized response is one. The RCMs agree that warming will be greater inland than near the coast, but they do not agree on details of the pattern of temperature response. The RSM model has a notably different pattern of surface temperature response than the other models.
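The normalization used for this comparison is a single scalar rescaling per model. Schematically (our sketch; the optional area weighting is an assumption about how the spatial mean might be computed):

```python
import numpy as np

def normalize_response(dT, weights=None):
    """Rescale a temperature-response field so that its (optionally
    area-weighted) spatial mean equals one.  Only the spatial pattern
    of the response survives the rescaling, allowing models with very
    different overall sensitivities to be compared directly."""
    dT = np.asarray(dT, float)
    return dT / np.average(dT, weights=weights)
```

After this rescaling, values above one mark regions warming faster than the regional average (e.g., inland), and values below one mark regions warming more slowly (e.g., near the coast).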
Several previous studies (e.g., Giorgi et al. 1997; Leung and Ghan 1999a, b) have reported increased surface temperature responses at higher elevations in winter; this is interpreted as evidence of a snow–albedo feedback. To look for this effect in our simulations, we calculated the ratio of DJF to June–August (JJA) surface temperature response to increased CO2 (Fig. 17). The surface temperature response tends to be higher in inland regions; since the elevations are also higher there, looking at this ratio helps to separate these two effects. If a snow–albedo feedback were increasing the DJF surface temperature response, we would expect this ratio to be elevated in snow-covered regions. Comparing Figs. 7 and 17 makes it clear that this is not the case, except possibly in the MM5 simulation. This is perhaps not surprising given the unrealistically small snow cover in most of these simulations and the relatively small CO2 increases that were simulated.
4. Assessment of added value from nested RCM simulations
The value of any downscaling approach lies in its ability to add meaningful spatiotemporal detail to the large-scale driving solution. This in principle should be possible for nested RCMs because of their more highly resolved representations of physical processes and surface forcings (coastlines, surface elevations, land-cover types, snow, etc.). Han and Roads (2004) also argue that superior formulations of model physical processes can contribute to improved skill in downscaled results. The extent to which dynamical downscaling produces skillful spatiotemporal detail will depend on specific properties of the particular simulation, such as domain size, spatial resolution, and the meteorology of the region in question. In this section we review the results presented above with the specific goal of assessing the added value of downscaled simulations discussed here.
The Taylor diagrams shown in Fig. 3 are useful for this purpose because the statistics on which they are based were calculated after removal of biases (errors in the spatiotemporal means), which we have shown are very similar in the nested and driving models. (We reiterate that the level of agreement we found is not necessarily to be expected, because in some cases the study area is a small fraction of the nested model domain.) Thus, the Taylor diagrams isolate exactly the question at hand. Furthermore, as noted above, the Taylor diagram statistics were calculated on a fine spatial grid; this procedure preserves the finescale information needed to assess the value added by dynamical downscaling. Figure 3a shows that, for precipitation, correlation coefficients (against observations; shown as the angular coordinate on the Taylor diagram) for all of the nested models are higher than for either simulation with the PCM global model. Thus, dynamical downscaling improves this measure of model skill. The radial coordinate of the Taylor diagram is the spatiotemporal standard deviation of the model results normalized by that of the observations, known as the normalized standard deviation (NSD). Thus, for example, the NSDs of the PCM precipitation results are less than 1 (Fig. 3a), indicating not enough spatiotemporal variability in precipitation within the study area. The NSDs of the RCM results are higher, in one case much too high. The normalized rms errors of the downscaled results, shown on the Taylor diagram as the distance from the point on the horizontal axis with a radial coordinate value of 1, are in some cases (PNNL/MM5, ECPC/RSM, and ISU/RegCM2) less than that of the driving global model, and in one case (UCLA/MAS) significantly greater. Thus, dynamical downscaling improves simulated precipitation according to this measure of model skill in some but not all cases.
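The three statistics plotted on a Taylor diagram can be computed directly from bias-removed model and observed fields. A minimal sketch, assuming hypothetical 1-D arrays of the same field sampled at many points in space and time (the data below are illustrative):

```python
import numpy as np

def taylor_stats(model, obs):
    """Bias-removed Taylor-diagram statistics (illustrative sketch).

    Returns (R, NSD, E): the pattern correlation, the model standard
    deviation normalized by the observed one, and the centered rms error
    normalized the same way. The three are linked by
    E**2 = 1 + NSD**2 - 2*NSD*R, which is what lets a single diagram
    display all of them at once (Taylor 2001).
    """
    m = model - model.mean()                  # remove the bias ...
    o = obs - obs.mean()                      # ... from both fields
    r = (m * o).mean() / (m.std() * o.std())  # pattern correlation R
    nsd = m.std() / o.std()                   # normalized std deviation
    e = np.sqrt(((m - o) ** 2).mean()) / o.std()  # normalized centered rmse
    return r, nsd, e

# Illustrative data
obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
model = np.array([0.5, 0.8, 2.5, 2.7, 4.4, 5.1])
r, nsd, e = taylor_stats(model, obs)
```

On the diagram, (NSD, R) are the polar coordinates of a model's point, and E is its distance from the reference point at radius 1 on the horizontal axis.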
For near-surface temperature, dynamical downscaling consistently produces improved model skill, as measured by normalized rms errors shown on the Taylor diagram (Fig. 3b). Since both precipitation and near-surface temperature are strongly influenced by topographic variations, it is easy to understand why dynamical downscaling could produce meaningful regional-scale detail in these quantities. The more consistent additional skill seen in temperature versus precipitation is perhaps a result of the more complex physics involved in simulating precipitation.
What about interannual variability? The nested RCMs clearly reproduce the general features of the observed pattern of interannual variability of DJF precipitation (Fig. 10). This is in contrast to the two PCM simulations, in which interannual variability of precipitation is much more uniform (spatially) than observed. Thus, this is an example of dynamical downscaling adding value, in this case primarily through improved representation of topography. On the other hand, for JJA precipitation, the nested models predict spatial variations in interannual variability that do not appear in the observations or in the driving global simulations (which are much more uniform). So here, dynamical downscaling is producing fictitious spatial detail.
The two PCM simulations produce very different estimates of the spatial pattern of interannual variability in DJF near-surface temperatures. Figure 11a clearly shows that the pattern of variability produced by the nested models more closely follows that of the driving GCMs (even when these are unrealistic) than that of the observations. In this instance, then, the driving model determines not only the regional-mean results but also the large-scale spatial pattern within the study area. Regional results produced by dynamical downscaling can therefore be erroneous; that is, dynamical downscaling appears to add little or no ability to simulate interannual variability in near-surface temperature in the study area.
The concept of added value in downscaled climate change projections presumes that it is useful to know about spatial and/or temporal response variations within the study area even though the spatiotemporal mean climate response is uncertain. Thus, for example, we assume that it would be useful to know if the Sierra Nevada mountains will warm more than the Central Valley, or if warm-season temperature increases will be larger or smaller than cold-season changes, even if we do not know the actual magnitudes of these responses.
Even if one accepts this premise, it is not clear how much useful skill the downscaled simulations examined here add to the GCMs' climate change projections. For example, the four rightmost panels of Fig. 16 very strongly suggest that the difference in near-surface temperature response between coastal and inland regions in the study area is determined more by the driving GCM than by the RCM. The RSM downscaled results show a broad pattern of temperature response that closely follows that in the driving PCM simulation, with inland regions warming significantly more than the coast. The PNNL downscaled results, by contrast, show essentially no difference in temperature response between coastal regions and Nevada, again following the response pattern in the driving GCM. Thus, there is no evidence here of downscaling adding skill on this (relatively large) spatial scale. On a smaller scale, the PNNL downscaling predicts a response contrast between the Sierra Nevada Mountains and surrounding regions. If it were correct, this prediction would represent added value from dynamical downscaling. None of the other downscaled solutions shows the same phenomenon, however, so it is hard to have confidence in this projection.
For precipitation, the downscaled solutions driven by PCM results show statistically significant responses only in some very small regions; these constitute a small enough fraction of the study area that the apparently significant responses could easily be coincidental, however. The downscaled solutions driven by HadCM2 agree that the precipitation response will be larger in the northern part of the study area; the strong similarity between these two downscaled patterns suggests that the pattern is produced by the common driving GCM.
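The significance judgments above compare the simulated response with interannual variability. A crude sketch of such a screen, using a two-sided normal approximation at roughly the 90% level (the published analyses may use a different test; the yearly series below are hypothetical):

```python
import numpy as np

def response_significance(ctrl, pert, z_crit=1.645):
    """Crude screen for a significant climate response (illustrative).

    `ctrl` and `pert` are hypothetical arrays of yearly regional means from
    the control and perturbed runs. The response is flagged significant at
    roughly the 90% level (two-sided normal approximation) when the change
    in the mean exceeds z_crit standard errors of the difference.
    """
    ctrl = np.asarray(ctrl, dtype=float)
    pert = np.asarray(pert, dtype=float)
    delta = pert.mean() - ctrl.mean()
    stderr = np.sqrt(ctrl.var(ddof=1) / ctrl.size + pert.var(ddof=1) / pert.size)
    return delta, abs(delta) > z_crit * stderr

# Ten hypothetical yearly means; a 5-K shift passes the screen, a 0.1-K shift does not.
ctrl = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 2.0, 3.0, 2.0, 1.0, 2.0])
delta_big, sig_big = response_significance(ctrl, ctrl + 5.0)
delta_small, sig_small = response_significance(ctrl, ctrl + 0.1)
```

Applied gridpoint by gridpoint, a screen like this also illustrates the caveat in the text: with many gridpoints tested, a small fraction of apparently significant responses is expected by chance alone.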
Leung and Ghan (1999b) found that the DJF response of near-surface temperature to increased CO2 was distinctly larger in their downscaled results than in the driving global simulation. This was plausibly attributed to a stronger snow–albedo feedback in the nested model, and is an example of superior representation of surface forcings in a nested model producing different, and arguably more realistic, climate responses than the driving global model. As discussed above, however, we see no evidence for a different seasonal cycle in response between driving and nested models, for either precipitation or near-surface temperature. This does not necessarily contradict the Leung and Ghan results, however. For one thing, because of the model's low sensitivity and the small CO2 increases considered, the PCM simulations show no significant precipitation response. It would be surprising if the nested models amplified this into something significant, since the moisture in the RCMs is supplied by the driving GCMs. For near-surface temperature, PCM's responses are much smaller than those seen by Leung and Ghan. Together with the deficiency in snow amounts in the downscaled simulations examined here, this may account for the lack of amplification of the wintertime response in the nested simulations. The two HadCM2-driven downscaled solutions that we examine show a possibly larger near-surface temperature response in July. This, and other similarities between the seasonal cycle of responses (Fig. 14) in the two HadCM2-driven simulations, suggest that this seasonal cycle originates in the driving GCM.
5. Summary of findings
To incorporate climate change into planning processes, policymakers need projections of climate change that include quantitative information about uncertainties. One approach is to compare results across a range of equally credible models; this yields a range of outcomes that may themselves be regarded as equally credible. For uncertainty estimates to be rigorous, a carefully coordinated study in which all models consider the same climate change scenario, etc., is needed. Such studies require a major, multi-institutional effort, however, and are beyond the scope of this paper. Here, we have instead compared available RCM simulations of the western United States, a region with diverse climates and clear vulnerabilities to climate change. This study may be viewed as one step toward the broader analysis that would involve carefully coordinated, multi-institutional simulations and cross comparisons.
We analyzed the ability of the four RCM/GCM combinations to reproduce observations of the present climate, and the intermodel range of predicted responses to increased atmospheric greenhouse gases. In simulations of the present climate, the RCM results show significant biases; in most cases where driving GCM results are available, the RCM biases are very similar to the biases of the driving GCM within the RCM domain. For example, the PNNL and RSM models have positive precipitation biases in winter that are very similar to the biases in the driving PCM simulations. The MAS and RegCM2 models also have positive precipitation biases in winter. While we did not have access to the particular HadCM2 simulation used to drive the MAS and RegCM2 models, this bias is very similar to that seen in other HadCM2 simulations. Although the GCM simulations exert large control over the regional mean precipitation of the RCMs, the spatial distribution of precipitation can vary substantially among RCMs even when driven by the same GCM. These differences result from different representations of relevant physical processes and surface forcings (especially topography) in different models. All the RCMs analyzed here seem to have less SWE than one would expect from their biases in precipitation and near-surface temperature. In particular, the PNNL and RSM models have much less SWE than is observed, despite being too cold and having too much precipitation in most snow-covered locations in our study area.
There is little consistency among the models as to responses in precipitation and near-surface temperatures to increased greenhouse gases. The two models driven by PCM (PNNL and RSM) project no significant changes in regionally averaged monthly mean precipitation. Projected precipitation changes are not significant at the 90% confidence level in any location in the study area. This no doubt results from the small CO2 increases (1.41× and 1.36×, respectively) in these simulations, and the low climate sensitivity of the PCM. The two RCMs driven by HadCM2 (MAS and RegCM2) predict increases in monthly mean regionally averaged wintertime precipitation that are comparable in magnitude to the interannual variability of the precipitation response (one standard deviation), that is, are barely significant. These RCMs predict precipitation increases that are significant at the 90% confidence level in northern California, eastern Oregon, and central Idaho. All the RCMs predict warming in response to increased greenhouse gases. The models that simulated larger CO2 increases and were driven by GCMs with larger climate sensitivity (MAS and RegCM2) predict greater warming. There is no significant seasonal cycle to the predicted warming in any RCM, and the spatial patterns of predicted warming are quite different in the different RCMs. This lack of a seasonal cycle in temperature response contradicts some earlier studies that considered larger increases in atmospheric CO2.
An important methodological question is whether or not dynamical downscaling adds meaningful spatiotemporal detail to climate change projections. The superior representations of surface forcings (topography, coastlines, land cover, snow) and more finely resolved physics that generally obtain in nested models argue that this should be so. Perhaps because the climate perturbations considered here are relatively small, however, we see inconsistent evidence for added value from dynamical downscaling.
Acknowledgments
This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract W-7405-Eng-48. The ECPC RSM work was supported in part by NOAA NA17RJ1231.
REFERENCES
Barnett, T. P., D. W. Pierce, and R. Schnur, 2001: Detection of anthropogenic climate change in the world's oceans. Science, 292 , 270–274.
Bhaskaran, B., J. M. Murphy, and R. G. Jones, 1998: Intraseasonal oscillation in the Indian summer monsoon simulated by global and nested regional climate models. Mon. Wea. Rev, 126 , 3124–3134.
Briegleb, B. P., 1992: Delta-Eddington approximation for solar radiation in the NCAR Community Climate Model. J. Geophys. Res, 97D , 7603–7612.
Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part 1: Model implementation and sensitivity. Mon. Wea. Rev, 129 , 569–585.
Christensen, J. H., T. Carter, and F. Giorgi, 2002: PRUDENCE employs new methods to assess European climate change. Eos, Trans. Amer. Geophys. Union, 82 , 147.
Christensen, O. B., J. H. Christensen, B. Machenhauer, and M. Botzet, 1998: Very high-resolution climate change simulations over Scandinavia—Present climate. J. Climate, 11 , 3204–3229.
Coquard, J., P. B. Duffy, K. E. Taylor, and J. P. Iorio, 2004: Present and future surface climate in the Western USA as simulated by 15 global climate models. Climate Dyn, 23 , 455–472.
Dickinson, R. E., R. M. Errico, F. Giorgi, and G. T. Bates, 1989: A regional climate model for western United States. Climatic Change, 15 , 383–422.
Dickinson, R. E., A. Henderson-Sellers, and P. J. Kennedy, 1992: Biosphere–atmosphere transfer scheme (BATS) version 1e as coupled to NCAR community climate model. NCAR Tech. Note 387+STR, 72 pp.
Durman, C. F., J. M. Gregory, D. H. Hassell, R. G. Jones, and J. M. Murphy, 2001: The comparison of extreme European daily precipitation simulated by a global and a regional climate model for present and future climates. Quart. J. Roy. Meteor. Soc, 127 , 1005–1015.
Giorgi, F., and C. Shields, 1999: Tests of precipitation parameterizations available in latest version of NCAR regional climate model (RegCM) over continental United States. J. Geophys. Res, 104D , 6353–6375.
Giorgi, F., M. R. Marinucci, and G. T. Bates, 1993a: Development of a second-generation regional climate model (RegCM2). Part I: Boundary-layer and radiative transfer processes. Mon. Wea. Rev, 121 , 2794–2813.
Giorgi, F., M. R. Marinucci, and G. T. Bates, 1993b: Development of a second-generation regional climate model (RegCM2). Part II: Convective processes and assimilation of lateral boundary conditions. Mon. Wea. Rev, 121 , 2814–2832.
Giorgi, F., C. Shields Brodeur, and G. T. Bates, 1994: Regional climate change scenarios over the continental United States produced with a nested regional climate model. J. Climate, 7 , 375–399.
Giorgi, F., J. W. Hurrell, M. R. Marinucci, and M. Beniston, 1997: Elevation signal in surface climate change: A model study. J. Climate, 10 , 288–296.
Grell, G. A., J. Dudhia, and D. R. Stauffer, 1993: A description of the fifth-generation Penn State/NCAR mesoscale model (MM5). NCAR Tech. Note NCAR/TN-398+STR, 117 pp.
Gutowski, W. J., F. Otieno, R. W. Arritt, E. S. Takle, and Z. Pan, 2004: Diagnosis and attribution of a seasonal precipitation deficit in a U.S. regional climate simulation. J. Hydrometeor, 5 , 230–242.
Han, J., and J. Roads, 2004: U.S. climate sensitivity simulated with the NCEP regional spectral model. Climatic Change, 62 , 115–154. doi:10.1023/B:CLIM.0000013675.66917.15.
Hassell, D., and R. Jones, 1999: Simulating climatic change of the southern Asian monsoon using a nested regional climate model (HadRM2). Hadley Centre Tech. Note HCTN 8.
Holtslag, A. A. M., E. I. F. D. Bruijn, and H-L. Pan, 1990: A high resolution air mass transformation model for short-range weather forecasting. Mon. Wea. Rev, 118 , 1561–1575.
Hsie, E. Y., R. A. Anthes, and D. Keyser, 1984: Numerical simulation of frontogenesis in a moist atmosphere. J. Atmos. Sci, 41 , 2581–2594.
Juang, H-M. H., and M. Kanamitsu, 1994: The NMC nested Regional Spectral Model. Mon. Wea. Rev, 122 , 3–26.
Juang, H-M. H., S-Y. Hong, and M. Kanamitsu, 1997: The NCEP Regional Spectral Model: An update. Bull. Amer. Meteor. Soc, 78 , 2125–2143.
Kim, J., 2001: A nested modeling study of elevation-dependent climate change signals in California induced by increased atmospheric CO2. Geophys. Res. Lett, 28 , 2951–2954.
Kim, J., 2004: A projection of the effects of the climate change induced by increased CO2 on extreme hydrologic events in the western U.S. Climatic Change, 68 , 153–168.
Kim, J., and M. Ek, 1995: A simulation of the surface energy budget and soil water content over the Hydrologic Atmospheric Pilot Experiment. J. Geophys. Res, 100D , 20845–20854.
Kim, J., and J. Lee, 2003: A multiyear regional climate hindcast for the western United States using the Mesoscale Atmospheric Simulation model. J. Hydrometeor, 4 , 878–890.
Kim, J., T. Kim, R. W. Arritt, and N. L. Miller, 2002: Impacts of increased atmospheric CO2 on the hydroclimate of the western United States. J. Climate, 15 , 1926–1942.
Laprise, R., D. Caya, M. Giguere, G. Bergeron, H. Cote, J-P. Blanchet, G. Boer, and N. MacFarlane, 1998: Climate and climate change in Western Canada as simulated by the Canadian regional climate model. Atmos.–Ocean, 36 , 119–167.
Leung, L. R., and S. Ghan, 1999a: Pacific Northwest climate sensitivity simulated by a regional climate model driven by a GCM. Part I: Control simulations. J. Climate, 12 , 2010–2030.
Leung, L. R., and S. Ghan, 1999b: Pacific Northwest climate sensitivity simulated by a regional climate model driven by a GCM. Part II: 2×CO2 simulations. J. Climate, 12 , 2031–2053.
Leung, L. R., and Y. Qian, 2003a: Changes in seasonal and extreme hydrologic conditions of the Georgia Basin/Puget Sound in an ensemble regional climate simulation for the mid-century. Can. Water Resour. J, 28 , 605–631.
Leung, L. R., and Y. Qian, 2003b: The sensitivity of precipitation and snowpack simulations to model resolution via nesting in regions of complex terrain. J. Hydrometeor, 4 , 1025–1043.
Leung, L. R., Y. Qian, and X. Bian, 2003: Hydroclimate of the western United States based on observations and regional climate simulation of 1981–2000. Part I: Seasonal statistics. J. Climate, 16 , 1892–1911.
Leung, L. R., Y. Qian, X. Bian, W. M. Washington, J. Han, and J. O. Roads, 2004: Mid-century ensemble regional climate change scenarios for the western United States. Climatic Change, 62 , 75–113.
Mahrt, L., and H-L. Pan, 1984: A two-layer model of soil hydrology. Bound.-Layer Meteor, 29 , 1–20.
Maurer, E. P., A. W. Wood, J. C. Adam, D. P. Lettenmaier, and B. Nijssen, 2002: A long-term hydrologically based dataset of land surface fluxes and states for the conterminous United States. J. Climate, 15 , 3237–3251.
Miller, N. L., J. Kim, R. Hartman, and J. Farrara, 1999: Downscaled climate and streamflow study of the southwestern United States. J. Amer. Water Resour. Assoc, 35 , 1525–1537.
Pan, Z., J. H. Christensen, R. W. Arritt, W. J. Gutowski Jr., E. S. Takle, and F. Otieno, 2001: Evaluation of uncertainties in regional climate change simulations. J. Geophys. Res, 106 , 17735–17752.
RMIP, cited 2003: A continuation to Regional Climate Model Intercomparison Project for Asia. [Available online at http://www.start.org/project_pages/rcm.html.].
Roads, J., S-C. Chen, and M. Kanamitsu, 2003: U.S. regional climate simulations and seasonal forecasts. J. Geophys. Res, 108D .8606, doi:10.1029/2002JD002232.
Slater, A. G., and Coauthors, 2001: The representation of snow in land-surface schemes: Results from PILPS 2(d). J. Hydrometeor, 2 , 7–25.
Snyder, M. A., J. L. Bell, L. C. Sloan, P. B. Duffy, and B. Govindasamy, 2002: Climate responses to a doubling of atmospheric carbon dioxide for a climatically vulnerable region. Geophys. Res. Lett, 29 .1514, doi:10.1029/2001GL14431.
Soong, S-T., and J. Kim, 1996: Simulation of a heavy wintertime precipitation event in California. Climatic Change, 32 , 55–77.
Takle, E. S., and Coauthors, 1999: Project to Intercompare Regional Climate Simulations (PIRCS): Description and initial results. J. Geophys. Res, 104 , 19443–19461.
Taylor, K. E., 2001: Summarizing multiple aspects of model performance in a single diagram. J. Geophys. Res, 106 , 7183–7192.
Properties of simulations analyzed here.
Observational datasets used for model evaluation. “Quantities used” lists only meteorological quantities that were used in this study; additional quantities may be available from the same data source. T = near-surface temperature; P = precipitation; SWE = snow-water equivalent (i.e., water-equivalent snow depth). VEMAP = Vegetation/Ecosystem Modeling and Analysis Project; NOAA = National Oceanic and Atmospheric Administration; GHCN = Global Historical Climatology Network. NOHRSC = National Operational Hydrologic Remote Sensing Center; UW = University of Washington.