1. Introduction
Many previous efforts to estimate future climate on finer scales have employed dynamical downscaling where coarsely resolved global-scale climate simulations were used to provide temporal and spatial boundary information for finescale meteorological models (Giorgi 1990). A climate downscaling study was recently conducted using the Weather Research and Forecasting model (WRF; Skamarock et al. 2008) on a nested 108/36-km modeling grid (Otte et al. 2012; Bowden et al. 2013). These studies demonstrated some optimization of WRF in this regard by using the National Centers for Environmental Prediction–U.S. Department of Energy (NCEP–DOE) Atmospheric Model Intercomparison Project (AMIP)-II Reanalysis data (Kanamitsu et al. 2002) as a surrogate for global climate model information and then comparing WRF outputs with finer-scale reanalysis products. Because there are no future observations against which to evaluate downscaling of future climate simulations, historical meteorological data must serve both as the forcing fields for the dynamical model and as the reference against which the downscaled results are evaluated; this is the only way to test dynamical climate downscaling methods.
While the previous dynamical downscaling at 108- and 36-km grid spacing was successful in providing added detail and accuracy, environmental managers and urban planners have expressed a desire for future climate projections at even finer scales. By taking into account the effect of local geophysical features on surface air temperature, humidity, wind, and precipitation, finescale dynamical downscaling has the potential to provide more useful information to guide local officials in their climate change adaptation efforts.
To take the previous downscaling effort one step further, this work applies one-way nesting in WRF to provide information on a 12-km horizontal grid for calendar year 2006. This study period was chosen based on the availability of over 11 million hourly observations of surface temperature, water vapor mixing ratio, and wind speed with which to evaluate model performance. We restricted our simulations to one year to allow testing of various model configurations with regard to interior nudging type and nudging strength. Longer-term (~20 yr) simulations are anticipated based on the results of this study. In the course of our investigation we also tested some alternate physics options. WRF was applied in three modes. The first is the standard WRF application where the simulation is constrained only by the provision of meteorological data at the lateral boundaries and surface conditions (e.g., topography, land surface type, sea surface temperatures). For the other two modes, internal forcing of meteorological variables using four-dimensional data assimilation (Stauffer and Seaman 1990) is also applied. This internal forcing, also called interior nudging, is applied in two different ways, “analysis nudging” and “spectral nudging.” As in Otte et al. (2012), the basis for all interior nudging was the NCEP–DOE AMIP-II Reanalysis (R-2) data with approximately 200-km horizontal grid spacing.
While analysis nudging on a fine grid based on coarser information is known to damp high-resolution features desired from the finescale simulation (Stauffer and Seaman 1994), analysis nudging was found to be generally superior to spectral nudging at the 36-km scale when appropriate nudging coefficients were chosen to adjust the strength of the nudging force in the WRF governing equations (Otte et al. 2012). This study investigates further adjustments to those coefficients for 12-km WRF applications. Spectral nudging, when applied with appropriate options for the 12-km WRF domain, should not damp high-resolution features in the 12-km simulation the way analysis nudging can. This study also investigates adjustments to the spectral nudging strength coefficients to achieve optimal performance.
2. Model description
The Advanced Research configuration of WRF, version 3.3.1, was used in a number of different configurations as outlined in Table 1. All simulations were initialized at 0000 UTC 2 December 2005 to provide a 30-day spinup time before the calendar year 2006 test period. The model was run continuously through 0000 UTC 1 January 2007 with no reinitialization. The 108- and 36-km horizontal domains used in Otte et al. (2012) and the 12-km domain used here are shown in Fig. 1. WRF was run on the 12-km domain with the same 34-layer configuration and 50-hPa model top used in Otte et al. (2012). Initial and lateral boundary data were derived from their 36-km analysis-nudged simulation using standard WRF input data processing software with a 1-h update interval for the lateral boundaries. The input data for the lower boundary and for interior nudging (when applied) were the global T62 Gaussian analyses from the R-2 data, which provide a 6-h history interval.
Table 1. Specifications for all 12-km WRF test simulations conducted.
Fig. 1. Modeling domains used for previous 108- and 36-km dynamical downscaling and 12-km domain (d03) used for this study.
Regarding the lower boundary definitions, we noticed an issue with inland lake surface temperatures similar to that recently described by Gao et al. (2012). Unrealistic discontinuities in temperature between inland lakes and their surrounding land surfaces were produced from the water surface temperature data available from the R-2 analysis. When inland lakes are far removed from the closest sea surface temperature data available in the lower boundary input file, WRF normally uses a nearest-neighbor approach to estimate their surface skin temperature. The R-2 data resolve the five Great Lakes with only three data points, and all other inland lakes in our 12-km WRF domain are not resolved at all. An alternative method for setting inland lake water temperatures was tested (“alternate lakes” cases in Table 1) whereby 2-m air temperatures from R-2 were averaged over the previous month and used to set inland lake surface temperatures. This alternate lakes method was applied without any nudging and with spectral nudging. In neither case were we able to simulate realistic lake surface temperatures and ice cover. The Great Lakes could be better resolved by higher-resolution general circulation models or corresponding reanalysis products, but smaller inland lakes will continue to remain unresolved. We believe that adding a capability in WRF to realistically simulate the exchanges of energy between inland lakes and the atmosphere above could significantly improve future finescale dynamical downscaling efforts.
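To make the alternate-lakes treatment concrete, the following is a minimal sketch (in Python with numpy, not part of WRF itself) of the bookkeeping described above: the mean 2-m air temperature from R-2 over the previous month, interpolated to the 12-km grid, replaces the skin temperature of inland-lake cells that the coarse SST analysis cannot resolve. The array layout and function name are illustrative assumptions.

```python
import numpy as np

def alternate_lake_sst(t2m_r2, lake_mask):
    """Illustrative 'alternate lakes' treatment: assign unresolved inland-lake
    cells the mean R-2 2-m air temperature over the previous month.

    t2m_r2    : array (ntime, ny, nx) of 2-m air temperature (K) from R-2,
                already interpolated to the 12-km grid, covering the
                previous month.
    lake_mask : boolean array (ny, nx), True for inland-lake water cells
                not resolved by the coarse SST analysis.
    Returns an (ny, nx) field of lake surface temperatures (K); non-lake
    cells are NaN so that the normal SST source can fill them.
    """
    monthly_mean_t2 = t2m_r2.mean(axis=0)             # time average over the month
    lake_sst = np.full(monthly_mean_t2.shape, np.nan)
    lake_sst[lake_mask] = monthly_mean_t2[lake_mask]   # lake cells only
    return lake_sst
```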
In regard to the WRF physics options used in this study, we generally used the same options as did Otte et al. (2012). These include the Rapid Radiative Transfer Model for global climate models (RRTMG; Iacono et al. 2008) for longwave and shortwave radiation, the Yonsei University planetary boundary layer (PBL) scheme (Hong et al. 2006), and the Noah land surface model (Chen and Dudhia 2001). Soil temperature and moisture in the land surface model were initialized by interpolating from the 36-km parent domain via the WRF ndown program. For this study, the initialization time was 18 yr into the 36-km simulation. We also used the WRF single-moment six-class microphysics scheme (Hong and Lim 2006) in most of the 12-km simulations, but instead applied the Morrison double-moment scheme (Morrison et al. 2009) in two separate sensitivity tests as indicated in Table 1. We also used the Grell-3 convective parameterization scheme (Grell and Dévényi 2002) in most of our 12-km simulations, but as Table 1 shows, we applied the Kain–Fritsch scheme (Kain 2004) two different ways to test sensitivity to subgrid convective parameterization.
All simulations applied nudging toward the lateral boundary values using a five-point sponge zone (Davies and Turner 1977). For interior nudging, three options were used: no nudging, analysis nudging, and spectral nudging. Simulation test cases for which no interior nudging was used are designated with NN, cases where analysis nudging was used are designated with AN, and cases where spectral nudging was used are designated with SN. Both forms of interior nudging have been shown to reduce errors in WRF-based regional climate modeling (Lo et al. 2008; Bowden et al. 2012).
Analysis nudging in WRF is thought to be most appropriate when the target data fields have a spatial resolution similar to that of the model grid (Stauffer and Seaman 1994). In this study the target data for nudging were of considerably coarser resolution than the 12-km model grid. It was expected that some adjustments to the analysis-nudging coefficients used by Otte et al. (2012) for their 36-km simulations might be necessary to optimize model performance. In general, weaker nudging is recommended for finer-resolved model grids (Stauffer and Seaman 1994). Therefore, we tested the analysis-nudging technique at 12-km grid spacing with nudging strengths ranging from one-fourth of the base values used by Otte et al. (2012) in their 36-km modeling up to those base values themselves. Analysis nudging was applied to horizontal wind components, potential temperature, and water vapor mixing ratio. This interior nudging was only applied above the PBL.
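For reference, analysis nudging adds a Newtonian relaxation term of the general form given by Stauffer and Seaman (1990) to each prognostic equation; the nudging coefficient G_α below is the “strength” varied in the sensitivity tests, and the weighting function W is set to zero within the PBL in this study:

```latex
\frac{\partial \alpha}{\partial t}
  = F(\alpha,\mathbf{x},t)
  + G_{\alpha}\, W(\mathbf{x},t)\,\bigl(\hat{\alpha}_{0} - \alpha\bigr)
```

Here α is a nudged variable (a horizontal wind component, potential temperature, or water vapor mixing ratio), F represents the model's physical and dynamical forcing, and α̂₀ is the reference analysis interpolated to the model grid point.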
Spectral nudging (Miguez-Macho et al. 2004) differs from analysis nudging in that its effect is scale selective so that finescale features in the model simulation can be preserved. Spectral nudging is based on a spectral decomposition of the same difference field (model solution versus reference analysis) used in analysis nudging. By using only the longer spectral waves (lower wavenumbers) to reconstitute the difference field used to nudge the simulation, the effect of nudging on finer-scale features in the simulation is avoided. A maximum wavenumber of 2 (i.e., two full waves across the simulation domain) was selected for both horizontal dimensions to account for the size of the 12-km domain and the limited resolving power of the R-2 data. Spectral nudging in public releases of WRF can only be applied to the horizontal wind components, potential temperature, and geopotential. There is currently no capability to apply spectral nudging to water vapor mixing ratio as can be done with analysis nudging. As with our analysis nudging tests, spectral nudging was only applied above the PBL in this study. The scale-selective effects of spectral nudging should reduce model sensitivity to the nudging coefficients. Nonetheless, sensitivity to the spectral nudging coefficients was tested with simulations using one-half and 2 times the base values chosen for 12-km modeling.
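A minimal numpy sketch of the scale-selective step is given below. It is not the WRF implementation, but it illustrates the idea: the model-minus-analysis difference field is transformed to wavenumber space and only wavenumbers up to 2 in each horizontal direction are retained before the nudging tendency is built, leaving smaller-scale structure untouched. Function and variable names are illustrative assumptions.

```python
import numpy as np

def lowpass_difference(diff, kmax_x=2, kmax_y=2):
    """Retain only the longest waves of a model-minus-analysis difference
    field, mimicking the scale selection used in spectral nudging.

    diff   : 2D array (ny, nx), model field minus reference analysis on
             one model level.
    kmax_* : highest wavenumber kept in each direction (2 here, i.e.,
             two full waves across the domain).
    """
    spec = np.fft.fft2(diff)
    ny, nx = diff.shape
    ky = np.abs(np.fft.fftfreq(ny, d=1.0 / ny))    # integer wavenumbers in y
    kx = np.abs(np.fft.fftfreq(nx, d=1.0 / nx))    # integer wavenumbers in x
    keep = (ky[:, None] <= kmax_y) & (kx[None, :] <= kmax_x)
    filtered = np.where(keep, spec, 0.0)           # zero out shorter waves
    return np.real(np.fft.ifft2(filtered))         # low-pass difference field
```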
3. Evaluation of WRF simulations against hourly surface observations
Previous dynamical downscaling to 36-km grid spacing by Otte et al. (2012) used North American Regional Reanalysis data with 32-km grid spacing to evaluate WRF simulation results. For our 12-km results, more highly resolved ground truth data were required. Instead of using a meteorological reanalysis product, hourly observations of temperature, humidity, and wind speed from the National Oceanic and Atmospheric Administration Meteorological Assimilation Data Ingest System (MADIS) were used. To assure data quality, we only used aviation routine weather reports (METAR) and surface airways observation (SAO) reports from the MADIS data repository. These reports provided over 11 × 10⁶ hourly observations across the 12-km WRF modeling domain during 2006. Comparisons of simulated and observed data were made using the Atmospheric Model Evaluation Tool (AMET) described in Appel et al. (2011).
The first evaluations performed were intended to gauge the improvements offered by 12-km WRF modeling over the previous 36-km results. As mentioned previously, the 36-km WRF results obtained with analysis nudging were deemed to be generally superior and were used in a one-way nesting operation to define all lateral boundary values for the 12-km modeling. Figure 2 shows monthly evaluations of mean bias and mean absolute error for the parent 36-km WRF simulation (36AN) and our base-case 12-km nested simulations with no interior nudging, with analysis nudging, and with spectral nudging in comparison with hourly surface data from MADIS. These analyses were produced with AMET, which allows the area of comparison to be specified in longitude and latitude space. The area specified for all AMET products in this study was 25°–48°N and 67°–108°W, which covers the 12-km model domain to the greatest extent possible. The WRF physics options used in these base-case 12-km simulations were the same as those used in the previous 36-km simulation. Note, however, that version 3.3.1 of WRF was used for the present study, whereas Otte et al. (2012) used version 3.2.1. Tables 2, 3, and 4 show annual evaluation statistics for temperature, water vapor mixing ratio, and wind speed, respectively, for all four of these WRF simulations. The equations used to calculate the evaluation statistics are shown in the appendix.
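The statistics themselves were produced with AMET; the pandas sketch below is only a conceptual stand-in showing how hourly model-observation pairs can be restricted to the stated latitude-longitude box and aggregated into monthly mean bias and mean absolute error. The DataFrame layout and column names are assumptions and do not reflect AMET's actual data structures; western longitudes are expressed as negative degrees east.

```python
import pandas as pd

def monthly_stats(pairs, lat_range=(25.0, 48.0), lon_range=(-108.0, -67.0)):
    """Monthly mean bias and mean absolute error over the evaluation box.

    pairs : DataFrame with columns 'time' (datetime), 'lat', 'lon',
            'model', and 'obs' for hourly model-observation pairs.
    """
    box = pairs[pairs['lat'].between(*lat_range) &
                pairs['lon'].between(*lon_range)].copy()
    box['error'] = box['model'] - box['obs']
    monthly = box.groupby(box['time'].dt.to_period('M'))['error']
    return pd.DataFrame({'mean_bias': monthly.mean(),
                         'mean_abs_error': monthly.apply(lambda e: e.abs().mean())})
```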
Fig. 2. Monthly evaluations of (left) mean absolute error and (right) mean bias for 36AN (black) and the 12-km NN (red), AN (green), and SN simulations (purple).
Table 2. Annual evaluation statistics for temperature (K).
Table 3. Annual evaluation statistics for water vapor mixing ratio (g kg⁻¹).
Table 4. Annual evaluation statistics for wind speed (m s⁻¹).
In general, the 12-km simulation with no interior nudging has a larger annual mean absolute error than the parent 36-km simulation. However, using either analysis or spectral nudging at 12-km grid spacing reduces the mean absolute errors for temperature and wind speed below those of the 36-km simulation. The 12-km simulations with either type of interior nudging improve anomaly correlation over the 36-km results in all cases, except for water vapor mixing ratio from spectral nudging where the scores are the same. This improvement in 12-km accuracy when WRF is applied with interior nudging is consistent with the results of Bowden et al. (2012), who found that nudging on the 108- and 36-km nested interior domain was beneficial. A positive bias in water vapor is apparent in all runs, and this bias is stronger in all of the 12-km simulations. This suggests that some physics options used at 36-km grid spacing might not be optimal for 12-km modeling. This issue is addressed to some degree in the sensitivity tests described below.
Figure 3 shows spatial maps of the annual mean bias in 2-m temperature for all four test cases across the latitude/longitude area of the statistical evaluations described above. The 36-km parent simulation shows a positive bias in temperature over the plains states and into the northern Ohio Valley and southern Great Lakes regions. There is also an indication of positive bias along the immediate coastline of the Gulf of Mexico and in Atlantic coastal areas. A negative temperature bias is seen over the Appalachian and Rocky Mountain regions and over the northern Great Lakes region. The 12-km simulation performed without any interior nudging shows generally the same pattern in temperature bias, but the positive bias areas are diminished and the negative bias areas are noticeably expanded. The analysis-nudged and spectral-nudged simulations both show temperature bias patterns that are more similar to the 36-km results, with a lesser shift toward negative bias than in the no-nudge case.
Fig. 3. Annual mean bias of 2-m temperature (°C) for (top left) the 36-km parent simulation and the three 12-km simulations with (top right) NN, (bottom left) AN, and (bottom right) SN.
Figures 4 and 5 show similar spatial maps for bias in water vapor mixing ratio and wind speed, respectively. For water vapor, the 12-km simulations all show an obvious shift toward a positive bias in nearly all areas relative to the parent 36-km simulation. The areas of greatest shift appear to be in the plains and Midwest states. There is some indication that spectral nudging reduces the positive bias in water vapor, but only slightly so. The analysis-nudging coefficient for water vapor is an order of magnitude less than the coefficient for temperature and wind, and water vapor is not nudged at all in the spectral method. Also, when nudging is applied, it is applied only above the PBL. Interior nudging does not appear to offer much help in overcoming what appears to be a basic model bias toward too much moisture near the surface, especially in the 12-km simulations. For wind speed, there is very little change in the pattern of bias between the 36- and 12-km simulations. Figure 2 indicates a general decrease in the positive bias in wind speed for all months in the 12-km simulations, more so when nudging is applied, but this is only weakly evident in the spatial maps of the annual mean (Fig. 5). It is interesting to note that the model bias is generally small in areas of the Great Plains where wind instrument exposure is less likely to be a factor.
Fig. 4. As in Fig. 3, but for water vapor mixing ratio (g kg⁻¹).
Fig. 5. As in Fig. 3, but for 10-m wind speed (m s⁻¹).
4. Evaluation of WRF simulations of precipitation
Because of the positive bias that was found for surface-level water vapor, we believed it was important to also investigate simulated precipitation amounts. We obtained precipitation data from three separate sources: gridded analyses from the Multisensor Precipitation Estimator (MPE) and the Parameter-Elevation Regressions on Independent Slopes Model (PRISM), and site-specific data from the National Atmospheric Deposition Program's National Trends Network (NTN).
The MPE is a precipitation analysis system developed by the National Weather Service Office of Hydrology in March 2000. It is used by National Weather Service River Forecast Centers to produce gridded precipitation estimates for various hydrological applications. Observational data sources include weather radar data, automated rain gauges, and satellite remote sensors. We obtained Stage IV datasets from the Earth Observing Laboratory at the National Center for Atmospheric Research (http://data.eol.ucar.edu/codiac/dss/id=21.093). These provided hourly precipitation analyses at 4-km horizontal grid spacing that we regridded to our 12- and 36-km modeling domains using the program metgrid, which is part of the standard WRF Preprocessing System (WPS). Specifically, we used the grid-cell average interpolator (option average_gcell in METGRID.TBL), which is described in chapter 3 of the online WRF User's Guide (http://www.mmm.ucar.edu/wrf/users/docs/user_guide_V3/users_guide_chap3.htm). We restricted our WRF evaluations based on MPE data to non-oceanic areas because of the limited precipitation information available over oceans. We also restricted our evaluations of monthly total precipitation to those areas where the hourly MPE data were at least 90% complete for each month. Where the MPE data were not 100% complete, we scaled the monthly totals linearly to 100%.
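The grid-cell averaging itself was done with the WPS metgrid program; the numpy sketch below is only a conceptual illustration for the idealized case in which each 12-km cell contains an exact 3 × 3 block of 4-km cells, together with the linear completeness scaling described above. The real average_gcell interpolator handles arbitrary map-projection geometry.

```python
import numpy as np

def block_average(fine, factor=3):
    """Grid-cell average of a fine-grid field onto a coarser grid for the
    idealized case of an exact factor x factor nesting (e.g., 4 km -> 12 km).
    """
    ny, nx = fine.shape
    ny_c, nx_c = ny // factor, nx // factor
    trimmed = fine[:ny_c * factor, :nx_c * factor]            # drop any ragged edge
    return trimmed.reshape(ny_c, factor, nx_c, factor).mean(axis=(1, 3))

def scale_to_full_month(monthly_total, fraction_complete, min_complete=0.9):
    """Linearly scale monthly totals to 100% completeness, keeping only
    cells with at least 90% of the hourly records present."""
    safe_fraction = np.maximum(fraction_complete, 1e-12)      # avoid divide-by-zero
    scaled = monthly_total / safe_fraction
    return np.where(fraction_complete >= min_complete, scaled, np.nan)
```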
Figure 6a shows a graph of average monthly precipitation from the WRF simulations compared to the MPE data. The 36-km WRF simulation results (from Otte et al. 2012) were trimmed to match the 12-km modeling domain to allow for proper comparison. All of the WRF simulations produced more precipitation than the MPE data indicate, with the only exception being the 36-km results for October. The greatest exceedances were in the spring and summer months. The 12-km simulations show higher positive bias than the 36-km case in nearly all instances. The positive bias is most obvious for the no-nudge 12-km case. We also calculated monthly-mean absolute error versus MPE (not shown) and found only slight differences between the WRF simulations. However, the 12-km cases did show slightly larger error, especially when no nudging was applied.
Fig. 6. Average monthly precipitation (mm) from WRF simulations in comparison with observational data from (a) MPE, (b) PRISM, and (c) NTN. The WRF simulations are 36AN and 12-km resolution with NN, AN, and SN.
The PRISM precipitation data (Daly et al. 1994) provide a second gridded analysis product with which to evaluate WRF performance. These high-resolution (0.04167° latitude–longitude) monthly precipitation data are fully documented (http://www.prism.oregonstate.edu/docs/). We used software from the R Project for Statistical Computing (http://www.r-project.org/) to perform area-weighted grid-to-grid mapping to upscale the PRISM data to the 12- and 36-km modeling grids. Figure 6b shows a graph of average monthly precipitation from the WRF simulations compared to PRISM. Precipitation data from PRISM are only available over land areas, so the results in Figs. 6a and 6b both exclude oceanic areas. The PRISM results confirm what was found in our comparisons to MPE. The lines showing WRF-simulated precipitation in Figs. 6a and 6b are nearly identical, but there are some small differences because the MPE data did not cover all land areas of the 12-km WRF domain for some months. It is interesting to note how similar the MPE and PRISM values are throughout the entire year. In the PRISM evaluation, all WRF simulations exceeded the indicated precipitation in every month, and the exceedances were greatest during the spring and summer.
We obtained weekly NTN precipitation data at 209 sites within the 12-km WRF modeling domain (the NTN is described at http://nadp.sws.uiuc.edu/ntn/). The spatial distribution of NTN monitors is generally homogeneous across land areas of the 12-km WRF domain with slightly higher network density in the central and eastern sections. NTN samples were grouped by month based on the end of their sampling period. Most months had four weekly sampling periods in this analysis, but April, July, and September had five. WRF-simulated precipitation was compared to NTN samples based on the exact period for each sample. We calculated the mean of WRF-simulated and NTN-observed weekly totals for each month, then scaled those 7-day means to match the actual number of days in each month to provide monthly average values for NTN that could be directly compared with the monthly MPE and PRISM results above. These monthly totals based on the WRF–NTN comparisons are shown in Fig. 6c. Here, as with the MPE and PRISM comparisons, WRF-simulated precipitation generally exceeded the observed amounts, with the largest excesses generally coming from the 12-km simulation with no interior nudging. Because of the higher NTN station density in the central and eastern parts of the study domain, where more precipitation normally falls, the average monthly NTN precipitation values are slightly higher than those indicated for the MPE and PRISM data. But the average WRF-simulated precipitation is also higher at the NTN station locations, and once again the WRF results exceed observations in nearly all instances. The exceedances are again especially large in the warm months and more so for the 12-km WRF when no nudging is used.
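A short pandas sketch of this weekly-to-monthly conversion is given below: weekly totals are grouped by the month in which each sampling period ends, averaged, and rescaled from a 7-day mean to the actual number of days in the month. The input layout and column names are illustrative assumptions.

```python
import pandas as pd

def ntn_monthly_average(weekly, year=2006):
    """Convert weekly NTN precipitation totals to monthly average values
    comparable with monthly MPE and PRISM amounts.

    weekly : DataFrame with columns 'end_date' (sampling period end) and
             'precip_mm' (weekly total), e.g., averaged over the network.
    """
    weekly = weekly.copy()
    weekly['month'] = pd.to_datetime(weekly['end_date']).dt.month
    mean_weekly = weekly.groupby('month')['precip_mm'].mean()   # mean 7-day total per month
    days_in_month = pd.Series(
        {m: pd.Period(f'{year}-{m:02d}').days_in_month for m in mean_weekly.index})
    return mean_weekly / 7.0 * days_in_month                    # scale to month length
```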
5. Testing adjustments to nudging strength
The results shown above demonstrate that the physics options for WRF employed in previous dynamical downscaling to 36-km grid spacing can be used at 12-km grid spacing to provide some additional accuracy for temperature, humidity, and wind speed when interior nudging is applied with reductions in nudging strength to account for finer horizontal resolution. However, the reductions we applied were rather arbitrary. To test model sensitivity to the choice of analysis-nudging and spectral-nudging coefficients, values of one-half and twice the base values were also applied.
Figure 7 shows monthly mean absolute error and mean bias for all three analysis-nudging cases (ANlow, AN, and ANhigh) and all three spectral-nudging cases (SNlow, SN, and SNhigh) for temperature, water vapor mixing ratio, and wind speed. Generally, the differences in mean absolute error are quite small throughout the year, especially for wind speed. For temperature, the base-value coefficients for both analysis and spectral nudging produced the lowest errors for nearly every month. For water vapor, error increased during the summer months as nudging strength increased for both nudging methods. Nudging of water vapor has been somewhat controversial because doing so adds or subtracts mass from the simulated atmosphere. For this reason, we chose our strength for analysis nudging of water vapor to be one-tenth the strength of the other variables in all cases. Nudging of water vapor is not performed at all with spectral nudging in published WRF codes. Nonetheless, there are still discernible differences in the mean absolute error for water vapor between the spectral-nudging cases. For wind speed, increasing the nudging strength nearly always resulted in a very small increase in mean absolute error, although this effect is so small as to be nearly undetectable in Fig. 7.
Fig. 7. Monthly (left) mean absolute error and (right) mean bias for WRF simulations testing nudging strength for AN and SN. Low nudging strength is one-half the base value. High nudging strength is 2 times the base value.
Figure 7 shows some interesting changes in model bias as nudging strengths are changed. For temperature, bias is increased with stronger analysis nudging in all months except November and December. Model biases were already positive in all months except June, so stronger analysis nudging generally degraded the temperature results. This could indicate a positive bias in the R-2 temperature data the model is being nudged toward. Temperature bias was only slightly affected by changes in the strength of spectral nudging with no definite relation of nudging strength to bias correction. The positive model bias in water vapor mixing ratio is improved by stronger analysis nudging and by stronger spectral nudging in every month. Because water vapor is directly nudged in the analysis-nudging method, we might expect to see improvement from that form of nudging. However, the link between stronger spectral nudging and improved bias in water vapor is not direct and suggests complex interactions of model physics. Wind speed bias was improved to a small degree by stronger analysis nudging, but changes to spectral nudging strength had little effect.
We also tested the effect of nudging strength on the amount of precipitation simulated by the 12-km WRF. Figure 8a shows the average monthly total precipitation for all 12-km WRF cells over land when analysis nudging strength is varied up and down by a factor of 2. Figure 8b shows similar results for spectral nudging. The 12-km precipitation behavior is much more sensitive to changes in the strength of analysis nudging than spectral nudging. The strongest analysis nudging reduces the simulated precipitation by ~5%–10%, with the greatest effect in the spring and summer months. Variations in the strength of spectral nudging have little effect in any month. Unlike analysis nudging, spectral nudging is designed to preserve smaller-scale features of the simulation. The lack of sensitivity to spectral nudging strength suggests that the positive precipitation bias is due more to smaller-scale phenomena. Analysis nudging strength has its greatest effect on precipitation amount in the spring and summer when convection is more dominant. The evidence here points to small-scale circulations and convection being a critical component to the large positive bias in precipitation simulated by the 12-km WRF.
Fig. 8. Average of the monthly total precipitation (mm) simulated by the 12-km WRF over land with high, base, and low nudging strengths for (a) AN and (b) SN.
6. Testing alternate physics options
Because of the positive biases found in both water vapor and precipitation, we wanted to see whether alternate choices for convective parameterization and cloud microphysics might reduce these biases. The tests we conducted are in no way conclusive, but a brief discussion of their results is worthy of presentation.
Our physics options based on the previous 36-km modeling included use of the Grell-3 subgrid convection scheme. To test model sensitivity to this choice, we conducted simulations with and without spectral nudging using the Kain–Fritsch scheme instead. The differences we found in mean absolute error and mean bias for temperature, water vapor, and wind speed were all quite small. The strong positive biases in water vapor and precipitation remained. Alapaty et al. (2012) identified a weakness in many convective parameterization schemes where the effects of subgrid convective clouds on radiation are not taken into account. Their treatment for the radiative effects of subgrid convection significantly reduced simulated precipitation. Our research group at the U.S. Environmental Protection Agency is also working to modify convective parameterizations in other ways so as to be applicable at finer scales where current formulations may not be appropriate and may be contributing to the type of positive precipitation bias we found here. In the future, we plan to test these developing techniques for 12-km dynamical downscaling with WRF.
The WRF configuration for the previous work at 36-km grid spacing and for the base case 12-km simulations performed here used the WRF single-moment 6-class microphysics scheme. To test model sensitivity, we instead applied the Morrison double-moment scheme with and without spectral nudging. We found mixed results in terms of model error and bias. There was a reduction in surface temperature during the warmer months (May through September), which led to a negative bias and a general increase in model error. During these same warm months we found a decrease in water vapor, which reduced model error and bias for that variable.
Obviously, there are other WRF options that could influence the simulation of water vapor and precipitation (e.g., land surface model or radiation model). Correcting the positive bias in water vapor and precipitation that we found in nearly all of our 12-km WRF simulations will likely require a follow-on investigation of the entire hydrologic cycle as it is simulated by all model processes.
7. Summary
This work has applied a dynamical downscaling technique previously developed for WRF at 36-km horizontal grid spacing to a finer 12-km grid. Our one-way nesting technique does provide more accurate information for surface-level temperature and wind speed as long as proper adjustments are made to the interior nudging coefficients. Water vapor and precipitation remain problems to be addressed. Mean absolute error in water vapor is not so much degraded in going from 36- to 12-km grid spacing as is the mean bias, which becomes more positive. Stronger interior nudging of either type, analysis or spectral, can provide some improvement to the positive bias in water vapor at the surface. Stronger analysis nudging can reduce the positive bias in precipitation, but stronger spectral nudging does not have much effect. The overall optimum adjustments depend somewhat on the time of year and meteorological variables of most interest, but the base nudging strengths chosen for this study were found to be generally appropriate when both mean absolute error and mean bias are considered. The evaluation against observations demonstrates that interior nudging is required to provide additional accuracy from downscaling to 12-km grid spacing.
Optimum simulation of water vapor mixing ratio and precipitation in 12-km simulations may require a change in physics options from those applied previously with 36-km grid spacing. Previously identified positive biases in water vapor and precipitation from 36-km WRF simulations (Otte et al. 2012) became more pronounced in our 12-km simulations when the same physics options were used. Changing to an alternate convective parameterization scheme had little effect on precipitation bias. We suspect that at this finer horizontal resolution, some larger convective elements in the atmosphere may be resolvable by the model and that subgrid convective parameterizations might be accounting for their precipitation a second time, but investigation of this conjecture is beyond the scope of this study. Moreover, surface-level water vapor was also positively biased. We are left with a kind of “chicken or egg” conundrum: which came first, too much water vapor or too much precipitation? Understanding why our surface-level water vapor and precipitation are both too high requires an investigation of the entire hydrologic cycle that is also beyond the scope of this study.
We intend to move forward with long-term (10–20 yr) applications of 12-km dynamical downscaling with WRF once we have addressed the issues of inland lake surface temperatures and subgrid cloud radiation effects. The required computational and data storage resources are also a concern. However, more spatially refined climate projections have been identified as a critical need by hydrologic and urban air quality managers.
Acknowledgments
The U.S. Environmental Protection Agency through its Office of Research and Development funded and managed the research described here. It has been subjected to Agency review and approved for publication.
We thank the three anonymous reviewers for their comments and suggestions that improved the presentation of our research findings.
APPENDIX
Definition of Statistics
The following statistics are calculated as shown with X representing model simulation values and Y representing observed values.
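Assuming the standard forms, the mean bias (MB) and mean absolute error (MAE) are

```latex
\mathrm{MB}  = \frac{1}{N}\sum_{i=1}^{N}\left(X_i - Y_i\right), \qquad
\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|X_i - Y_i\right|,
```

where N is the number of model-observation pairs. The anomaly correlation referenced in Tables 2–4 is taken to be the Pearson correlation between model and observed departures from a common reference climatology.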
REFERENCES
Alapaty, K., J. A. Herwehe, T. L. Otte, C. G. Nolte, O. R. Bullock, M. S. Mallard, J. S. Kain, and J. Dudhia, 2012: Introducing subgrid-scale cloud feedbacks to radiation for regional meteorological and climate modeling. Geophys. Res. Lett., 39, L24809, doi:10.1029/2012GL054031.
Appel, K. W., R. C. Gilliam, N. Davis, A. Zubrow, and S. C. Howard, 2011: Overview of the atmospheric model evaluation tool (AMET) v1.1 for evaluating meteorological and air quality models. Environ. Modell. Software, 26, 434–443.
Bowden, J. H., T. L. Otte, C. G. Nolte, and M. J. Otte, 2012: Examining interior grid nudging techniques using two-way nesting in the WRF model for regional climate modeling. J. Climate, 25, 2805–2823.
Bowden, J. H., C. G. Nolte, and T. L. Otte, 2013: Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology. Climate Dyn., 40, 1903–1920.
Chen, F., and J. Dudhia, 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model implementation and sensitivity. Mon. Wea. Rev., 129, 569–585.
Daly, C., R. P. Neilson, and D. L. Phillips, 1994: A statistical–topographic model for mapping climatological precipitation over mountainous terrain. J. Appl. Meteor., 33, 140–158.
Davies, H. C., and R. E. Turner, 1977: Updating prediction models by dynamical relaxation: An examination technique. Quart. J. Roy. Meteor. Soc., 103, 225–245.
Gao, Y., J. S. Fu, L. B. Drake, Y. Liu, and J.-F. Lamarque, 2012: Projected changes of extreme weather events in the eastern United States based on a high resolution climate modeling system. Environ. Res. Lett., 7, 044025, doi:10.1088/1748-9326/7/4/044025.
Giorgi, F., 1990: Simulation of regional climate using a limited area model nested in a general circulation model. J. Climate, 3, 941–963.
Grell, G. A., and D. Dévényi, 2002: A generalized approach to parameterizing convection combining ensemble and data assimilation techniques. Geophys. Res. Lett., 29, 1693.
Hong, S.-Y., and J.-O. J. Lim, 2006: The WRF single-moment 6-class microphysics scheme (WSM6). J. Korean Meteor. Soc., 42 (2), 129–151.
Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341.
Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, doi:10.1029/2008JD009944.
Kain, J. S., 2004: The Kain–Fritsch convective parameterization: An update. J. Appl. Meteor., 43, 170–181.
Kanamitsu, M., W. Ebisuzaki, J. Woollen, S.-K. Yang, J. J. Hnilo, M. Fiorino, and G. L. Potter, 2002: NCEP–DOE AMIP-II Reanalysis (R-2). Bull. Amer. Meteor. Soc., 83, 1631–1643.
Lo, J. C.-F., Z.-L. Yang, and R. A. Pielke Sr., 2008: Assessment of three dynamical downscaling methods using the Weather Research and Forecasting (WRF) model. J. Geophys. Res., 113, D09112, doi:10.1029/2007JD009216.
Miguez-Macho, G., G. L. Stenchikov, and A. Robock, 2004: Spectral nudging to eliminate the effects of domain position and geometry in regional climate model simulations. J. Geophys. Res., 109, D13104, doi:10.1029/2003JD004495.
Morrison, H., G. Thompson, and V. Tatarskii, 2009: Impact of cloud microphysics on the development of trailing stratiform precipitation in a simulated squall line: Comparison of one- and two-moment schemes. Mon. Wea. Rev., 137, 991–1007.
Otte, T. L., C. G. Nolte, M. J. Otte, and J. H. Bowden, 2012: Does nudging squelch the extremes in regional climate modeling? J. Climate, 25, 7046–7066.
Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, M. Duda, X.-Y. Huang, W. Wang, and J. G. Powers, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp.
Stauffer, D. R., and N. L. Seaman, 1990: Use of four-dimensional data assimilation in a limited-area model. Part I: Experiments with synoptic-scale data. Mon. Wea. Rev., 118, 1250–1277.
Stauffer, D. R., and N. L. Seaman, 1994: Multiscale four-dimensional data assimilation. J. Appl. Meteor., 33, 416–434.