Search Results
Showing 1–4 of 4 items for:
- Author or Editor: E. S. Takle
- Journal of Hydrometeorology
Abstract
Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2), configured with the continental United States as its domain and run at approximately 52-km spatial resolution, were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model.
Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash–Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and −0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for Bias-RegCM2 and 0.60 to 0.76 for Bias-All). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas runoff simulated from the Bias-All data improves (NS value improved from −0.08 to 0.72).
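The Nash–Sutcliffe score used throughout compares squared simulation errors to the variance of the observations. A minimal sketch of the metric (illustrative, not the authors' code):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency of a simulated series against observations.

    NS = 1 is a perfect match; NS = 0 means the simulation is no better
    than always predicting the observed mean; NS < 0 means it is worse.
    """
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

By this scale, the raw-input NS values of 0.05–0.37 reported above indicate daily simulations barely better than predicting mean runoff.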
These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
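The "magnitude correction" described above can be illustrated as a per-month adjustment of model values toward observed monthly means (multiplicative for precipitation, additive for temperature is a common convention; the paper's exact procedure is not given here, so treat this sketch as an assumption):

```python
import numpy as np

def bias_correct(model, obs, months, kind="multiplicative"):
    """Per-month bias correction of a daily model series toward observations.

    kind="multiplicative" rescales each month by the ratio of observed to
    modeled monthly means (typical for precipitation); kind="additive"
    shifts by the difference of monthly means (typical for temperature).
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    months = np.asarray(months)
    out = model.copy()
    for m in np.unique(months):
        sel = months == m
        if kind == "multiplicative":
            model_mean = model[sel].mean()
            scale = obs[sel].mean() / model_mean if model_mean > 0 else 1.0
            out[sel] = model[sel] * scale
        else:
            out[sel] = model[sel] + (obs[sel].mean() - model[sel].mean())
    return out
```

Note that matching monthly means removes magnitude bias only; it cannot inject the day-to-day variability the abstract finds missing from the bias-corrected RegCM2 output.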
Abstract
Changes in daily precipitation versus intensity under a global warming scenario in two regional climate simulations of the United States show a well-recognized feature of more intense precipitation. More important, by resolving the precipitation intensity spectrum, the changes show a relatively simple pattern for nearly all regions and seasons examined whereby nearly all high-intensity daily precipitation contributes a larger fraction of the total precipitation, and nearly all low-intensity precipitation contributes a reduced fraction. The percentile separating relative decrease from relative increase occurs around the 70th percentile of cumulative precipitation, irrespective of the governing precipitation processes or which model produced the simulation. Changes in normalized distributions display these features much more consistently than distribution changes without normalization.
Further analysis suggests that this consistent response in precipitation intensity may be a consequence of the intensity spectrum’s adherence to a gamma distribution. Under the gamma distribution, when the total precipitation or number of precipitation days changes, there is a single transition between precipitation rates that contribute relatively more to the total and rates that contribute relatively less. The behavior is roughly the same as the results of the numerical models and is insensitive to characteristics of the baseline climate, such as average precipitation, frequency of rain days, and the shape parameter of the precipitation’s gamma distribution. Changes in the normalized precipitation distribution give a more consistent constraint on how precipitation intensity may change when climate changes than do changes in the nonnormalized distribution. The analysis does not apply to extreme precipitation for which the theory of statistical extremes more likely provides the appropriate description.
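The single-transition property can be checked numerically: if daily intensities follow a gamma distribution with shape k, the fraction of total precipitation contributed per unit intensity is itself gamma distributed with shape k + 1, and two such amount spectra with different scale parameters cross exactly once. A sketch with illustrative parameter values (not taken from the simulations):

```python
import math
import numpy as np

def gamma_pdf(x, k, theta):
    """Gamma probability density with shape k and scale theta."""
    return x ** (k - 1) * np.exp(-x / theta) / (math.gamma(k) * theta ** k)

# Daily intensity ~ Gamma(k, theta); the amount-weighted spectrum
# x * f(x) / mean is Gamma(k + 1, theta).
k = 0.7                             # shape parameter (illustrative)
theta_base, theta_warm = 8.0, 10.0  # scale in mm/day (illustrative warming shift)

x = np.linspace(0.1, 300.0, 5000)
base = gamma_pdf(x, k + 1, theta_base)
warm = gamma_pdf(x, k + 1, theta_warm)

# The warmed spectrum contributes relatively less below one transition
# intensity and relatively more above it: exactly one sign change.
sign_changes = np.flatnonzero(np.diff(np.sign(warm - base)))
```

Because the ratio of the two densities is monotone in x, the crossover is a single intensity for any shape parameter, mirroring the roughly 70th-percentile transition reported above.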
Abstract
A regional climate model simulation of the period of 1979–88 over the contiguous United States, driven by lateral boundary conditions from the National Centers for Environmental Prediction–National Center for Atmospheric Research reanalysis, was analyzed to assess the ability of the model to simulate heavy precipitation events and seasonal precipitation anomalies. Heavy events were defined by precipitation totals that exceed the threshold value for a specified return period and duration. The model magnitudes of the thresholds for 1-day heavy precipitation events were in good agreement with observed thresholds for much of the central United States. Model thresholds were greater than observed for the eastern and intermountain western portions of the region and were smaller than observed for the lower Mississippi River basin. For 7-day events, model thresholds were in good agreement with observed thresholds for the eastern United States and Great Plains, were less than observed for most of the Mississippi River valley, and were greater than observed for the intermountain western region. The interannual variability in frequency of heavy events in the model simulation exhibited similar behavior to that of the observed variability in the South, Southwest, West, and North-Central study regions. The agreement was poorer for the Midwest and Northeast, although the magnitude of variability was similar for both model and observations. There was good agreement between the model and observational data in the seasonal distribution of extreme events for the West and North-Central study regions; in the Southwest, Midwest, and Northeast, there were general similarities but some differences in the details of the distributions. The most notable differences occurred for the southern Gulf Coast region, for which the model produced a summer peak that is not present in the observational data.
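The event definition above (totals exceeding a threshold for a specified return period and duration) can be sketched empirically: form running duration-day totals and take the value exceeded, on average, once per return period. This is a simple partial-duration estimate, not necessarily the fitting procedure used in the study:

```python
import numpy as np

def heavy_event_threshold(daily_precip, record_years, duration_days, return_period_years):
    """Empirical threshold for duration_days-day precipitation totals
    with a return_period_years-year return period.

    Takes the n-th largest running total, where n is the record length
    divided by the return period (a partial-duration estimate).
    """
    p = np.asarray(daily_precip, dtype=float)
    totals = np.convolve(p, np.ones(duration_days), mode="valid")
    n = max(1, int(round(record_years / return_period_years)))
    return np.sort(totals)[-n]
```

For example, in a 10-year record the 5-year, 1-day threshold is the second-largest daily total.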
The timing of individual heavy events was not highly correlated between the model and observations, reflecting differences in the speed and path of many of the synoptic-scale systems that triggered the precipitation.
Abstract
Thirteen regional climate model (RCM) simulations of June–July 1993 were compared with each other and observations. Water vapor conservation and precipitation characteristics in each RCM were examined for a 10° × 10° subregion of the upper Mississippi River basin, containing the region of maximum 60-day accumulated precipitation in all RCMs and station reports.
All RCMs produced positive precipitation minus evapotranspiration (P − E > 0), though most RCMs produced P − E below the observed range. RCM recycling ratios were within the range estimated from observations. No evidence of a common error in E was found; in contrast, a common dry bias in P was found across the simulations.
Daily cycles of terms in the water vapor conservation equation were qualitatively similar in most RCMs. Nocturnal maxima of P and C (moisture convergence) occurred in 9 of 13 RCMs, consistent with observations. Three of the four driest simulations failed to couple P and C overnight, producing an afternoon maximum in P. Further, dry simulations tended to produce a larger fraction of their 60-day accumulated precipitation from low 3-h totals.
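The intensity-binned accumulation used in this comparison (how much of a 60-day total comes from low 3-h amounts) reduces to summing values at or below a cutoff. A minimal sketch, with an arbitrary cutoff rather than the bins used in the study:

```python
import numpy as np

def fraction_from_low_totals(three_hour_totals, cutoff_mm):
    """Fraction of accumulated precipitation contributed by 3-h totals
    at or below cutoff_mm. Dry simulations in the comparison drew a
    larger share of their accumulation from such low totals.
    """
    p = np.asarray(three_hour_totals, dtype=float)
    total = p.sum()
    return float(p[p <= cutoff_mm].sum() / total) if total > 0 else 0.0
```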
In station reports, accumulation from high (low) 3-h totals had a nocturnal (early morning) maximum. This time lag occurred, in part, because many mesoscale convective systems had reached peak intensity overnight and had declined in intensity by early morning. None of the RCMs contained such a time lag. It is recommended that short-period experiments be performed to examine the ability of RCMs to simulate mesoscale convective systems prior to generating long-period simulations for hydroclimatology.