Search Results
You are looking at 1 - 10 of 10 items for
- Author or Editor: Terri Hogue
Abstract
This paper outlines the development of a continuous, daily time series of potential evapotranspiration (PET) using Moderate Resolution Imaging Spectroradiometer (MODIS) sensor data from the Terra satellite platform. The approach is based on the Priestley–Taylor equation, incorporating a daily net radiation model for cloudless days. A simple algorithm using “theoretical clear-sky” net radiation (incorporating daily cloud fraction and cloud optical thickness) and PET is then used to estimate net radiation and PET under cloudy conditions. The method requires minimal ground-based observations for initial calibration of regional radiation algorithm coefficients. Point-scale comparisons are undertaken at four flux-tower sites in North America covering a range of hydroclimatic conditions and biomes. Preliminary results at the daily time step for a 4-yr period (2001–04) show good correlation (R² = 0.89) and low bias (0.34 mm day⁻¹) for three of the more humid sites. Results are further improved when aggregated to the monthly time scale (R² = 0.95, bias = 0.31 mm day⁻¹). Performance at the semiarid site is less satisfactory (R² = 0.95, bias = 2.05 mm day⁻¹ at the daily time step). In general, the MODIS-based daily PET estimates derived in this study are promising and show potential for use in theoretical and operational water resource studies in both gauged and ungauged basins.
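For illustration, a minimal Python sketch of the Priestley–Taylor PET calculation this abstract describes. The coefficient α = 1.26, the FAO-56 Tetens-type slope relation, and the variable names are standard textbook choices, not the study's calibrated regional radiation coefficients, which are not reproduced here.

```python
import numpy as np

ALPHA = 1.26    # standard Priestley-Taylor coefficient (dimensionless)
GAMMA = 0.066   # psychrometric constant near sea level (kPa / degC)
LAMBDA = 2.45   # latent heat of vaporization (MJ / kg)

def sat_vapor_slope(t_air_c):
    """Slope of the saturation vapor pressure curve (kPa / degC),
    FAO-56 Tetens-type form."""
    es = 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3))
    return 4098.0 * es / (t_air_c + 237.3) ** 2

def priestley_taylor_pet(rn, g, t_air_c):
    """Daily PET (mm / day) from net radiation rn and ground heat flux g
    (both MJ m-2 day-1) and mean air temperature (degC)."""
    delta = sat_vapor_slope(t_air_c)
    le = ALPHA * delta / (delta + GAMMA) * (rn - g)
    return le / LAMBDA  # 1 mm of evaporation ~ 2.45 MJ m-2
```

Under cloudy skies, the study scales a theoretical clear-sky net radiation by cloud fraction and optical thickness before this step; that regionally calibrated scaling is not shown here.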
Abstract
The current research examines the influence of irrigation on urban hydrological cycles through the development of an irrigation scheme within the Noah land surface model (LSM)–Single Layer Urban Canopy Model (SLUCM) system. The model is run at 30-m resolution for a 2-yr period over a 49 km² urban domain in the Los Angeles metropolitan area. A sensitivity analysis indicates that diurnal and monthly energy budgets, hydrological fluxes, and state variables are significantly sensitive to both the amount and timing of irrigation. Monthly residential water use data and three estimates of outdoor water consumption are used to calibrate the developed irrigation scheme. Model performance is evaluated using previously developed MODIS–Landsat evapotranspiration (ET) and Landsat land surface temperature (LST) products as well as hourly ET observations from the California Irrigation Management Information System (CIMIS). Results show that the Noah LSM–SLUCM realistically simulates the diurnal and seasonal variations of ET when the irrigation module is incorporated. Without irrigation, however, the model produces large biases in ET simulations. The ET errors for the nonirrigation simulations are −56 and −90 mm month⁻¹ for July 2003 and July 2004, respectively, while these values decline to −6 and −11 mm month⁻¹ over the same 2 months when the proposed irrigation scheme is adopted. Results also show that the irrigation-induced increase in latent heat flux leads to a decrease in LST of about 2°C in urban parks. The developed modeling framework can be utilized for a number of applications, ranging from outdoor water use estimation to climate change impact assessment.
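The paper's irrigation scheme itself is not reproduced here; the following is a minimal, hypothetical sketch of how a timed irrigation flux might be injected into a land surface model's soil-moisture update, which is the structural idea the abstract describes. The watering hours, monthly depth, and layer depth are placeholders, not the calibrated values.

```python
def apply_irrigation(soil_moisture, hour_local, monthly_depth_mm,
                     watering_hours=(6,), layer_depth_mm=100.0):
    """Add a prescribed irrigation depth to the top soil layer at fixed
    local hours; amount and timing are the quantities one would calibrate
    against residential water use data."""
    per_day_mm = monthly_depth_mm / 30.0              # assume daily watering
    if hour_local in watering_hours:
        applied_mm = per_day_mm / len(watering_hours)
        soil_moisture += applied_mm / layer_depth_mm  # volumetric increment
    return soil_moisture
```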
Abstract
Future operational frameworks for estimating surface turbulent fluxes over the necessary spatial and temporal scales will undoubtedly require the use of remote sensing products. Techniques used to estimate surface fluxes from radiometric surface temperature generally fall into two categories: retrieval-based and data assimilation approaches. To date, there has been little comparison between retrieval- and assimilation-based techniques. In this note, the triangle retrieval method is compared to a variational data assimilation approach for estimating surface turbulent fluxes from radiometric surface temperature observations. Results from a set of synthetic experiments and an application using real data from the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE) site indicate that the assimilation approach performs slightly better than the triangle method because of the robustness of the estimation to measurement errors and the parsimony of the system model, which leads to fewer sources of structural model error. Future comparison work using retrieval and data assimilation algorithms will provide more insight into the optimal approach for diagnosing land surface fluxes using remote sensing observations.
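As a sketch of the variational idea, the snippet below minimizes the standard quadratic cost, an observation misfit plus a background penalty toward a prior, over two parameters of a deliberately toy surface temperature model. The toy model, parameter names, and error variances are illustrative assumptions; the note's actual system model (and the triangle method) are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def toy_lst_model(params, rn):
    """Toy stand-in for a surface temperature model: LST anomaly responds
    linearly to net radiation scaled by (1 - evaporative fraction)."""
    ef, c = params
    return c * (1.0 - ef) * rn

def cost(params, t_obs, rn, prior, prior_var, obs_var):
    """Variational cost: observation misfit plus penalty toward the prior."""
    resid = t_obs - toy_lst_model(params, rn)
    return np.sum(resid ** 2) / obs_var + np.sum((params - prior) ** 2 / prior_var)

rng = np.random.default_rng(0)
rn = np.linspace(200.0, 600.0, 24)                 # idealized forcing, W m-2
t_obs = toy_lst_model([0.6, 0.05], rn) + rng.normal(0.0, 0.5, 24)
prior = np.array([0.5, 0.04])
result = minimize(cost, prior,
                  args=(t_obs, rn, prior, np.array([0.1, 0.01]), 0.25))
ef_hat, c_hat = result.x  # recovered evaporative fraction and coefficient
```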
Abstract
Hydrologic model evaluations have traditionally focused on measuring how closely the model can simulate various characteristics of historical observations. Although advancing hydrologic forecasting is an often-stated goal of numerous modeling studies, testing in a forecasting mode is seldom undertaken, limiting the information derived from these analyses. One can overcome this limitation through the generation, and subsequent analysis, of ensemble hindcasts. In this study, long-range ensemble hindcasts are generated for the available period of record for a basin in southwestern Idaho for the purpose of evaluating the Snow–Atmosphere–Soil Transfer (SAST) model against the current operational benchmark, the National Weather Service’s (NWS) snow accumulation and ablation model SNOW17. Both snow models were coupled with the NWS operational rainfall–runoff model, and ensembles of seasonal discharge and weekly snow water equivalent (SWE) were evaluated. Ensemble predictions from both the SAST and SNOW17 models were better than climatology forecasts for the period studied. In most cases, the accuracy of the SAST-generated predictions was similar to that of the SNOW17-generated predictions, except during periods of significant melting. Differences in model performance are partially attributed to initial condition errors. After updating the SWE state in the snow models with observed SWE, the forecasts improved during the first 2–4 weeks of the forecast window, and skill was essentially equal for the two forecasting systems in the study watershed. Climate dominated the forecast uncertainty in the latter part of the forecast window, while initial conditions controlled the forecast skill in the first 3–4 weeks. The use of hindcasting in the snow model analysis revealed that, given the dominance of initial conditions on forecast skill, streamflow predictions will be most improved through the use of state updating.
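One common way to express "better than climatology," and to see skill decay with lead time, is a mean-squared-error skill score computed per lead time; a minimal sketch follows (the study's specific verification metrics may differ).

```python
import numpy as np

def skill_vs_climatology(ens_fcst, obs, clim):
    """MSE skill score of the ensemble mean against a climatology
    forecast: 1 is perfect, 0 is no better than climatology.
    ens_fcst has shape (n_members, n_times)."""
    mse_fcst = np.mean((ens_fcst.mean(axis=0) - obs) ** 2)
    mse_clim = np.mean((clim - obs) ** 2)
    return 1.0 - mse_fcst / mse_clim
```

Computed week by week across the forecast window, such a score would display the pattern the abstract reports: initial conditions dominate early leads (so SWE state updating helps most there), while climate uncertainty dominates later leads.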
Abstract
A satellite-based potential evapotranspiration (PET) estimate derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations was tested as input to the spatially lumped and gridded Sacramento Soil Moisture Accounting (SAC-SMA) model. The 15 forecast points within the National Weather Service (NWS) North Central River Forecast Center (NCRFC) forecasting region were the basis for this analysis. Through a series of case studies, the MODIS-derived PET estimate (M-PET) was evaluated as input to the SAC-SMA model by comparing streamflow simulations with those driven by the traditional SAC-SMA evapotranspiration (ET) demand. Two prior studies evaluated the M-PET data 1) as the basis for new long-term average ET demand values and 2) as a daily time-varying PET input to the NWS Hydrology Laboratory–Research Distributed Hydrologic Model (HL-RDHM), a spatially distributed version of the SAC-SMA model. The current paper presents results from a third test in which the M-PET time series is input to the lumped SAC-SMA model. In all cases, the evaluation compares the M-PET data with the long-term average values used by the NWS. Similar to the prior studies, results of the current analysis are mixed, with improved model evaluation statistics for 4 of the 15 basins tested. Of the three cases, using the time-varying M-PET as input to the distributed SAC-SMA model led to the most promising results, with model simulations that are at least as good as those using the SAC-SMA ET demand. Analyses of the model-simulated ET suggest that the time-varying M-PET input may produce a more physically realistic representation of ET processes in both the lumped and distributed versions of the SAC-SMA model.
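A minimal sketch of the input substitution being tested: choosing between a daily satellite PET value and a demand value interpolated from climatology. The convention of 12 mid-month demand values interpolated to daily is assumed here for the long-term average demand; the gap handling and units are illustrative.

```python
import numpy as np

def pet_forcing(day_of_year, mpet_daily, monthly_demand):
    """PET forcing for one day: the daily M-PET value when available,
    otherwise a value interpolated from 12 mid-month demand values.
    mpet_daily: length-365 array (mm/day) with NaN for missing days;
    monthly_demand: length-12 array of mid-month demands (mm/day)."""
    value = mpet_daily[day_of_year - 1]
    if not np.isnan(value):
        return float(value)
    midpoints = np.arange(15.0, 365.0, 30.4)  # approximate mid-month days
    return float(np.interp(day_of_year, midpoints, monthly_demand, period=365))
```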
Abstract
Satellite-derived potential evapotranspiration (PET) estimates computed from Moderate Resolution Imaging Spectroradiometer (MODIS) observations and the Priestley–Taylor formula (M-PET) are evaluated as input to the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM). The HL-RDHM is run at 4-km spatial and 6-h temporal resolution for 13 watersheds in the upper Mississippi and Red River basins for 2003–10. Simulated discharge using inputs of daily M-PET is evaluated for all watersheds, and simulated evapotranspiration (ET) is evaluated at two watersheds using nearby latent heat flux observations. M-PET–derived model simulations are compared to output using the long-term average PET values (default-PET) provided as part of the HL-RDHM application. In addition, uncalibrated and calibrated simulations are evaluated for both PET data sources. Calibrating select model parameters is found to substantially improve simulated discharge for both datasets. Overall average percent bias (PBias) and Nash–Sutcliffe efficiency (NSE) values for simulated discharge are better from the default-PET than from the M-PET for the calibrated models during the verification period, indicating that the time-varying M-PET input did not improve the discharge simulation in the HL-RDHM. M-PET tends to produce higher NSE values than the default-PET for the Wisconsin and Minnesota basins, but lower NSE values for the Iowa basins. M-PET–simulated ET matches the range and variability of observed ET better than the default-PET at the two sites studied and may offer model improvements in that regard.
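The two evaluation statistics named here are standard; a minimal sketch follows (note that sign conventions for percent bias vary between groups).

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means no better than
    always predicting the observed mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    """Percent bias; with this convention, positive values indicate
    oversimulation of total discharge."""
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```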
Abstract
This paper investigates the performance of the National Centers for Environmental Prediction (NCEP) Noah land surface model at two semiarid sites in southern Arizona. The goal is to evaluate the transferability of calibrated parameters (i.e., direct application of a parameter set to a “similar” site) between the sites and to analyze model performance under the various climatic conditions that can occur in this region. A multicriteria, systematic evaluation scheme is developed to meet these goals. Results indicate that the Noah model is able to simulate sensible heat, ground heat, and ground temperature observations with a high degree of accuracy using the optimized parameter sets. However, there is a large influx of moist air into Arizona during the monsoon period, and significant latent heat flux errors are observed in model simulations during these periods. The use of proxy site parameters (a transferred parameter set), as well as traditional default parameters, results in diminished model performance when compared to a set of parameters calibrated specifically to the flux sites. Also, using a parameter set obtained from a longer-time-frame calibration (i.e., a 4-yr period) results in decreased model performance during nonstationary, short-term climatic events, such as a monsoon or El Niño. Although these results are specific to the sites in Arizona, it is hypothesized that they may hold true for other case studies. In general, there is still opportunity for improvement in the representation of physical processes in land surface models for semiarid regions. The hope is that rigorous model evaluation, such as that put forth in this analysis, and studies such as the Project for Intercomparison of Land-surface Parameterization Schemes (PILPS) San Pedro–Sevilleta experiment will lead to advances in model development, as well as parameter estimation and transferability, for use in long-term climate and regional environmental studies.
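A multicriteria evaluation of the kind described reduces each model run to a vector of scores, one per flux or state, so that a transferred parameter set can be compared against a locally calibrated one criterion by criterion. A minimal sketch with illustrative flux keys (the paper's exact criteria are not reproduced here):

```python
import numpy as np

def multicriteria_scores(sim, obs, criteria=("H", "G", "Tg")):
    """RMSE for each evaluated flux/state, given dicts of aligned arrays
    keyed by flux name (e.g., sensible heat H, ground heat G,
    ground temperature Tg)."""
    return {k: float(np.sqrt(np.mean((sim[k] - obs[k]) ** 2)))
            for k in criteria}
```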
Abstract
This study compares mean areal precipitation (MAP) estimates derived from three sources: an operational rain gauge network (MAPG), a radar/gauge multisensor product (MAPX), and the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) satellite-based system (MAPS) for the period from March 2000 to November 2003. The study area includes seven operational basins of varying size and location in the southeastern United States. The analysis indicates that agreement between the datasets varies considerably from basin to basin and also temporally within basins. The analysis also includes evaluation of MAPS against MAPG for use in flow forecasting with a lumped hydrologic model [the Sacramento Soil Moisture Accounting model (SAC-SMA)]. The latter evaluation investigates two different parameter sets: the first obtained using manual calibration on historical MAPG, and the second obtained using automatic calibration on both MAPS and MAPG over a shorter time period (23 months). Results indicate that the overall performance of the model simulations using MAPS depends on both the bias in the precipitation estimates and the size of the basins, with poorer performance in smaller basins (large bias between MAPG and MAPS) and better performance in larger basins (less bias between MAPG and MAPS). When using MAPS, calibration of the parameters significantly improved model performance.
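A minimal sketch of the two quantities being compared: mean areal precipitation over a basin, and the relative bias of one MAP series against the gauge benchmark (the grid handling and masking are illustrative assumptions).

```python
import numpy as np

def mean_areal_precip(precip_grid_mm, basin_mask):
    """Average a precipitation grid (mm) over cells inside a basin mask
    (boolean array of the same shape)."""
    return float(precip_grid_mm[basin_mask].mean())

def relative_bias(map_test, map_ref):
    """Relative bias of a test MAP series (e.g., MAPS) against a
    reference series (e.g., MAPG) over a common period."""
    return float((np.sum(map_test) - np.sum(map_ref)) / np.sum(map_ref))
```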
Abstract
Operational flood forecasting models vary in complexity, but nearly all have parameters for which values must be estimated. The traditional and widespread manual calibration approach requires considerable training and experience and is typically laborious and time consuming. Under the Advanced Hydrologic Prediction System modernization program, National Weather Service (NWS) hydrologists must produce rapid calibrations for roughly 4000 forecast points throughout the United States. The classical single-objective automatic calibration approach, although fast and objective, has not received widespread acceptance among operational hydrologists. In the work reported here, University of Arizona researchers and NWS personnel have collaborated to combine the strengths of the manual and automatic calibration strategies. The result is a multistep automatic calibration scheme (MACS) that emulates the progression of steps followed by NWS hydrologists during manual calibration and rapidly provides acceptable parameter estimates. The MACS approach was tested on six operational basins (drainage areas from 671 to 1302 km²) in the North Central River Forecast Center (NCRFC) area. The results were found to compare favorably with the NCRFC manual calibrations in terms of both visual inspection and statistical measures, such as daily root-mean-square error and percent bias by flow group. Further, implementation of the MACS procedure requires only about 3–4 person-hours per basin, in contrast to the 15–20 person-hours typically required using the manual approach. Based on this study, the NCRFC has opted to perform further testing of the MACS procedure at the large number of forecast points that constitute the Grand River (Michigan) forecast group. MACS is a time-saving, reliable approach that can provide calibrations of comparable quality to the NCRFC’s current methods.
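A structural sketch of a multistep calibration loop: each step optimizes one subset of parameters against its own objective while holding earlier-calibrated values fixed, mimicking the order of manual calibration. The published MACS work used a global optimizer and NWS-prescribed step objectives; the local optimizer and generic callables below are stand-ins.

```python
from scipy.optimize import minimize

def macs(steps, params, simulate, observed):
    """steps: list of (param_names, objective) pairs in calibration order;
    params: dict of initial parameter values; simulate: maps a parameter
    dict to a simulated hydrograph; objective: scores sim vs observed."""
    for names, objective in steps:
        def step_cost(x):
            trial = dict(params, **dict(zip(names, x)))
            return objective(simulate(trial), observed)
        x0 = [params[n] for n in names]
        best = minimize(step_cost, x0, method="Nelder-Mead").x
        params.update(zip(names, best))  # freeze this step's results
    return params
```

In practice, early steps might target slow-response (baseflow) parameters with a log-transformed error measure and later steps the quick-response parameters; those pairings are the scheme's substance and are documented in the underlying paper, not here.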