Search Results
Showing 1–10 of 36 items for
- Author or Editor: David Mocko
Abstract
The Regional Atmospheric Modeling System (RAMS), developed at Colorado State University, was used to predict boundary-layer clouds and diagnose fractional cloudiness. The primary case study for this project occurred on 7 July 1987 off the coast of southern California. On this day, a transition in the type of boundary-layer cloud was observed from a clear area, to an area of small scattered cumulus, to an area of broken stratocumulus, and finally, to an area of solid stratocumulus. This case study occurred during the First ISCCP (International Satellite Cloud Climatology Project) Regional Experiment field study. RAMS was configured as a nested-grid mesoscale model with a fine grid having 5-km horizontal grid spacing covering the transition area.
Various fractional cloudiness schemes found in the literature were implemented into RAMS and tested against each other to determine which best represented observed conditions. The complexities of the parameterizations used to diagnose the fractional cloudiness varied from simple functions of relative humidity to a function of the model's subgrid variability. It was found that some of the simpler schemes identified the cloud transition better, while others performed poorly.
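The simplest relative-humidity-based diagnostics mentioned above can be sketched as follows. The Sundqvist-style functional form and the critical relative humidity value used here are illustrative assumptions, not the specific schemes tested in the study.

```python
import math

def cloud_fraction_rh(rh, rh_crit=0.8):
    """Diagnose fractional cloudiness from grid-mean relative humidity.

    Illustrative Sundqvist-style form: zero cloud below the critical RH,
    rising smoothly to overcast at saturation. rh and rh_crit are
    fractions in [0, 1]; rh_crit = 0.8 is a placeholder value.
    """
    if rh <= rh_crit:
        return 0.0
    if rh >= 1.0:
        return 1.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))
```

A scheme of this kind depends only on grid-mean humidity, in contrast to the subgrid-variability approaches also mentioned above, which require higher-order moments from the model.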
Abstract
Four different methods of estimating land surface evapotranspiration are compared by forcing each scheme with near-surface atmospheric and soil- and vegetation-type forcing data available through International Satellite Land Surface Climatology Project Initiative I for a 2-yr period (1987–88). The three classical energy balance methods of Penman, Priestley–Taylor, and Thornthwaite are chosen; however, the Thornthwaite method is combined with a Mintz formulation of the relationship between actual and potential evapotranspiration. The fourth method uses the Simplified Simple Biosphere Model (SSiB), which is currently used in the climate version of the Goddard Earth Observing System II GCM. The goal of this study is to determine the benefit of using SSiB, as opposed to one of the energy balance schemes, for accurate simulation of surface fluxes and hydrology. Direct comparison of sensible and latent heat fluxes and ground temperature is not possible because such datasets are not available; however, the schemes are intercompared. The Penman and Priestley–Taylor schemes produce higher evapotranspiration than SSiB, while the Mintz–Thornthwaite scheme produces lower evapotranspiration than SSiB. Comparisons of model-derived soil moisture with observations show that SSiB performs well in Illinois but poorly in central Russia. This latter problem has been traced to errors in the calculation of snowmelt and its infiltration. Overall, runoff in the energy balance schemes shows less of a seasonal cycle than in SSiB, partly because a larger contribution of snowmelt in SSiB goes directly into runoff. However, basin- and continental-scale runoff values from SSiB agree better with observations than those from each of the three energy balance methods. This implies a better simulation of evapotranspiration and the hydrologic cycle by SSiB than by the energy balance methods.
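As a point of reference for the energy balance methods compared above, the Priestley–Taylor latent heat flux can be sketched in its standard textbook form. The Tetens saturation vapor pressure formula and the constants below are common defaults, not necessarily the exact values used in the study.

```python
import math

def priestley_taylor_le(rn, g, t_c, alpha=1.26, gamma=0.066):
    """Priestley-Taylor latent heat flux (W m-2).

    rn: net radiation and g: ground heat flux (both W m-2);
    t_c: near-surface air temperature (deg C). alpha = 1.26 is the
    usual Priestley-Taylor coefficient; gamma is the psychrometric
    constant (kPa per deg C).
    """
    # Tetens saturation vapor pressure (kPa) and its slope with temperature
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    delta = 4098.0 * es / (t_c + 237.3) ** 2
    return alpha * delta / (delta + gamma) * (rn - g)
```

Because the scheme depends only on available energy and temperature, with no explicit surface or aerodynamic resistance, it tends toward the higher evapotranspiration noted above relative to a biophysical model such as SSiB.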
Abstract
Refinements to the snow-physics scheme of the Simplified Simple Biosphere Model (SSiB) are described and evaluated. The upgrades include a partial redesign of the conceptual architecture of the snowpack to better simulate the diurnal temperature of the snow surface. For a deep snowpack, there are two separate prognostic temperature snow layers: the top layer responds to diurnal fluctuations in the surface forcing, while the deep layer exhibits a slowly varying response. In addition, a very deep soil temperature and a treatment of snow aging and its influence on snow density are parameterized and evaluated. The upgraded snow scheme produces better timing of snowmelt in Global Soil Wetness Project (GSWP)-style simulations using International Satellite Land Surface Climatology Project (ISLSCP) Initiative I data for 1987–88 in the Russian Wheat Belt region.
To simulate more realistic runoff in regions with high orographic variability, additional improvements are made to SSiB's soil hydrology. These improvements include an orography-based surface runoff scheme as well as interaction with a water table below SSiB's three soil layers. The addition of these parameterizations further helps to simulate more realistic runoff and accompanying prognostic soil moisture fields in the GSWP-style simulations.
In intercomparisons of the performance of the new snow-physics SSiB with its earlier versions using an 18-yr single-site dataset from Valdai, Russia, the revised version of SSiB described in this paper again produces the earliest onset of snowmelt. Soil moisture and deep soil temperatures also compare favorably with observations.
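The two-layer snow temperature idea (a fast-responding top layer and a slowly varying deep layer) can be illustrated with a minimal explicit relaxation model. The coupling rates below are hypothetical placeholders, not SSiB's actual snow physics.

```python
import math

def step_snow_temps(t_top, t_deep, t_forcing, dt, k_top=0.5, k_deep=0.05):
    """One explicit update of a toy two-layer snowpack temperature model.

    The top layer relaxes quickly toward the surface forcing (diurnal
    cycle); the deep layer relaxes slowly toward the top layer. k_top and
    k_deep are hypothetical coupling rates (per hour); dt is in hours.
    """
    new_top = t_top + dt * k_top * (t_forcing - t_top)
    new_deep = t_deep + dt * k_deep * (t_top - t_deep)
    return new_top, new_deep

# Drive the pack with a sinusoidal diurnal forcing for four days: the top
# layer tracks the cycle while the deep layer's swing is much smaller.
top, deep = -5.0, -5.0
tops, deeps = [], []
for hour in range(96):
    forcing = -5.0 + 10.0 * math.sin(2.0 * math.pi * hour / 24.0)
    top, deep = step_snow_temps(top, deep, forcing, dt=1.0)
    tops.append(top)
    deeps.append(deep)
```

The separation of time scales in this sketch mirrors the design goal described above: a surface layer that can resolve the diurnal cycle without forcing the whole snowpack to swing with it.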
Abstract
While investigating linkages between afternoon peak rainfall amount and land–atmosphere coupling strength, a statistically significant trend in phase 2 of the North American Land Data Assimilation System (NLDAS-2) warm-season (April–September) afternoon (1700–2259 UTC) precipitation was noted for a large fraction of the conterminous United States, namely, two-thirds of the area east of the Mississippi River, during the period from 1979 to 2015. To verify and better characterize this trend, a thorough statistical analysis is undertaken. The analysis focuses on three aspects of precipitation: amount, frequency, and intensity, at the 6-hourly time scale and for each calendar month separately. At the NLDAS-2 native resolution of 0.125° × 0.125°, Kendall’s tau and Sen’s slope estimators are used to detect and estimate trends, and the Pettitt test is used to detect breakpoints. Parallel analyses are conducted on both NARR and Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), subdaily precipitation estimates. Widespread breakpoints of field significance at the α = 0.05 level are detected in the NLDAS-2 frequency and intensity series for all months and 6-h periods; these breakpoints are absent from the analogous NARR and MERRA-2 datasets. The breakpoints correspond to a July 1996 NLDAS-2 transition from hourly 2° × 2.5° NOAA/CPC precipitation estimates to hourly 4-km stage II Doppler radar precipitation estimates in the temporal disaggregation of the CPC daily gauge analyses. While NLDAS-2 may provide the most realistic diurnal precipitation cycle overall, users should be aware of this discontinuity, its direct effect on long-term trends in subdaily precipitation, and its indirect effects on trends in modeled soil moisture, surface temperature, surface energy and water fluxes, snow cover, snow water equivalent, and runoff/streamflow.
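The trend and breakpoint statistics named above have simple plug-in forms. The minimal implementations below sketch Kendall's tau, Sen's slope, and the Pettitt statistic; they omit tie handling and significance testing, so they are illustrations rather than the study's exact procedures.

```python
def kendall_tau(y):
    """Kendall's tau of a series against its time index: the normalized
    excess of concordant over discordant pairs (no tie correction)."""
    n = len(y)
    s = sum((y[j] > y[i]) - (y[j] < y[i])
            for i in range(n) for j in range(i + 1, n))
    return 2.0 * s / (n * (n - 1))

def sens_slope(y):
    """Sen's slope: the median of all pairwise slopes."""
    slopes = sorted((y[j] - y[i]) / (j - i)
                    for i in range(len(y)) for j in range(i + 1, len(y)))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])

def pettitt_breakpoint(y):
    """Pettitt change point: the index t maximizing |U_t|, where
    U_t = sum over i <= t < j of sign(y[j] - y[i])."""
    n = len(y)
    sgn = lambda d: (d > 0) - (d < 0)
    u = [abs(sum(sgn(y[j] - y[i])
                 for i in range(t + 1) for j in range(t + 1, n)))
         for t in range(n - 1)]
    t_best = max(range(n - 1), key=lambda t: u[t])
    return t_best, u[t_best]
```

On a series with an abrupt shift, such as the July 1996 discontinuity described above, the Pettitt statistic peaks at the last index before the shift.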
Abstract
Using information theory, our study quantifies the importance of selected indicators to the U.S. Drought Monitor (USDM) maps. We use mutual information (MI) to measure the importance of any indicator to the USDM, and because MI is derived solely from the data, our findings are independent of any model structure (conceptual, physically based, or empirical). We also compare these MIs against the drought representation effectiveness ratings in the North America Drought Indices and Indicators Assessment (NADIIA) survey for Köppen climate zones. This reveals 1) agreement between some ratings and our MI values [both high for indicators such as the standardized precipitation evapotranspiration index (SPEI)]; 2) some divergences [e.g., soil moisture has high ratings but near-zero MIs for ESA Climate Change Initiative (CCI) soil moisture in the Western United States, indicating the need for another remotely sensed soil moisture source]; and 3) new insights into the importance of variables such as snow water equivalent (SWE) that are not included in sources like NADIIA. Further analysis of the MI results yields findings related to 1) hydrological mechanisms (summertime SWE domination during individual drought events through snowmelt into the water-scarce soil); 2) hydroclimatic types (the top pair of inputs in the Western and non-Western regions are SPEIs and soil moistures, respectively); and 3) predictability (high for the California 2012–17 event, with longer-time-scale indicators dominating). Finally, the high MIs between multiple indicators jointly and the USDM indicate potentially high drought forecasting accuracies achievable using only model-based inputs, and the potential for global drought monitoring using only remotely sensed inputs, especially for locations having insufficient in situ observations.
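A plug-in estimate of mutual information between a discretized indicator series and USDM category labels can be sketched as follows; discretizing (binning) the continuous indicators is assumed to have been done beforehand, and the estimator here is the basic empirical-frequency form, not necessarily the study's exact estimator.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual information (in bits) between two discrete series,
    e.g. a binned drought indicator and USDM category labels."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts of the indicator
    py = Counter(ys)            # marginal counts of the labels
    mi = 0.0
    for (x, y), count in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) p(y)) ), expressed with raw counts
        mi += (count / n) * math.log2(count * n / (px[x] * py[y]))
    return mi
```

MI is zero for independent series and reaches the label entropy when the indicator determines the category exactly, which is what makes it a model-free importance measure as described above.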
Significance Statement
Drought maps from the U.S. Drought Monitor and the Objective Short- and Long-Term Drought Indicator Blends and Blend Equivalents are integrated information sources on the different types of drought. Multiple indicators go into the creation of these maps, yet the importance of any given indicator, in any region and season, is usually not clear to public and private stakeholders such as local agencies and insurance companies. Our study provides such objective information to enable understanding of the mechanism and type of drought occurring at a location, season, and possibly event of interest, as well as to potentially aid better drought monitoring and forecasting using smaller custom sets of indicators.
Abstract
The downwelling shortwave radiation at the earth’s land surface is affected by the terrain characteristics of slope and aspect. These adjustments, in turn, impact the evolution of snow over such terrain. This article presents a multiscale evaluation of the impact of terrain-based adjustments to incident shortwave radiation on snow simulations over two midlatitude regions using two versions of the Noah land surface model (LSM). The evaluation is performed by comparing the snow cover simulations against the 500-m Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover product. The model simulations are evaluated using categorical measures, such as the probability of detection of “yes” events (PODy), which measures the fraction of observed snow cover that was correctly simulated, and the false alarm ratio (FAR), which measures the fraction of simulated snow events for which no snow was observed. The results indicate that the terrain-based correction of radiation leads to systematic improvements in the snow cover estimates in both domains and in both LSM versions (with roughly 12% overall improvement in PODy and 5% improvement in FAR), with larger improvements observed during snow accumulation and melt periods. When the overall improvements are stratified by the four cardinal aspect categories, the north- and south-facing slopes contribute the most to the PODy and FAR improvements. A two-dimensional discrete Haar wavelet analysis for the two study areas indicates that the PODy improvements in snow cover estimation drop below 10% at scales coarser than 16 km, whereas the FAR improvements are below 10% at scales coarser than 4 km.
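The two categorical scores can be computed directly from a 2 × 2 contingency table of simulated versus observed snow cover; the definitions below are the standard forecast verification forms.

```python
def pod_far(hits, misses, false_alarms):
    """Categorical verification scores against a binary reference such as
    the MODIS snow cover product.

    hits: snow simulated where snow was observed
    misses: snow observed but not simulated
    false_alarms: snow simulated where none was observed
    """
    pody = hits / (hits + misses)               # probability of detection (yes)
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    return pody, far
```

An improvement therefore means PODy moving toward 1 and FAR moving toward 0, which is the sense in which the 12% and 5% figures above are reported.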
Abstract
A collection of eight operational global analyses over a 27-month period has been processed to common data structures to facilitate comparisons among the analyses and global observational datasets. The present study evaluated the global precipitation, outgoing longwave radiation (OLR) at the top of the atmosphere, and basin-scale precipitation over the United States. In addition, a multimodel ensemble was created from a linear average of the available data, as close to the analysis time as each system permitted. The results show that the monthly global precipitation and OLR from the multimodel ensemble generally compare better to the observations than any single analysis. Likewise, the daily precipitation from the ensemble exhibits better statistical comparison (in space and time) to gauge observations over the Mississippi River basin. However, the comparisons exhibit seasonality, with the members of the ensemble generally showing more skill during winter. The ensemble's advantage is notably higher for the summertime basin precipitation. Using the global precipitation and OLR, sensitivity was tested by selectively choosing the members with the best statistical comparisons to the reference data. Only small improvements in the statistics were found when comparing this selective ensemble to the full ensemble. Additionally, terms of the global energy budget were compared among the ensemble and other estimates. The ensemble data and the variance of the ensemble should make a useful point of comparison for the development of model and assimilation components of global analyses.
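The equal-weight ensemble mean and a basic skill comparison can be sketched as follows; RMSE is used here as a stand-in for the study's statistical comparisons, and the series are hypothetical.

```python
def ensemble_mean(members):
    """Equal-weight (linear) average across analyses at each point."""
    return [sum(vals) / len(vals) for vals in zip(*members)]

def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

# Hypothetical reference and two analyses with opposite biases: the
# ensemble mean cancels the biases and beats either member.
obs = [1.0, 2.0, 3.0]
high = [2.0, 3.0, 4.0]
low = [0.0, 1.0, 2.0]
mean = ensemble_mean([high, low])
```

Error cancellation across members with differing biases is the basic reason the multimodel ensemble can outperform any single analysis, as found above.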
Abstract
Estimating diffuse recharge of precipitation is fundamental to assessing groundwater sustainability. Diffuse recharge is also the process through which climate and climate change directly affect groundwater. In this study, we evaluated diffuse recharge over the conterminous United States simulated by a suite of land surface models (LSMs) that were forced using a common set of meteorological input data. Simulated annual recharge exhibited spatial patterns that were similar among the LSMs, with the highest values in the eastern United States and Pacific Northwest. However, the magnitudes of annual recharge varied significantly among the models and were associated with differences in simulated evapotranspiration (ET), runoff, and snow. Evaluation against two independent datasets did not answer the question of whether the ensemble mean performs the best, due to inconsistency between those datasets. The amplitude and timing of seasonal maximum recharge differed among the models, influenced strongly by model physics governing deep soil moisture drainage rates and, in cold regions, snowmelt. Evaluation using in situ soil moisture observations suggested that true recharge peaks 1–3 months later than simulated recharge, indicating systematic biases in simulating deep soil moisture. However, recharge from lateral flows and through preferential flows cannot be inferred from soil moisture data, and the seasonal cycle of simulated groundwater storage actually compared well with in situ groundwater observations. Long-term trends in recharge were not consistently correlated with either precipitation trends or temperature trends. This study highlights the need to employ dynamic flow models in LSMs, among other improvements, to enable more accurate simulation of recharge.
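A residual water-balance view of diffuse recharge, which underlies such LSM estimates, can be sketched as a one-line budget; the partitioning shown is a simplification that ignores the lateral and preferential flows the abstract notes cannot be captured this way.

```python
def diffuse_recharge(precip, et, runoff, delta_storage):
    """Residual water balance for diffuse recharge (all terms in the same
    units, e.g. mm per month): precipitation not lost to evapotranspiration
    or runoff, and not retained as a change in soil water storage, drains
    below the root zone as recharge."""
    return precip - et - runoff - delta_storage

# Hypothetical month: 100 mm of rain, 60 mm ET, 20 mm runoff, and a 5 mm
# increase in soil storage leave 15 mm of recharge.
print(diffuse_recharge(100.0, 60.0, 20.0, 5.0))
```

Because recharge is the residual of much larger terms, intermodel spread in simulated ET, runoff, and snow translates directly into the large spread in recharge magnitudes reported above.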
Abstract
This study presents an evaluation of the impact of vegetation conditions on a land surface model (LSM) simulation of agricultural drought. The Noah-MP LSM is used to simulate water and energy fluxes and states, which are transformed into drought categories using percentiles over the continental United States from 1979 to 2017. Leaf area index (LAI) observations are assimilated into the dynamic vegetation scheme of Noah-MP. A weekly operational drought monitor (the U.S. Drought Monitor) is used for the evaluation. The results show that LAI assimilation into Noah-MP’s dynamic vegetation scheme improves the model’s ability to represent drought, particularly over cropland areas. LAI assimilation improves the simulation of the drought category and the detection of drought conditions, and reduces the instances of drought false alarms. The assimilation of LAI in these locations not only corrects model errors in the simulation of vegetation, but also can help to represent unmodeled physical processes such as irrigation toward improved simulation of agricultural drought.
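The percentile-based transformation to drought categories can be sketched as follows. The D0–D4 thresholds follow the commonly cited USDM percentile convention and are assumptions here, since the abstract does not give the study's exact mapping.

```python
def percentile_rank(value, climatology):
    """Empirical percentile of a value within a climatological sample."""
    below = sum(1 for v in climatology if v <= value)
    return 100.0 * below / len(climatology)

def drought_category(percentile):
    """Map an indicator percentile to a USDM-style category, from D4
    (exceptional drought) through D0 (abnormally dry). Thresholds are
    the commonly cited USDM convention; treat them as illustrative.
    Returns None when conditions are not in drought."""
    for category, threshold in (("D4", 2), ("D3", 5), ("D2", 10),
                                ("D1", 20), ("D0", 30)):
        if percentile <= threshold:
            return category
    return None
```

Applying this mapping to a modeled state such as soil moisture is what allows an LSM simulation to be compared category-by-category against the weekly USDM maps, as in the evaluation above.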
Abstract
Using data from seven global model operational analyses (OA), one land surface model, and various remote sensing retrievals, the energy and water fluxes over global land areas are intercompared for 2003/04. Remote sensing estimates of evapotranspiration (ET) are obtained from three process-based models that use input forcings from multisensor satellites. An ensemble mean (linear average) of the seven operational (mean-OA) models is used primarily to intercompare the fluxes, with comparisons performed at both global and basin scales. At the global scale, it is found that all components of the energy budget represented by the ensemble mean of the OA models have a significant bias. Net radiation estimates had a positive bias (global mean) of 234 MJ m−2 yr−1 (7.4 W m−2) compared to the remote sensing estimates, with the latent and sensible heat fluxes biased by 470 MJ m−2 yr−1 (13.3 W m−2) and −367 MJ m−2 yr−1 (−11.7 W m−2), respectively. The bias in the latent heat flux is affected by the bias in the net radiation, which is primarily due to the biases in the incoming shortwave and outgoing longwave radiation and to the nudging process of the operational models. The OA models also suffer from improper partitioning of the surface heat fluxes. Comparison of precipitation (P) analyses from the various OA models, gauge analysis, and remote sensing retrievals showed better agreement than the energy fluxes. Basin-scale comparisons were consistent with the global-scale results, with the results for the Amazon in particular showing disparities between OA and remote sensing estimates of energy fluxes. The biases in the fluxes are attributable to a combination of errors in the forcing from the OA atmospheric models and the flux calculation methods in their land surface schemes. The atmospheric forcing errors are mainly attributable to high shortwave radiation, likely due to the underestimation of clouds, but also to precipitation errors, especially in water-limited regions.
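The annual energy biases are quoted both as MJ m−2 yr−1 and as W m−2; the conversion between the two is just a fixed factor of seconds per year, as a quick check shows.

```python
SECONDS_PER_YEAR = 365.25 * 24.0 * 3600.0  # about 3.156e7 s

def mj_per_m2_yr_to_w_per_m2(flux_mj_m2_yr):
    """Convert an annual energy total (MJ m-2 yr-1) to a mean power flux
    (W m-2): multiply by 1e6 J per MJ and divide by seconds in a year."""
    return flux_mj_m2_yr * 1.0e6 / SECONDS_PER_YEAR

# The 234 MJ m-2 yr-1 net radiation bias works out to roughly 7.4 W m-2.
print(round(mj_per_m2_yr_to_w_per_m2(234.0), 1))
```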