Search Results
-Surface Parameterization Schemes (PILPS) (Henderson-Sellers et al. 1996; Chen et al. 1997) that enough high-temporal-resolution data were available to characterize LSM biases statistically. While Chen et al. (1997) were the first to define performance that was “good enough” (using the model ensemble mean as a benchmark), no justification was offered as to why this was a satisfactory level of performance. Indeed, even more recent evaluations [e.g., the Community Land Model (CLM) (Dai et al. 2003) and Organizing
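As an aside on the benchmark idea in this excerpt: below is a minimal sketch of testing a single LSM against the ensemble-mean benchmark in the spirit of Chen et al. (1997). The array names are hypothetical, and RMSE is assumed as the error measure (the excerpt does not specify one).

```python
import numpy as np

def beats_ensemble_benchmark(model_flux, all_model_fluxes, observed_flux):
    """Check whether one LSM's RMSE against observations is lower than
    that of the multimodel ensemble mean, i.e., the "good enough"
    benchmark attributed above to Chen et al. (1997).

    model_flux, observed_flux: 1-D flux time series (e.g., W m-2)
    all_model_fluxes: 2-D array, shape (n_models, n_times)
    NOTE: RMSE as the error measure is an assumption for illustration.
    """
    ensemble_mean = all_model_fluxes.mean(axis=0)
    rmse_model = np.sqrt(np.mean((model_flux - observed_flux) ** 2))
    rmse_bench = np.sqrt(np.mean((ensemble_mean - observed_flux) ** 2))
    return rmse_model <= rmse_bench, rmse_model, rmse_bench
```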
further downscaling over a region of interest. A common way to address this problem is to evaluate model output against reference data and then prequalify the models based on their ability to simulate climate in the region or variable of interest (e.g., Dettinger 2005; Milly et al. 2005; Tebaldi et al. 2005; Wang and Overland 2009; Barnett et al. 2008). Lacking reference data for the future, climate model performance is evaluated against the present-day climate. Models that best
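A minimal sketch of such prequalification, assuming all model climatologies have already been regridded to the reference grid; the RMSE metric and the cutoff k are illustrative assumptions, not taken from the studies cited:

```python
import numpy as np

def prequalify(models, reference, k=5):
    """Rank candidate climate models by present-day skill and keep the top k.

    models: dict mapping model name -> 2-D (lat, lon) array of a present-day
            climatology over the region of interest, on the reference grid
    reference: 2-D array of the observed climatology
    k and the RMSE skill measure are placeholders for illustration.
    """
    scores = {
        name: np.sqrt(np.nanmean((field - reference) ** 2))
        for name, field in models.items()
    }
    ranked = sorted(scores, key=scores.get)  # lowest RMSE first
    return ranked[:k], scores
```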
; Pomeroy et al. 2007), Framework for Understanding Structural Errors (FUSE; Clark et al. 2008), and the Joint UK Land Environment Simulator (JULES) Investigation Model (JIM; Essery et al. 2013)]. In parallel, the methods employed to evaluate model performance have also come under scrutiny; efforts to establish standardized guidelines for model evaluation have been numerous. For example, Taylor (2001) and Jolliff et al. (2009) have proposed single summary diagrams to represent multiple
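For reference, a Taylor (2001) diagram summarizes the pattern correlation R, the standard deviations of the model and reference fields, and the centered RMS difference E', which satisfy E'^2 = sigma_m^2 + sigma_r^2 - 2*sigma_m*sigma_r*R. A minimal sketch of those statistics (plotting omitted; array names are illustrative):

```python
import numpy as np

def taylor_stats(model, reference):
    """Statistics underlying a Taylor (2001) diagram: pattern correlation R,
    standard deviations, and centered RMS difference E', which satisfy
    E'^2 = sigma_m^2 + sigma_r^2 - 2 * sigma_m * sigma_r * R."""
    m = np.asarray(model, dtype=float).ravel()
    r = np.asarray(reference, dtype=float).ravel()
    m_anom, r_anom = m - m.mean(), r - r.mean()
    sigma_m, sigma_r = m_anom.std(), r_anom.std()
    corr = np.mean(m_anom * r_anom) / (sigma_m * sigma_r)
    centered_rmse = np.sqrt(np.mean((m_anom - r_anom) ** 2))
    return sigma_m, sigma_r, corr, centered_rmse
```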
wide range in current permafrost areas, active layer parameters, and model ability to simulate the coupling between soil and air temperatures (Koven et al. 2013). Additionally, the projected loss of permafrost extent in response to climate change varied greatly between the models. Evaluating the models’ performance and understanding the sources of uncertainty in the simulated contemporary state of the land carbon cycle are essential steps toward improving the credibility of future climate
with regard to the geographical distribution of monthly mean precipitation. Section 5 considers model performance with respect to the seasonal march of the rainy season. Section 6 evaluates the ability of the models to reproduce extreme precipitation events. Section 7 discusses the reasons for the differences between the CMIP5 and CMIP3 models. We present conclusions in section 8. 2. Models. Table 1 shows information on the 31 CMIP5 models used in this study. The majority of these models were
). In accord with this emphasis, several agricultural research activities are carried out in this region and require data on R_g. Although a large number of models exist that can estimate R_g from commonly available meteorological variables, researchers have applied only a limited number of methods for generating R_g, or have explored the performance of only a few methods, for the southeastern United States. For instance, Thornton and Running (1999) evaluated the reformulated Bristow and Campbell
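The Bristow and Campbell (1984) approach referenced here estimates R_g from the diurnal temperature range via an exponential transmittance function. A minimal sketch follows; the coefficients a, b, and c are site-specific placeholders to be calibrated, not the reformulated values of Thornton and Running (1999):

```python
import numpy as np

def bristow_campbell_rg(tmax, tmin, ra, a=0.7, b=0.004, c=2.4):
    """Estimate daily global solar radiation R_g (same units as ra) from the
    diurnal temperature range via Bristow and Campbell (1984):
        R_g = R_a * a * (1 - exp(-b * dT**c))
    tmax, tmin: daily max/min air temperature (deg C)
    ra: daily extraterrestrial radiation at the top of the atmosphere
    a, b, c: empirical coefficients; the defaults here are placeholders
             and should be calibrated per site (a is the maximum
             clear-sky transmittance).
    """
    dt = np.asarray(tmax, dtype=float) - np.asarray(tmin, dtype=float)
    transmittance = a * (1.0 - np.exp(-b * dt ** c))
    return ra * transmittance
```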
. 2016; Xing et al. 2017). Thus, it is necessary to assess the performance of models with respect to the EASM in MJ and JA separately, so as to improve the subseasonal prediction skill of dynamical models. The EAWM features surface air temperature variability that is dominated by a northern mode and a southern mode, which have distinct circulation structures (Wang et al. 2010). These unique features of the EAWM have barely been evaluated in climate models. Meanwhile, a set of systematic metrics that
observations. They used a model spatial resolution of 1/12° × 1/12° in order to capture the rapidly varying wave field generated by Hurricane Bonnie, which is much finer than is typically implemented in operational forecasting. For example, the fine-resolution wave model grid for the Gulf of Maine Ocean Observing System (GoMOOS; information online at www.gomoos.org) is 0.2° × 0.2°. In this study, three modern, widely used third-generation spectral wave models are evaluated: (a) the Simulating Waves
previous MIPs: albedo is still a major source of uncertainty, surface exchange parameterizations are still problematic, and individual model performance is inconsistent. In fact, the models become less classifiable with results from more sites, years, and evaluation variables. Our initial hypothesis thus proved false and had to be abandoned. Developments have been made, particularly in terms of the complexity of snow process representations, and conclusions from PILPS2(d) and snow MIPs have undoubtedly driven model
features in the UW scheme are likely to improve boundary layer wind predictions. The goal of this research is to characterize ramp occurrence over the CBWES site, which lies within an operating wind farm; to evaluate the WRF model's capability in ramp prediction; and to test the sensitivity of the model's performance to the choice of PBL scheme. Results of this study are also intended to provide recommendations to the wind-energy community regarding the choice of PBL schemes in WRF in areas of complex
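Ramp detection itself is commonly posed as a threshold test on the change in wind speed (or power) over a fixed window. A minimal sketch of such a detector follows; the 2 m s-1 threshold over a 2-h window is an illustrative assumption, not the definition used in this study:

```python
import numpy as np

def find_ramps(speed, dt_minutes=10, window_hours=2.0, threshold=2.0):
    """Flag ramp events in a wind speed time series.

    A ramp is declared wherever |speed(t + window) - speed(t)| >= threshold.
    speed: 1-D array of hub-height wind speed (m s-1) at dt_minutes spacing
    Returns indices where ramps (up or down) begin. The window and
    threshold defaults are illustrative placeholders only.
    """
    speed = np.asarray(speed, dtype=float)
    steps = int(window_hours * 60 / dt_minutes)  # samples per window
    delta = speed[steps:] - speed[:-steps]       # change over the window
    return np.flatnonzero(np.abs(delta) >= threshold)
```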