used to understand model consistency, accuracy, and precision. They defined model evaluation as a process in which model outputs are compared against observations, model intercomparison as a process in which multiple models are applied to specific test cases, and model benchmarking as a process in which model outputs are compared against a priori expectations of model performance. Our first purpose here is to formalize the concept of model benchmarking. A benchmark consists of three distinct
, which include many different process parameterizations, may not represent diurnal land–atmosphere interactions well. This matters because land–atmosphere feedbacks propagate to larger scales and may ultimately affect model sensitivity to global change (Miralles et al. 2019). Here, we employ a diagnostic model evaluation based on diurnal signatures of surface heat fluxes to quantify the performance of LSMs specifically for diurnal heat redistribution processes and to point toward parameterizations