Search Results

You are looking at 1–10 of 647 items for:

  • Model performance/evaluation
  • Bulletin of the American Meteorological Society
  • Access: All Content
Cecile B. Menard, Richard Essery, Gerhard Krinner, Gabriele Arduini, Paul Bartlett, Aaron Boone, Claire Brutel-Vuilmet, Eleanor Burke, Matthias Cuntz, Yongjiu Dai, Bertrand Decharme, Emanuel Dutra, Xing Fang, Charles Fierz, Yeugeniy Gusev, Stefan Hagemann, Vanessa Haverd, Hyungjun Kim, Matthieu Lafaysse, Thomas Marke, Olga Nasonova, Tomoko Nitta, Masashi Niwano, John Pomeroy, Gerd Schädler, Vladimir A. Semenov, Tatiana Smirnova, Ulrich Strasser, Sean Swenson, Dmitry Turkov, Nander Wever, and Hua Yuan

previous MIPs: albedo is still a major source of uncertainty, surface exchange parameterizations are still problematic, and individual model performance is inconsistent. In fact, models become less classifiable as results from more sites, years, and evaluation variables are added. Our initial hypothesis proved false and had to be abandoned. Developments have been made, particularly in the complexity of snow process representations, and conclusions from PILPS2(d) and snow MIPs have undoubtedly driven model

Full access
Cort J. Willmott

Quantitative approaches to the evaluation of model performance were recently examined by Fox (1981). His recommendations are briefly reviewed and a revised set of performance statistics is proposed. It is suggested that the correlation between model-predicted and observed data, commonly described by Pearson's product-moment correlation coefficient, is an insufficient and often misleading measure of accuracy. A complement of difference and summary univariate indices is presented as the nucleus of a more informative, albeit fundamentally descriptive, approach to model evaluation. Two models that estimate monthly evapotranspiration are comparatively evaluated in order to illustrate how the recommended method(s) can be applied.
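As a rough illustration of the approach described above, the following Python sketch (with invented sample values and an illustrative function name) computes the difference-based measures Willmott favors, namely mean absolute error, root-mean-square error, and the index of agreement, alongside Pearson's r for contrast.

import numpy as np

def evaluate(pred, obs):
    """Summary and difference indices for model-predicted vs. observed data."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    err = pred - obs
    mae = np.mean(np.abs(err))            # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))     # root-mean-square error
    # Willmott's index of agreement d (0 to 1; 1 indicates perfect agreement)
    denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    d = 1.0 - np.sum(err ** 2) / denom
    r = np.corrcoef(pred, obs)[0, 1]      # Pearson's r, shown for comparison
    return {"MAE": mae, "RMSE": rmse, "d": d, "r": r}

# Invented monthly evapotranspiration estimates vs. observations (mm)
print(evaluate([55, 80, 120, 95], [60, 75, 130, 90]))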

Full access
Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

, and (iii) provide summary statistics for forecast comparisons. The MET-TC summary tools produce a variety of statistics, including frequency of superior performance (e.g., to meet one of HFIP’s goals to compare the performance of different TC modeling systems), time series independence calculations, and confidence intervals (CIs) on mean differences. In addition, MET-TC includes tools to evaluate rapid intensification/weakening (RI/RW) events, with flexible options for selecting thresholds to
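The following Python sketch is purely illustrative (it is not MET-TC code): it computes a frequency of superior performance and a simple normal-approximation confidence interval on the mean difference between two hypothetical track-error series from competing TC modeling systems.

import numpy as np

def compare(errors_a, errors_b, z=1.96):
    """Pairwise comparison of two error series (smaller error is better)."""
    a, b = np.asarray(errors_a, float), np.asarray(errors_b, float)
    diff = a - b                                   # paired differences
    freq_superior = np.mean(a < b)                 # fraction of cases where A beats B
    half_width = z * diff.std(ddof=1) / np.sqrt(len(diff))  # normal-approximation CI
    return freq_superior, (diff.mean() - half_width, diff.mean() + half_width)

# Invented 48-h track errors (km) for two systems
freq, ci = compare([95, 120, 80, 140, 110], [105, 130, 90, 135, 125])
print(f"System A superior in {freq:.0%} of cases; mean-difference CI = ({ci[0]:.1f}, {ci[1]:.1f}) km")

A real comparison would also need to account for serial correlation between forecast cases, which is what the time series independence calculations mentioned above address.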

Full access
Angel Liduvino Vara-Vela, Dirceu Luís Herdies, Débora Souza Alvim, Éder Paulo Vendrasco, Silvio Nilo Figueroa, Jayant Pendharkar, and Julio Pablo Reyes Fernandez

model performance for the August–September period in 2018 and 2019 in comparison to satellite observations; and (iii) to investigate how extreme Amazonian wildfire events can affect the atmospheric composition over the São Paulo metropolitan area (SPMA). The Amazon fire season and the new system, hereafter referred to as CPTEC WRF-Chem, are described in the second and third sections, respectively. The model evaluation and the unusual event over the SPMA are presented in the fourth section, and the

Full access
Catherine A. Senior, John H. Marsham, Ségolène Berthou, Laura E. Burgin, Sonja S. Folwell, Elizabeth J. Kendon, Cornelia M. Klein, Richard G. Jones, Neha Mittal, David P. Rowell, Lorenzo Tomassini, Théo Vischel, Bernd Becker, Cathryn E. Birch, Julia Crook, Andrew J. Dougill, Declan L. Finney, Richard J. Graham, Neil C. G. Hart, Christopher D. Jack, Lawrence S. Jackson, Rachel James, Bettina Koelle, Herbert Misiani, Brenda Mwalukanga, Douglas J. Parker, Rachel A. Stratton, Christopher M. Taylor, Simon O. Tucker, Caroline M. Wainwright, Richard Washington, and Martin R. Willet

-Africa model improvement and evaluation (James et al. 2018). The remaining four were transdisciplinary, delivering climate change research and bringing innovative co-production of climate information and services in East, West, Central, and southern Africa through pilot studies. The IMPALA project has targeted effort on some of the important challenges to improved model performance. A major focus has been on understanding the sensitivity of model climate predictions to the representation of mesoscale

Full access
John Irwin and Maynard Smith
Full access
Adam J. Clark, Israel L. Jirak, Burkely T. Gallo, Brett Roberts, Andrew R. Dean, Kent H. Knopfmeier, Louis J. Wicker, Makenzie Krocak, Patrick S. Skinner, Pamela L. Heinselman, Katie A. Wilson, Jake Vancil, Kimberly A. Hoogewind, Nathan A. Dahl, Gerald J. Creager, Thomas A. Jones, Jidong Gao, Yunheng Wang, Eric D. Loken, Montgomery Flora, Christopher A. Kerr, Nusrat Yussouf, Scott R. Dembek, William Miller, Joshua Martin, Jorge Guerra, Brian Matilla, David Jahn, David Harrison, David Imy, and Michael C. Coniglio

The 2020 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. What: Severe weather research and forecasting experts convened virtually to evaluate convection-allowing modeling strategies and test short-term forecasting applications of a prototype Warn-on-Forecast System within a simulated, real-time forecasting environment. When: 27 April–29 May 2020. Where: Norman, Oklahoma. The NWS/Storm Prediction Center (SPC) and OAR/National Severe Storms Laboratory (NSSL) co-led the 2020 NOAA

Full access
Rachel James, Richard Washington, Babatunde Abiodun, Gillian Kay, Joseph Mutemi, Wilfried Pokam, Neil Hart, Guleid Artan, and Cath Senior

(i) analysis of physical processes and (ii) quantification of performance. On a global scale, important work has been done to investigate model representation of clouds and water vapor (e.g., Jiang et al. 2012; Klein et al. 2013), tropical circulation (e.g., Niznik and Lintner 2013; Oueslati and Bellon 2015), and modes of variability (e.g., Guilyardi et al. 2009; Kim et al. 2009). This process-oriented evaluation is fundamental to inform model development. On a regional scale

Open access
Burkely T. Gallo, Christina P. Kalb, John Halley Gotway, Henry H. Fisher, Brett Roberts, Israel L. Jirak, Adam J. Clark, Curtis Alexander, and Tara L. Jensen

scorecard indicates better performance by one modeling system, and displaying a square for each unique combination of domain, time period, metric, and threshold can reveal systemic differences. These systemic differences could then be examined in depth to diagnose model deficiencies. For instance, if a new system has difficulty with nocturnal temperatures, that would become evident from the columns of the scorecard rather than being potentially obscured by a summary metric evaluated over the entire
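A minimal Python sketch of that scorecard layout (illustrative only, not the actual METplus scorecard tool) is shown below; the scores are invented and the threshold dimension is omitted for brevity.

# One entry per (domain, period, metric) combination, marking which system
# performed better; made-up (new_system, baseline) RMSE pairs for illustration.
scores = {
    ("CONUS", "Day 1", "2-m T RMSE"): (1.8, 2.0),
    ("CONUS", "Night 1", "2-m T RMSE"): (2.4, 2.1),
    ("East", "Day 1", "2-m T RMSE"): (1.6, 1.7),
    ("East", "Night 1", "2-m T RMSE"): (2.3, 2.0),
}

def symbol(new, base):
    # Lower RMSE is better; a real scorecard would also test statistical significance.
    return "better" if new < base else ("worse" if new > base else "even")

for (domain, period, metric), (new, base) in scores.items():
    print(f"{domain:6s} {period:8s} {metric:12s} {symbol(new, base)}")

Laid out as a grid, a column of "worse" cells for the nighttime periods would expose the kind of nocturnal temperature problem described above at a glance.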

Free access
Linda O. Mearns, Ray Arritt, Sébastien Biner, Melissa S. Bukovsky, Seth McGinnis, Stephan Sain, Daniel Caya, James Correia Jr., Dave Flory, William Gutowski, Eugene S. Takle, Richard Jones, Ruby Leung, Wilfran Moufouma-Okia, Larry McDaniel, Ana M. B. Nunes, Yun Qian, John Roads, Lisa Sloan, and Mark Snyder

multiple regional and global climate models, impacts researchers will have the ingredients to produce impacts assessments that characterize multiple uncertainties. Additional goals of the program include the following: to evaluate regional model performance over North America by nesting the RCMs in National Centers for Environmental Prediction–Department of Energy (NCEP–DOE) Reanalysis; to explore some remaining uncertainties in regional climate modeling (e.g., importance of compatibility of physics in

Full access