Search Results

You are looking at 1–3 of 3 items for:

  • Author or Editor: Michelle Harrold
  • Bulletin of the American Meteorological Society
  • All content
Jamie K. Wolff, Michelle Harrold, Tracy Hertneky, Eric Aligo, Jacob R. Carley, Brad Ferrier, Geoff DiMego, Louisa Nance, and Ying-Hwa Kuo

Abstract

A wide range of numerical weather prediction (NWP) innovations with the potential to positively impact operational models are under development in the research community. The Developmental Testbed Center (DTC) helps facilitate the transition of these innovations from research to operations (R2O). With the large number of innovations available in the research community, it is critical to clearly define a testing protocol to streamline the R2O process. The DTC has defined such a process, which relies on shared responsibilities among researchers, the DTC, and operational centers to test promising new NWP advancements. As part of the first stage of this process, the DTC instituted the mesoscale model evaluation testbed (MMET), which established a common testing framework to assist the research community in demonstrating the merits of their developments. The ability to compare performance across innovations for critical cases provides a mechanism for selecting the most promising capabilities for further testing. If the researcher demonstrates improved results using MMET, then the innovation may be considered for the second stage of comprehensive testing and evaluation (T&E) prior to entering the final stage of preimplementation T&E.

MMET provides initialization and observation datasets for several case studies and multiday periods. In addition, the DTC provides baseline results for select operational configurations that use the Advanced Research version of the Weather Research and Forecasting (WRF) Model (ARW) or the National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling System Nonhydrostatic Multiscale Model on the B grid (NEMS-NMMB). These baselines can be used for testing sensitivities to different model versions or configurations in order to improve forecast performance.
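The kind of baseline comparison described above can be illustrated with a minimal sketch, shown below in Python. It is not DTC-distributed code: the synthetic fields, the choice of 2-m temperature, and the RMSE helper are assumptions for illustration only; in practice the fields would come from the model output and observation/analysis datasets supplied with the testbed.

```python
import numpy as np

def rmse(forecast, analysis):
    """Root-mean-square error of a forecast field against a verifying analysis."""
    return float(np.sqrt(np.mean((forecast - analysis) ** 2)))

# Synthetic stand-ins for a 2-m temperature analysis, a DTC baseline forecast,
# and a candidate-configuration forecast for one MMET case (hypothetical data).
rng = np.random.default_rng(0)
analysis = 285.0 + rng.normal(0.0, 2.0, size=(100, 100))
baseline_fcst = analysis + rng.normal(0.5, 1.5, size=analysis.shape)
candidate_fcst = analysis + rng.normal(0.2, 1.4, size=analysis.shape)

# Compare the candidate configuration against the baseline at matched points.
baseline_err = rmse(baseline_fcst, analysis)
candidate_err = rmse(candidate_fcst, analysis)
print(f"baseline RMSE:  {baseline_err:.2f} K")
print(f"candidate RMSE: {candidate_err:.2f} K")
print(f"improvement:    {baseline_err - candidate_err:+.2f} K")
```

A positive improvement over the baseline for the MMET cases is the sort of evidence that would support advancing an innovation to the next T&E stage.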

Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

Capsule summary

MET is a community-based package of state-of-the-art tools to evaluate predictions of weather, climate, and other phenomena, with capabilities to display and analyze verification results via the METplus system.

Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

Abstract

Forecast verification and evaluation is a critical aspect of forecast development and improvement, day-to-day forecasting, and the interpretation and application of forecasts. In recent decades, the verification field has rapidly matured, and many new approaches have been developed. However, until recently, a stable set of modern tools to undertake this important component of forecasting has not been available. The Model Evaluation Tools (MET) package was conceived and implemented to fill this gap. MET (https://dtcenter.org/community-code/model-evaluation-tools-met) was developed by the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Air Force (USAF) and is supported via the Developmental Testbed Center (DTC) and collaborations with operational and research organizations. MET incorporates traditional verification methods as well as modern verification capabilities developed over the last two decades. MET stands apart from other verification packages through its inclusion of innovative spatial methods, statistical inference tools, and a wide range of approaches that address the needs of individual users, coupled with strong community engagement and support. In addition, MET is freely available, which ensures that modern verification capabilities can be applied consistently and in scientifically meaningful ways by researchers and operational forecasting practitioners alike. This article describes MET and its expansion into an umbrella package (METplus) that includes a database and display system and Python wrappers to facilitate the wide use of MET. Examples of MET applications illustrate some of the many ways the package can be used to evaluate forecasts in a meaningful way.
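As an illustration of the traditional categorical verification methods mentioned above, the sketch below computes common scores (probability of detection, false alarm ratio, critical success index, frequency bias) from a 2 × 2 contingency table. This is not MET code or the MET API; the synthetic precipitation fields, the 10-mm threshold, and the helper function are hypothetical.

```python
import numpy as np

def contingency_scores(forecast, observed, threshold):
    """Categorical scores from a 2x2 contingency table for the event
    'value >= threshold' (e.g., 24-h precipitation exceeding 10 mm)."""
    f_event = forecast >= threshold
    o_event = observed >= threshold
    hits = np.sum(f_event & o_event)
    false_alarms = np.sum(f_event & ~o_event)
    misses = np.sum(~f_event & o_event)
    return {
        "POD": hits / (hits + misses),                     # probability of detection
        "FAR": false_alarms / (hits + false_alarms),       # false alarm ratio
        "CSI": hits / (hits + misses + false_alarms),      # critical success index
        "FBIAS": (hits + false_alarms) / (hits + misses),  # frequency bias
    }

# Synthetic 24-h precipitation fields (mm); a real evaluation would read gridded
# forecasts and observations and let MET handle matching, thresholding, and scoring.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=5.0, size=(200, 200))
fcst = obs * rng.normal(1.0, 0.3, size=obs.shape)
print(contingency_scores(fcst, obs, threshold=10.0))
```

MET and the METplus wrappers automate this kind of scoring across many variables, thresholds, lead times, and spatial methods, and add the statistical inference and display capabilities the abstract describes.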
