Search Results

You are looking at 11 - 15 of 15 items for

  • Author or Editor: Louisa Nance
  • Refine by Access: All Content
Jamie K. Wolff, Michelle Harrold, Tracy Hertneky, Eric Aligo, Jacob R. Carley, Brad Ferrier, Geoff DiMego, Louisa Nance, and Ying-Hwa Kuo

Abstract

A wide range of numerical weather prediction (NWP) innovations with the potential to improve operational models are under development in the research community. The Developmental Testbed Center (DTC) helps facilitate the transition of these innovations from research to operations (R2O). Given the large number of innovations available in the research community, it is critical to clearly define a testing protocol that streamlines the R2O process. The DTC has defined such a process, which relies on responsibilities shared among researchers, the DTC, and operational centers to test promising new NWP advancements. As part of the first stage of this process, the DTC instituted the Mesoscale Model Evaluation Testbed (MMET), which established a common testing framework to assist the research community in demonstrating the merits of their developments. The ability to compare performance across innovations for critical cases provides a mechanism for selecting the most promising capabilities for further testing. If a researcher demonstrates improved results using MMET, the innovation may be considered for the second stage of comprehensive testing and evaluation (T&E) prior to entering the final stage of preimplementation T&E.

MMET provides initialization and observation datasets for several case studies and multiday periods. In addition, the DTC provides baseline results for select operational configurations that use the Advanced Research version of the Weather Research and Forecasting (WRF) Model (ARW) or the National Oceanic and Atmospheric Administration (NOAA) Environmental Modeling System Nonhydrostatic Multiscale Model on the B grid (NEMS-NMMB). These baselines can be used to test sensitivities to different model versions or configurations in order to improve forecast performance.
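As a rough illustration of how such a baseline comparison might look (a minimal sketch, not DTC or MMET code; the case dates and error values below are hypothetical), an experimental configuration can be summarized as paired error differences against the baseline across a set of retrospective cases:

```python
# Minimal sketch: comparing a hypothetical experimental configuration against a
# baseline over a few retrospective cases. All case dates and RMSE values are
# illustrative placeholders, not DTC/MMET results.
import statistics

# Hypothetical 24-h forecast RMSE of 2-m temperature (K), baseline vs. experiment
baseline_rmse = {"2009-12-19": 2.4, "2010-05-10": 2.9, "2011-04-27": 3.1}
experiment_rmse = {"2009-12-19": 2.2, "2010-05-10": 3.0, "2011-04-27": 2.8}

# Paired differences (experiment minus baseline); negative values favor the experiment
diffs = [experiment_rmse[case] - baseline_rmse[case] for case in baseline_rmse]

print(f"Mean RMSE change: {statistics.mean(diffs):+.2f} K over {len(diffs)} cases")
for case, d in zip(baseline_rmse, diffs):
    print(f"  {case}: {d:+.2f} K")
```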

Full access
Hui Shao, John Derber, Xiang-Yu Huang, Ming Hu, Kathryn Newman, Donald Stark, Michael Lueken, Chunhua Zhou, Louisa Nance, Ying-Hwa Kuo, and Barbara Brown

Abstract

With a goal of improving operational numerical weather prediction (NWP), the Developmental Testbed Center (DTC) has been working with operational centers, including, among others, the National Centers for Environmental Prediction (NCEP), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and the U.S. Air Force, to support numerical models/systems and related research, perform objective testing and evaluation of NWP methods, and facilitate research-to-operations transitions. This article describes the DTC's first effort in the data assimilation area toward this goal. Since 2009, the DTC, NCEP's Environmental Modeling Center (EMC), and other developers have made significant progress in transitioning the operational Gridpoint Statistical Interpolation (GSI) data assimilation system into a community-based code management framework. Currently, GSI is provided to the public with user support and is open to contributions from internal developers as well as the broader research community, all following the same code transition procedures. The article outlines the measures and steps taken during this community GSI effort, followed by a discussion of the challenges and issues encountered. Its purpose is to promote contributions from the research community to operational data assimilation capabilities and, furthermore, to seek potential solutions that stimulate such transitions and ultimately improve NWP capabilities in the United States.

Full access
Barbara G. Brown, Louisa B. Nance, Christopher L. Williams, Kathryn M. Newman, James L. Franklin, Edward N. Rappaport, Paul A. Kucera, and Robert L. Gall

Abstract

The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.
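As a rough illustration of the traditional track verification underlying such evaluations (a minimal sketch, not the Stream 1.5 code; the forecast and best-track positions below are hypothetical), track error is commonly computed as the great-circle distance between the forecast position and the best-track position and averaged over a sample of cases:

```python
# Minimal sketch of traditional TC track verification: great-circle distance
# between forecast and best-track positions, averaged over a sample.
# All positions below are hypothetical placeholders.
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine distance between two lat/lon points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Hypothetical 48-h forecast vs. best-track positions (lat, lon) for three cases
forecast = [(25.1, -80.2), (27.4, -76.9), (30.0, -72.5)]
best_track = [(24.8, -79.6), (27.9, -77.3), (29.4, -73.1)]

errors = [great_circle_km(fl, fo, bl, bo)
          for (fl, fo), (bl, bo) in zip(forecast, best_track)]
print(f"Mean 48-h track error: {sum(errors) / len(errors):.0f} km")
```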

Significance Statement

The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.

Open access
Edward I. Tollerud, Brian Etherton, Zoltan Toth, Isidora Jankov, Tara L. Jensen, Huiling Yuan, Linda S. Wharton, Paula T. McCaslin, Eugene Mirvis, Bill Kuo, Barbara G. Brown, Louisa Nance, Steven E. Koch, and F. Anthony Eckel
Full access
Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

Abstract

Forecast verification and evaluation is a critical aspect of forecast development and improvement, day-to-day forecasting, and the interpretation and application of forecasts. In recent decades, the verification field has matured rapidly, and many new approaches have been developed. However, until recently, a stable set of modern tools to undertake this important component of forecasting has not been available. The Model Evaluation Tools (MET) was conceived and implemented to fill this gap. MET (https://dtcenter.org/community-code/model-evaluation-tools-met) was developed by the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Air Force (USAF) and is supported via the Developmental Testbed Center (DTC) and collaborations with operational and research organizations. MET incorporates traditional verification methods as well as modern verification capabilities developed over the last two decades. MET stands apart from other verification packages through its inclusion of innovative spatial methods, statistical inference tools, and a wide range of approaches that address the needs of individual users, coupled with strong community engagement and support. In addition, MET is freely available, ensuring that modern verification capabilities can be applied by both researchers and operational forecasting practitioners and that consistent, scientifically meaningful methods are used by all. This article describes MET and its expansion into an umbrella package (METplus) that includes a database and display system and Python wrappers to facilitate the wide use of MET. Examples of MET applications illustrate some of the many ways the package can be used to evaluate forecasts in a meaningful way.
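As a rough illustration of the traditional dichotomous verification that packages of this kind report (a minimal sketch, not the MET/METplus API; the forecast–observation pairs and the threshold are hypothetical), contingency-table statistics such as probability of detection, false alarm ratio, and critical success index can be computed as follows:

```python
# Minimal sketch of traditional dichotomous verification from a 2x2 contingency
# table. The forecast/observation pairs and the 10-mm threshold are hypothetical
# and do not come from any MET dataset.
forecast_mm = [0.0, 12.5, 3.0, 22.0, 8.0, 15.0]
observed_mm = [0.0, 10.0, 6.0, 2.0, 11.0, 18.0]
threshold = 10.0  # event = precipitation >= 10 mm

hits = misses = false_alarms = correct_negatives = 0
for f, o in zip(forecast_mm, observed_mm):
    f_event, o_event = f >= threshold, o >= threshold
    if f_event and o_event:
        hits += 1
    elif not f_event and o_event:
        misses += 1
    elif f_event and not o_event:
        false_alarms += 1
    else:
        correct_negatives += 1

pod = hits / (hits + misses)                # probability of detection
far = false_alarms / (hits + false_alarms)  # false alarm ratio
csi = hits / (hits + misses + false_alarms) # critical success index
print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")
```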

Full access