Search Results

You are looking at 11–15 of 15 items for

  • Author or Editor: Louisa Nance
Lígia Bernardet, Louisa Nance, Meral Demirtas, Steve Koch, Edward Szoke, Tressa Fowler, Andrew Loughe, Jennifer Luppens Mahoney, Hui-Ya Chuang, Matthew Pyle, and Robert Gall

The Weather Research and Forecasting (WRF) Developmental Testbed Center (DTC) was formed to promote exchanges between the development and operational communities in the field of Numerical Weather Prediction (NWP). The WRF DTC serves to accelerate the transfer of NWP technology from research to operations and to support a subset of the current WRF operational configurations for the general community. This article describes the mission and recent activities of the WRF DTC, including a detailed discussion of one of its recent projects, the WRF DTC Winter Forecasting Experiment (DWFE).

DWFE was planned and executed by the WRF DTC in collaboration with forecasters and model developers. The real-time phase of the experiment took place in the winter of 2004/05, with two dynamic cores of the WRF model being run once per day out to 48 h. The models were configured with 5-km grid spacing over the entire continental United States to ascertain the value of high-resolution numerical guidance for winter weather prediction. Forecasts were distributed to many National Weather Service Weather Forecast Offices to allow forecasters both to familiarize themselves with WRF capabilities prior to WRF becoming operational at the National Centers for Environmental Prediction (NCEP) in the North American Mesoscale Model (NAM) application, and to provide feedback about the model to its developers. This paper presents the experiment's configuration, the results of objective forecast verification, including uncertainty measures, a case study to illustrate the potential use of DWFE products in the forecasting process, and a discussion about the importance and challenges of real-time experiments involving forecaster participation.

Full access
Kathryn M. Newman, Barbara Brown, John Halley Gotway, Ligia Bernardet, Mrinal Biswas, Tara Jensen, and Louisa Nance

Abstract

Tropical cyclone (TC) forecast verification techniques have traditionally focused on track and intensity, as these are some of the most important characteristics of TCs and are often the principal verification concerns of operational forecast centers. However, there is a growing need to verify other aspects of TCs, as process-based validation techniques may be increasingly necessary for further track and intensity forecast improvements, as well as for improving communication of the broad impacts of TCs, including inland flooding from precipitation. Here we present a set of TC-focused verification methods available via the Model Evaluation Tools (MET), ranging from traditional approaches to the application of storm-centric coordinates and the use of feature-based verification of spatially defined TC objects. Storm-relative verification using observed and forecast tracks can be useful for identifying model biases in precipitation accumulation in relation to the storm center. A storm-centric cylindrical coordinate system based on the radius of maximum wind adds further storm-relative capability by regridding precipitation fields onto cylindrical or polar coordinates. This powerful process-based model diagnostic and verification technique provides a framework for improved understanding of feedbacks between forecast tracks, intensity, and precipitation distributions. Finally, object-based verification, including land-masking capabilities, provides even more nuanced verification options. Precipitation objects of interest, either the central core of TCs or extended areas of rainfall after landfall, can be identified, matched to observations, and quickly aggregated to build meaningful spatial and summary verification statistics.
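
As a rough illustration of the storm-relative regridding described above, the Python sketch below interpolates a gridded precipitation field onto radius–azimuth coordinates centered on a storm position. It is not MET code; the function and parameter names are hypothetical, and a simple kilometers-per-degree approximation stands in for a proper map projection.

    # Hypothetical illustration only (not MET source code): interpolate a
    # precipitation field onto storm-centric polar coordinates (radius, azimuth).
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def to_storm_relative(lats, lons, precip, center_lat, center_lon,
                          max_radius_km=500.0, n_radii=50, n_azimuths=72):
        """Return precip on an (n_radii, n_azimuths) grid centered on the storm.

        lats/lons are 1D ascending coordinate vectors; precip has shape
        (len(lats), len(lons)).
        """
        interp = RegularGridInterpolator((lats, lons), precip,
                                         bounds_error=False, fill_value=np.nan)

        radii = np.linspace(0.0, max_radius_km, n_radii)              # km from center
        azimuths = np.deg2rad(np.arange(0.0, 360.0, 360.0 / n_azimuths))
        r, az = np.meshgrid(radii, azimuths, indexing="ij")

        # Rough km -> degree conversion near the storm center (flat-earth approx.).
        km_per_deg_lat = 111.0
        km_per_deg_lon = 111.0 * np.cos(np.deg2rad(center_lat))
        target_lat = center_lat + (r * np.cos(az)) / km_per_deg_lat
        target_lon = center_lon + (r * np.sin(az)) / km_per_deg_lon

        points = np.column_stack([target_lat.ravel(), target_lon.ravel()])
        return interp(points).reshape(r.shape)

An azimuthal mean of the result (for example, np.nanmean over the azimuth axis) then gives a radial precipitation profile that can be compared between forecast and observed storms.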

Restricted access
Barbara G. Brown, Louisa B. Nance, Christopher L. Williams, Kathryn M. Newman, James L. Franklin, Edward N. Rappaport, Paul A. Kucera, and Robert L. Gall

Abstract

The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.

Significance Statement

The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.
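
One statistical piece of such an evaluation, an uncertainty bound on a model-versus-baseline comparison, can be sketched with a paired bootstrap on matched track errors. The example below is illustrative only, with hypothetical error values and simplifying assumptions (independent cases); it is not the HFIP/NHC evaluation code, which must also account for serial correlation between forecasts of the same storm.

    # Illustrative paired bootstrap for the mean track-error difference between an
    # experimental model and a baseline; values and names are hypothetical.
    import numpy as np

    def bootstrap_mean_diff(errors_exp, errors_base, n_boot=10000, alpha=0.05,
                            seed=0):
        """Percentile CI for mean(errors_exp - errors_base) over matched cases."""
        rng = np.random.default_rng(seed)
        diffs = np.asarray(errors_exp) - np.asarray(errors_base)    # paired by case
        boot = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                         for _ in range(n_boot)])
        lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        return diffs.mean(), (lo, hi)

    # Hypothetical 48-h track errors (n mi) for five matched forecast cases.
    exp_errors = [55.0, 62.0, 40.0, 71.0, 48.0]
    base_errors = [60.0, 70.0, 45.0, 69.0, 58.0]
    mean_diff, ci = bootstrap_mean_diff(exp_errors, base_errors)
    print(f"mean difference = {mean_diff:.1f} n mi, 95% CI = {ci}")

A confidence interval that excludes zero would support the claim that the experimental model's track errors differ from the baseline's, whereas an interval straddling zero argues for caution in the selection decision.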

Open access
Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

Abstract

Forecast verification and evaluation is a critical aspect of forecast development and improvement, day-to-day forecasting, and the interpretation and application of forecasts. In recent decades, the verification field has rapidly matured, and many new approaches have been developed. However, until recently, a stable set of modern tools to undertake this important component of forecasting has not been available. The Model Evaluation Tools (MET) was conceived and implemented to fill this gap. MET (https://dtcenter.org/community-code/model-evaluation-tools-met) was developed by the National Center for Atmospheric Research (NCAR), the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Air Force (USAF) and is supported via the Developmental Testbed Center (DTC) and collaborations with operational and research organizations. MET incorporates traditional verification methods, as well as modern verification capabilities developed over the last two decades. MET stands apart from other verification packages due to its inclusion of innovative spatial methods, statistical inference tools, and a wide range of approaches to address the needs of individual users, coupled with strong community engagement and support. In addition, MET is freely available, ensuring that researchers and operational forecasting practitioners alike can apply consistent, scientifically meaningful modern verification methods. This article describes MET and the expansion of MET into an umbrella package (METplus) that includes a database and display system and Python wrappers to facilitate the wide use of MET. Examples of MET applications illustrate some of the many ways that the package can be used to evaluate forecasts in a meaningful way.
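
As a small, self-contained example of the kind of traditional categorical verification that packages such as MET provide, the sketch below builds a 2 x 2 contingency table from matching gridded forecast and observed fields and computes a few standard scores. It is plain Python written for illustration and is not MET's implementation.

    # Illustrative (non-MET) computation of common contingency-table scores for
    # the event "value >= threshold" on matching forecast/observation grids.
    import numpy as np

    def categorical_scores(fcst, obs, threshold):
        f = np.asarray(fcst) >= threshold
        o = np.asarray(obs) >= threshold
        hits = np.sum(f & o)
        false_alarms = np.sum(f & ~o)
        misses = np.sum(~f & o)
        correct_negatives = np.sum(~f & ~o)
        total = hits + false_alarms + misses + correct_negatives

        pod = hits / (hits + misses)                    # probability of detection
        far = false_alarms / (hits + false_alarms)      # false alarm ratio
        freq_bias = (hits + false_alarms) / (hits + misses)
        hits_random = (hits + misses) * (hits + false_alarms) / total
        gss = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
        return {"POD": pod, "FAR": far, "FBIAS": freq_bias, "GSS": gss}

The spatial, object-based, and statistical inference capabilities described in the abstract build on, rather than replace, traditional measures of this kind.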

Full access
Edward I. Tollerud, Brian Etherton, Zoltan Toth, Isidora Jankov, Tara L. Jensen, Huiling Yuan, Linda S. Wharton, Paula T. McCaslin, Eugene Mirvis, Bill Kuo, Barbara G. Brown, Louisa Nance, Steven E. Koch, and F. Anthony Eckel
Full access