Search Results

You are looking at 1–6 of 6 items for

  • Author or Editor: Tressa Fowler
  • All content
Jamie K. Wolff, Michelle Harrold, Tressa Fowler, John Halley Gotway, Louisa Nance, and Barbara G. Brown


While traditional verification methods are commonly used to assess numerical model quantitative precipitation forecasts (QPFs) using a grid-to-grid approach, they generally offer little diagnostic information or reasoning behind the computed statistic. On the other hand, advanced spatial verification techniques, such as neighborhood and object-based methods, can provide more meaningful insight into differences between forecast and observed features in terms of skill with spatial scale, coverage area, displacement, orientation, and intensity. To demonstrate the utility of applying advanced verification techniques to mid- and coarse-resolution models, the Developmental Testbed Center (DTC) applied several traditional metrics and spatial verification techniques to QPFs provided by the Global Forecast System (GFS) and operational North American Mesoscale Model (NAM). Along with frequency bias and Gilbert skill score (GSS) adjusted for bias, both the fractions skill score (FSS) and Method for Object-Based Diagnostic Evaluation (MODE) were utilized for this study with careful consideration given to how these methods were applied and how the results were interpreted. By illustrating the types of forecast attributes appropriate to assess with the spatial verification techniques, this paper provides examples of how to obtain advanced diagnostic information to help identify what aspects of the forecast are or are not performing well.
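Two of the measures named above can be sketched in a few lines. The following illustrative Python snippet (not taken from the paper; it assumes NumPy and SciPy, and the function names are hypothetical) computes a grid-wide frequency bias and the fractions skill score of Roberts and Lean (2008), the neighborhood method used in this study:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def frequency_bias(fcst, obs, thresh):
    """Ratio of forecast to observed event counts at a threshold.

    Bias > 1: the model predicts the event too often; bias < 1: too rarely.
    """
    return (fcst >= thresh).sum() / (obs >= thresh).sum()

def fractions_skill_score(fcst, obs, thresh, n):
    """Fractions skill score on a 2-D grid with n x n neighborhoods.

    Event fractions are computed with a uniform moving-average filter;
    FSS = 1 - MSE / MSE_ref, where MSE_ref is the largest MSE obtainable
    from the two fraction fields.
    """
    pf = uniform_filter((fcst >= thresh).astype(float), size=n, mode="constant")
    po = uniform_filter((obs >= thresh).astype(float), size=n, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref
```

For a spatially displaced but otherwise correct forecast, FSS rises toward 1 as the neighborhood size grows past the displacement distance, which is exactly the scale-dependent diagnostic information the abstract refers to.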

Full access
James O. Pinto, Dan L. Megenhardt, Tressa Fowler, and Jenny Colavito


Short-range (2 h) predictions of ceiling and visibility obtained from version 4 of the Rapid Refresh (RAPv4) model are evaluated over Alaska using surface meteorological station data. These forecasts tended to overpredict the frequency of aviation-impacting ceilings in coastal areas by as much as 50%. In winter, this overforecasting bias extends into the interior of Alaska as well. Biases in visibility predictions were more complex. In winter, visibility hazards were predicted too often throughout the interior of Alaska (+5%) and not often enough in northern and western coastal areas (−20%). This wintertime underprediction of visibility restrictions in coastal areas has been linked to the fact that the visibility diagnostic does not include a treatment for the effect of blowing snow. This, in part, results in winter IFR visibilities being detected only 37% of the time. An efficient algorithm that uses quantile matching has been implemented to remove mean biases in 2-h predictions of ceiling and visibility. Performance of the algorithm is demonstrated using two 30-day periods (January and June 2019). The calibrated forecasts obtained for the two month-long periods are found to have significantly reduced biases and enhanced skill in capturing flight rules categories for both ceiling and visibility throughout much of Alaska. This technique can be easily extended to other forecast lead times or mesoscale models.
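The calibration step can be illustrated with a generic empirical quantile-matching sketch (this is not the paper's implementation; it assumes NumPy, and the function name is hypothetical). Each new forecast value is assigned its quantile within a training sample of forecasts and replaced by the observed value at that same quantile, which removes distributional bias:

```python
import numpy as np

def quantile_match(fcst_train, obs_train, fcst_new):
    """Map forecasts through an empirical quantile transfer function.

    Each value in fcst_new is located within the sorted training
    forecasts, and the observation at the same quantile is returned.
    """
    fcst_sorted = np.sort(fcst_train)
    obs_sorted = np.sort(obs_train)
    # empirical quantile of each new value among the training forecasts
    q = np.searchsorted(fcst_sorted, fcst_new, side="right") / len(fcst_sorted)
    # observed value at the matching quantile
    return np.quantile(obs_sorted, np.clip(q, 0.0, 1.0))
```

Because only sorted training arrays and a rank lookup are needed, such a correction is cheap enough to retrain per station, season, or lead time, consistent with the abstract's note that the technique extends easily to other lead times or models.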

Restricted access
Eric Gilleland, Amanda S. Hering, Tressa L. Fowler, and Barbara G. Brown


Which of two competing continuous forecasts is better? This question is often asked in forecast verification, as well as climate model evaluation. Traditional statistical tests seem to be well suited to the task of providing an answer. However, most such tests do not account for some of the special underlying circumstances that are prevalent in this domain. For example, model output is seldom independent in time, and the models being compared are geared to predicting the same state of the atmosphere, and thus they could be contemporaneously correlated with each other. These types of violations of the assumptions of independence required for most statistical tests can greatly impact the accuracy and power of these tests. Here, this effect is examined on simulated series for many common testing procedures, including two-sample and paired t and normal approximation z tests, the z test with a first-order variance inflation factor applied, and the newer Hering–Genton (HG) test, as well as several bootstrap methods. While it is known how most of these tests will behave in the face of temporal dependence, it is less clear how contemporaneous correlation will affect them. Moreover, it is worthwhile knowing just how badly the tests can fail so that if they are applied, reasonable conclusions can be drawn. It is found that the HG test is the most robust to both temporal dependence and contemporaneous correlation, as well as the specific type and strength of temporal dependence. Bootstrap procedures that account for temporal dependence stand up well to contemporaneous correlation and temporal dependence, but require large sample sizes to be accurate.
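One of the compared procedures, the z test with a first-order variance inflation factor applied to the paired loss differential, can be sketched as follows (an illustrative Python sketch assuming NumPy, not code from the paper; the function name is hypothetical). Pairing the two loss series before testing removes their contemporaneous correlation, and the lag-1 autocorrelation of the differences inflates the variance of the mean:

```python
import numpy as np

def z_test_vif(loss_a, loss_b):
    """Paired z test on a loss differential with an AR(1)-style
    variance inflation factor for temporal dependence.

    Returns the statistic for H0: equal mean loss; values beyond
    roughly +/-1.96 suggest a difference at the 5% level.
    """
    # differencing the paired losses removes contemporaneous correlation
    d = np.asarray(loss_a) - np.asarray(loss_b)
    n = len(d)
    dm = d - d.mean()
    rho1 = (dm[:-1] * dm[1:]).sum() / (dm * dm).sum()  # lag-1 autocorrelation
    vif = (1 + rho1) / (1 - rho1)  # first-order variance inflation
    se = np.sqrt(vif * d.var(ddof=1) / n)
    return d.mean() / se
```

A first-order factor corrects only for AR(1)-like dependence, which is one reason the abstract finds tests such as Hering–Genton, which estimate the full covariance structure of the differences, more robust to the specific type and strength of temporal dependence.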

Open access
Abayomi A. Abatan, William J. Gutowski Jr., Caspar M. Ammann, Laurna Kaatz, Barbara G. Brown, Lawrence Buja, Randy Bullock, Tressa Fowler, Eric Gilleland, and John Halley Gotway


This study analyzes spatial and temporal characteristics of multiyear droughts and pluvials over the southwestern United States with a focus on the upper Colorado River basin. The study uses two multiscalar moisture indices: standardized precipitation evapotranspiration index (SPEI) and standardized precipitation index (SPI) on a 36-month scale (SPEI36 and SPI36, respectively). The indices are calculated from monthly average precipitation and maximum and minimum temperatures from the Parameter-Elevation Regressions on Independent Slopes Model dataset for the period 1950–2012. The study examines the relationship between individual climate variables as well as large-scale atmospheric circulation features found in reanalysis output during drought and pluvial periods. The results indicate that SPEI36 and SPI36 show similar temporal and spatial patterns, but that the inclusion of temperatures in SPEI36 leads to more extreme magnitudes in SPEI36 than in SPI36. Analysis of large-scale atmospheric fields indicates an interplay between different fields that yields extremes over the study region. Widespread drought (pluvial) events are associated with enhanced positive (negative) 500-hPa geopotential height anomaly linked to subsidence (ascent) and negative (positive) moisture convergence and precipitable water anomalies. Considering the broader context of the conditions responsible for the occurrence of prolonged hydrologic anomalies provides water resource managers and other decision-makers with valuable understanding of these events. This perspective also offers evaluation opportunities for climate models.
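The multiscalar construction behind SPI36 can be illustrated with a simplified sketch (illustrative Python assuming NumPy, not the study's code; operational SPI fits a gamma distribution to the windowed totals, per calendar month, before transforming to a standard normal, a step this sketch skips in favor of plain standardization):

```python
import numpy as np

def spi_simplified(precip_monthly, scale=36):
    """Simplified SPI-like drought index on a multimonth scale.

    Monthly precipitation is summed over a rolling `scale`-month window
    and the sums are standardized to zero mean and unit variance, so
    sustained deficits map to negative values and pluvials to positive.
    """
    p = np.asarray(precip_monthly, dtype=float)
    totals = np.convolve(p, np.ones(scale), mode="valid")  # rolling 36-month sums
    return (totals - totals.mean()) / totals.std(ddof=1)
```

The 36-month window is what makes the index respond to multiyear droughts and pluvials rather than single dry or wet seasons; SPEI follows the same construction but windows a water balance (precipitation minus potential evapotranspiration), which is how temperature enters SPEI36.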

Full access
Lígia Bernardet, Louisa Nance, Meral Demirtas, Steve Koch, Edward Szoke, Tressa Fowler, Andrew Loughe, Jennifer Luppens Mahoney, Hui-Ya Chuang, Matthew Pyle, and Robert Gall

The Weather Research and Forecasting (WRF) Developmental Testbed Center (DTC) was formed to promote exchanges between the development and operational communities in the field of numerical weather prediction (NWP). The WRF DTC serves to accelerate the transfer of NWP technology from research to operations and to support a subset of the current WRF operational configurations for the general community. This article describes the mission and recent activities of the WRF DTC, including a detailed discussion of one of its recent projects, the WRF DTC Winter Forecasting Experiment (DWFE).

DWFE was planned and executed by the WRF DTC in collaboration with forecasters and model developers. The real-time phase of the experiment took place in the winter of 2004/05, with two dynamic cores of the WRF model being run once per day out to 48 h. The models were configured with 5-km grid spacing over the entire continental United States to ascertain the value of high-resolution numerical guidance for winter weather prediction. Forecasts were distributed to many National Weather Service Weather Forecast Offices to allow forecasters both to familiarize themselves with WRF capabilities prior to WRF becoming operational at the National Centers for Environmental Prediction (NCEP) in the North American Mesoscale Model (NAM) application, and to provide feedback about the model to its developers. This paper presents the experiment's configuration, the results of objective forecast verification, including uncertainty measures, a case study to illustrate the potential use of DWFE products in the forecasting process, and a discussion about the importance and challenges of real-time experiments involving forecaster participation.

Full access
Barbara Brown, Tara Jensen, John Halley Gotway, Randy Bullock, Eric Gilleland, Tressa Fowler, Kathryn Newman, Dan Adriaansen, Lindsay Blank, Tatiana Burek, Michelle Harrold, Tracy Hertneky, Christina Kalb, Paul Kucera, Louisa Nance, John Opatz, Jonathan Vigh, and Jamie Wolff

Capsule summary

MET is a community-based package of state-of-the-art tools to evaluate predictions of weather, climate, and other phenomena, with capabilities to display and analyze verification results via the METplus system.

Full access