Search Results

You are looking at 1 - 8 of 8 items for

  • Author or Editor: Matthew Pyle
Keith F. Brill
and
Matthew Pyle

Abstract

Critical performance ratio (CPR) expressions are derived for the eight conditional probabilities associated with the 2 × 2 contingency table of outcomes for binary (dichotomous "yes" or "no") forecasts. Two of these are shown to be useful in evaluating the effects of hedging that approaches random change. The CPR quantifies how the probability of detection (POD) must change as the frequency bias changes so that a performance measure (or conditional probability) indicates an improved forecast for a given value of frequency bias. If yes forecasts are increased randomly, the probability that an additional forecast is correct (a hit) is given by the detection failure ratio (DFR). If the DFR for a performance measure is greater than the CPR, the forecast is likely to be improved by the random increase in yes forecasts. Thus, the DFR provides a benchmark for the CPR in the case of frequency bias inflation. If yes forecasts are decreased randomly, the probability of removing a hit is given by the frequency of hits (FOH). If the FOH for a performance measure is less than the CPR, the forecast is likely to be improved by the random decrease in yes forecasts. Therefore, the FOH serves as a benchmark for the CPR if the frequency bias is decreased. The closer the FOH (DFR) comes to being less (greater) than or equal to the CPR, the more likely it is that the performance measure can be enhanced by decreasing (increasing) the frequency bias. It is shown that randomly increasing yes forecasts for a forecast that is already better than a randomly generated forecast can improve the threat score but is unlikely to improve the equitable threat score. The equitable threat score is therefore recommended over the threat score whenever possible.
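To make the quantities concrete, here is a minimal Python sketch using the standard 2 × 2 contingency-table notation (a = hits, b = false alarms, c = misses, d = correct negatives); this is an illustration of the conventional definitions, not code from the paper:

    def contingency_scores(a, b, c, d):
        # Standard measures from a 2 x 2 contingency table:
        # a = hits, b = false alarms, c = misses, d = correct negatives.
        n = a + b + c + d
        pod = a / (a + c)                 # probability of detection
        foh = a / (a + b)                 # frequency of hits (1 - false alarm ratio)
        dfr = c / (c + d)                 # detection failure ratio
        bias = (a + b) / (a + c)          # frequency bias
        ts = a / (a + b + c)              # threat score
        a_rand = (a + b) * (a + c) / n    # hits expected from a random forecast
        ets = (a - a_rand) / (a + b + c - a_rand)  # equitable threat score
        return {"POD": pod, "FOH": foh, "DFR": dfr,
                "bias": bias, "TS": ts, "ETS": ets}

    print(contingency_scores(a=50, b=30, c=20, d=900))

The CPR expressions themselves are specific to each performance measure and are derived in the paper; the sketch computes only the benchmark quantities (DFR and FOH) against which a CPR would be compared.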

Full access
Matthew E. Pyle
and
Keith F. Brill

Abstract

A fair comparison of quantitative precipitation forecast (QPF) products from multiple forecast sources using performance metrics based on a 2 × 2 contingency table, with assessment of the statistical significance of differences, requires accounting for the differing frequency biases to which the performance metrics are sensitive. A simple approach to addressing differing frequency biases modifies the 2 × 2 contingency table values using a mathematical assumption that determines the change in hit rate when the frequency bias is adjusted to unity. Another approach uses quantile mapping to remove the frequency bias of the QPFs by matching the frequency distribution of each QPF to the frequency distribution of the verifying analysis or observation points. If these two methods consistently yielded the same result when assessing the statistical significance of differences between two QPF sources while accounting for bias differences, then verification software could apply the simpler approach, and existing 2 × 2 contingency tables could be used for statistical significance computations without recovering the original QPF and verifying data required for the bias removal approach. However, this study provides evidence for continued application and wider adoption of the bias removal approach.
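As a rough sketch of the quantile-mapping idea (illustrative only; the arrays and distributions below are synthetic assumptions, not the study's data or code), each QPF value is replaced by the analysis value at the same empirical quantile, which drives the frequency bias toward unity at every threshold:

    import numpy as np

    def quantile_map(qpf, analysis):
        # Rank each QPF value, convert ranks to plotting-position quantiles,
        # then look up those quantiles in the verifying distribution.
        ranks = np.argsort(np.argsort(qpf))
        quantiles = (ranks + 0.5) / qpf.size
        return np.quantile(analysis, quantiles)

    rng = np.random.default_rng(0)
    analysis = rng.gamma(0.4, 8.0, size=10_000)   # synthetic verifying precipitation
    qpf = 1.5 * rng.gamma(0.4, 8.0, size=10_000)  # synthetic wet-biased QPF
    remapped = quantile_map(qpf, analysis)
    for t in (1.0, 5.0, 10.0):
        # Yes/no counts of the remapped QPF now nearly match the analysis,
        # i.e., the frequency bias is approximately 1 at each threshold.
        print(t, (remapped > t).sum(), (analysis > t).sum())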

Full access
Matthew E. Pyle
,
Daniel Keyser
, and
Lance F. Bosart

Abstract

A diagnostic study is conducted of the kinematics and evolution of upper-level jet streaks representative of three (of the four) phases in the Shapiro conceptual model of a jet streak progressing through a synoptic-scale baroclinic wave over North America. The three phases selected for consideration apply to those segments of the wave pattern where jet streaks are relatively straight. The 1200 UTC 2 December 1991 case (trough-over-ridge) considers a strong jet streak located over eastern North America and constitutes the bulk of the study; the other two cases, which also concern jet streaks over North America, are from 0000 UTC 11 November 1995 (northwesterly flow) and 0000 UTC 28 October 1995 (southwesterly flow). Kinematic signatures consistent with the classic four-quadrant conceptual model for a straight jet streak are evident in all three cases, although flow curvature and thermal advection lead to significant departures from this conceptual model. The position of the jet streak within the synoptic-scale flow pattern also is shown to have a discernible influence on the kinematic fields, adding to the more localized effects of flow curvature and thermal advection in causing observed jet-streak kinematic signatures to depart from the four-quadrant conceptual model.

The investigation of the evolution of the trough-over-ridge jet streak focuses on a vortexlike feature situated on the cyclonic-shear side of the jet streak and manifested as a localized mesoscale depression in the height of the dynamic tropopause (DT), corresponding to a local maximum of pressure on the DT. Complementary signatures of this vortexlike feature, referred to as a coherent tropopause disturbance (CTD), are a local minimum in potential temperature on the DT and a maximum in potential vorticity (PV) on tropopause-intersecting isentropic surfaces. In the trough-over-ridge case, a CTD is tracked for 17.5 days during which time it influences not only the jet streak considered for kinematic study but also one additional jet streak. The evolutions of the northwesterly and southwesterly flow jet streaks are also evaluated in relation to their association or lack thereof with CTDs. The northwesterly flow jet streak intensifies in the absence of a CTD, whereas the southwesterly flow jet streak is associated with a CTD that is tracked for 11.5 days and that participates in the intensification of one additional jet streak. In all three cases, the jet streaks coincide with large horizontal gradients of pressure and potential temperature on the DT and of PV on tropopause-intersecting isentropic surfaces. In the two cases involving CTDs, their role is to enhance these respective gradients over a mesoscale region; this enhancement appears to focus and strengthen jet-streak winds over the same region, suggesting the importance of CTDs in jet-streak evolution.

Full access
Estela A. Collini
,
Ernesto H. Berbery
,
Vicente R. Barros
, and
Matthew E. Pyle

Abstract

This article discusses the feedbacks between soil moisture and precipitation during the early stages of the South American monsoon. The system achieves maximum precipitation over the southern Amazon basin and the Brazilian highlands during the austral summer. Monsoon changes are associated with the large-scale dynamics, but during the monsoon's early stages, when the surface is not sufficiently wet, soil moisture anomalies may also modulate the development of precipitation. To investigate this, experiments testing the sensitivity to initial soil moisture conditions were performed using month-long simulations with the regional mesoscale Eta model. Examination of the control simulations shows that they reproduce all major features and magnitudes of the South American circulation and precipitation patterns, particularly those of the monsoon. The surface sensible and latent heat fluxes, as well as precipitation, have a diurnal cycle whose phase is consistent with previous observational studies. The convective inhibition is smallest at the time of the precipitation maximum, but the convective available potential energy exhibits an unrealistic morning maximum that may result from an early onset of boundary layer mixing.

The sensitivity experiments show that precipitation is more responsive to reductions of soil moisture than to increases, suggesting that although the soil is not excessively wet, it is sufficiently moist that positive soil moisture anomalies quickly cease to be effective in altering evapotranspiration and other surface and boundary layer variables. Two mechanisms by which soil moisture has a positive feedback on precipitation are discussed. First, the reduction of initial soil moisture leads to a smaller latent heat flux and a larger sensible heat flux, both of which contribute to a larger Bowen ratio. The smaller evapotranspiration and increased sensible heat flux lead to a drier and warmer boundary layer, which in turn reduces the atmospheric instability. Second, the deeper (and drier) boundary layer is related to a stronger and higher South American low-level jet (SALLJ). However, because of its lower moisture content, the SALLJ carries less moisture to the monsoon region, as evidenced by the reduced moisture fluxes and their convergence. The two mechanisms (reduced convective instability and reduced moisture flux convergence) act concurrently to diminish the core monsoon precipitation.
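For reference, the Bowen ratio invoked above is simply the ratio of sensible to latent heat flux; a trivial illustration (the flux values are invented for the example):

    def bowen_ratio(sensible, latent):
        # Bowen ratio B = H / LE; both surface fluxes in W m^-2.
        return sensible / latent

    # Drier initial soil: latent heat flux (evapotranspiration) falls while
    # sensible heat flux rises, so the Bowen ratio increases, consistent with
    # the drier, warmer boundary layer described above.
    print(bowen_ratio(120.0, 300.0))   # wetter case -> 0.4
    print(bowen_ratio(220.0, 180.0))   # drier case  -> ~1.22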

Full access
Benjamin T. Blake
,
Jacob R. Carley
,
Trevor I. Alcott
,
Isidora Jankov
,
Matthew E. Pyle
,
Sarah E. Perfater
, and
Benjamin Albright

Abstract

Traditional ensemble probabilities are computed as the number of members that exceed a threshold at a given point divided by the total number of members. This approach has been employed for many years in coarse-resolution models. However, convection-permitting ensembles with fewer than ~20 members are generally underdispersive, and spatial displacement at the gridpoint scale is often large. These issues have motivated the development of spatial filtering and neighborhood postprocessing methods, such as fractional coverage and neighborhood maximum value, which address this spatial uncertainty. Two different fractional coverage approaches for generating gridpoint probabilities were evaluated. The first method expands the traditional point probability calculation to cover a 100-km radius around a given point. The second method applies the idea that a uniform radius is not appropriate when there is strong agreement between members; in such cases, the traditional fractional coverage approach can reduce the probabilities for these potentially well-handled events. Therefore, a variable-radius approach has been developed based upon ensemble agreement-scale similarity criteria. In this method, the radius ranges from 10 km for member forecasts that are in good agreement (e.g., lake-effect snow, orographic precipitation, very short-term forecasts) to 100 km when the members are more dissimilar. Results from applying this adaptive technique to the calculation of point probabilities for precipitation forecasts are presented, based upon several months of objective verification and subjective feedback from the 2017 Flash Flood and Intense Rainfall Experiment.
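A minimal sketch of the probability formulations discussed above (illustrative assumptions throughout: a square rather than circular neighborhood, radii in gridpoints rather than kilometers, synthetic member fields, and a simple spread-based stand-in for the paper's agreement-scale criteria):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def point_probability(members, thresh):
        # Traditional probability: fraction of members exceeding thresh at each gridpoint.
        return (members > thresh).mean(axis=0)

    def fractional_coverage(members, thresh, radius_px):
        # Fraction of member-gridpoint pairs exceeding thresh within a neighborhood.
        return uniform_filter(point_probability(members, thresh),
                              size=2 * radius_px + 1, mode="nearest")

    def variable_radius_probability(members, thresh, r_small=2, r_large=20):
        # Stand-in for the agreement-scale criterion: the spread of the binary
        # exceedance fields is 0 where members agree and 0.5 where they split
        # evenly, so blend small-radius coverage (agreement) with large-radius
        # coverage (disagreement).
        disagreement = np.clip((members > thresh).std(axis=0) / 0.5, 0.0, 1.0)
        small = fractional_coverage(members, thresh, r_small)
        large = fractional_coverage(members, thresh, r_large)
        return (1.0 - disagreement) * small + disagreement * large

    rng = np.random.default_rng(1)
    members = rng.gamma(0.5, 6.0, size=(10, 60, 60))   # 10 synthetic precip members
    probs = variable_radius_probability(members, thresh=10.0)
    print(probs.min(), probs.max())

In the paper the radius varies continuously from 10 to 100 km according to agreement-scale similarity criteria; the linear blend above is only an illustrative stand-in for that criterion.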

Full access
Lígia Bernardet
,
Louisa Nance
,
Meral Demirtas
,
Steve Koch
,
Edward Szoke
,
Tressa Fowler
,
Andrew Loughe
,
Jennifer Luppens Mahoney
,
Hui-Ya Chuang
,
Matthew Pyle
, and
Robert Gall

The Weather Research and Forecasting (WRF) Developmental Testbed Center (DTC) was formed to promote exchanges between the development and operational communities in the field of numerical weather prediction (NWP). The WRF DTC serves to accelerate the transfer of NWP technology from research to operations and to provide the general community with support for a subset of the current WRF operational configurations. This article describes the mission and recent activities of the WRF DTC, including a detailed discussion of one of its recent projects, the WRF DTC Winter Forecasting Experiment (DWFE).

DWFE was planned and executed by the WRF DTC in collaboration with forecasters and model developers. The real-time phase of the experiment took place in the winter of 2004/05, with two dynamic cores of the WRF model being run once per day out to 48 h. The models were configured with 5-km grid spacing over the entire continental United States to ascertain the value of high-resolution numerical guidance for winter weather prediction. Forecasts were distributed to many National Weather Service Weather Forecast Offices to allow forecasters both to familiarize themselves with WRF capabilities prior to WRF becoming operational at the National Centers for Environmental Prediction (NCEP) in the North American Mesoscale Model (NAM) application and to provide feedback about the model to its developers. This paper presents the experiment's configuration; the results of objective forecast verification, including uncertainty measures; a case study illustrating the potential use of DWFE products in the forecasting process; and a discussion of the importance and challenges of real-time experiments involving forecaster participation.

Full access
Dmitry Kiktev
,
Paul Joe
,
George A. Isaac
,
Andrea Montani
,
Inger-Lise Frogner
,
Pertti Nurmi
,
Benedikt Bica
,
Jason Milbrandt
,
Michael Tsyrulnikov
,
Elena Astakhova
,
Anastasia Bundel
,
Stéphane Bélair
,
Matthew Pyle
,
Anatoly Muravyev
,
Gdaly Rivin
,
Inna Rozinkina
,
Tiziana Paccagnella
,
Yong Wang
,
Janti Reid
,
Thomas Nipen
, and
Kwang-Deuk Ahn

Abstract

The World Meteorological Organization (WMO) World Weather Research Programme's (WWRP) Forecast and Research in the Olympic Sochi Testbed program (FROST-2014) was aimed at the advancement and demonstration of state-of-the-art nowcasting and short-range forecasting systems for winter conditions in mountainous terrain. The project field campaign was held during the 2014 XXII Olympic and XI Paralympic Winter Games and the preceding test events in Sochi, Russia. An enhanced network of in situ and remote sensing observations supported weather predictions and their verification. Six nowcasting systems (model-based, radar-tracking, and combined systems), nine deterministic mesoscale numerical weather prediction models (with grid spacings down to 250 m), and six ensemble prediction systems (including two with explicitly simulated deep convection) participated in FROST-2014. The project provided forecast input for the meteorological support of the Sochi Olympic Games. The FROST-2014 archive of winter weather observations and forecasts is a valuable information resource for mesoscale predictability studies as well as for the development and validation of nowcasting and forecasting systems in complex terrain. The resulting innovative technologies, exchange of experience, and professional development contributed to the success of the Olympics and left a post-Olympic legacy.

Open access
Adam J. Clark
,
Israel L. Jirak
,
Scott R. Dembek
,
Gerry J. Creager
,
Fanyou Kong
,
Kevin W. Thomas
,
Kent H. Knopfmeier
,
Burkely T. Gallo
,
Christopher J. Melick
,
Ming Xue
,
Keith A. Brewster
,
Youngsun Jung
,
Aaron Kennedy
,
Xiquan Dong
,
Joshua Markel
,
Matthew Gilmore
,
Glen S. Romine
,
Kathryn R. Fossell
,
Ryan A. Sobash
,
Jacob R. Carley
,
Brad S. Ferrier
,
Matthew Pyle
,
Curtis R. Alexander
,
Steven J. Weiss
,
John S. Kain
,
Louis J. Wicker
,
Gregory Thompson
,
Rebecca D. Adams-Selin
, and
David A. Imy

Abstract

One primary goal of the annual Spring Forecasting Experiments (SFEs), which are co-organized by the National Oceanic and Atmospheric Administration's (NOAA) National Severe Storms Laboratory and Storm Prediction Center and conducted in NOAA's Hazardous Weather Testbed, is documenting the performance characteristics of experimental, convection-allowing modeling systems (CAMs). Since 2007, the number of CAMs (including CAM ensembles) examined in the SFEs has increased dramatically, peaking at six different CAM ensembles in 2015. Meanwhile, major advances have been made in creating, importing, processing, and verifying these large and complex datasets, and in developing tools for analyzing and visualizing them. However, progress toward identifying optimal CAM ensemble configurations has been inhibited because the different CAM systems were independently designed, making it difficult to attribute differences in performance characteristics to specific design choices. Thus, for the 2016 SFE, a much more coordinated effort among many collaborators was made by agreeing on a set of model specifications (e.g., model version, grid spacing, domain size, and physics) so that the simulations contributed by each collaborator could be combined to form one large, carefully designed ensemble known as the Community Leveraged Unified Ensemble (CLUE). The 2016 CLUE comprised 65 members contributed by five research institutions and represents an unprecedented effort to enable an evidence-driven decision process to help guide NOAA's operational modeling efforts. Eight unique experiments were designed within the CLUE framework to examine issues directly relevant to the design of NOAA's future operational CAM-based ensembles. This article highlights the CLUE design and presents results from one of the experiments, which examines the impact of single-core versus multicore CAM ensemble configurations.

Full access