Search Results

Showing items 21–30 of 34 for Author or Editor: Barbara G. Brown
Ben C. Bernstein, Frank McDonough, Marcia K. Politovich, Barbara G. Brown, Thomas P. Ratvasky, Dean R. Miller, Cory A. Wolff, and Gary Cunning

Abstract

The “current icing potential” (CIP) algorithm combines satellite, radar, surface, lightning, and pilot-report observations with model output to create a detailed three-dimensional hourly diagnosis of the potential for the existence of icing and supercooled large droplets. It uses a physically based situational approach that is derived from basic and applied cloud physics, combined with forecaster and onboard flight experience from field programs. Both fuzzy logic and decision-tree logic are applied in this context. CIP determines the locations of clouds and precipitation and then estimates the potential for the presence of supercooled liquid water and supercooled large droplets within a given airspace. First developed in the winter of 1997/98, CIP became an operational National Weather Service and Federal Aviation Administration product in 2002, providing real-time diagnoses that allow users to make route-specific decisions to avoid potentially hazardous icing. The CIP algorithm, its individual components, and the logic behind them are described.
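The fuzzy-logic element of an approach like this can be sketched as a set of interest maps (membership functions) combined into a single potential. The breakpoints, field choices, and min-combination below are hypothetical illustrations, not the operational CIP values:

```python
def temperature_interest(t_c):
    """Map air temperature (deg C) to an icing interest value in [0, 1].
    Breakpoints are illustrative, not the operational CIP thresholds."""
    if t_c >= 0.0 or t_c <= -25.0:
        return 0.0
    if -12.0 <= t_c < 0.0:
        return 1.0
    # linear ramp from 1.0 at -12 C down to 0.0 at -25 C
    return (t_c + 25.0) / 13.0

def humidity_interest(rh_pct):
    """Map relative humidity (%) to a cloud-presence interest in [0, 1]."""
    if rh_pct >= 90.0:
        return 1.0
    if rh_pct <= 60.0:
        return 0.0
    return (rh_pct - 60.0) / 30.0

def icing_potential(t_c, rh_pct):
    """Combine interest maps with a minimum, so a zero in any field
    vetoes the diagnosis -- echoing the decision-tree flavor of a
    situational approach."""
    return min(temperature_interest(t_c), humidity_interest(rh_pct))
```

In a real system, many more fields (radar, satellite, pilot reports) would contribute interest values, and the combination rules would vary by diagnosed weather situation.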

Full access
Rebecca E. Morss, Jeffrey K. Lazo, Barbara G. Brown, Harold E. Brooks, Philip T. Ganderton, and Brian N. Mills

Despite the meteorological community's long-term interest in weather-society interactions, efforts to understand socioeconomic aspects of weather prediction and to incorporate this knowledge into the weather prediction system have yet to reach critical mass. This article aims to reinvigorate interest in societal and economic research and applications (SERA) activities within the meteorological and social science communities by exploring key SERA issues and proposing SERA priorities for the next decade.

The priorities were developed by the authors, building on previous work, with input from a diverse group of social scientists and meteorologists who participated in a SERA workshop in August 2006. The workshop was organized to provide input to the North American regional component of THORPEX: A Global Atmospheric Research Programme, but the priorities identified are broadly applicable to all weather forecast research and applications.

To motivate and frame SERA activities, we first discuss the concept of high-impact weather forecasts and the chain from forecast creation to value realization. Next, we present five interconnected SERA priority themes—use of forecast information in decision making, communication of forecast uncertainty, user-relevant verification, economic value of forecasts, and decision support—and propose research integrated across the themes.

SERA activities can significantly improve understanding of weather-society interactions to the benefit of the meteorological community and society. However, reaching this potential will require dedicated effort to bring together and maintain a sustainable interdisciplinary community.

Full access
Elizabeth E. Ebert, Laurence J. Wilson, Barbara G. Brown, Pertti Nurmi, Harold E. Brooks, John Bally, and Matthias Jaeneke

Abstract

The verification phase of the World Weather Research Programme (WWRP) Sydney 2000 Forecast Demonstration Project (FDP) was intended to measure the skill of the participating nowcast algorithms in predicting the location of convection, rainfall rate and occurrence, wind speed and direction, severe thunderstorm wind gusts, and hail location and size. An additional question of interest was whether forecasters could improve the quality of the nowcasts compared to the FDP products alone.

The nowcasts were verified using a variety of statistical techniques. Observational data came from radar reflectivity and rainfall analyses, a network of rain gauges, and human (spotter) observations. The verification results showed that the cell tracking algorithms predicted the location of the strongest cells with a mean error of about 15–30 km for a 1-h forecast, and were usually more accurate than an extrapolation (Lagrangian persistence) forecast. Mean location errors for the area tracking schemes were on the order of 20 km.

Almost all of the algorithms successfully predicted the frequency of rain throughout the forecast period, although most underestimated the frequency of high rain rates. The skill in predicting rain occurrence decreased very quickly into the forecast period. In particular, the algorithms could not predict the precise location of heavy rain beyond the first 10–20 min. Using radar analyses as verification, the algorithms' spatial forecasts were consistently more skillful than simple persistence. However, when verified against rain gauge observations at point locations, the algorithms had difficulty beating persistence, mainly due to differences in spatial and temporal resolution.

Only one algorithm attempted to forecast gust fronts. The results for a limited sample showed a mean absolute error of 7 km h⁻¹ and mean bias of 3 km h⁻¹ in the speed of the gust fronts during the FDP. The errors in sea-breeze front forecasts were half as large, with essentially no bias. Verification of the hail associated with the 3 November tornadic storm showed that the two algorithms that estimated hail size and occurrence successfully diagnosed the onset and cessation of the hail to within 30 min of the reported sightings. The time evolution of hail size was reasonably well captured by the algorithms, and the predicted mean and maximum hail diameters were consistent with the observations.

The Thunderstorm Interactive Forecast System (TIFS) allowed forecasters to modify the output of the cell tracking nowcasts, primarily using it to remove cells that were insignificant or diagnosed with incorrect motion. This manual filtering resulted in markedly reduced mean cell position errors when compared to the unfiltered forecasts. However, when forecasters attempted to adjust the storm tracks for a small number of well-defined intense cells, the position errors increased slightly, suggesting that in such cases the objective guidance is probably the best estimate of storm motion.
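The extrapolation baseline and the location-error measure used in evaluations like this can be sketched in a few lines. The function names and the flat-plane Euclidean distance are illustrative assumptions, not the FDP verification code:

```python
import math

def lagrangian_persistence(pos, velocity, lead_time_h):
    """Extrapolate a storm-cell position (x, y in km) along its current
    motion vector (km/h) -- the zero-skill baseline that nowcast
    algorithms are typically compared against."""
    x, y = pos
    u, v = velocity
    return (x + u * lead_time_h, y + v * lead_time_h)

def mean_location_error(forecast_positions, observed_positions):
    """Mean Euclidean distance (km) between matched forecast and
    observed cell positions; real verification would use map-projected
    or great-circle distances."""
    errors = [math.dist(f, o)
              for f, o in zip(forecast_positions, observed_positions)]
    return sum(errors) / len(errors)
```

A cell moving east at 30 km/h extrapolates 30 km downstream for a 1-h nowcast; scores like the 15–30 km mean errors quoted above come from averaging such matched-pair distances over many cells.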

Full access
Barbara G. Brown, Louisa B. Nance, Christopher L. Williams, Kathryn M. Newman, James L. Franklin, Edward N. Rappaport, Paul A. Kucera, and Robert L. Gall

Abstract

The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.

Significance Statement

The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.

Open access
Rita D. Roberts, Amanda R. S. Anderson, Eric Nelson, Barbara G. Brown, James W. Wilson, Matthew Pocernich, and Thomas Saxen

Abstract

A forecaster-interactive capability was added to an automated convective storm nowcasting system [Auto-Nowcaster (ANC)] to allow forecasters to enhance the performance of 1-h nowcasts of convective storm initiation and evolution produced every 6 min. This Forecaster-Over-The-Loop (FOTL-ANC) system was tested at the National Weather Service Fort Worth–Dallas, Texas, Weather Forecast Office during daily operations from 2005 to 2010. The forecaster’s role was to enter the locations of surface convergence boundaries into the ANC prior to dissemination of nowcasts to the Center Weather Service Unit. Verification of the FOTL-ANC versus ANC (no human) nowcasts was conducted on the convective scale. Categorical verification scores were computed for 30 subdomains within the forecast domain. Special focus was placed on subdomains that included convergence boundaries for evaluation of forecaster involvement and impact on the FOTL-ANC nowcasts. The probability of detection of convective storms increased by 20%–60% with little to no change observed in the false-alarm ratios. Bias values increased from 0.8–1.0 to 1.0–3.0 with human involvement. The accuracy of storm nowcasts notably improved with forecaster involvement; critical success index (CSI) values increased from 0.15–0.25 (ANC) to 0.2–0.4 (FOTL-ANC). Over short time periods, CSI values as large as 0.6 were also observed. This study demonstrated that forecaster involvement improved the nowcasts in most cases while causing no degradation in other cases; a few exceptions are noted. Results show that forecasters can play an important role in the production of rapidly updated, convective storm nowcasts for end users.
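The categorical scores quoted above (probability of detection, false-alarm ratio, frequency bias, and CSI) all derive from a 2×2 contingency table of forecast versus observed events; a minimal sketch of the standard definitions:

```python
def categorical_scores(hits, misses, false_alarms):
    """Standard 2x2 contingency-table scores used in nowcast verification.
    hits: events both forecast and observed; misses: observed but not
    forecast; false_alarms: forecast but not observed."""
    pod = hits / (hits + misses)                     # probability of detection
    far = false_alarms / (hits + false_alarms)       # false-alarm ratio
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    csi = hits / (hits + misses + false_alarms)      # critical success index
    return {"POD": pod, "FAR": far, "bias": bias, "CSI": csi}
```

A bias above 1.0, as reported for the FOTL-ANC runs, means events were forecast more often than they were observed; CSI penalizes both misses and false alarms, which is why it rises when forecasters add real boundaries without adding spurious ones.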

Full access
Steven D. Miller, Courtney E. Weeks, Randy G. Bullock, John M. Forsythe, Paul A. Kucera, Barbara G. Brown, Cory A. Wolff, Philip T. Partain, Andrew S. Jones, and David B. Johnson

Abstract

Clouds pose many operational hazards to the aviation community in terms of ceilings and visibility, turbulence, and aircraft icing. Realistic descriptions of the three-dimensional (3D) distribution and temporal evolution of clouds in numerical weather prediction models used for flight planning and routing are therefore of central importance. The introduction of satellite-based cloud radar (CloudSat) and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) sensors to the National Aeronautics and Space Administration A-Train is timely in light of these needs but requires a new paradigm of model-evaluation tools that are capable of exploiting the vertical-profile information. Early results from the National Center for Atmospheric Research Model Evaluation Toolkit (MET), augmented to work with the emergent satellite-based active sensor observations, are presented here. Existing horizontal-plane statistical evaluation techniques have been adapted to operate on observations in the vertical plane and have been extended to 3D object evaluations, leveraging blended datasets from the active and passive A-Train sensors. Case studies of organized synoptic-scale and mesoscale distributed cloud systems are presented to illustrate the multiscale utility of the MET tools. Definition of objects on the basis of radar-reflectivity thresholds was found to be strongly dependent on the model’s ability to resolve details of the cloud’s internal hydrometeor distribution. Contoured-frequency-by-altitude diagrams provide a useful mechanism for evaluating the simulated and observed 3D distributions for regional domains. The expanded MET provides a new dimension to model evaluation and positions the community to better exploit active-sensor satellite observing systems that are slated for launch in the near future.
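One of the evaluation tools mentioned, the contoured-frequency-by-altitude diagram, is essentially a two-dimensional histogram of a field (e.g., radar reflectivity) versus altitude, normalized at each level. The function name and per-level normalization below are illustrative assumptions, not MET's implementation:

```python
import numpy as np

def cfad(reflectivity, altitudes, dbz_bins, alt_bins):
    """Contoured-frequency-by-altitude diagram: for each altitude bin,
    the frequency distribution of reflectivity values, normalized per
    level so levels with different sample counts are comparable."""
    hist, _, _ = np.histogram2d(altitudes, reflectivity,
                                bins=[alt_bins, dbz_bins])
    row_sums = hist.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero at empty levels
    return hist / row_sums
```

Comparing the simulated and observed CFADs then reduces the 3D cloud field to a compact distributional signature, which is why the diagram is useful for regional-domain evaluation even when individual cloud elements are misplaced.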

Full access
Abayomi A. Abatan, William J. Gutowski Jr., Caspar M. Ammann, Laurna Kaatz, Barbara G. Brown, Lawrence Buja, Randy Bullock, Tressa Fowler, Eric Gilleland, and John Halley Gotway

Abstract

This study analyzes spatial and temporal characteristics of multiyear droughts and pluvials over the southwestern United States with a focus on the upper Colorado River basin. The study uses two multiscalar moisture indices: standardized precipitation evapotranspiration index (SPEI) and standardized precipitation index (SPI) on a 36-month scale (SPEI36 and SPI36, respectively). The indices are calculated from monthly average precipitation and maximum and minimum temperatures from the Parameter-Elevation Regressions on Independent Slopes Model dataset for the period 1950–2012. The study examines the relationship between individual climate variables as well as large-scale atmospheric circulation features found in reanalysis output during drought and pluvial periods. The results indicate that SPEI36 and SPI36 show similar temporal and spatial patterns, but that the inclusion of temperatures in SPEI36 leads to more extreme magnitudes in SPEI36 than in SPI36. Analysis of large-scale atmospheric fields indicates an interplay between different fields that yields extremes over the study region. Widespread drought (pluvial) events are associated with enhanced positive (negative) 500-hPa geopotential height anomaly linked to subsidence (ascent) and negative (positive) moisture convergence and precipitable water anomalies. Considering the broader context of the conditions responsible for the occurrence of prolonged hydrologic anomalies provides water resource managers and other decision-makers with valuable understanding of these events. This perspective also offers evaluation opportunities for climate models.
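The core transform behind a standardized index such as SPI can be sketched as fitting a distribution to an accumulation series and mapping each value through the fitted CDF to a standard normal deviate. This is a simplified illustration: operational SPI/SPEI calculations also handle zero-precipitation months, fixed calibration periods, and (for SPEI) a water-balance input, all omitted here.

```python
import numpy as np
from scipy import stats

def standardized_index(series):
    """Fit a gamma distribution (location fixed at zero) to a positive
    accumulation series, then map each value through the fitted CDF to
    a standard normal deviate -- the core SPI transform."""
    a, loc, scale = stats.gamma.fit(series, floc=0.0)
    cdf = stats.gamma.cdf(series, a, loc=loc, scale=scale)
    # clip to avoid infinities at the extreme tails
    cdf = np.clip(cdf, 1e-6, 1.0 - 1e-6)
    return stats.norm.ppf(cdf)

# Synthetic 36-month accumulation series for demonstration only
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=30.0, size=360)
spi = standardized_index(precip)
```

Because the output is standard normal by construction, thresholds such as −2 (extreme drought) or +2 (extreme pluvial) have the same probabilistic meaning everywhere, which is what makes the index comparable across regions and timescales.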

Full access
Steven V. Vasiloff, Dong-Jun Seo, Kenneth W. Howard, Jian Zhang, David H. Kitzmiller, Mary G. Mullusky, Witold F. Krajewski, Edward A. Brandes, Robert M. Rabin, Daniel S. Berkowitz, Harold E. Brooks, John A. McGinley, Robert J. Kuligowski, and Barbara G. Brown

Accurate quantitative precipitation estimates (QPE) and very short term quantitative precipitation forecasts (VSTQPF) are critical to accurate monitoring and prediction of water-related hazards and water resources. While tremendous progress has been made in the last quarter-century in many areas of QPE and VSTQPF, significant gaps continue to exist in both knowledge and capabilities that are necessary to produce accurate high-resolution precipitation estimates at the national scale for a wide spectrum of users. Toward this goal, a national next-generation QPE and VSTQPF (Q2) workshop was held in Norman, Oklahoma, on 28–30 June 2005. Scientists, operational forecasters, water managers, and stakeholders from public and private sectors, including academia, presented and discussed a broad range of precipitation and forecasting topics and issues, and developed a list of science focus areas. To meet the nation's needs for precipitation information effectively, the authors herein propose a community-wide integrated approach that fully capitalizes on recent advances in science and technology, and leverages the wide range of expertise and experience that exists in the research and operational communities. The concepts and recommendations from the workshop form the Q2 science plan and a suggested path to operations. Implementation of these concepts is expected to improve river forecasts and flood and flash flood watches and warnings, and to enhance various hydrologic and hydrometeorological services for a wide range of users and customers. In support of this initiative, the National Mosaic and Q2 (NMQ) system is being developed at the National Severe Storms Laboratory to serve as a community test bed for QPE and VSTQPF research and to facilitate the transition to operations of research applications. The NMQ system provides a real-time, around-the-clock data infusion and applications development and evaluation environment, and thus offers a community-wide platform for development and testing of advances in the focus areas.

Full access
Edward I. Tollerud, Brian Etherton, Zoltan Toth, Isidora Jankov, Tara L. Jensen, Huiling Yuan, Linda S. Wharton, Paula T. McCaslin, Eugene Mirvis, Bill Kuo, Barbara G. Brown, Louisa Nance, Steven E. Koch, and F. Anthony Eckel
Full access
John S. Kain, Ming Xue, Michael C. Coniglio, Steven J. Weiss, Fanyou Kong, Tara L. Jensen, Barbara G. Brown, Jidong Gao, Keith Brewster, Kevin W. Thomas, Yunheng Wang, Craig S. Schwartz, and Jason J. Levit

Abstract

The impacts of assimilating radar data and other mesoscale observations in real-time, convection-allowing model forecasts were evaluated during the spring seasons of 2008 and 2009 as part of the Hazardous Weather Testbed Spring Experiment activities. In tests of a prototype continental U.S.-scale forecast system, focusing primarily on regions with active deep convection at the initial time, assimilation of these observations had a positive impact. Daily interrogation of output by teams of modelers, forecasters, and verification experts provided additional insights into the value-added characteristics of the unique assimilation forecasts. This evaluation revealed that the positive effects of the assimilation were greatest during the first 3–6 h of each forecast, appeared to be most pronounced with larger convective systems, and may have been related to a phase lag that sometimes developed when the convective-scale information was not assimilated. These preliminary results are currently being evaluated further using advanced objective verification techniques.

Full access