Search Results

Author or Editor: Tara L. Jensen (all content)
Shuowen Yang, William R. Cotton, and Tara L. Jensen

Abstract

This paper studies the feasibility of retrieving aerosol concentrations in the planetary boundary layer (PBL) with a variational data assimilation (VDA) algorithm that uses dual-wavelength lidar returns and visual range simulated at multiple times. Aerosols are assumed to consist of nucleation, accumulation, and coarse modes, each following a gamma size distribution. The VDA algorithm retrieves the initial vertical profiles of aerosol concentration for the three modes, which are then predicted by a 1D PBL model. The accuracy of the retrieved concentrations is examined through a series of identical-twin numerical experiments. For the VDA algorithm that uses data from the lidar wavelength pair (0.289, 11.15 μm), results show that 1) if both random and systematic errors in the observed data are less than 1.0 dB and the number densities of the accumulation and coarse modes are, respectively, 0.025–0.25 and 0–1.25 × 10⁻³ times that of the nucleation mode, relative errors in the retrieved aerosol concentrations are 12%–110% for the nucleation mode, 9%–40% for the accumulation mode, and 3%–25% for the coarse mode; 2) the accuracy of the retrieved concentrations is slightly (greatly) affected by errors in relative humidity (RH) if RH is less (greater) than 95%, and moderately affected by the vertical scales of the initial aerosol concentration fields; 3) systematic errors in the observed data can severely reduce the accuracy of the retrieved concentrations; and 4) the VDA algorithm is more accurate than traditional methods that use single-time data.

Moreover, a method is developed to retrieve the systematic errors in the observed data. Results show that if systematic errors in the lidar returns are less than about 2 dB, retrieving them increases the accuracy of the retrieved aerosol concentrations, especially for the accumulation mode. This systematic-error retrieval can also find applications in other retrieval problems. Finally, assimilation of ceilometer data (wavelength 0.904 μm) is explored by investigating data from two additional wavelength pairs: (0.289, 0.904 μm) and (0.904, 11.15 μm).
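
A minimal sketch of the multi-time variational idea described above (illustrative only, not the authors' algorithm): a hypothetical linear operator H stands in for the lidar/visual-range forward model, the initial profile is held fixed in time in place of the 1D PBL model forecast, and the cost function sums squared observation misfits over times before minimization.

```python
# Sketch of a multi-time variational retrieval under toy assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nz = 20                                        # vertical levels
H = rng.uniform(0.5, 1.5, (nz, nz))            # hypothetical forward operator
x_true = np.exp(-np.linspace(0.0, 3.0, nz))    # "true" initial profile
y = [H @ x_true + rng.normal(0.0, 0.01, nz) for _ in range(3)]  # 3 obs times

def cost(x0):
    # J(x0): sum of squared observation misfits over all times. A full
    # VDA system would propagate x0 with the PBL model between times
    # and add a background (prior) term.
    return sum(np.sum((H @ x0 - yt) ** 2) for yt in y)

res = minimize(cost, np.ones(nz), method="L-BFGS-B",
               bounds=[(0.0, None)] * nz)      # concentrations are nonnegative
print("max abs error of retrieved profile:", np.abs(res.x - x_true).max())
```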

Full access
Adam J. Clark, Randy G. Bullock, Tara L. Jensen, Ming Xue, and Fanyou Kong

Abstract

Meaningful verification and evaluation of convection-allowing models require approaches that do not rely on point-to-point matches of forecast and observed fields. In this study, one such approach, a beta version of the Method for Object-Based Diagnostic Evaluation (MODE) that incorporates the time dimension [known as MODE time-domain (MODE-TD)], was applied to 30-h precipitation forecasts from four 4-km grid-spacing members of the 2010 Storm-Scale Ensemble Forecast system with different microphysics parameterizations. Including time in MODE-TD provides information on rainfall-system evolution, such as lifetime, timing of initiation and dissipation, and translation.

The simulations depicted the spatial distribution of time-domain precipitation objects across the United States quite well. However, all simulations overpredicted the number of objects, with the Thompson microphysics scheme overpredicting the most and the Morrison scheme the least. For the smallest smoothing radius and rainfall threshold used to define objects [8 km and 0.10 in. (1 in. = 2.54 cm), respectively], the most common object duration was 3 h in both the simulations and the observations. With an increased smoothing radius and rainfall threshold, the most common duration became shorter. The simulations depicted the diurnal cycle of object frequencies well but overpredicted object frequencies uniformly across all forecast hours. They also produced a spurious maximum in initiating objects at the beginning of the forecast and a corresponding spurious maximum in dissipating objects slightly later. Examination of average object velocities revealed a slow bias in the simulations that was most pronounced in the Thompson member. These findings should aid users and developers of convection-allowing models and motivate future work utilizing time-domain methods for verifying high-resolution forecasts.
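
A minimal sketch of time-domain object identification in the spirit of MODE-TD (illustrative only, not the MET implementation): hourly precipitation on a (time, y, x) grid is smoothed in space with a uniform filter standing in for MODE's circular convolution, thresholded, and labeled into space-time connected components whose extent along the time axis gives the object duration.

```python
# Sketch of space-time (MODE-TD-style) precipitation object identification.
import numpy as np
from scipy import ndimage

rain = np.random.default_rng(1).gamma(0.3, 2.0, size=(30, 100, 100))  # toy data, mm

RADIUS_PTS = 2            # ~8 km at 4-km grid spacing
THRESH_MM = 0.10 * 25.4   # 0.10 in. converted to mm

# Smooth in space only (size 1 along the time axis), then threshold.
smoothed = ndimage.uniform_filter(
    rain, size=(1, 2 * RADIUS_PTS + 1, 2 * RADIUS_PTS + 1))
mask = smoothed >= THRESH_MM

# Connected components in (time, y, x) are space-time objects; each
# object's span along axis 0 is its duration in hours.
labels, nobj = ndimage.label(mask)
durations = [sl[0].stop - sl[0].start for sl in ndimage.find_objects(labels)]
print(nobj, "objects; most common duration:",
      np.bincount(durations).argmax() if durations else None, "h")
```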

Full access
Burkely T. Gallo, Christina P. Kalb, John Halley Gotway, Henry H. Fisher, Brett Roberts, Israel L. Jirak, Adam J. Clark, Curtis Alexander, and Tara L. Jensen

Abstract

Evaluation of numerical weather prediction (NWP) is critical for both forecasters and researchers. Through such evaluation, forecasters can understand the strengths and weaknesses of NWP guidance, and researchers can work to improve NWP models. However, evaluating high-resolution convection-allowing models (CAMs) requires unique verification metrics tailored to high-resolution output, particularly when considering extreme events. Metrics used and fields evaluated often differ between verification studies, hindering the effort to broadly compare CAMs. The purpose of this article is to summarize the development and initial testing of a CAM-based scorecard, which is intended for broad use across research and operational communities and is similar to scorecards currently available within the enhanced Model Evaluation Tools package (METplus) for evaluating coarser models. Scorecards visualize many verification metrics and attributes simultaneously, providing a broad overview of model performance. A preliminary CAM scorecard was developed and tested during the 2018 Spring Forecasting Experiment using METplus, focusing on metrics and attributes relevant to severe convective forecasting. The scorecard compared attributes specific to convection-allowing scales, such as reflectivity and surrogate severe fields, using metrics like the critical success index (CSI) and fractions skill score (FSS). While this preliminary scorecard focuses on attributes relevant to severe convective storms, the scorecard framework allows for the inclusion of further metrics relevant to other applications. Development of a CAM scorecard allows for evidence-based decision-making regarding future operational CAM systems as the National Weather Service transitions to a Unified Forecast System as part of the Next-Generation Global Prediction System initiative.
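
A minimal sketch of the two scorecard metrics named above, using their standard definitions (not the METplus code itself): CSI is hits over the sum of hits, misses, and false alarms, and FSS compares neighborhood event fractions with a Brier-type skill score. The toy forecast and observation fields are hypothetical.

```python
# Sketch of critical success index (CSI) and fractions skill score (FSS).
import numpy as np
from scipy import ndimage

def csi(forecast, observed, thresh):
    # CSI = hits / (hits + misses + false alarms)
    f, o = forecast >= thresh, observed >= thresh
    hits = np.sum(f & o)
    return hits / (hits + np.sum(~f & o) + np.sum(f & ~o))

def fss(forecast, observed, thresh, window):
    # Event fractions within a square neighborhood, then a skill score
    # of the mean squared fraction difference against a no-skill reference.
    f = ndimage.uniform_filter((forecast >= thresh).astype(float), window)
    o = ndimage.uniform_filter((observed >= thresh).astype(float), window)
    mse = np.mean((f - o) ** 2)
    ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / ref

rng = np.random.default_rng(2)
fc, ob = rng.gamma(0.4, 2.0, (80, 80)), rng.gamma(0.4, 2.0, (80, 80))
print(f"CSI={csi(fc, ob, 1.0):.3f}  FSS={fss(fc, ob, 1.0, 9):.3f}")
```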

Free access
Julie L. Demuth, Rebecca E. Morss, Isidora Jankov, Trevor I. Alcott, Curtis R. Alexander, Daniel Nietfeld, Tara L. Jensen, David R. Novak, and Stanley G. Benjamin

Abstract

U.S. National Weather Service (NWS) forecasters assess and communicate hazardous weather risks, including the likelihood of a threat and its impacts. Convection-allowing model (CAM) ensembles offer potential to aid forecasting by depicting atmospheric outcomes, including associated uncertainties, at the refined space and time scales at which hazardous weather often occurs. Little is known, however, about what CAM ensemble information is needed to inform forecasting decisions. To address this knowledge gap, participant observations and semistructured interviews were conducted with NWS forecasters from national centers and local weather forecast offices. Data were collected about forecasters’ roles and their forecasting processes, uses of model guidance and verification information, interpretations of prototype CAM ensemble products, and needs for information from CAM ensembles. Results revealed forecasters’ needs for specific types of CAM ensemble guidance, including a product that combines deterministic and probabilistic output from the ensemble as well as a product that provides map-based guidance about timing of hazardous weather threats. Forecasters also expressed a general need for guidance to help them provide impact-based decision support services. Finally, forecasters conveyed needs for objective model verification information to augment their subjective assessments and for training about using CAM ensemble guidance for operational forecasting. The research was conducted as part of an interdisciplinary research effort that integrated elicitation of forecasters’ CAM ensemble needs with model development efforts, with the aim of illustrating a robust approach for creating information for forecasters that is truly useful and usable.

Restricted access
Sarah M. Griffin, Jason A. Otkin, Christopher M. Rozoff, Justin M. Sieglaff, Lee M. Cronce, Curtis R. Alexander, Tara L. Jensen, and Jamie K. Wolff

Abstract

In this study, object-based verification using the Method for Object-Based Diagnostic Evaluation (MODE) is used to assess the accuracy of cloud-cover forecasts from the experimental High-Resolution Rapid Refresh (HRRRx) model during the warm and cool seasons. This is accomplished by comparing cloud objects identified by MODE in observed and simulated Geostationary Operational Environmental Satellite (GOES) 10.7-μm brightness temperatures for August 2015 and January 2016. The analysis revealed that more cloud objects and a more pronounced diurnal cycle occurred during August, with larger object sizes observed in January because of the prevalence of synoptic-scale cloud features. With the exception of the 0-h analyses, the forecasts contained fewer cloud objects than were observed. HRRRx forecast accuracy is assessed using two methods: traditional verification, which compares the locations of grid points identified as observation and forecast objects, and the MODE composite score, an area-weighted calculation using the object-pair interest values computed by MODE. The 1-h forecasts for both August and January were the most accurate for their respective months. Inspection of the individual MODE attribute interest scores showed that, even though displacement errors between the forecast and observation objects increased between the 0-h analyses and 1-h forecasts, the forecasts were more accurate than the analyses because the sizes of the largest cloud objects more closely matched the observations. The 1-h forecasts from August were found to be more accurate than those during January because the spatial displacement between the cloud objects was smaller and the forecast objects better represented the size of the observation objects.
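
A minimal sketch of an area-weighted composite of object-pair interest values, in the spirit of the MODE composite score described above; the interest values and pair areas here are hypothetical inputs, not MET output.

```python
# Sketch of an area-weighted composite of MODE object-pair interest values.
import numpy as np

# One row per matched object pair: total interest in [0, 1] from MODE,
# and the pair's combined area (grid squares) used as the weight.
interest = np.array([0.92, 0.71, 0.55, 0.83])
area = np.array([1400, 300, 120, 650])

composite = np.sum(interest * area) / np.sum(area)  # area-weighted mean
print(f"composite score: {composite:.3f}")
```

Weighting by object area keeps a few small, poorly matched pairs from dominating the score while large, well-matched systems carry most of the signal.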

Full access
Edward I. Tollerud, Brian Etherton, Zoltan Toth, Isidora Jankov, Tara L. Jensen, Huiling Yuan, Linda S. Wharton, Paula T. McCaslin, Eugene Mirvis, Bill Kuo, Barbara G. Brown, Louisa Nance, Steven E. Koch, and F. Anthony Eckel
Full access
John S. Kain, Ming Xue, Michael C. Coniglio, Steven J. Weiss, Fanyou Kong, Tara L. Jensen, Barbara G. Brown, Jidong Gao, Keith Brewster, Kevin W. Thomas, Yunheng Wang, Craig S. Schwartz, and Jason J. Levit

Abstract

The impacts of assimilating radar data and other mesoscale observations in real-time, convection-allowing model forecasts were evaluated during the spring seasons of 2008 and 2009 as part of the Hazardous Weather Testbed Spring Experiment activities. In tests of a prototype continental U.S.-scale forecast system, focusing primarily on regions with active deep convection at the initial time, assimilation of these observations had a positive impact. Daily interrogation of output by teams of modelers, forecasters, and verification experts provided additional insights into the value-added characteristics of the unique assimilation forecasts. This evaluation revealed that the positive effects of the assimilation were greatest during the first 3–6 h of each forecast, appeared to be most pronounced with larger convective systems, and may have been related to a phase lag that sometimes developed when the convective-scale information was not assimilated. These preliminary results are currently being evaluated further using advanced objective verification techniques.

Full access
Adam J. Clark, Steven J. Weiss, John S. Kain, Israel L. Jirak, Michael Coniglio, Christopher J. Melick, Christopher Siewert, Ryan A. Sobash, Patrick T. Marsh, Andrew R. Dean, Ming Xue, Fanyou Kong, Kevin W. Thomas, Yunheng Wang, Keith Brewster, Jidong Gao, Xuguang Wang, Jun Du, David R. Novak, Faye E. Barthold, Michael J. Bodner, Jason J. Levit, C. Bruce Entwistle, Tara L. Jensen, and James Correia Jr.

The NOAA Hazardous Weather Testbed (HWT) conducts annual spring forecasting experiments organized by the Storm Prediction Center and National Severe Storms Laboratory to test and evaluate emerging scientific concepts and technologies for improved analysis and prediction of hazardous mesoscale weather. A primary goal is to accelerate the transfer of promising new scientific concepts and tools from research to operations through the use of intensive real-time experimental forecasting and evaluation activities conducted during the spring and early summer convective storm period. The 2010 NOAA/HWT Spring Forecasting Experiment (SE2010), conducted 17 May through 18 June, had a broad focus, with emphases on heavy rainfall and aviation weather, through collaboration with the Hydrometeorological Prediction Center (HPC) and the Aviation Weather Center (AWC), respectively. In addition, using the computing resources of the National Institute for Computational Sciences at the University of Tennessee, the Center for Analysis and Prediction of Storms at the University of Oklahoma provided unprecedented real-time conterminous United States (CONUS) forecasts from a multimodel Storm-Scale Ensemble Forecast (SSEF) system with 4-km grid spacing and 26 members and from a 1-km grid spacing configuration of the Weather Research and Forecasting model. Several other organizations provided additional experimental high-resolution model output. This article summarizes the activities, insights, and preliminary findings from SE2010, emphasizing the use of the SSEF system and the successful collaboration with the HPC and AWC.

A supplement to this article is available online (DOI:10.1175/BAMS-D-11-00040.2)

Full access