Search Results

Showing items 21–30 of 43 for Author or Editor: Adam J. Clark in Weather and Forecasting.
Adam J. Clark, Andrew MacKenzie, Amy McGovern, Valliappa Lakshmanan, and Rodger A. Brown

Abstract

Moisture boundaries, or drylines, are common over the southern U.S. high plains and are one of the most important airmass boundaries for convective initiation over this region. In favorable environments, drylines can initiate storms that produce strong and violent tornadoes, large hail, lightning, and heavy rainfall. Despite their importance, there are few studies documenting climatological dryline location and frequency, or performing systematic dryline forecast evaluation, which likely stems from difficulties in objectively identifying drylines over large datasets. Previous studies have employed tedious manual identification procedures. This study aims to streamline dryline identification by developing an automated, multiparameter algorithm, which applies image-processing and pattern recognition techniques to various meteorological fields and their gradients to identify drylines. The algorithm is applied to five years of high-resolution 24-h forecasts from Weather Research and Forecasting (WRF) Model simulations valid April–June 2007–11. Manually identified dryline positions, which were available from a previous study using the same dataset, are used as truth to evaluate the algorithm performance. Generally, the algorithm performed very well. High probability of detection (POD) scores indicated that the majority of drylines were identified by the method. However, a relatively high false alarm ratio (FAR) was also found, indicating that a large number of nondryline features were also identified. Preliminary use of random forests (a machine learning technique) significantly decreased the FAR, while minimally impacting the POD. The algorithm lays the groundwork for applications including model evaluation and operational forecasting, and should enable efficient analysis of drylines from very large datasets.
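
For reference, the POD and FAR cited above follow the standard contingency-table definitions, where a "hit" is an algorithm-identified dryline that matches a manually identified one. A minimal sketch in Python (the counts below are illustrative placeholders, not the study's verification code or numbers):

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio from a 2x2 contingency table.

    hits: algorithm detections that match a manually identified dryline
    misses: manually identified drylines the algorithm failed to detect
    false_alarms: algorithm detections with no matching dryline
    """
    pod = hits / (hits + misses) if (hits + misses) else float("nan")
    far = false_alarms / (hits + false_alarms) if (hits + false_alarms) else float("nan")
    return pod, far

# Illustrative counts only: most drylines detected (high POD), but many
# nondryline features also flagged (elevated FAR)
print(pod_far(hits=250, misses=30, false_alarms=180))
```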

Full access
Adam J. Clark, Randy G. Bullock, Tara L. Jensen, Ming Xue, and Fanyou Kong

Abstract

Meaningful verification and evaluation of convection-allowing models requires approaches that do not rely on point-to-point matches of forecast and observed fields. In this study, one such approach—a beta version of the Method for Object-Based Diagnostic Evaluation (MODE) that incorporates the time dimension [known as MODE time-domain (MODE-TD)]—was applied to 30-h precipitation forecasts from four 4-km grid-spacing members of the 2010 Storm-Scale Ensemble Forecast system with different microphysics parameterizations. Including time in MODE-TD provides information on rainfall system evolution like lifetime, timing of initiation and dissipation, and translation.

The simulations depicted the spatial distribution of time-domain precipitation objects across the United States quite well. However, all simulations overpredicted the number of objects, with the Thompson microphysics scheme overpredicting the most and the Morrison method the least. For the smallest smoothing radius and rainfall threshold used to define objects [8 km and 0.10 in. (1 in. = 2.54 cm), respectively], the most common object duration was 3 h in both models and observations. With an increased smoothing radius and rainfall threshold, the most common duration became shorter. The simulations depicted the diurnal cycle of object frequencies well, but overpredicted object frequencies uniformly across all forecast hours. The simulations had spurious maxima in initiating objects at the beginning of the forecast and a corresponding spurious maximum in dissipating objects slightly later. Examining average object velocities, a slow bias was found in the simulations, which was most pronounced in the Thompson member. These findings should aid users and developers of convection-allowing models and motivate future work utilizing time-domain methods for verifying high-resolution forecasts.
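
Purely to illustrate what a time-domain precipitation object is, the sketch below labels contiguous exceedance regions of a smoothed (time, y, x) rainfall array and reports each object's lifetime. The Gaussian smoother, thresholds, and random test data are stand-ins, not MODE-TD's convolution filter or the study's configuration.

```python
import numpy as np
from scipy import ndimage

def time_domain_objects(precip, radius_km=8.0, dx_km=4.0, thresh_in=0.10):
    """Label space-time rainfall objects in an hourly (time, y, x) array (inches).

    The field is smoothed in space only, thresholded, and contiguous exceedance
    regions are labeled in 3D, so an object can span several hours (its lifetime).
    """
    sigma = radius_km / dx_km
    smoothed = ndimage.gaussian_filter(precip, sigma=(0, sigma, sigma))
    labels, nobj = ndimage.label(smoothed >= thresh_in)
    durations = [np.unique(np.nonzero(labels == k)[0]).size for k in range(1, nobj + 1)]
    return labels, durations

# Random data standing in for 30 h of 4-km hourly rainfall
rng = np.random.default_rng(0)
_, durations = time_domain_objects(rng.gamma(0.3, 0.2, size=(30, 120, 120)))
print(len(durations), "objects identified")
```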

Full access
Adam J. Clark, Michael C. Coniglio, Brice E. Coffer, Greg Thompson, Ming Xue, and Fanyou Kong

Abstract

Recent NOAA Hazardous Weather Testbed Spring Forecasting Experiments have emphasized the sensitivity of forecast sensible weather fields to how boundary layer processes are represented in the Weather Research and Forecasting (WRF) Model. Thus, since 2010, the Center for Analysis and Prediction of Storms has configured at least three members of their WRF-based Storm-Scale Ensemble Forecast (SSEF) system specifically for examination of sensitivities to parameterizations of turbulent mixing, including the Mellor–Yamada–Janjić (MYJ); quasi-normal scale elimination (QNSE); Asymmetrical Convective Model, version 2 (ACM2); Yonsei University (YSU); and Mellor–Yamada–Nakanishi–Niino (MYNN) schemes (hereafter PBL members). In postexperiment analyses, significant differences in forecast boundary layer structure and evolution have been observed, and for preconvective environments MYNN was found to have a superior depiction of temperature and moisture profiles. This study evaluates the 24-h forecast dryline positions in the SSEF system PBL members during the period April–June 2010–12 and documents sensitivities of the vertical distribution of thermodynamic and kinematic variables in near-dryline environments. Main results include the following. Despite having superior temperature and moisture profiles, as indicated by a previous study, MYNN was one of the worst-performing PBL members, exhibiting large eastward errors in forecast dryline position. During April–June 2010–11, a dry bias in the North American Mesoscale Forecast System (NAM) initial conditions largely contributed to eastward dryline errors in all PBL members. An upgrade to the NAM and assimilation system in October 2011 apparently fixed the dry bias, reducing eastward errors. Large sensitivities of CAPE and low-level shear to the PBL schemes were found, which were largest between 1.0° and 3.0° to the east of drylines. Finally, modifications to YSU to decrease vertical mixing and mitigate its warm and dry bias greatly reduced eastward dryline errors.
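
The dryline-position errors above are differences between forecast and manually analyzed dryline longitudes. As a crude illustration of how a dryline longitude can be pulled from a 2-m dewpoint field (a simple one-dimensional gradient criterion, not the identification procedure used in the study), one could do:

```python
import numpy as np

def dryline_longitude(dewpoint_2m, lons, min_gradient=3.0):
    """Crude dryline locator, for illustration only.

    dewpoint_2m: (ny, nx) array of 2-m dewpoint (deg C), x increasing eastward.
    lons: (nx,) array of grid longitudes.
    For each row, returns the longitude of the strongest west-to-east moisture
    increase, or NaN if no gradient exceeds min_gradient (deg C per grid interval).
    """
    grad = np.diff(dewpoint_2m, axis=1)         # Td(east) - Td(west); large and positive across a dryline
    imax = np.argmax(grad, axis=1)
    strong = grad[np.arange(grad.shape[0]), imax] >= min_gradient
    pos = 0.5 * (lons[imax] + lons[imax + 1])   # midpoint of the bracketing grid points
    return np.where(strong, pos, np.nan)
```

A forecast-minus-analyzed difference of such longitudes, averaged over rows and cases, would then yield an eastward (positive) or westward (negative) dryline position error.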

Full access
Aaron Johnson, Xuguang Wang, Yongming Wang, Anthony Reinhart, Adam J. Clark, and Israel L. Jirak

Abstract

An object-based probabilistic (OBPROB) forecasting framework is developed and applied, together with a more traditional neighborhood-based framework, to convection-permitting ensemble forecasts produced by the University of Oklahoma (OU) Multiscale data Assimilation and Predictability (MAP) laboratory during the 2017 and 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiments. Case studies from 2017 are used for parameter tuning and demonstration of methodology, while the 2018 ensemble forecasts are systematically verified. The 2017 case study demonstrates that the OBPROB forecast product can provide a unique tool to operational forecasters that includes convective-scale details such as storm mode and morphology, which are typically lost in neighborhood-based methods, while also providing quantitative ensemble probabilistic guidance about those details in a more easily interpretable format than the more commonly used paintball plots. The case study also demonstrates that objective verification metrics reveal different relative performance of the ensemble at different forecast lead times depending on the verification framework (i.e., object versus neighborhood) because of the different features emphasized by object- and neighborhood-based evaluations. Both frameworks are then used for a systematic evaluation of 26 forecasts from the spring of 2018. The OBPROB forecast verification as configured in this study shows less sensitivity to forecast lead time than the neighborhood forecasts. Both frameworks indicate a need for probabilistic calibration to improve ensemble reliability. However, lower ensemble discrimination for OBPROB than the neighborhood-based forecasts is also noted.
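
The neighborhood-based probabilities that OBPROB is contrasted with are commonly built by counting, at each grid point, the fraction of members with an event anywhere within a search radius. A generic sketch of that construction (field, threshold, and radius are placeholders; this is not the MAP laboratory's implementation):

```python
import numpy as np
from scipy import ndimage

def neighborhood_probability(member_fields, threshold, radius_gridpts):
    """Generic neighborhood ensemble probability.

    member_fields: (n_members, ny, nx) array of a forecast field (e.g., reflectivity).
    A member counts at a point if it exceeds `threshold` anywhere within the
    square neighborhood of half-width `radius_gridpts`.
    """
    footprint = np.ones((2 * radius_gridpts + 1,) * 2, dtype=bool)
    hits = np.zeros(member_fields.shape[1:], dtype=float)
    for field in member_fields:
        # Dilate each member's exceedance mask so nearby events count at this point
        hits += ndimage.binary_dilation(field >= threshold, structure=footprint)
    return hits / member_fields.shape[0]
```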

Free access
Tsing-Chang Chen, Shih-Yu Wang, Ming-Cheng Yen, and Adam J. Clark

Abstract

The life cycle of the Southeast Asian–western North Pacific monsoon circulation is established by the northward migrations of the monsoon trough and the western Pacific subtropical anticyclone, and is reflected by the intraseasonal variations of monsoon westerlies and trade easterlies in the form of an east–west seesaw oscillation. In this paper, an effort is made to disclose the influence of this monsoon circulation on tropical cyclone tracks during its different phases using composite charts of large-scale circulation for certain types of tracks.

A majority of straight-moving (recurving) tropical cyclones appear during weak (strong) monsoon westerlies and strong (weak) trade easterlies. The monsoon conditions associated with straight-moving tropical cyclones are linked to the intensified subtropical anticyclone, while those associated with recurving tropical cyclones are coupled with the deepened monsoon trough. The relationship between genesis locations and track characteristics evolves from the intraseasonal variation of the monsoon circulation reflected by the east–west oscillation of monsoon westerlies and trade easterlies. Composite circulation differences between the flows associated with the two types of tropical cyclone tracks show a vertically uniform short wave train along the North Pacific rim, as portrayed by the Pacific–Japan oscillation. During the extreme phases of the monsoon life cycle, the anomalous circulation pattern east of Taiwan resembles this anomalous short wave train.

A vorticity budget analysis of the strong monsoon condition reveals a vorticity tendency dipole with a positive zone to the north and a negative zone to the south of the deepened monsoon trough. This meridional juxtaposition of vorticity tendency propagates the monsoon trough northward. The interaction of a tropical cyclone with the monsoon trough intensifies the north–south juxtaposition of the vorticity tendency and deflects the tropical cyclone northward. In contrast, during weak monsoon conditions, the interaction between a tropical cyclone and the subtropical high results in a northwestward motion steered by the intensified trade easterlies. The accurate prediction of the monsoon trough and the subtropical anticyclone variations coupled with the monsoon life cycle may help to improve the forecasting of tropical cyclone tracks.
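
The abstract does not spell out the budget equation; diagnoses of this kind are usually based on the standard relative-vorticity equation in pressure coordinates, with horizontal advection, vertical advection, stretching, and tilting terms plus a residual R for friction and subgrid effects:

$$\frac{\partial \zeta}{\partial t} = -\mathbf{V}\cdot\nabla_{p}(\zeta + f) - \omega\frac{\partial \zeta}{\partial p} - (\zeta + f)\,\nabla_{p}\cdot\mathbf{V} + \left(\frac{\partial u}{\partial p}\frac{\partial \omega}{\partial y} - \frac{\partial v}{\partial p}\frac{\partial \omega}{\partial x}\right) + R$$

The left-hand side is the vorticity tendency whose north-south dipole about the trough axis, described above, propagates the trough northward.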

Full access
Adam J. Clark, John S. Kain, Patrick T. Marsh, James Correia Jr., Ming Xue, and Fanyou Kong

Abstract

A three-dimensional (in space and time) object identification algorithm is applied to high-resolution forecasts of hourly maximum updraft helicity (UH)—a diagnostic that identifies simulated rotating storms—with the goal of diagnosing the relationship between forecast UH objects and observed tornado pathlengths. UH objects are contiguous swaths of UH exceeding a specified threshold. Including time allows tracks to span multiple hours and entire life cycles of simulated rotating storms. The object algorithm is applied to 3 yr of 36-h forecasts initialized daily from a 4-km grid-spacing version of the Weather Research and Forecasting Model (WRF) run in real time at the National Severe Storms Laboratory (NSSL), and forecasts from the Storm Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms for the 2010 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. Methods for visualizing UH object attributes are presented, and the relationship between pathlengths of UH objects and tornadoes for corresponding 18- or 24-h periods is examined. For deterministic NSSL-WRF UH forecasts, the relationship of UH pathlengths to tornadoes was much stronger during spring (March–May) than in summer (June–August). Filtering UH track segments produced by high-based and/or elevated storms improved the UH–tornado pathlength correlations. The best ensemble results were obtained after filtering high-based and/or elevated UH track segments for the 20 cases in April–May 2010, during which correlation coefficients were as high as 0.91. The results indicate that forecast UH pathlengths during spring could be a very skillful predictor for the severity of tornado outbreaks as measured by total pathlength.
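
The UH-tornado relationship summarized above reduces to correlating daily totals of forecast UH object pathlength with observed tornado pathlength over matching periods. An illustrative calculation with made-up daily totals (not the study's data):

```python
import numpy as np

# Hypothetical daily totals (km): summed forecast UH object pathlengths and
# observed tornado pathlengths for corresponding 24-h periods
uh_pathlength = np.array([120.0, 40.0, 310.0, 0.0, 75.0, 210.0])
tornado_pathlength = np.array([95.0, 10.0, 280.0, 0.0, 30.0, 170.0])

# Pearson correlation between forecast and observed pathlength totals
r = np.corrcoef(uh_pathlength, tornado_pathlength)[0, 1]
print(f"UH-tornado pathlength correlation: r = {r:.2f}")
```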

Full access
Barry H. Lynn, Yoav Yair, Colin Price, Guy Kelman, and Adam J. Clark

Abstract

A new prognostic, spatially and temporally dependent variable is introduced to the Weather Research and Forecasting Model (WRF). This variable is called the potential electrical energy (Ep). It was used to predict the dynamic contribution of the grid-scale-resolved microphysical and vertical velocity fields to the production of cloud-to-ground and intracloud lightning in convection-allowing forecasts. The source of Ep is assumed to be the noninductive charge separation process involving collisions of graupel and ice particles in the presence of supercooled liquid water. The Ep dissipates when it exceeds preassigned threshold values and lightning is generated. Four case studies are presented and analyzed. On the 4-km simulation grid, a single cloud-to-ground lightning event was forecast with about equal values of probability of detection (POD) and false alarm ratio (FAR). However, when lightning was integrated onto 12-km and then 36-km grid overlays, there was a large improvement in the forecast skill, and as many as 10 cloud-to-ground lightning events were well forecast on the 36-km grid. The impact of initial conditions on forecast accuracy is briefly discussed, including an evaluation of the scheme in wintertime, when lightning activity is weaker. The dynamic algorithm forecasts are also contrasted with statistical lightning forecasts and differences are noted. The scheme is being used operationally with the Rapid Refresh (13 km) data; the skill scores in these operational runs were very good in clearly defined convective situations.
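
The abstract describes Ep as accumulating from noninductive charging and dissipating in flashes once preassigned thresholds are exceeded. A schematic of that accumulate-and-discharge logic (the source term, single threshold, and variable names are illustrative assumptions, not the published scheme):

```python
def step_ep(ep, charging_rate, dt, flash_threshold):
    """Schematic accumulate-and-discharge update for a potential electrical energy variable.

    charging_rate is assumed proportional to the noninductive charging rate
    (graupel-ice collisions with supercooled liquid water present); the actual
    formulation and threshold values are not given in the abstract.
    """
    ep += charging_rate * dt          # charge separation builds Ep
    flashes = 0
    while ep >= flash_threshold:      # Ep dissipates in discrete flashes once the threshold is exceeded
        ep -= flash_threshold
        flashes += 1
    return ep, flashes

# One illustrative time step: Ep rises past the threshold and one flash is diagnosed
print(step_ep(ep=0.8, charging_rate=0.05, dt=10.0, flash_threshold=1.0))
```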

Full access
Eric D. Loken, Adam J. Clark, Amy McGovern, Montgomery Flora, and Kent Knopfmeier

Abstract

Most ensembles suffer from underdispersion and systematic biases. One way to correct for these shortcomings is via machine learning (ML), which is advantageous due to its ability to identify and correct nonlinear biases. This study uses a single random forest (RF) to calibrate next-day (i.e., 12–36-h lead time) probabilistic precipitation forecasts over the contiguous United States (CONUS) from the Short-Range Ensemble Forecast System (SREF) with 16-km grid spacing and the High-Resolution Ensemble Forecast version 2 (HREFv2) with 3-km grid spacing. Random forest forecast probabilities (RFFPs) from each ensemble are compared against raw ensemble probabilities over 496 days from April 2017 to November 2018 using 16-fold cross validation. RFFPs are also compared against spatially smoothed ensemble probabilities since the raw SREF and HREFv2 probabilities are overconfident and undersample the true forecast probability density function. Probabilistic precipitation forecasts are evaluated at four precipitation thresholds ranging from 0.1 to 3 in. In general, RFFPs are found to have better forecast reliability and resolution, fewer spatial biases, and significantly greater Brier skill scores and areas under the relative operating characteristic curve compared to corresponding raw and spatially smoothed ensemble probabilities. The RFFPs perform best at the lower thresholds, which have a greater observed climatological frequency. Additionally, the RF-based postprocessing technique benefits the SREF more than the HREFv2, likely because the raw SREF forecasts contain more systematic biases than those from the raw HREFv2. It is concluded that the RFFPs provide a convenient, skillful summary of calibrated ensemble output and are computationally feasible to implement in real time. Advantages and disadvantages of ML-based postprocessing techniques are discussed.
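
A minimal sketch of the random-forest calibration and Brier-skill-score evaluation described above (the predictors, forest settings, synthetic data, and single train/test split are placeholders; the study trains on ensemble-derived predictors with 16-fold cross validation over 496 days):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def brier_skill_score(prob, obs, climo):
    """Brier skill score relative to a climatological reference probability."""
    bs = np.mean((prob - obs) ** 2)
    bs_ref = np.mean((climo - obs) ** 2)
    return 1.0 - bs / bs_ref

# X: one row per grid point/day of ensemble-derived predictors (e.g., ensemble mean
# and spread of precipitation); y: 1 where observed precipitation exceeded the threshold.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=5000) > 1.0).astype(int)

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X[:4000], y[:4000])                      # train on one fold
rffp = rf.predict_proba(X[4000:])[:, 1]         # calibrated forecast probabilities
print("BSS:", brier_skill_score(rffp, y[4000:], climo=y[:4000].mean()))
```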

Full access
Rebecca D. Adams-Selin, Adam J. Clark, Christopher J. Melick, Scott R. Dembek, Israel L. Jirak, and Conrad L. Ziegler

Abstract

Four different versions of the HAILCAST hail model have been tested as part of the 2014–16 NOAA Hazardous Weather Testbed (HWT) Spring Forecasting Experiments. HAILCAST was run as part of the National Severe Storms Laboratory (NSSL) WRF Ensemble during 2014–16 and the Community Leveraged Unified Ensemble (CLUE) in 2016. Objective verification using the Multi-Radar Multi-Sensor maximum expected size of hail (MRMS MESH) product was conducted using both object-based and neighborhood grid-based verification. Subjective verification and feedback were provided by HWT participants. Hourly maximum storm surrogate fields at a variety of thresholds and Storm Prediction Center (SPC) convective outlooks were also evaluated for comparison. HAILCAST was found to improve with each version due to feedback from the 2014–16 HWTs. The 2016 version of HAILCAST matched or exceeded the skill of the tested storm surrogates across a variety of thresholds. The post-2016 version of HAILCAST was found to improve 50-mm hail forecasts through object-based verification, but 25-mm hail forecasting ability declined as measured through neighborhood grid-based verification. The skill of the storm surrogate fields varied widely as the threshold values used to determine hail size were varied. HAILCAST was found not to require such tuning, as it produced consistent results even when used across different model configurations and horizontal grid spacings. Additionally, different storm surrogate fields performed at varying levels of skill when forecasting 25- versus 50-mm hail, hinting at the different convective modes typically associated with small versus large sizes of hail. HAILCAST was able to match results relatively consistently with the best-performing storm surrogate field across multiple hail size thresholds.
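
Neighborhood grid-based verification of the kind used for the 25- and 50-mm thresholds above is often implemented by dilating the forecast and observed exceedance masks by the neighborhood radius before forming a contingency table. An illustrative version against MRMS MESH (the radius, threshold handling, and score are assumptions, not the HWT verification code):

```python
import numpy as np
from scipy import ndimage

def neighborhood_hail_scores(forecast_mm, mesh_mm, size_thresh_mm, radius_gridpts):
    """Neighborhood grid-based verification of hail-size forecasts against MRMS MESH.

    Both fields are reduced to exceedance masks, dilated by the neighborhood radius,
    and compared point by point to give hits, misses, false alarms, and CSI.
    """
    footprint = np.ones((2 * radius_gridpts + 1,) * 2, dtype=bool)
    fcst = ndimage.binary_dilation(forecast_mm >= size_thresh_mm, structure=footprint)
    obs = ndimage.binary_dilation(mesh_mm >= size_thresh_mm, structure=footprint)
    hits = int(np.sum(fcst & obs))
    misses = int(np.sum(~fcst & obs))
    false_alarms = int(np.sum(fcst & ~obs))
    total = hits + misses + false_alarms
    csi = hits / total if total else float("nan")
    return hits, misses, false_alarms, csi
```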

Full access