Search Results

Showing 1–10 of 25 items for Author or Editor: Robert A. Clark (All content)
Robert A. Clark

Abstract

Data collected during the summer of 1958 and the late spring and summer of 1959, by use of an AN/CPS-9 radar located in the Department of Oceanography and Meteorology on the campus of the A. & M. College of Texas, were analyzed to study convective precipitation processes. The formation and growth of convective echoes were studied in relation to the mechanisms of precipitation formation. Consideration also was given to day-to-day variations in convective activity.

Pronounced variations were noted in echo growth and subsidence, level of first-echo formation, and echo movements. It is shown that maximum echo tops are largely dependent on the depth of the moist layer. It is also revealed that precipitation in the Central Texas region is frequently initiated by a process involving only liquid water droplets.

Full access
Douglas R. Greene and Robert A. Clark

Abstract

Through the use of digital radar data measured at successive elevation angles in a storm system, we developed a technique that adds a new dimension to mesoscale analysis. This technique, mapped vertically integrated liquid-water content (VIL), represents the three-dimensional characteristics of a storm system in a two-dimensional display. This analysis technique appears to hold real promise for both severe storm and hydrologic applications.
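
The abstract does not spell out the integration itself; as a minimal sketch (not the authors' code), the reflectivity-to-liquid-water relation commonly associated with VIL, M = 3.44 × 10^-6 Z^(4/7) in kg m^-3 with Z in mm^6 m^-3, can be summed layer by layer over the depth of an echo column. The function name and the sample column below are illustrative assumptions.

```python
import numpy as np

def vil_from_column(z_linear, heights_m):
    """Vertically integrated liquid (kg m^-2) for one radar column.

    z_linear  : reflectivity factor Z in mm^6 m^-3 at each elevation sample
    heights_m : corresponding beam heights in meters (ascending)

    Uses the reflectivity-to-liquid-water relation commonly associated with
    VIL, M = 3.44e-6 * Z**(4/7) (kg m^-3), integrated layer by layer with
    the layer-mean reflectivity.
    """
    z = np.asarray(z_linear, dtype=float)
    h = np.asarray(heights_m, dtype=float)
    z_mean = 0.5 * (z[:-1] + z[1:])   # layer-mean reflectivity
    dh = np.diff(h)                   # layer depths (m)
    return np.sum(3.44e-6 * z_mean ** (4.0 / 7.0) * dh)

# Illustrative column: 40-55 dBZ between 2 and 8 km
dbz = np.array([40.0, 50.0, 55.0, 45.0])
heights = np.array([2000.0, 4000.0, 6000.0, 8000.0])
print(f"VIL ~ {vil_from_column(10 ** (dbz / 10.0), heights):.1f} kg m^-2")
```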

Full access
Robert A. Clark, Jonathan J. Gourley, Zachary L. Flamig, Yang Hong, and Edward Clark

Abstract

This study quantifies the skill of the National Weather Service’s (NWS) flash flood guidance (FFG) product. FFG is generated by River Forecast Centers (RFCs) across the United States, and local NWS Weather Forecast Offices compare estimated and forecast rainfall to FFG to monitor and assess flash flooding potential. A national flash flood observation database, consisting of reports in the NWS publication Storm Data and U.S. Geological Survey (USGS) stream gauge measurements, is used to determine the skill of FFG over a 4-yr period. FFG skill is calculated at several different precipitation-to-FFG ratios for both observation datasets. Although a ratio of 1.0 nominally indicates a potential flash flooding event, this study finds that FFG can be more skillful when ratios other than 1.0 are considered. When the entire continental United States is considered, the highest observed critical success index (CSI) with 1-h FFG is 0.20 for the USGS dataset; this value should be considered a benchmark for future research that seeks to improve, modify, or replace the current FFG system. Regional benchmarks of FFG skill are also determined on an RFC-by-RFC basis. When evaluated against Storm Data reports, the regional skill of FFG ranges from 0.00 to 0.19. When evaluated against USGS stream gauge measurements, the regional skill of FFG ranges from 0.00 to 0.44.
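
For reference, the CSI used here is the standard ratio hits/(hits + misses + false alarms). The sketch below illustrates, with hypothetical array names and made-up sample values, how FFG exceedance at several precipitation-to-FFG ratios might be scored against flash flood observations; it is not the study's evaluation code.

```python
import numpy as np

def csi_for_ratio(precip, ffg, flash_flood_observed, ratio=1.0):
    """Critical success index of FFG exceedance as a flash flood predictor.

    precip, ffg          : accumulated rainfall and FFG values (same units, e.g. mm)
    flash_flood_observed : boolean array, True where a flash flood was reported
    ratio                : precipitation-to-FFG ratio treated as a "yes" forecast
    """
    predicted = np.asarray(precip) >= ratio * np.asarray(ffg)
    observed = np.asarray(flash_flood_observed, dtype=bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan

# Sweep ratios to find the most skillful threshold (illustrative values only)
precip = np.array([10.0, 45.0, 30.0, 5.0, 60.0, 20.0])
ffg = np.array([25.0, 40.0, 35.0, 30.0, 50.0, 25.0])
obs = np.array([False, True, True, False, True, False])
for r in (0.5, 0.75, 1.0, 1.25):
    print(r, round(csi_for_ratio(precip, ffg, obs, ratio=r), 2))
```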

Full access
Humphrey W. Lean, Nigel M. Roberts, Peter A. Clark, and Cyril Morcrette

Abstract

Many factors, both mesoscale and larger scale, must often come together for a particular convective initiation to take place. The authors describe a modeling study of a case from the Convective Storms Initiation Project (CSIP) in which a single thunderstorm formed behind a front in the southern United Kingdom. The key features of the case were a tongue of low-level high-θw air associated with a forward-sloping split front (overrunning lower-θw air above), a convergence line, and a “lid” of high static stability air, beneath which the shower was initially constrained but through which it later broke. In this paper, the authors analyze the initiation of the storm, which can be traced back to a region of high ground (Dartmoor) at around 0700 UTC, in more detail using model sensitivity studies with the Met Office Unified Model (MetUM). It is established that the convergence line was initially caused by roughness effects but acquired a significant thermal component later. Dartmoor played a key role in the development of the thunderstorm. A period of asymmetric flow over the high ground, with stronger low-level descent in the lee, led to a hole in a layer of low-level clouds downstream. Surface solar heating through this hole, in combination with the tongue of low-level high-θw air associated with the front, caused the shower to initiate with sufficient lifting to enable it later to break through the lid.

Full access
Humphrey W. Lean, Peter A. Clark, Mark Dixon, Nigel M. Roberts, Anna Fitch, Richard Forbes, and Carol Halliwell

Abstract

With many operational centers moving toward order 1-km-gridlength models for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields, comparisons of rainfall amounts, initiation, and cell statistics, and application of a scale-selective verification technique. It is shown that the 4- and 1-km-gridlength models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km model suffers from overly large convective cells and delayed initiation because the grid length is too long to reproduce the convection correctly. These problems are less evident in the 1-km model, although it produces too many small cells in some situations. Both the 4- and 1-km models suffer from poor representation at the start of the forecast, during the period when the high-resolution detail is spinning up from the lower-resolution (12 km) starting data. A scale-selective precipitation verification technique indicates that at later forecast times (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
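
The scale-selective verification referred to is of the neighborhood type; the fractions skill score (FSS) is a common choice and is used here only as a stand-in illustration. The sketch below, which assumes SciPy's uniform_filter and synthetic rain fields (neither taken from the paper), shows how an FSS could be computed at one threshold and one neighborhood scale.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, neighborhood):
    """Fractions skill score of a forecast rain field at one spatial scale.

    forecast, observed : 2D rainfall arrays on the same grid
    threshold          : rain amount defining an "event" (e.g. 1 mm h^-1)
    neighborhood       : square neighborhood width in grid points
    """
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=neighborhood)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=neighborhood)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Synthetic example: score a noisy stand-in forecast against a "radar" field
rng = np.random.default_rng(0)
radar = rng.gamma(0.5, 2.0, size=(200, 200))
model = radar + rng.normal(0.0, 1.0, size=radar.shape)   # stand-in forecast
print(round(fractions_skill_score(model, radar, threshold=1.0, neighborhood=25), 2))
```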

Full access
D. L. A. Flack, P. A. Clark, C. E. Halliwell, N. M. Roberts, S. L. Gray, R. S. Plant, and H. W. Lean

Abstract

Convection-permitting forecasts have improved the forecasts of flooding from intense rainfall. However, probabilistic forecasts, generally based upon ensemble methods, are essential to quantify forecast uncertainty. This leads to a need to understand how different aspects of the model system affect forecast behavior. We compare the uncertainty due to initial and boundary condition (IBC) perturbations and boundary layer turbulence using a superensemble (SE) created to determine the influence of 12 IBC perturbations versus 12 stochastic boundary layer (SBL) perturbations constructed using a physically based SBL scheme. We consider two mesoscale extreme precipitation events. For each, we run a 144-member SE. The SEs are analyzed to consider the growth of differences between the simulations, and the spatial structure and scales of those differences. The SBL perturbations rapidly spin up, typically within 12 h of precipitation commencing. The SBL perturbations eventually produce spread that is not statistically different from the spread produced by the IBC perturbations, though in one case there is initially increased spread from the IBC perturbations. Spatially, the growth from IBC occurs on larger scales than that produced by the SBL perturbations (typically by an order of magnitude). However, analysis across multiple scales shows that the SBL scheme produces a random relocation of precipitation up to the scale at which the ensemble members agree with each other. This implies that statistical postprocessing can be used instead of running larger ensembles. Use of these statistical postprocessing techniques could lead to more reliable probabilistic forecasts of convective events and their associated hazards.
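
As a minimal illustration of the kind of spread comparison described, the sketch below computes a domain-averaged ensemble standard deviation for two 12-member subensembles; the synthetic fields and the simple spread measure are assumptions for illustration, not the diagnostics used in the paper.

```python
import numpy as np

def domain_mean_spread(members):
    """Domain-averaged ensemble standard deviation of a precipitation field.

    members : array of shape (n_members, ny, nx), one accumulation per member
    """
    return float(np.mean(np.std(members, axis=0)))

# Compare spread from the two perturbation sources at one lead time (synthetic data)
rng = np.random.default_rng(1)
base = rng.gamma(0.5, 2.0, size=(100, 100))
ibc_members = np.maximum(base + rng.normal(0.0, 1.0, size=(12, 100, 100)), 0.0)  # stand-in IBC-perturbed runs
sbl_members = np.maximum(base + rng.normal(0.0, 0.8, size=(12, 100, 100)), 0.0)  # stand-in SBL-perturbed runs
print("IBC spread:", round(domain_mean_spread(ibc_members), 2))
print("SBL spread:", round(domain_mean_spread(sbl_members), 2))
```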

Open access
Brian A. Klimowski, Robert Becker, Eric A. Betterton, Roelof Bruintjes, Terry L. Clark, William D. Hall, Brad W. Orr, Robert A. Kropfli, Paivi Piironen, Roger F. Reinking, Dennis Sundie, and Taneil Uttal

The 1995 Arizona Program was a field experiment aimed at advancing the understanding of winter storm development in a mountainous region of central Arizona. From 15 January through 15 March 1995, a wide variety of instrumentation was operated in and around the Verde Valley southwest of Flagstaff, Arizona. These instruments included two Doppler dual-polarization radars, an instrumented airplane, a lidar, microwave and infrared radiometers, an acoustic sounder, and other surface-based facilities. Twenty-nine scientists from eight institutions took part in the program. Of special interest was the interaction of topographically induced, storm-embedded gravity waves with ambient upslope flow. It is hypothesized that these waves serve to augment the upslope-forced precipitation that falls on the mountain ridges. A major thrust of the program was to compare the observations of these winter storms to those predicted with the Clark-NCAR 3D, nonhydrostatic numerical model.

Full access
Adam J. Clark, Israel L. Jirak, Burkely T. Gallo, Brett Roberts, Kent H. Knopfmeier, Robert A. Clark, Jake Vancil, Andrew R. Dean, Kimberly A. Hoogewind, Pamela L. Heinselman, Nathan A. Dahl, Makenzie J. Krocak, Jessica J. Choate, Katie A. Wilson, Patrick S. Skinner, Thomas A. Jones, Yunheng Wang, Gerald J. Creager, Larissa J. Reames, Louis J. Wicker, Scott R. Dembek, and Steven J. Weiss
Full access
Steven M. Martinaitis, Jonathan J. Gourley, Zachary L. Flamig, Elizabeth M. Argyle, Robert A. Clark III, Ami Arthur, Brandon R. Smith, Jessica M. Erlingis, Sarah Perfater, and Benjamin Albright

Abstract

There are numerous challenges with the forecasting and detection of flash floods, one of the deadliest weather phenomena in the United States. Statistical metrics of flash flood warnings over recent years depict generally stagnant warning performance, while regional flash flood guidance utilized in warning operations has been shown to have low skill scores. The Hydrometeorological Testbed—Hydrology (HMT-Hydro) experiment was created to allow operational forecasters to assess emerging products and techniques designed to improve the prediction and warning of flash flooding. Scientific goals of the HMT-Hydro experiment included the evaluation of gridded products from the Multi-Radar Multi-Sensor (MRMS) and Flooded Locations and Simulated Hydrographs (FLASH) product suites, including the experimental Coupled Routing and Excess Storage (CREST) model; the application of user-defined probabilistic forecasts in experimental flash flood watches and warnings; and the utility of the Hazard Services software interface with flash flood recommenders in real-time experimental warning operations. The HMT-Hydro experiment ran in collaboration with the Flash Flood and Intense Rainfall (FFaIR) experiment at the Weather Prediction Center to simulate the real-time workflow between a national center and a local forecast office, as well as to facilitate discussions on the challenges of short-term flash flood forecasting. Results from the HMT-Hydro experiment highlighted the utility of MRMS and FLASH products in identifying the spatial coverage and magnitude of flash flooding and provided an evaluation of the perception and reliability of probabilistic forecasts in flash flood watches and warnings.
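
As a rough sketch of the reliability assessment mentioned for probabilistic flash flood forecasts, the function below bins forecast probabilities and compares each bin's mean probability with the observed event frequency (the basic ingredients of a reliability diagram); the names, bin choices, and sample values are illustrative assumptions, not HMT-Hydro evaluation code.

```python
import numpy as np

def reliability_table(probabilities, outcomes, bins=10):
    """Bin forecast probabilities and compare with observed event frequency.

    probabilities : forecast probabilities of flash flooding (0-1)
    outcomes      : 1 where flash flooding was observed, 0 otherwise
    """
    p = np.asarray(probabilities, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (p >= lo) & (p <= hi) if hi == 1.0 else (p >= lo) & (p < hi)
        if in_bin.any():
            rows.append((0.5 * (lo + hi), float(p[in_bin].mean()),
                         float(o[in_bin].mean()), int(in_bin.sum())))
    return rows  # (bin center, mean forecast prob, observed frequency, count)

# Example with made-up warning probabilities and outcomes
probs = np.array([0.1, 0.3, 0.7, 0.9, 0.2, 0.8])
obs = np.array([0, 0, 1, 1, 0, 1])
for center, mean_p, obs_freq, n in reliability_table(probs, obs, bins=5):
    print(f"bin {center:.1f}: forecast {mean_p:.2f} vs observed {obs_freq:.2f} (n={n})")
```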

NSSL scientists and NWS forecasters evaluate new tools and techniques through real-time test bed operations for the improvement of flash flood detection and warning operations.

Full access