Search Results

Showing 1–9 of 9 items for Author or Editor: Trevor I. Alcott
Trevor I. Alcott and W. James Steenburgh

Abstract

Contemporary snowfall forecasting is a three-step process involving a quantitative precipitation forecast (QPF), determination of precipitation type, and application of a snow-to-liquid ratio (SLR). The final step is often performed using climatology or algorithms based primarily on temperature. Based on a record of consistent and professional daily snowfall measurements, this study 1) presents general characteristics of SLR at Alta, Utah, a high-elevation site in interior North America with frequent winter storms; 2) diagnoses relationships between SLR and atmospheric conditions using reanalysis data; and 3) develops a statistical method for predicting SLR at the study location.

The mean SLR at Alta is similar to that observed at lower elevations in the surrounding region, with substantial variability throughout the winter season. Using data from the North American Regional Reanalysis, temperature, wind speed, and midlevel relative humidity are shown to be related to SLR, with the strongest correlation occurring between SLR and near-crest-level (650 hPa) temperature. A stepwise multiple linear regression (SMLR) equation is constructed that explains 68% of the SLR variance for all events, and 88% for a high snow-water equivalent (>25 mm) subset. To test predictive ability, the straightforward SMLR approach is applied to archived 12–36-h forecasts from the National Centers for Environmental Prediction Eta/North American Mesoscale (Eta/NAM) model, yielding an improvement over existing operational SLR prediction techniques. Errors in QPF over complex terrain, however, ultimately limit skill in forecasting snowfall amount.
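As a minimal sketch of the three-step process described above (QPF, precipitation type, SLR), the function below combines near-crest-level temperature, wind speed, and midlevel relative humidity into a snowfall amount. The coefficients and the freezing-level type check are illustrative placeholders, not the fitted SMLR coefficients or the actual typing method from the study.

```python
def forecast_snowfall(qpf_mm, t650_c, wind_ms, rh_mid):
    """Three-step snowfall forecast: QPF -> precipitation type -> SLR.

    All coefficients are hypothetical placeholders for illustration.
    """
    # Step 2: precipitation type -- assume snow only if near-crest-level
    # (650 hPa) temperature is below freezing (a simplification).
    if t650_c >= 0.0:
        return 0.0  # rain: no snowfall accumulation

    # Step 3: SLR as a linear combination of predictors
    # (temperature, wind speed, midlevel relative humidity).
    slr = 12.0 - 0.8 * t650_c - 0.3 * wind_ms + 0.05 * rh_mid
    slr = max(slr, 3.0)  # floor: even very dense snow has SLR above ~3

    # Step 1 applied last: snowfall (mm) = liquid equivalent (mm) * SLR
    return qpf_mm * slr
```

For example, a 10-mm QPF with a 650-hPa temperature of -8°C, 5 m/s winds, and 70% midlevel relative humidity yields an SLR of 20.4 and about 204 mm of snow under these placeholder coefficients.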

Trevor I. Alcott and W. James Steenburgh

Abstract

Although several mountain ranges surround the Great Salt Lake (GSL) of northern Utah, the extent to which orography modifies GSL-effect precipitation remains largely unknown. Here the authors use observational and numerical modeling approaches to examine the influence of orography on the GSL-effect snowstorm of 27 October 2010, which generated 6–10 mm of precipitation (snow-water equivalent) in the Salt Lake Valley and up to 30 cm of snow in the Wasatch Mountains. The authors find that the primary orographic influences on the event are 1) foehnlike flow over the upstream orography that warms and dries the incipient low-level air mass and reduces precipitation coverage and intensity; 2) orographically forced convergence that extends downstream from the upstream orography, is enhanced by blocking windward of the Promontory Mountains, and affects the structure and evolution of the lake-effect precipitation band; and 3) blocking by the Wasatch and Oquirrh Mountains, which funnels the flow into the Salt Lake Valley, reinforces the thermally driven convergence generated by the GSL, and strongly enhances precipitation. The latter represents a synergistic interaction between lake and downstream orographic processes that is crucial for precipitation development, with a dramatic decrease in precipitation intensity and coverage evident in simulations in which either the lake or the orography is removed. These results help elucidate the spectrum of lake–orographic processes that contribute to lake-effect events and may be broadly applicable to other regions where lake-effect precipitation occurs in proximity to complex terrain.

W. James Steenburgh and Trevor I. Alcott

State license plates and tourism brochures boast that Utah ski areas receive the “greatest snow on Earth,” but is there really anything special about Utah's snow? Ski industry brochures often argue that Utah's snow is the greatest because it is the “driest” (i.e., has a low density or water content), yet the mean water content of snow at Alta ski area, which is world renowned for powder skiing and provides the cornerstone for Utah's famous slogan, is not lower than that observed, for example, at many Colorado and Wyoming ski resorts. We propose that Alta's reputation is based not solely on mean water content, but also on abundant natural snowfall. Although it cannot be shown that Utah's snow is the “greatest on Earth,” the climatology at Alta and other nearby ski areas is consistent with a high frequency of deep-powder days.

Trevor I. Alcott, W. James Steenburgh, and Neil F. Laird

Abstract

This climatology examines the environmental factors controlling the frequency, occurrence, and morphology of Great Salt Lake–effect (GSLE) precipitation events using cool season (16 September–15 May) Weather Surveillance Radar-1988 Doppler (WSR-88D) imagery, radiosonde soundings, and MesoWest surface observations from 1997/98 to 2009/10. During this period, the frequency of GSLE events features considerable interannual variability that is more strongly correlated with large-scale circulation changes than with lake-area variations. Events are most frequent in fall and spring, with a minimum in January when the climatological lake surface temperature is lowest. Although forecasters commonly use a 16°C lake–700-hPa temperature difference (ΔT) as a threshold for GSLE occurrence, GSLE was found to occur in winter when ΔT was only 12.4°C. Conversely, GSLE is associated with much higher values of ΔT in the fall and spring. Therefore, a seasonally varying threshold based on a quadratic fit to the monthly minimum ΔT values during GSLE events is more appropriate than a single threshold value. A probabilistic forecast method based on the difference between ΔT and this seasonally varying threshold, 850–700-hPa relative humidity, and 700-hPa wind direction offers substantial improvement over existing methods, although forecast skill is diminished by temperature and moisture errors in operational models. Banded features (horizontal aspect ratio of 6:1 or greater), an important consideration for forecasting because of their higher precipitation rates, dominate only 20% of the time that GSLE is occurring; widespread, nonbanded precipitation is much more common. Banded periods are associated with stronger low-level winds and a larger lake–land temperature difference.
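The seasonally varying threshold described above can be sketched as a quadratic least-squares fit to monthly minimum ΔT values. The monthly minima and the cool-season month indexing below are hypothetical placeholders, not the observed climatological values from the study.

```python
def fit_quadratic(x, y):
    """Least-squares quadratic fit y ~ a*x**2 + b*x + c via normal equations."""
    Sx = [sum(v ** k for v in x) for k in range(5)]          # Sx[k] = sum of x^k
    Sy = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Augmented 3x3 normal-equation system; rows correspond to (a, b, c)
    A = [[Sx[4], Sx[3], Sx[2], Sy[2]],
         [Sx[3], Sx[2], Sx[1], Sy[1]],
         [Sx[2], Sx[1], Sx[0], Sy[0]]]
    for i in range(3):  # Gauss-Jordan elimination
        A[i] = [v / A[i][i] for v in A[i]]
        for j in range(3):
            if j != i:
                A[j] = [vj - A[j][i] * vi for vj, vi in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]

# Hypothetical monthly minimum lake-700-hPa temperature differences (degC)
# during GSLE events, indexed 1 (October) through 8 (May).
months = [1, 2, 3, 4, 5, 6, 7, 8]
min_dt = [20.0, 16.5, 13.8, 12.4, 12.6, 14.1, 16.8, 20.5]
a, b, c = fit_quadratic(months, min_dt)

def gsle_threshold(month_index):
    """Seasonally varying Delta-T threshold for GSLE occurrence."""
    return a * month_index ** 2 + b * month_index + c
```

With a parabolic fit of this kind, the threshold bottoms out in midwinter and rises toward fall and spring, matching the seasonal behavior the climatology reports.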

Kristen N. Yeager, W. James Steenburgh, and Trevor I. Alcott

Abstract

Although smaller lakes are known to produce lake-effect precipitation, their influence on the precipitation climatology of lake-effect regions remains poorly documented. This study examines the contribution of lake-effect periods (LEPs) to the 1998–2009 cool-season (16 September–15 May) hydroclimate in the region surrounding the Great Salt Lake, a meso-β-scale hypersaline lake in northern Utah. LEPs are identified subjectively from radar imagery, with precipitation (snow water equivalent) quantified through the disaggregation of daily (i.e., 24 h) Cooperative Observer Program (COOP) and Snowpack Telemetry (SNOTEL) observations using radar-derived precipitation estimates. An evaluation at valley and mountain stations with reliable hourly precipitation gauge observations demonstrates that the disaggregation method works well for estimating precipitation during LEPs. During the study period, LEPs account for up to 8.4% of the total cool-season precipitation in the Great Salt Lake basin, with the largest contribution to the south and east of the Great Salt Lake. The mean monthly distribution of LEP precipitation is bimodal, with a primary maximum from October to November and a secondary maximum from March to April. LEP precipitation is highly variable between cool seasons and is strongly influenced by a small number of intense events. For example, at a lowland (mountain) station in the lake-effect-precipitation belt southeast of the Great Salt Lake, just 12 (13) events produce 50% of the LEP precipitation. Although these results suggest that LEPs contribute modestly to the hydroclimate of the Great Salt Lake basin, infrequent but intense events have a profound impact during some cool seasons.
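The disaggregation step described above can be sketched as distributing each 24-h gauge total across hours in proportion to radar-derived hourly estimates. The function name and sample values are illustrative placeholders, not the study's exact procedure.

```python
def disaggregate_daily(daily_total_mm, radar_hourly_mm):
    """Distribute a 24-h gauge total across hours in proportion to
    radar-derived hourly precipitation estimates (mm)."""
    radar_sum = sum(radar_hourly_mm)
    if radar_sum == 0.0:
        # No radar-indicated precipitation: spread the total uniformly
        n = len(radar_hourly_mm)
        return [daily_total_mm / n] * n
    return [daily_total_mm * r / radar_sum for r in radar_hourly_mm]
```

The hourly amounts that fall within radar-identified lake-effect hours can then be summed to estimate the LEP contribution at each COOP or SNOTEL station; the disaggregated series always conserves the observed daily total.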

Wyndam R. Lewis, W. James Steenburgh, Trevor I. Alcott, and Jonathan J. Rutz

Abstract

Contemporary operational medium-range ensemble modeling systems produce quantitative precipitation forecasts (QPFs) that provide guidance for weather forecasters, yet lack sufficient resolution to adequately resolve orographic influences on precipitation. In this study, cool-season (October–March) Global Ensemble Forecast System (GEFS) QPFs are verified using daily (24 h) Snow Telemetry (SNOTEL) observations over the western United States, which tend to be located at upper elevations where the orographic enhancement of precipitation is pronounced. Results indicate widespread dry biases, which reflect the infrequent production of larger 24-h precipitation events (≳22.9 mm in Pacific ranges and ≳10.2 mm in the interior ranges) compared with observations. Performance metrics, such as equitable threat score (ETS), hit rate, and false alarm ratio, generally worsen from the coast toward the interior. Probabilistic QPFs exhibit low reliability, and the ensemble spread captures only ~30% of upper-quartile events at day 5. In an effort to improve QPFs without exacerbating computing demands, statistical downscaling is explored based on high-resolution climatological precipitation analyses from the Parameter-Elevation Regressions on Independent Slopes Model (PRISM), an approach frequently used by operational forecasters. Such downscaling improves model biases, ETSs, and hit rates. However, 47% of downscaled QPFs for upper-quartile events are false alarms at day 1, and the ensemble spread captures only 56% of the upper-quartile events at day 5. These results should help forecasters and hydrologists understand the capabilities and limitations of GEFS forecasts and statistical downscaling over the western United States and other regions of complex terrain.
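Climatological (PRISM-based) downscaling of this kind is commonly implemented as a simple ratio method; the sketch below assumes that form, which may differ in detail from the procedure evaluated in the study.

```python
def downscale_qpf(coarse_qpf_mm, prism_point_mm, prism_cell_mean_mm):
    """Scale a coarse-grid QPF to a high-resolution point using the ratio
    of the climatological (PRISM) precipitation at the point to the
    climatology averaged over the coarse grid cell."""
    if prism_cell_mean_mm <= 0.0:
        return coarse_qpf_mm  # no climatological signal: leave unchanged
    return coarse_qpf_mm * (prism_point_mm / prism_cell_mean_mm)
```

For example, a 10-mm GEFS forecast in a cell whose PRISM climatology averages 400 mm, applied at a SNOTEL site where the climatology is 800 mm, is doubled to 20 mm, imprinting the climatological orographic pattern without adding model resolution.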

Jason M. English, David D. Turner, Trevor I. Alcott, William R. Moninger, Janice L. Bytheway, Robert Cifelli, and Melinda Marquis

Abstract

Improved forecasts of atmospheric river (AR) events, which provide up to half the annual precipitation in California, may reduce impacts to water supply, lives, and property. We evaluate quantitative precipitation forecasts (QPF) from the High-Resolution Rapid Refresh model version 3 (HRRRv3) and version 4 (HRRRv4) for five AR events that occurred in February–March 2019 and compare them to quantitative precipitation estimates (QPE) from Stage IV and Mesonet products. Both HRRR versions forecast spatial patterns of precipitation reasonably well, but are drier than QPE products in the Bay Area and wetter in the Sierra Nevada range. The HRRR dry bias in the Bay Area may be related to biases in the model temperature profile, while integrated water vapor (IWV), wind speed, and wind direction compare reasonably well. In the Sierra Nevada range, QPE and QPF agree well at temperatures above freezing. Below freezing, the discrepancies are due in part to errors in the QPE products, which are known to underestimate frozen precipitation in mountainous terrain. HRRR frozen QPF accuracy is difficult to quantify, but the model does have wind speed and wind direction biases near the Sierra Nevada range. HRRRv4 is overall more accurate than HRRRv3, likely because of improvements in data assimilation and possibly in model physics. Applying a neighborhood maximum method affected performance metrics but did not alter the general conclusions, suggesting that closest-gridbox evaluations may be adequate for these types of events. Improvements to QPF in the Bay Area and QPE/QPF in the Sierra Nevada range would be particularly useful to provide better understanding of AR events.

Benjamin T. Blake, Jacob R. Carley, Trevor I. Alcott, Isidora Jankov, Matthew E. Pyle, Sarah E. Perfater, and Benjamin Albright

Abstract

Traditional ensemble probabilities are computed based on the number of members that exceed a threshold at a given point divided by the total number of members. This approach has been employed for many years in coarse-resolution models. However, convection-permitting ensembles of fewer than ~20 members are generally underdispersive, and spatial displacement at the gridpoint scale is often large. These issues have motivated the development of spatial filtering and neighborhood postprocessing methods, such as fractional coverage and neighborhood maximum value, which address this spatial uncertainty. Two different fractional coverage approaches for the generation of gridpoint probabilities were evaluated. The first method expands the traditional point probability calculation to cover a 100-km radius around a given point. The second method applies the idea that a uniform radius is not appropriate when there is strong agreement between members. In such cases, the traditional fractional coverage approach can reduce the probabilities for these potentially well-handled events. Therefore, a variable radius approach has been developed based upon ensemble agreement scale similarity criteria. In this method, the radius size ranges from 10 km for member forecasts that are in good agreement (e.g., lake-effect snow, orographic precipitation, or very short-term forecasts) to 100 km when the members are more dissimilar. Results from the application of this adaptive technique for the calculation of point probabilities for precipitation forecasts are presented based upon several months of objective verification and subjective feedback from the 2017 Flash Flood and Intense Rainfall Experiment.
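The two probability calculations described above can be sketched as follows: the traditional point probability, and a fixed-radius fractional coverage over a circular neighborhood. The grids, grid spacing, and radius values are hypothetical; the variable-radius method would simply choose `radius` per point from the ensemble agreement criteria rather than holding it fixed.

```python
def point_probability(members, threshold):
    """Traditional ensemble probability at one gridpoint: the fraction
    of members whose value meets or exceeds the threshold."""
    return sum(m >= threshold for m in members) / len(members)

def fractional_coverage(member_grids, threshold, point, radius, spacing):
    """Neighborhood probability: the fraction of all (member, gridpoint)
    pairs within `radius` (km) of `point` that meet the threshold.
    `spacing` is the grid spacing in km; grids are 2D lists."""
    i0, j0 = point
    r_cells = int(radius / spacing)
    hits = total = 0
    for grid in member_grids:
        for di in range(-r_cells, r_cells + 1):
            for dj in range(-r_cells, r_cells + 1):
                i, j = i0 + di, j0 + dj
                inside = (di * spacing) ** 2 + (dj * spacing) ** 2 <= radius ** 2
                if inside and 0 <= i < len(grid) and 0 <= j < len(grid[0]):
                    total += 1
                    hits += grid[i][j] >= threshold
    return hits / total
```

When members agree closely, a large fixed radius dilutes a sharp, well-placed signal, which is the motivation for shrinking the radius under strong ensemble agreement.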

Julie L. Demuth, Rebecca E. Morss, Isidora Jankov, Trevor I. Alcott, Curtis R. Alexander, Daniel Nietfeld, Tara L. Jensen, David R. Novak, and Stanley G. Benjamin

Abstract

U.S. National Weather Service (NWS) forecasters assess and communicate hazardous weather risks, including the likelihood of a threat and its impacts. Convection-allowing model (CAM) ensembles offer potential to aid forecasting by depicting atmospheric outcomes, including associated uncertainties, at the refined space and time scales at which hazardous weather often occurs. Little is known, however, about what CAM ensemble information is needed to inform forecasting decisions. To address this knowledge gap, participant observations and semistructured interviews were conducted with NWS forecasters from national centers and local weather forecast offices. Data were collected about forecasters’ roles and their forecasting processes, uses of model guidance and verification information, interpretations of prototype CAM ensemble products, and needs for information from CAM ensembles. Results revealed forecasters’ needs for specific types of CAM ensemble guidance, including a product that combines deterministic and probabilistic output from the ensemble as well as a product that provides map-based guidance about timing of hazardous weather threats. Forecasters also expressed a general need for guidance to help them provide impact-based decision support services. Finally, forecasters conveyed needs for objective model verification information to augment their subjective assessments and for training about using CAM ensemble guidance for operational forecasting. The research was conducted as part of an interdisciplinary research effort that integrated elicitation of forecasters’ CAM ensemble needs with model development efforts, with the aim of illustrating a robust approach for creating information for forecasters that is truly useful and usable.
