Browse

You are looking at 21 - 30 of 2,831 items for: Weather and Forecasting
Christoph Mony, Lukas Jansing, and Michael Sprenger

Abstract

This study explores the possibility of employing machine learning algorithms to predict foehn occurrence in Switzerland at a north-Alpine (Altdorf) and a south-Alpine (Lugano) station from its synoptic fingerprint in reanalysis data and climate simulations. This allows for an investigation of a potential future shift in monthly foehn frequencies. First, inputs from various atmospheric fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERAI) were used to train an XGBoost model. Here, predictive performance similar to previous work was achieved, showing that foehn can be diagnosed accurately from the coarse synoptic situation. In the next step, the algorithm was generalized to predict foehn based on Community Earth System Model (CESM) ensemble simulations of a present-day and a warming future climate. The best generalization between ERAI and CESM was obtained by including the present-day CESM data in the training procedure and simultaneously optimizing two objective functions, the negative log loss and the mean squared loss, on the two datasets, respectively. It is demonstrated that the same synoptic fingerprint can be identified in the CESM climate simulation data. Finally, predictions for the present-day and future simulations were verified and compared for statistical significance. Our model is shown to produce valid output for most months, revealing that south foehn in Altdorf is expected to become more common during spring, while north foehn in Lugano is expected to become more common during summer.
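
A minimal sketch of the kind of classifier described here, trained to diagnose foehn occurrence from synoptic-scale predictors. The predictors and the synthetic data are illustrative placeholders, not the study's configuration, and the dual-objective optimization over ERAI and CESM is not shown.

```python
# Sketch: gradient-boosted classification of foehn occurrence from
# synoptic predictors. All feature names and data are placeholders.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
n = 4000
# Placeholder synoptic predictors: cross-Alpine pressure difference,
# 500-hPa geopotential anomaly, and 850-hPa wind components
X = np.column_stack([
    rng.normal(0.0, 4.0, n),     # sea level pressure difference (hPa)
    rng.normal(0.0, 80.0, n),    # z500 anomaly (m)
    rng.normal(0.0, 8.0, n),     # u850 (m/s)
    rng.normal(0.0, 8.0, n),     # v850 (m/s)
])
# Synthetic foehn label: more likely with a strong pressure difference
y = (X[:, 0] + rng.normal(0.0, 2.0, n) > 4.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("test log loss:", log_loss(y_test, model.predict_proba(X_test)[:, 1]))
```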

Restricted access
Marvin Kähnert, Harald Sodemann, Wim C. de Rooy, and Teresa M. Valkonen

Abstract

Forecasts of marine cold air outbreaks critically rely on the interplay of multiple parameterisation schemes to represent sub-grid scale processes, including shallow convection, turbulence, and microphysics. Even though such an interplay has been recognised to contribute to forecast uncertainty, a quantification of this interplay is still missing. Here, we investigate the tendencies of temperature and specific humidity contributed by individual parameterisation schemes in the operational weather prediction model AROME-Arctic. From a case study of an extensive marine cold air outbreak over the Nordic Seas, we find that the type of planetary boundary layer assigned by the model algorithm modulates the contribution of individual schemes and affects the interactions between different schemes. In addition, we demonstrate the sensitivity of these interactions to an increase or decrease in the strength of the parameterised shallow convection. The individual tendencies from several parameterisations can thereby compensate each other, sometimes resulting in a small residual. In some instances this residual remains nearly unchanged between the sensitivity experiments, even though some individual tendencies differ by up to an order of magnitude. Using the individual tendency output, we can characterise the subgrid-scale as well as grid-scale responses of the model and trace them back to their underlying causes. We thereby highlight the utility of individual tendency output for understanding process-related differences between model runs with varying physical configurations and for the continued development of numerical weather prediction models.
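
A toy sketch of the budget idea described above: summing the temperature tendencies contributed by individual schemes to expose compensation and a small residual. The scheme names and values are illustrative, not AROME-Arctic output.

```python
# Sketch: per-scheme temperature tendencies (K per time step) that partly
# compensate; their sum is the residual parameterized tendency. Values are
# synthetic placeholders.
import numpy as np

tend = {
    "turbulence":         np.array([ 0.80,  0.60,  0.70]),
    "shallow_convection": np.array([-0.50, -0.40, -0.60]),
    "microphysics":       np.array([-0.20, -0.10, -0.10]),
    "radiation":          np.array([-0.05, -0.05, -0.05]),
}

residual = sum(tend.values())
print("per-scheme totals:", {k: round(v.sum(), 2) for k, v in tend.items()})
print("residual of compensating schemes:", residual)
```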

Restricted access
Evan S. Bentley, Richard L. Thompson, Barry R. Bowers, Justin G. Gibbs, and Steven E. Nelson

Abstract

Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by careful elimination of velocity data artifacts such as sidelobe contamination, and through greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
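
A short sketch of how warning performance can be estimated from a joint rotational-velocity/STP criterion of the kind discussed above. The dataframe columns and values are hypothetical, not the study's dataset.

```python
# Sketch: probability of detection (POD) and false alarm ratio (FAR) for a
# hypothetical warning criterion combining Vrot and STP thresholds.
import pandas as pd

events = pd.DataFrame({
    "vrot_kt": [35, 22, 41, 28, 33, 19],     # peak low-level rotational velocity
    "stp":     [1.2, 0.0, 2.5, 0.4, 0.0, 0.1],
    "tornado": [1, 0, 1, 0, 0, 1],            # 1 = tornado occurred
})

warn = (events["vrot_kt"] >= 30) & (events["stp"] > 0)

hits         = ((warn) & (events["tornado"] == 1)).sum()
misses       = ((~warn) & (events["tornado"] == 1)).sum()
false_alarms = ((warn) & (events["tornado"] == 0)).sum()

pod = hits / (hits + misses)
far = false_alarms / (hits + false_alarms)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```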

Restricted access
Jason M. English, David D. Turner, Trevor I. Alcott, William R. Moninger, Janice L. Bytheway, Robert Cifelli, and Melinda Marquis

Abstract

Improved forecasts of atmospheric river (AR) events, which provide up to half of the annual precipitation in California, may reduce impacts to water supply, lives, and property. We evaluate quantitative precipitation forecasts (QPF) from the High-Resolution Rapid Refresh model version 3 (HRRRv3) and version 4 (HRRRv4) for five AR events that occurred in February-March 2019 and compare them to quantitative precipitation estimates (QPE) from Stage IV and Mesonet products. Both HRRR versions forecast the spatial patterns of precipitation reasonably well, but are drier than the QPE products in the Bay Area and wetter in the Sierra Nevada range. The HRRR dry bias in the Bay Area may be related to biases in the model temperature profile, while integrated water vapor (IWV), wind speed, and wind direction compare reasonably well. In the Sierra Nevada range, QPE and QPF agree well at temperatures above freezing. Below freezing, the discrepancies are due in part to errors in the QPE products, which are known to underestimate frozen precipitation in mountainous terrain. HRRR frozen QPF accuracy is difficult to quantify, but the model does have wind speed and wind direction biases near the Sierra Nevada range. HRRRv4 is overall more accurate than HRRRv3, likely due to data assimilation improvements and possibly physics improvements. Applying a neighborhood maximum method affected performance metrics but did not alter the general conclusions, suggesting that closest-grid-box evaluations may be adequate for these types of events. Improvements to QPF in the Bay Area and to QPE/QPF in the Sierra Nevada range would be particularly useful for better understanding AR events.
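
A brief sketch contrasting the two evaluation choices mentioned above: taking the forecast from the grid box closest to an observation versus the maximum within a surrounding neighborhood. The grid values, observation, and neighborhood size are illustrative only.

```python
# Sketch: closest-grid-box versus neighborhood-maximum evaluation of a
# gridded precipitation forecast against a single point observation.
import numpy as np

qpf = np.array([[5.0,  7.0, 6.0],
                [8.0, 12.0, 9.0],
                [4.0,  6.0, 5.0]])   # 24-h precipitation forecast (mm), placeholder
obs = 11.0                           # gauge/QPE value at the station (mm), placeholder
i, j = 1, 1                          # grid box closest to the station

closest = qpf[i, j]

r = 1                                # neighborhood half-width in grid boxes
neighborhood = qpf[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
neigh_max = neighborhood.max()

print("closest-box error:", closest - obs)
print("neighborhood-max error:", neigh_max - obs)
```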

Restricted access
Kevin Bachmann and Ryan D. Torn

Abstract

Tropical cyclones (TCs) are associated with a variety of significant societal hazards, including wind, rain, and storm surge. Despite this, most model validation effort has been directed toward track and intensity forecasts. In contrast, few studies have investigated the skill of state-of-the-art, high-resolution ensemble prediction systems in predicting the associated TC hazards, which is crucial since TC position and intensity do not always correlate with the TC-related hazards, which can produce impacts far from the actual TC center. Furthermore, dynamical models can provide flow-dependent uncertainty estimates, which in turn can provide more specific guidance to forecasters than statistical uncertainty estimates based on past errors. This study validates probabilistic forecasts of wind speed and precipitation hazards derived from the HWRF ensemble prediction system and compares their skill to forecasts from the stochastically based operational Monte Carlo model (NHC), the IFS (ECMWF), and the GEFS (NOAA) in use during 2017-2019. Wind and precipitation forecasts are validated against NHC best track wind radii and the National Stage IV QPE product. The HWRF 34-kt wind forecasts have comparable skill to the global models up to a 60-h lead time, before HWRF skill decreases, possibly due to the detrimental impact of large track errors. In contrast, HWRF is comparable in quality to its competitors for the higher thresholds of 50 and 64 kt throughout the 120-h lead time. For precipitation hazards, HWRF performs similar to or better than the global models and exhibits higher, although not perfect, reliability, especially for events exceeding 5 in. per 120 h. Post-processing, such as quantile mapping, significantly improves forecast skill for all models and can alleviate the reliability issues of the global models.
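
A minimal sketch of empirical quantile mapping, the kind of post-processing named above: new forecast values are assigned the observed value at the same climatological quantile. The training samples below are synthetic placeholders.

```python
# Sketch: empirical quantile mapping of forecasts onto an observed
# climatology. Training data are synthetic, not the study's datasets.
import numpy as np

rng = np.random.default_rng(0)
fcst_train = rng.gamma(shape=2.0, scale=5.0, size=1000)   # historical forecasts
obs_train  = rng.gamma(shape=2.0, scale=7.0, size=1000)   # matching observations

def quantile_map(x, fcst_clim, obs_clim):
    """Map forecast values to the observed distribution via their quantile."""
    quantiles = np.interp(x, np.sort(fcst_clim),
                          np.linspace(0.0, 1.0, fcst_clim.size))
    return np.quantile(obs_clim, quantiles)

new_fcst = np.array([5.0, 15.0, 40.0])
print(quantile_map(new_fcst, fcst_train, obs_train))
```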

Restricted access
Jing Zhang, Jie Feng, Hong Li, Yuejian Zhu, Xiefei Zhi, and Feng Zhang

Abstract

Operational and research applications generally use the consensus approach for forecasting the track and intensity of tropical cyclones (TCs) because of the spatial displacement of the TC location and structure in individual ensemble member forecasts. This approach simply averages the location and intensity information for TCs in the individual ensemble members, which is distinct from the traditional pointwise arithmetic mean (AM) method for ensemble forecast fields. The consensus approach, despite having improved skill relative to the AM in predicting TC intensity, cannot provide forecasts of the TC spatial structure. We introduce a unified TC ensemble mean forecast based on the feature-oriented mean (FM) method to overcome the inconsistency between the AM and consensus forecasts. FM spatially aligns the TC-related features in each ensemble field to their geographical mean positions before the amplitudes of the features are averaged.

We select 219 TC forecast samples from the summer of 2017 for an overall evaluation of FM performance. The results show that the TC track consensus forecasts can differ from AM track forecasts by hundreds of kilometers at long lead times. AM also gives a systematic and statistically significant underestimation of TC intensity compared with the consensus forecast. By contrast, FM has very similar TC track and intensity forecast skill to the consensus approach. FM can also provide the corresponding ensemble mean forecasts of the TC spatial structure, which are significantly more accurate than AM for the low- and upper-level circulation in TCs. The FM method thus has the potential to serve as a valuable unified ensemble mean approach for TC prediction.
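
A toy sketch of the alignment idea behind the feature-oriented mean described above: each member is shifted so that its TC center sits at the ensemble-mean center before the fields are averaged. The center-finding proxy, whole-grid-box shifts, and synthetic fields are simplifications for illustration only.

```python
# Sketch: align ensemble members on a common TC center, then average.
# Compare with the pointwise arithmetic mean, which smears the vortex.
import numpy as np

def tc_center(field):
    """Return (row, col) of the field minimum, a simple TC-center proxy."""
    return np.unravel_index(np.argmin(field), field.shape)

def feature_oriented_mean(members):
    centers = np.array([tc_center(m) for m in members])
    mean_center = centers.mean(axis=0).round().astype(int)
    aligned = [np.roll(m, shift=tuple(mean_center - c), axis=(0, 1))
               for m, c in zip(members, centers)]
    return np.mean(aligned, axis=0)

# Two toy members with the same vortex displaced to different grid points
base = np.full((50, 50), 1010.0)
m1, m2 = base.copy(), base.copy()
m1[20, 20] = 980.0
m2[30, 28] = 980.0

fm = feature_oriented_mean([m1, m2])
print("aligned-mean minimum:", fm.min())                            # amplitude preserved
print("pointwise-mean minimum:", np.mean([m1, m2], axis=0).min())   # smeared
```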

Open access
Diego Pons, Ángel G. Muñoz, Ligia M. Meléndez, Mario Chocooj, Rosario Gómez, Xandre Chourio, and Carmen González Romero

Abstract

The provision of climate services has the potential to generate adaptive capacity and help coffee farmers become or remain profitable by integrating climate information into a risk-management framework. Yet, to achieve this goal, it is necessary to identify the local demand for climate information, the relationships between coffee yield and climate variables, and farmers' perceptions, and to examine the potential actions that can realistically be put in place by farmers at the local level. In this study, we assessed the climate information demands of coffee farmers and their perceptions of climate impacts on coffee yield in the Samalá watershed in Guatemala. After co-identifying candidate climate predictors, we propose an objective, flexible forecast system for coffee yield based on precipitation. The system, known as NextGen, analyzes multiple historical climate drivers to identify candidate predictors and provides both deterministic and probabilistic forecasts for the target season. To illustrate the approach, a NextGen implementation is conducted in the Samalá watershed in southwestern Guatemala. The results suggest that accumulated June-July-August precipitation provides the highest predictive skill for coffee yield in this region. In addition to a formal cross-validated skill assessment, retrospective forecasts for the period 1989-2009 were compared with agriculturalists' perceptions of climate impacts on coffee yield at the farm level. We conclude with examples of how demand-based climate service provision in this location can inform adaptation strategies such as optimum shade, pest control, and fertilization schemes months in advance. These potential adaptation strategies were validated by local agricultural technicians at the study site.
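
A minimal sketch of the kind of cross-validated predictor-skill assessment mentioned above, regressing yield on accumulated June-July-August precipitation with leave-one-out cross-validation. The data are synthetic placeholders, not the Samalá records.

```python
# Sketch: leave-one-out cross-validated regression of coffee yield on
# accumulated JJA precipitation. Values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
jja_precip = rng.normal(600.0, 80.0, size=21).reshape(-1, 1)   # mm, 21 seasons
yield_obs = 2.0 + 0.003 * jja_precip.ravel() + rng.normal(0, 0.1, 21)

pred = cross_val_predict(LinearRegression(), jja_precip, yield_obs,
                         cv=LeaveOneOut())
skill = np.corrcoef(pred, yield_obs)[0, 1]
print(f"leave-one-out correlation: {skill:.2f}")
```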

Restricted access
Franziska Hellmuth, Bjørg Jenny Kokkvoll Engdahl, Trude Storelvmo, Robert O. David, and Steven J. Cooper

Abstract

In the winter, orographic precipitation falls as snow in the mid to high latitudes where it causes avalanches, affects local infrastructure, or leads to flooding during the spring thaw. We present a technique to validate operational numerical weather prediction model simulations in complex terrain. The presented verification technique uses a combined retrieval approach to obtain surface snowfall accumulation and vertical profiles of snow water at the Haukeliseter test site, Norway. Both surface observations and vertical profiles of snow are used to validate model simulations from the Norwegian Meteorological Institute’s operational forecast system and two simulations with adjusted cloud microphysics.

Retrieved surface snowfall is validated against measurements from a double-fence automated reference gauge (DFAR). In comparison, the optimal estimation snowfall retrieval produces +10.9% more surface snowfall than the DFAR. The predicted surface snowfall from the operational forecast model and the two additional simulations with microphysical adjustments (CTRL and ICE-T) is overestimated at the surface by +41.0%, +43.8%, and +59.2%, respectively. At the same time, the CTRL and ICE-T simulations underestimate the mean snow water path by -1071.4% and -523.7%, respectively.
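
A very small sketch of the relative-bias arithmetic used for these comparisons; the numbers are placeholders, not the study's values.

```python
# Sketch: percent difference of a simulated quantity relative to a
# retrieved reference value. Example numbers are illustrative only.
def relative_bias(model, reference):
    """Percent difference of the model value relative to the reference."""
    return 100.0 * (model - reference) / reference

print(relative_bias(model=141.0, reference=100.0))   # +41% overestimate
print(relative_bias(model=60.0, reference=100.0))    # -40% underestimate
```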

The study shows that we would reach false conclusions using only surface accumulation or only vertical snow water content profiles. These results highlight the need to combine ground-based in situ and vertically profiling remote sensing instruments to identify biases in numerical weather prediction.

Restricted access
Clifford F. Mass, David Ovens, Robert Conrick, and John Saltenberger

Abstract

A series of major fires spread across eastern Washington and western Oregon starting on September 7, 2020, driven by strong easterly and northeasterly winds gusting to ~70 kt at exposed locations. This event was associated with a high-amplitude upper-level ridge over the eastern Pacific and a mobile trough that moved southward on its eastern flank. The synoptic environment during the event was highly unusual, with the easterly 925-hPa wind speeds at Salem, Oregon, being unprecedented for the August-September period. The September 2020 wildfires produced dense smoke that initially moved westward over the Willamette Valley and eventually covered the region. As a result, air quality rapidly degraded to hazardous levels, representing the worst air quality period of recent decades. High-resolution numerical simulations using the WRF model indicated the importance of a high-amplitude mountain wave in producing strong easterly winds over western Oregon.

The dead fuel moisture levels over eastern Washington before the fires were typical for that time of the year. Along the western slopes of the Oregon Cascades, where the fuels are largely composed of dense conifer forest with understory vegetation, fire weather indices were lower (moister) than normal during the early part of the summer, but transitioned to above-normal (drier) values during August, with a spike to record values in early September coincident with the strong easterly winds.

Forecast guidance was highly accurate for both the Washington and Oregon wildfire events. Analyses of climatological data and fuel indices did not suggest that unusual pre-existing climatic conditions were major drivers of the September 2020 Northwest wildfires.

Restricted access
Callie McNicholas and Clifford F. Mass

Abstract

With over a billion smartphones capable of measuring atmospheric pressure, a global mesoscale surface pressure network based on smartphone pressure sensors may be possible if key technical issues are solved, including collection technology, privacy, and bias correction. To overcome these challenges, a novel framework was developed for the anonymization and bias correction of smartphone pressure observations (SPOs) and applied to billions of SPOs from The Weather Company (IBM). Bias correction using machine learning reduced the errors of anonymous (ANON) SPOs and uniquely identifiable (UID) SPOs by 43% and 57%, respectively. Applying multi-resolution kriging, gridded analyses of the bias-corrected smartphone pressure observations were produced for an entire year (2018), using both the ANON and the UID observations. Pressure analyses were also generated using conventional (MADIS) surface pressure networks. Relative to the MADIS analyses, the ANON and UID smartphone analyses reduced domain-average pressure errors by 21% and 31%, respectively. The performance of the smartphone and MADIS pressure analyses was evaluated for two high-impact weather events: the landfall of Hurricane Michael and a long-lived mesoscale convective system. For both events, the anonymized and non-anonymized smartphone pressure analyses captured the spatial structure and temporal evolution of mesoscale pressure features better than the MADIS analyses.
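
A hedged sketch of the general machine-learning bias-correction idea described above: a regressor learns the difference between raw phone pressure and a reference analysis from ancillary features, and that learned difference is removed. The features, data, and model choice are illustrative placeholders, not the study's framework.

```python
# Sketch: learn and remove smartphone pressure bias with a regressor.
# All features and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
features = np.column_stack([
    rng.uniform(0, 500, n),      # phone-reported elevation (m), hypothetical
    rng.uniform(-10, 35, n),     # phone temperature proxy (deg C), hypothetical
    rng.integers(0, 5, n),       # device model category, hypothetical
])
raw_pressure = rng.normal(1000.0, 8.0, n)
bias = 0.01 * features[:, 0] - 0.05 * features[:, 1] + 0.5 * features[:, 2]
reference = raw_pressure - bias + rng.normal(0, 0.3, n)   # "truth" analysis

X_train, X_test, y_train, y_test = train_test_split(
    features, raw_pressure - reference, test_size=0.3, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
corrected_error = y_test - model.predict(X_test)
print("raw RMSE:", round(float(np.sqrt(np.mean(y_test**2))), 2))
print("corrected RMSE:", round(float(np.sqrt(np.mean(corrected_error**2))), 2))
```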

Restricted access