Browse

You are looking at 11–20 of 2,825 items for: Weather and Forecasting
Soo-Hyun Kim, Hye-Yeong Chun, Dan-Bi Lee, Jung-Hoon Kim, and Robert D. Sharman

Abstract

Based on a convective gravity wave drag parameterization scheme in a numerical weather prediction (NWP) model, previously proposed near-cloud turbulence (NCT) diagnostics for better detection of turbulence near convection are tested and evaluated using global in situ flight data and output from the operational global NWP model of the Korea Meteorological Administration for one year (December 2016 to November 2017). For comparison, 11 widely used clear-air turbulence (CAT) diagnostics currently employed in operational NWP-based aviation turbulence forecasting systems are computed separately. For selected cases, the NCT diagnostics predict localized turbulence events over convective regions more accurately and with more realistic intensities, in clear contrast to the conventional CAT diagnostics, which mostly fail to capture these events and instead diagnose overly broad areas of low magnitude. Although the overall performance of the NCT diagnostics over the full year is lower than that of the conventional CAT diagnostics, because the NCT diagnostics focus exclusively on isolated NCT events, adding the NCT diagnostics to the CAT diagnostics improves aviation turbulence forecasting. In summertime in particular, performance in terms of the area under the curve (AUC) of probability-of-detection statistics is best (AUC = 0.837, a 4% increase over conventional CAT forecasts) when the mean of all CAT and NCT diagnostics is used, while performance in terms of root-mean-square error is best when the maximum of the combined CAT and single NCT diagnostics is used. This implies that adding NCT diagnostics to currently used NWP-based aviation turbulence forecasting systems should benefit the safety of air travel.
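
A rough illustration of the diagnostic-combination and AUC scoring described above is sketched below. The variable names are hypothetical, and it assumes the individual diagnostics have already been mapped to a common turbulence-intensity scale and matched to in situ reports; it is not the authors' evaluation code.

```python
# Sketch: combine CAT/NCT diagnostics at observation points by mean and max,
# then score each combination with an ROC area under the curve (AUC).
import numpy as np

def roc_auc(scores, obs):
    """AUC via the rank-sum (Mann-Whitney) identity; obs is 0/1 turbulence."""
    scores, obs = np.asarray(scores, float), np.asarray(obs, int)
    ranks = scores.argsort().argsort() + 1              # 1-based ranks
    n_pos, n_neg = obs.sum(), (1 - obs).sum()
    return (ranks[obs == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# diag_matrix: rows = observation points, columns = individual CAT/NCT diagnostics
diag_matrix = np.random.rand(1000, 12)                  # toy, already-rescaled values
obs = (np.random.rand(1000) > 0.9).astype(int)          # toy moderate-or-greater reports

combined_mean = diag_matrix.mean(axis=1)                # mean of all diagnostics
combined_max = diag_matrix.max(axis=1)                  # maximum of all diagnostics
print("AUC (mean):", roc_auc(combined_mean, obs))
print("AUC (max): ", roc_auc(combined_max, obs))
```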

Restricted access
Makenzie J. Krocak and Harold E. Brooks

Abstract

While many studies have looked at the quality of forecast products, few have attempted to understand the relationship between them. We begin to consider whether such an influence exists by analyzing storm-based tornado warning metrics with respect to whether the warnings occurred within a severe weather watch and, if so, the type of watch. The probability of detection, false alarm ratio, and lead time all show a general improvement with increasing watch severity. In fact, the probability of detection increased more as a function of watch-type severity than it did over the period of analysis. The false alarm ratio decreased as watch type increased in severity, but with a much smaller magnitude than the difference in probability of detection. Lead time also improved with increasing watch-type severity: warnings outside of any watch had a mean lead time of 5.5 min, while those inside a particularly dangerous situation tornado watch had a mean lead time of 15.1 min. These results indicate that the existence and type of severe weather watch may influence the quality of tornado warnings. However, it is impossible to separate the influence of weather watches from possible differences in warning strategy or in environmental characteristics that make it more or less challenging to warn for tornadoes. Future studies should attempt to disentangle these numerous influences to assess how much influence intermediate products have on downstream products.
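
A minimal sketch of watch-stratified verification of the kind described above is given below. The event table, column names, and values are illustrative assumptions, not the authors' dataset.

```python
# Sketch: compute POD, FAR, and mean lead time of tornado warnings, grouped by watch type.
import pandas as pd

events = pd.DataFrame({
    "watch_type": ["none", "svr", "tor", "pds_tor", "tor", "none"],
    "warned":     [True,   True,  True,  True,      False, True],   # warning issued?
    "verified":   [False,  True,  True,  True,      True,  False],  # tornado occurred?
    "lead_time":  [None,   8.0,   12.0,  16.0,      None,  None],   # minutes, verified warnings only
})

def metrics(g):
    hits = (g.warned & g.verified).sum()
    misses = (~g.warned & g.verified).sum()
    false_alarms = (g.warned & ~g.verified).sum()
    return pd.Series({
        "POD": hits / (hits + misses) if hits + misses else float("nan"),
        "FAR": false_alarms / (hits + false_alarms) if hits + false_alarms else float("nan"),
        "mean_lead_time": g.loc[g.warned & g.verified, "lead_time"].mean(),
    })

print(events.groupby("watch_type").apply(metrics))
```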

Restricted access
Yang Lyu, Xiefei Zhi, Shoupeng Zhu, Yi Fan, and Mengting Pan

Abstract

In this study, two pattern projection methods, the stepwise pattern projection method (SPPM) and the newly proposed neighborhood pattern projection method (NPPM), are investigated to improve forecasts of daily maximum and minimum temperatures (Tmax and Tmin) over East Asia at lead times of 1–7 days. The decaying averaging method (DAM) is applied in parallel for comparison. These postprocessing methods are found to effectively calibrate the temperature forecasts relative to the raw ECMWF output. Generally, the SPPM is slightly inferior to the DAM, although the gap narrows with increasing lead time. The NPPM shows clear superiority at all lead times, with the mean absolute errors of Tmax and Tmin decreased by ~0.7° and ~0.9°C, respectively. The advantages of the two pattern projection methods are mainly concentrated over high-altitude areas such as the Tibetan Plateau, where the raw ECMWF forecasts show the most conspicuous biases. In addition, to further assess these methods for extreme events, two case experiments are carried out for a heat wave and a cold surge. The NPPM again performs best, reducing most biases to <2°C for both Tmax and Tmin over all lead days. In general, the statistical pattern projection methods are capable of effectively eliminating spatial biases in forecasts of surface air temperature. Compared with the original SPPM, the NPPM not only produces better forecast calibration but also involves more practical calculations and offers greater potential economic benefits in applications.
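
The decaying averaging method used as the baseline here is a standard running bias correction; a minimal sketch, with an assumed weight and toy forecast/observation series (not the study's data), might look like this:

```python
# Sketch of DAM calibration: a running, exponentially weighted bias estimate
# is subtracted from each new raw forecast for a given station and lead time.
import numpy as np

def dam_calibrate(raw_forecasts, observations, weight=0.02):
    """Return bias-corrected forecasts; `weight` controls how fast old biases decay."""
    bias = 0.0
    corrected = np.empty_like(raw_forecasts, dtype=float)
    for t, (f, o) in enumerate(zip(raw_forecasts, observations)):
        corrected[t] = f - bias                             # correct with the current bias estimate
        bias = (1.0 - weight) * bias + weight * (f - o)     # then update the decaying-average bias
    return corrected

tmax_raw = np.array([30.1, 31.4, 29.8, 32.0, 30.5])   # toy ECMWF Tmax forecasts (°C)
tmax_obs = np.array([28.9, 30.2, 28.5, 30.8, 29.6])   # toy station observations (°C)
print(dam_calibrate(tmax_raw, tmax_obs))
```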

Restricted access
Christopher J. Nowotarski, Justin Spotts, Roger Edwards, Scott Overpeck, and Gary R. Woodall

Abstract

Tropical cyclone tornadoes pose a unique challenge to warning forecasters given their often marginal environments and radar attributes. In late August 2017, Hurricane Harvey made landfall on the Texas coast and produced 52 tornadoes over a record-breaking seven consecutive days. To improve warning efforts, this case study of Harvey’s tornadoes includes an event overview as well as a comparison of near-cell environments and radar attributes between tornadic and nontornadic warned cells. Our results suggest that significant differences existed in both the near-cell environments and radar attributes, particularly rotational velocity, between tornadic cells and false alarms. For many environmental variables and radar attributes, the differences were enhanced when only tornadoes associated with a tornado debris signature were considered. Our results highlight the potential for further improving warning skill and reducing false alarms by increasing rotational velocity warning thresholds, refining the use of near-storm environment information, and focusing warning efforts on cells likely to produce the most impactful tornadoes.
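
A minimal sketch of the type of two-sample comparison implied by "significant differences existed" above, assuming rotational velocity samples for tornadic and nontornadic warned cells; the values and the specific test are illustrative, not taken from the paper.

```python
# Sketch: test whether rotational velocity differs between tornadic cells and false alarms.
import numpy as np
from scipy.stats import mannwhitneyu

vrot_tornadic = np.array([32.0, 41.5, 38.0, 45.0, 36.5])      # kt, tornadic cells (toy values)
vrot_nontornadic = np.array([22.0, 28.5, 25.0, 31.0, 27.5])   # kt, warned false alarms (toy values)

stat, p_value = mannwhitneyu(vrot_tornadic, vrot_nontornadic, alternative="greater")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
```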

Restricted access
Christoph Mony, Lukas Jansing, and Michael Sprenger

Abstract

This study explores the possibility of employing machine learning algorithms to predict foehn occurrence in Switzerland at a north-Alpine (Altdorf) and a south-Alpine (Lugano) station from its synoptic fingerprint in reanalysis data and climate simulations. This allows for an investigation of a potential future shift in monthly foehn frequencies. First, inputs from various atmospheric fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) interim reanalysis (ERAI) were used to train an XGBoost model. Predictive performance similar to previous work was achieved, showing that foehn can be accurately diagnosed from the coarse synoptic situation. In the next step, the algorithm was generalized to predict foehn based on Community Earth System Model (CESM) ensemble simulations of present-day and warmer future climates. The best generalization between ERAI and CESM was obtained by including the present-day data in the training procedure and simultaneously optimizing two objective functions, the negative log loss and the squared mean loss, on the respective datasets. It is demonstrated that the same synoptic fingerprint can be identified in the CESM climate simulation data. Finally, predictions for the present-day and future simulations were verified and compared for statistical significance. The model is shown to produce valid output for most months, revealing that south foehn in Altdorf is expected to become more common during spring, while north foehn in Lugano is expected to become more common during summer.
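
A minimal sketch of the first training step is given below, assuming the reanalysis predictors have already been flattened into a feature table. The predictor construction, labels, and hyperparameters are assumptions, not the authors' configuration.

```python
# Sketch: train a gradient-boosted classifier to diagnose station foehn from synoptic predictors.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

# X: synoptic predictors (e.g., gridded pressure/wind fields flattened per time step)
# y: 1 if foehn was observed at the station (e.g., Altdorf), else 0  -- toy data here
X = np.random.rand(5000, 40)
y = (np.random.rand(5000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      objective="binary:logistic", eval_metric="logloss")
model.fit(X_train, y_train)
print("test log loss:", log_loss(y_test, model.predict_proba(X_test)[:, 1]))
```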

Restricted access
Marvin Kähnert, Harald Sodemann, Wim C. de Rooy, and Teresa M. Valkonen

Abstract

Forecasts of marine cold air outbreaks critically rely on the interplay of multiple parameterisation schemes to represent subgrid-scale processes, including shallow convection, turbulence, and microphysics. Even though this interplay has been recognised as a contributor to forecast uncertainty, a quantification of it is still missing. Here, we investigate the tendencies of temperature and specific humidity contributed by individual parameterisation schemes in the operational weather prediction model AROME-Arctic. From a case study of an extensive marine cold air outbreak over the Nordic Seas, we find that the planetary boundary layer type assigned by the model algorithm modulates the contribution of individual schemes and affects the interactions between them. In addition, we demonstrate the sensitivity of these interactions to an increase or decrease in the strength of the parameterised shallow convection. The individual tendencies from several parameterisations can compensate one another, sometimes leaving only a small residual. In some instances this residual remains nearly unchanged between the sensitivity experiments, even though some individual tendencies differ by up to an order of magnitude. Using the individual tendency output, we can characterise the subgrid-scale as well as grid-scale responses of the model and trace them back to their underlying causes. We thereby highlight the utility of individual tendency output for understanding process-related differences between model runs with varying physical configurations and for the continued development of numerical weather prediction models.
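
A minimal sketch of the tendency bookkeeping this kind of analysis relies on, with assumed scheme names and toy profiles rather than AROME-Arctic output:

```python
# Sketch: sum individual parameterisation tendencies, identify the dominant scheme,
# and inspect the net (possibly compensating) physics tendency.
import numpy as np

nz = 65  # number of model levels
tendencies = {                                      # temperature tendency per level (K per time step)
    "turbulence":         np.random.randn(nz) * 0.10,
    "shallow_convection": np.random.randn(nz) * 0.08,
    "microphysics":       np.random.randn(nz) * 0.05,
    "radiation":          np.random.randn(nz) * 0.02,
}
total_physics = sum(tendencies.values())            # net physics tendency (residual of compensation)
largest = max(tendencies, key=lambda k: np.abs(tendencies[k]).max())
print("largest individual contribution:", largest)
print("net physics tendency at the lowest level:", total_physics[0])
```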

Restricted access
Evan S. Bentley, Richard L. Thompson, Barry R. Bowers, Justin G. Gibbs, and Steven E. Nelson

Abstract

Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by carefully eliminating velocity data artifacts such as sidelobe contamination, and by applying greater scrutiny to human-based tornado reports in otherwise questionable scenarios.
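
A minimal sketch of a combined rotational-velocity/STP criterion of the kind discussed above is shown below; the thresholds follow the abstract, but the decision logic itself is purely illustrative and not the WDTD guidance.

```python
# Sketch: flag a warning as higher confidence when low-level rotational velocity
# exceeds 30 kt and the near-storm environment is not prohibitive (STP > 0).
def tornado_warning_guidance(vrot_kt: float, stp: float, circulation_width_nmi: float) -> str:
    if vrot_kt >= 30.0 and stp > 0.0:
        return "higher-confidence warning candidate"
    if vrot_kt < 30.0 and circulation_width_nmi > 1.0 and stp == 0.0:
        return "likely false alarm: weak, broad circulation in a poor environment"
    return "marginal: rely on additional radar and environmental scrutiny"

print(tornado_warning_guidance(vrot_kt=35.0, stp=1.2, circulation_width_nmi=0.5))
```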

Restricted access
Jason M. English, David D. Turner, Trevor I. Alcott, William R. Moninger, Janice L. Bytheway, Robert Cifelli, and Melinda Marquis

Abstract

Improved forecasts of atmospheric river (AR) events, which provide up to half the annual precipitation in California, may reduce impacts to water supply, lives, and property. We evaluate quantitative precipitation forecasts (QPF) from the High-Resolution Rapid Refresh model version 3 (HRRRv3) and version 4 (HRRRv4) for five AR events that occurred in February–March 2019 and compare them to quantitative precipitation estimates (QPE) from Stage IV and Mesonet products. Both HRRR versions forecast the spatial patterns of precipitation reasonably well but are drier than the QPE products in the Bay Area and wetter in the Sierra Nevada range. The HRRR dry bias in the Bay Area may be related to biases in the model temperature profile, while integrated water vapor (IWV), wind speed, and wind direction compare reasonably well. In the Sierra Nevada range, QPE and QPF agree well at temperatures above freezing. Below freezing, the discrepancies are due in part to errors in the QPE products, which are known to underestimate frozen precipitation in mountainous terrain. HRRR frozen-QPF accuracy is difficult to quantify, but the model does have wind speed and wind direction biases near the Sierra Nevada range. HRRRv4 is overall more accurate than HRRRv3, likely due to data assimilation improvements and possibly physics improvements. Applying a neighborhood maximum method affected the performance metrics but did not alter the general conclusions, suggesting that closest-grid-box evaluations may be adequate for these types of events. Improvements to QPF in the Bay Area and to QPE/QPF in the Sierra Nevada range would be particularly useful for better understanding AR events.
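
A minimal sketch contrasting a closest-grid-box evaluation with a neighborhood maximum evaluation, with an assumed window radius and toy QPF/observation values (not the HRRR or Stage IV data):

```python
# Sketch: take the forecast value as the maximum within a small window around
# each observation point instead of the single closest grid box.
import numpy as np

def neighborhood_max(field: np.ndarray, i: int, j: int, radius: int = 1) -> float:
    """Maximum of a 2D QPF field within a (2*radius+1)^2 window around (i, j)."""
    i0, i1 = max(i - radius, 0), min(i + radius + 1, field.shape[0])
    j0, j1 = max(j - radius, 0), min(j + radius + 1, field.shape[1])
    return float(field[i0:i1, j0:j1].max())

qpf = np.random.gamma(shape=2.0, scale=3.0, size=(50, 50))   # toy 24-h QPF (mm)
obs_points = [(10, 12, 18.0), (30, 31, 5.5)]                 # (i, j, observed mm)
for i, j, obs in obs_points:
    print(f"obs={obs:5.1f}  nearest={qpf[i, j]:5.1f}  neighborhood max={neighborhood_max(qpf, i, j):5.1f}")
```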

Restricted access
Kevin Bachmann and Ryan D. Torn

Abstract

Tropical cyclones (TCs) are associated with a variety of significant societal hazards, including wind, rain, and storm surge. Despite this, most model validation effort has been directed toward track and intensity forecasts. In contrast, few studies have investigated the skill of state-of-the-art, high-resolution ensemble prediction systems in predicting the associated TC hazards, which is crucial since TC position and intensity do not always correlate with the TC-related hazards, which can occur far from the actual TC center. Furthermore, dynamical models can provide flow-dependent uncertainty estimates, which in turn can give forecasters more specific guidance than statistical uncertainty estimates based on past errors. This study validates probabilistic forecasts of wind speed and precipitation hazards derived from the HWRF ensemble prediction system and compares their skill to forecasts from the National Hurricane Center's (NHC) stochastically based operational Monte Carlo model, the ECMWF Integrated Forecasting System (IFS), and the NOAA Global Ensemble Forecast System (GEFS) in use from 2017 to 2019. Wind and precipitation forecasts are validated against NHC best track wind radii and the national Stage IV QPE product. The HWRF 34-kt wind forecasts have skill comparable to the global models up to a 60-h lead time, beyond which HWRF skill decreases, possibly due to the detrimental impact of large track errors. In contrast, HWRF is comparable to its competitors for the higher thresholds of 50 and 64 kt throughout the 120-h lead time. For precipitation hazards, HWRF performs similarly to or better than the global models and shows higher, although not perfect, reliability, especially for events exceeding 5 in. per 120 h. Postprocessing, such as quantile mapping, significantly improves forecast skill for all models and can alleviate the reliability issues of the global models.
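
A minimal sketch of quantile-mapping postprocessing of the kind mentioned above, assuming a training sample of forecast and observed accumulations (toy values, not the HWRF or Stage IV data):

```python
# Sketch: map forecast values through the forecast CDF onto the observed CDF
# estimated from a training sample.
import numpy as np

def quantile_map(new_fcst, train_fcst, train_obs):
    """Map new forecast values onto the observed distribution quantile by quantile."""
    quantiles = np.linspace(0.0, 1.0, 101)
    fcst_q = np.quantile(train_fcst, quantiles)
    obs_q = np.quantile(train_obs, quantiles)
    p = np.interp(new_fcst, fcst_q, quantiles)   # position within the forecast climatology ...
    return np.interp(p, quantiles, obs_q)        # ... mapped back through the observed climatology

train_fcst = np.random.gamma(2.0, 10.0, 2000)    # toy 120-h rainfall forecasts (mm)
train_obs = np.random.gamma(2.0, 13.0, 2000)     # toy analyzed accumulations (mm)
print(quantile_map(np.array([20.0, 60.0, 120.0]), train_fcst, train_obs))
```

In practice such mappings are typically built separately per model, threshold, and lead time; the sketch collapses that detail for brevity.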

Restricted access
Jing Zhang, Jie Feng, Hong Li, Yuejian Zhu, Xiefei Zhi, and Feng Zhang

Abstract

Operational and research applications generally use the consensus approach for forecasting the track and intensity of tropical cyclones (TCs) because of the spatial displacement of the TC location and structure among ensemble member forecasts. This approach simply averages the location and intensity information for TCs in the individual ensemble members, which is distinct from the traditional pointwise arithmetic mean (AM) of the ensemble forecast fields. The consensus approach, despite having improved skill relative to the AM in predicting TC intensity, cannot provide forecasts of the TC spatial structure. We introduce a unified TC ensemble mean forecast based on the feature-oriented mean (FM) method to overcome the inconsistency between the AM and consensus forecasts. FM spatially aligns the TC-related features in each ensemble field to their geographical mean positions before averaging their amplitudes.

We select 219 TC forecast samples from the summer of 2017 for an overall evaluation of the FM performance. The results show that the TC track consensus forecasts can differ from the AM track forecasts by hundreds of kilometers at long lead times. The AM also gives a systematic and statistically significant underestimation of TC intensity compared with the consensus forecast. By contrast, FM has TC track and intensity forecast skill very similar to that of the consensus approach. FM can also provide the corresponding ensemble mean forecasts of the TC spatial structure, which are significantly more accurate than the AM for the low-level and upper-level circulation in TCs. The FM method has the potential to serve as a valuable unified ensemble mean approach for TC prediction.
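
A minimal sketch of the FM idea described above: each member's TC-related field is aligned to the ensemble-mean center before averaging. A crude integer shift and toy vortices are assumed here; this is not the authors' implementation.

```python
# Sketch: feature-oriented mean versus pointwise arithmetic mean for displaced vortices.
import numpy as np

def feature_oriented_mean(fields, centers):
    """fields: list of 2D arrays; centers: list of (i, j) TC-center indices."""
    centers = np.asarray(centers)
    mean_center = np.round(centers.mean(axis=0)).astype(int)
    aligned = []
    for field, (ci, cj) in zip(fields, centers):
        di, dj = mean_center[0] - ci, mean_center[1] - cj
        aligned.append(np.roll(field, shift=(di, dj), axis=(0, 1)))  # shift member onto the mean center
    return np.mean(aligned, axis=0)

def toy_member(ci, cj, n=80):
    """A toy 'vortex': a negative anomaly centered at (ci, cj)."""
    ii, jj = np.mgrid[0:n, 0:n]
    return -10.0 * np.exp(-((ii - ci) ** 2 + (jj - cj) ** 2) / 50.0)

members = [toy_member(38, 40), toy_member(42, 44), toy_member(40, 37)]
fm = feature_oriented_mean(members, [(38, 40), (42, 44), (40, 37)])
print("minimum anomaly, pointwise AM:", np.mean(members, axis=0).min())
print("minimum anomaly, FM:          ", fm.min())
```

In this toy example the pointwise AM smears the displaced vortices and weakens their amplitude, whereas the FM preserves the full amplitude at the mean position.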

Open access