Browse

You are looking at 11–20 of 2,828 items for: Weather and Forecasting
Soo-Hyun Kim, Hye-Yeong Chun, Dan-Bi Lee, Jung-Hoon Kim, and Robert D. Sharman

Abstract

Based on a convective gravity wave drag parameterization scheme in a numerical weather prediction (NWP) model, previously proposed near-cloud turbulence (NCT) diagnostics for better detecting turbulence near convection are tested and evaluated using global in situ flight data and output from the operational global NWP model of the Korea Meteorological Administration for one year (December 2016–November 2017). For comparison, 11 clear-air turbulence (CAT) diagnostics widely used in operational NWP-based aviation turbulence forecasting systems are computed separately. For selected cases, the NCT diagnostics more accurately predict localized turbulence events over convective regions, with more realistic intensities, clearly distinguishing them from the broad, weak turbulence areas diagnosed by the conventional CAT diagnostics, which mostly failed to capture these events. Although the overall one-year performance of the NCT diagnostics is lower than that of the conventional CAT diagnostics, because the NCT diagnostics focus exclusively on isolated NCT events, adding the NCT diagnostics to the CAT diagnostics improves the performance of aviation turbulence forecasting. In summer in particular, performance in terms of the area under the curve (AUC) of probability-of-detection statistics is best (AUC = 0.837, a 4% increase over conventional CAT forecasts) when the mean of all CAT and NCT diagnostics is used, while performance in terms of root-mean-square error is best when the maximum of the combined CAT diagnostics and a single NCT diagnostic is used. This implies that adding NCT diagnostics to currently used NWP-based aviation turbulence forecasting systems should benefit the safety of air travel.
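Both the AUC-based skill measure and the mean/max combinations of diagnostics described above can be illustrated compactly. The following Python sketch is not the authors' code: the diagnostic and observation arrays are hypothetical stand-ins, and it simply builds a ROC curve from the probability of detection and the probability of false detection before forming mean- and max-based combinations.

```python
import numpy as np

def auc_from_roc(diagnostic, observed_encounter):
    """AUC for a turbulence diagnostic, built from POD (hit rate) and POFD
    (false detection rate) over sliding thresholds. Both inputs are NumPy
    arrays; observed_encounter is boolean and assumed to contain both classes."""
    pod, pofd = [], []
    for t in np.unique(diagnostic)[::-1]:          # descending thresholds
        warn = diagnostic >= t
        hits = np.sum(warn & observed_encounter)
        misses = np.sum(~warn & observed_encounter)
        false_alarms = np.sum(warn & ~observed_encounter)
        correct_nulls = np.sum(~warn & ~observed_encounter)
        pod.append(hits / (hits + misses))
        pofd.append(false_alarms / (false_alarms + correct_nulls))
    # integrate POD over POFD with the trapezoidal rule
    return np.trapz([0.0] + pod + [1.0], [0.0] + pofd + [1.0])

# diagnostics: array of shape (n_diagnostics, n_samples), already converted to
# a common turbulence-intensity unit -- hypothetical inputs.
def combine_mean(diagnostics):
    return np.mean(diagnostics, axis=0)

def combine_max(diagnostics):
    return np.max(diagnostics, axis=0)
```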

Restricted access
Makenzie J. Krocak and Harold E. Brooks

Abstract

While many studies have examined the quality of individual forecast products, few have attempted to understand the relationships between them. We begin to consider whether such an influence exists by analyzing storm-based tornado warning metrics with respect to whether the warnings occurred within a severe weather watch and, if so, the type of watch. The probability of detection, false alarm ratio, and lead time all generally improve with increasing watch severity. In fact, the probability of detection increased more as a function of watch-type severity than it did over the period of analysis. The false alarm ratio decreased as watch type increased in severity, but by a much smaller margin than the difference in probability of detection. Lead time also improved with increasing watch-type severity: warnings outside of any watch had a mean lead time of 5.5 min, while those inside a particularly dangerous situation tornado watch had a mean lead time of 15.1 min. These results indicate that the existence and type of severe weather watch may influence the quality of tornado warnings. However, it is impossible to separate the influence of watches from possible differences in warning strategy or in environmental characteristics that make tornadoes more or less challenging to warn for. Future studies should attempt to disentangle these influences to assess how much intermediate products affect downstream products.
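As a rough illustration of the verification metrics compared above, the following sketch (hypothetical data layout, not the authors' verification code) computes the probability of detection, false alarm ratio, and mean lead time for a set of warnings; the same function can be applied separately to warnings grouped by the watch type in effect.

```python
import numpy as np

def warning_metrics(warnings, unwarned_tornadoes):
    """warnings: list of dicts with 'verified' (bool) and 'lead_time_min';
    unwarned_tornadoes: count of tornadoes that occurred with no warning."""
    hits = sum(w["verified"] for w in warnings)
    false_alarms = sum(not w["verified"] for w in warnings)
    pod = hits / (hits + unwarned_tornadoes)          # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    mean_lead = np.mean([w["lead_time_min"] for w in warnings if w["verified"]])
    return pod, far, mean_lead

# e.g. compute warning_metrics(...) once per watch category
# ("none", "severe thunderstorm", "tornado", "PDS tornado") and compare.
```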

Restricted access
Franziska Hellmuth, Bjørg Jenny Kokkvoll Engdahl, Trude Storelvmo, Robert O. David, and Steven J. Cooper

Abstract

In winter, orographic precipitation falls as snow at mid- to high latitudes, where it causes avalanches, affects local infrastructure, or leads to flooding during the spring thaw. We present a technique to validate operational numerical weather prediction model simulations in complex terrain. The verification technique uses a combined retrieval approach to obtain surface snowfall accumulation and vertical profiles of snow water at the Haukeliseter test site, Norway. Both the surface observations and the vertical profiles of snow are used to validate simulations from the Norwegian Meteorological Institute’s operational forecast system and two simulations with adjusted cloud microphysics. The retrieved surface snowfall is validated against measurements from a double-fence automated reference gauge (DFAR); the optimal-estimation snowfall retrieval produces +10.9% more surface snowfall than the DFAR. The predicted surface snowfall from the operational forecast model and the two additional simulations with microphysical adjustments (CTRL and ICE-T) is overestimated at the surface by +41.0%, +43.8%, and +59.2%, respectively. At the same time, the CTRL and ICE-T simulations underestimate the mean snow water path by −1071.4% and −523.7%, respectively. The study shows that using surface accumulation or vertical snow water content profiles alone would lead to false conclusions. These results highlight the need to combine ground-based in situ and vertically profiling remote sensing instruments to identify biases in numerical weather prediction.
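The percentage comparisons above amount to relative biases against a reference. A minimal sketch, with hypothetical variable names and without the study's exact normalization, is:

```python
import numpy as np

def relative_bias_percent(estimate, reference):
    """Percentage difference of an estimate against a reference mean,
    applicable to surface snowfall accumulation or snow water path."""
    return 100.0 * (np.mean(estimate) - np.mean(reference)) / np.mean(reference)

# e.g. relative_bias_percent(retrieved_snowfall, dfar_snowfall) for the
# retrieval-vs-DFAR comparison, or model-vs-retrieval for the forecast bias.
```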

Open access
Michelle L. L’Heureux, Michael K. Tippett, and Emily J. Becker

Abstract

The relation between the El Niño–Southern Oscillation (ENSO) and California precipitation has been studied extensively and plays a prominent role in seasonal forecasting. However, a wide range of precipitation outcomes is possible on seasonal time scales, even during extreme ENSO states. Here, we investigate prediction skill and its origins on subseasonal time scales. Model predictions of California precipitation are examined using Subseasonal Experiment (SubX) reforecasts for the period 1999–2016, focusing on those from the Flow-Following Icosahedral Model (FIM). Two potential sources of subseasonal predictability are examined: the tropical Pacific Ocean and upper-level zonal winds near California. In both observations and forecasts, the Niño-3.4 index exhibits a weak and insignificant relationship with daily to monthly averages of California precipitation. Likewise, model tropical sea surface temperature and outgoing longwave radiation show only minimal relationships with California precipitation forecasts, providing no evidence that flavors of El Niño or tropical modes substantially contribute to the success or failure of subseasonal forecasts. On the other hand, an index of upper-level zonal winds is strongly correlated with precipitation in observations and forecasts, across averaging windows and lead times. The wind index is related to ENSO, but the correlation between the wind index and precipitation remains even after accounting for ENSO phase. Intriguingly, the Niño-3.4 index and California precipitation show a slight but robust negative statistical relationship after accounting for the wind index.
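The phrase "after accounting for" above corresponds to a partial correlation. A minimal sketch, with hypothetical input arrays, that linearly removes a control series before correlating is:

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation of x and y after regressing out a control series."""
    def residual(a, b):
        slope, intercept = np.polyfit(b, a, 1)   # least-squares linear fit
        return a - (slope * b + intercept)
    return np.corrcoef(residual(x, control), residual(y, control))[0, 1]

# e.g. partial_corr(nino34, precip, wind_index) isolates the weak negative
# Nino-3.4/precipitation relationship described in the abstract, while
# partial_corr(wind_index, precip, nino34) tests the wind index beyond ENSO.
```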

Restricted access
Yang Lyu, Xiefei Zhi, Shoupeng Zhu, Yi Fan, and Mengting Pan

Abstract

In this study, two pattern projection methods, the stepwise pattern projection method (SPPM) and the newly proposed neighborhood pattern projection method (NPPM), are investigated to improve forecasts of daily maximum and minimum temperatures (Tmax and Tmin) over East Asia at lead times of 1–7 days. The decaying averaging method (DAM) is applied in parallel for comparison. These postprocessing methods effectively calibrate the temperature forecasts relative to the raw ECMWF output. Generally, the SPPM is slightly inferior to the DAM, although the gap narrows with increasing lead time. The NPPM shows clear superiority at all lead times, reducing the mean absolute errors of Tmax and Tmin by about 0.7° and 0.9°C, respectively. The advantages of the two pattern projection methods are concentrated mainly over high-altitude areas such as the Tibetan Plateau, where the raw ECMWF forecasts show the most conspicuous biases. In addition, to further assess these methods for extreme events, two case experiments are carried out, one for a heat wave and one for a cold surge. The NPPM remains the optimal method, with the highest forecast skill, reducing most biases to less than 2°C for both Tmax and Tmin over all lead days. In general, the statistical pattern projection methods effectively reduce spatial biases in forecasts of surface air temperature. Compared with the original SPPM, the NPPM not only produces more effective forecast calibrations but is also more practical to compute and offers greater potential economic benefits in practical applications.
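For context on the DAM baseline, a decaying-average bias correction of this general kind updates a lagged bias estimate each day and subtracts it from the raw forecast. The sketch below is a generic illustration, not the paper's implementation, and the weight value is an assumption:

```python
import numpy as np

def decaying_average_correction(forecasts, observations, weight=0.1):
    """Sequentially update a lagged bias estimate and subtract it from the
    raw forecast: b_t = (1 - w) * b_{t-1} + w * (f_t - o_t)."""
    forecasts = np.asarray(forecasts, dtype=float)
    observations = np.asarray(observations, dtype=float)
    corrected = np.empty_like(forecasts)
    bias = 0.0
    for t, (f, o) in enumerate(zip(forecasts, observations)):
        corrected[t] = f - bias            # correct with the bias known so far
        bias = (1.0 - weight) * bias + weight * (f - o)
    return corrected
```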

Restricted access
Christopher J. Nowotarski, Justin Spotts, Roger Edwards, Scott Overpeck, and Gary R. Woodall

Abstract

Tropical cyclone tornadoes pose a unique challenge to warning forecasters given their often marginal environments and radar attributes. In late August 2017, Hurricane Harvey made landfall on the Texas coast and produced 52 tornadoes over a record-breaking seven consecutive days. To improve warning efforts, this case study of Harvey’s tornadoes includes an event overview as well as a comparison of near-cell environments and radar attributes between tornadic and nontornadic warned cells. Our results suggest that significant differences existed in both the near-cell environments and the radar attributes, particularly rotational velocity, between tornadic cells and false alarms. For many environmental variables and radar attributes, the differences were larger when only tornadoes associated with a tornado debris signature were considered. Our results highlight the potential for further improving warning skill and reducing false alarms by increasing rotational velocity warning thresholds, refining the use of near-storm environment information, and focusing warning efforts on cells likely to produce the most impactful tornadoes.
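A comparison of radar attributes between tornadic and nontornadic warned cells can be framed as a two-sample significance test. The sketch below uses a Mann–Whitney U test on hypothetical rotational velocity samples; the specific test used in the study may differ:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_attribute(tornadic_values, nontornadic_values):
    """Test whether an attribute (e.g., rotational velocity) is larger for
    tornadic warned cells than for nontornadic (false alarm) warned cells."""
    stat, p_value = mannwhitneyu(tornadic_values, nontornadic_values,
                                 alternative="greater")
    return stat, p_value

# e.g. compare_attribute(vrot_tornadic, vrot_nontornadic); the same call can
# be repeated for each near-cell environmental variable of interest.
```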

Restricted access
Erin E. Thomas, Malte Müller, Patrik Bohlinger, Yurii Batrak, and Nicholas Szapiro

Abstract

Accurately simulating the interactions between the components of a coupled Earth modeling system (atmosphere, sea ice, and waves) at kilometer-scale resolution is a new challenge in operational numerical weather prediction. It is difficult because of the complexity of the interacting mechanisms, the limited accuracy of the model components, and the scarcity of observations available for assessing the relevant coupled processes. This study presents a newly developed convective-scale atmosphere–wave coupled forecasting system for the European Arctic. The HARMONIE-AROME configuration of the ALADIN-HIRLAM numerical weather prediction system is coupled to the spectral wave model WAVEWATCH III using the OASIS3 model coupling toolkit. We analyze the impact of representing kilometer-scale atmosphere–wave interactions through coupled and uncoupled forecasts on a model domain with 2.5-km spatial resolution. To assess the coupled model’s accuracy and uncertainties, we compare 48-h forecasts against satellite observational products such as Advanced Scatterometer 10-m wind speed and altimeter-based significant wave height. The fully coupled atmosphere–wave model closely matches the satellite-based wind speed and significant wave height observations as well as surface pressure and wind speed measurements from selected coastal stations. Furthermore, the coupled model has a smaller standard deviation of errors in both 10-m wind speed and significant wave height than the uncoupled forecasts. Atmosphere–wave coupling reduces the short-term forecast error variability of 10-m wind speed and significant wave height, with the greatest benefit occurring for high wind and wave conditions.
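The headline verification statistic above, the standard deviation of forecast errors, reduces to a short computation. A minimal sketch with hypothetical forecast and observation arrays:

```python
import numpy as np

def error_stats(forecast, observation):
    """Bias and standard deviation of forecast-minus-observation differences,
    e.g., for 10-m wind speed against scatterometer winds or significant wave
    height against altimeter retrievals."""
    err = np.asarray(forecast) - np.asarray(observation)
    return err.mean(), err.std(ddof=1)

# bias_coupled, sde_coupled = error_stats(coupled_swh, altimeter_swh)
# bias_uncoupled, sde_uncoupled = error_stats(uncoupled_swh, altimeter_swh)
# A smaller error standard deviation for the coupled run corresponds to the
# improvement reported in the abstract.
```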

Restricted access
Christoph Mony, Lukas Jansing, and Michael Sprenger

Abstract

This study explores the possibility of employing machine learning algorithms to predict foehn occurrence in Switzerland at a north-Alpine (Altdorf) and a south-Alpine (Lugano) station from its synoptic fingerprint in reanalysis data and climate simulations, allowing an investigation of a potential future shift in monthly foehn frequencies. First, inputs from various atmospheric fields from the European Centre for Medium-Range Weather Forecasts interim reanalysis (ERAI) were used to train an XGBoost model. Predictive performance similar to previous work was achieved, showing that foehn can be accurately diagnosed from the coarse synoptic situation. Next, the algorithm was generalized to predict foehn from Community Earth System Model (CESM) ensemble simulations of a present-day and a warmer future climate. The best generalization between ERAI and CESM was obtained by including the present-day CESM data in the training procedure and simultaneously optimizing two objective functions, the negative log loss and the squared mean loss, on the two datasets, respectively. It is demonstrated that the same synoptic fingerprint can be identified in the CESM climate simulations. Finally, predictions for the present-day and future simulations were verified and compared for statistical significance. The model produces valid output for most months, revealing that south foehn in Altdorf is expected to become more common during spring, while north foehn in Lugano is expected to become more common during summer.
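The first step described above, diagnosing foehn from coarse synoptic predictors with XGBoost, can be sketched as follows. The data are synthetic and the feature layout is an assumption; the dual-objective training on reanalysis and CESM data is not reproduced here:

```python
import numpy as np
import xgboost as xgb

# Synthetic stand-in for flattened synoptic predictor fields and a binary
# foehn/no-foehn label; real inputs would come from ERAI (or CESM) grids.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.0).astype(int)

# Gradient-boosted classifier for foehn occurrence; hyperparameters are
# illustrative, not those tuned in the study.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.05,
                          objective="binary:logistic")
model.fit(X, y)
foehn_probability = model.predict_proba(X)[:, 1]
```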

Restricted access
Marvin Kähnert, Harald Sodemann, Wim C. de Rooy, and Teresa M. Valkonen

Abstract

Forecasts of marine cold air outbreaks rely critically on the interplay of multiple parameterization schemes to represent subgrid-scale processes, including shallow convection, turbulence, and microphysics. Even though this interplay has been recognized as a contributor to forecast uncertainty, a quantification of it has been missing. Here, we investigate the temperature and specific humidity tendencies contributed by individual parameterization schemes in the operational weather prediction model AROME-Arctic. From a case study of an extensive marine cold air outbreak over the Nordic Seas, we find that the planetary boundary layer type assigned by the model algorithm modulates the contributions of individual schemes and affects the interactions between them. In addition, we demonstrate the sensitivity of these interactions to an increase or decrease in the strength of the parameterized shallow convection. The individual tendencies from several parameterizations can thereby compensate one another, sometimes leaving a small residual. In some instances this residual remains nearly unchanged between the sensitivity experiments, even though individual tendencies differ by up to an order of magnitude. Using the individual tendency output, we can characterize the subgrid-scale and grid-scale responses of the model and trace them back to their underlying causes. We thereby highlight the utility of individual tendency output for understanding process-related differences between model runs with varying physical configurations and for the continued development of numerical weather prediction models.
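The tendency bookkeeping described above can be illustrated with a short budget calculation. The scheme names and array layout below are assumptions, not the AROME-Arctic output format:

```python
import numpy as np

def tendency_budget(tendencies):
    """tendencies: dict mapping scheme name (e.g. 'turbulence',
    'shallow_convection', 'microphysics', 'radiation') to arrays of dT/dt
    on the model grid. Returns the total tendency and the fraction of the
    summed absolute tendencies that survives as a residual."""
    total = sum(tendencies.values())
    residual_fraction = np.abs(total) / sum(np.abs(v) for v in tendencies.values())
    return total, residual_fraction

# A small residual_fraction where individual terms are large indicates the
# compensation between schemes discussed in the abstract.
```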

Restricted access
Evan S. Bentley, Richard L. Thompson, Barry R. Bowers, Justin G. Gibbs, and Steven E. Nelson

Abstract

Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as in prior work, shows promise as a means of estimating tornado warning performance, as well as the relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and fewer false alarms compared with observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings on weak (<30 kt), broad (>1 n mi) circulations in a poor (STP = 0) environment, by careful elimination of velocity data artifacts such as sidelobe contamination, and by greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
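The screening criteria quoted above translate directly into a simple decision rule. The sketch below uses the thresholds from the abstract but hypothetical field names, and is only an illustration of the idea, not an operational algorithm:

```python
def warning_guidance(vrot_kt, circulation_diameter_nmi, stp):
    """Combine low-level rotational velocity, circulation size, and STP into
    a qualitative warning-confidence category."""
    if vrot_kt >= 30.0 and stp > 0.0:
        return "higher-confidence warning scenario"
    if vrot_kt < 30.0 and circulation_diameter_nmi > 1.0 and stp == 0.0:
        return "weak, broad circulation in a poor environment: likely false alarm"
    return "marginal: scrutinize velocity data quality and tornado reports"
```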

Restricted access