Browse

You are looking at 1–10 of 2,774 items for: Weather and Forecasting, All content

Matthew D. Flournoy, Michael C. Coniglio, and Erik N. Rasmussen

Abstract

Although environmental controls on bulk supercell potential and hazards have been studied extensively, relationships between environmental conditions and temporal changes to storm morphology remain less explored. These relationships are examined in this study using a compilation of sounding data collected during field campaigns from 1994 to 2019 in the vicinity of 216 supercells. Environmental parameters are calculated from the soundings and related to storm-track characteristics such as initial cell motion and the time of the right turn (i.e., the time elapsed between cell initiation and the first time the supercell attains a quasi-steady motion directed clockwise from its initial motion). We do not find any significant associations between environmental parameters and the time of the right turn; somewhat surprisingly, this includes storm-relative environmental helicity, which shows no relationship with the time elapsed between cell initiation and the onset of deviant motion. Initial cell motion is best approximated by the direction of the 0–6-km mean wind at two-thirds of its speed, a result of advection and propagation in the 0–4- and 0–2-km layers, respectively. Unsurprisingly, Bunkers-right storm motion is a good estimate of post-turn motion, but storms whose post-turn motion lies to the left of Bunkers-right are less likely to be tornadic. These findings are relevant for real-time forecasting efforts in predicting the path and tornado potential of supercells up to hours in advance.
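For readers who want to experiment with the motion estimates mentioned above, the sketch below is a minimal numpy illustration, not the authors' code: it computes the two-thirds 0–6-km mean-wind estimate of initial motion and a Bunkers-right estimate of post-turn motion from a wind profile, using the commonly cited internal-dynamics formulation (7.5 m s−1 deviation to the right of the 0–6-km shear vector); the layer choices are assumptions.

```python
import numpy as np

def layer_mean_wind(z, u, v, z_bot, z_top):
    """Mean wind components over a height layer (z in m AGL, u/v in m/s)."""
    mask = (z >= z_bot) & (z <= z_top)
    return u[mask].mean(), v[mask].mean()

def initial_motion_estimate(z, u, v):
    """Two-thirds of the 0-6-km mean wind, keeping its direction."""
    um, vm = layer_mean_wind(z, u, v, 0.0, 6000.0)
    return (2.0 / 3.0) * um, (2.0 / 3.0) * vm

def bunkers_right(z, u, v, deviation=7.5):
    """Bunkers-right motion: 0-6-km mean wind plus a 7.5 m/s deviation
    perpendicular (rightward) to the 0-6-km shear vector."""
    um, vm = layer_mean_wind(z, u, v, 0.0, 6000.0)
    u_low, v_low = layer_mean_wind(z, u, v, 0.0, 500.0)       # near-surface mean
    u_top, v_top = layer_mean_wind(z, u, v, 5500.0, 6000.0)   # 5.5-6-km mean
    us, vs = u_top - u_low, v_top - v_low                     # shear vector
    mag = np.hypot(us, vs)
    return um + deviation * vs / mag, vm - deviation * us / mag
```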

Restricted access
Seth P. Howard, Kim E. Klockow-McClain, Alison P. Boehmer, and Kevin M. Simmons

Abstract

Tornadoes cause billions of dollars in damage and over 100 fatalities on average annually. Yet an indirect cost of these storms lies in lost sales and/or lost productivity from responding to over 2000 warnings per year. This project responds to the Weather Research and Forecasting Innovation Act of 2017, H.R. 353, which calls for the use of social and behavioral science to study and improve storm warning systems. Our goal is to provide an analysis of the cost avoidance that could accrue from a change to the warning paradigm, particularly one that includes probabilistic hazard information at storm scales. A survey of nearly 500 firms was conducted in and near the Dallas–Fort Worth metropolitan area, asking about experience with tornadoes, sources of information for severe weather, the expected cost of responding to tornado warnings, and how the firm would respond to either deterministic or probabilistic warnings. We find a dramatic difference in response between the current deterministic warnings and the proposed probabilistic warnings, and that a probabilistic information system produces annual cost avoidance in the range of $2.3–$7.6 billion (U.S. dollars) relative to the current deterministic warning paradigm.

Restricted access
Sarah Tessendorf, Allyson Rugg, Alexei Korolev, Ivan Heckman, Courtney Weeks, Gregory Thompson, Darcy Jacobson, Dan Adriaansen, and Julie Haggerty

Abstract

Supercooled large drop (SLD) icing poses a unique hazard for aircraft and has resulted in new regulations regarding aircraft certification to fly in regions of known or forecast SLD icing conditions. The new regulations define two SLD icing categories based upon the maximum supercooled liquid water drop diameter (Dmax): freezing drizzle (100–500 μm) and freezing rain (>500 μm). Recent upgrades to U.S. operational numerical weather prediction models lay a foundation for more relevant aircraft icing guidance, including the potential to predict drop size explicitly. The primary focus of this paper is to evaluate a proposed method for estimating the maximum drop size from model forecast data to differentiate freezing drizzle from freezing rain conditions. Using in situ cloud microphysical measurements collected in icing conditions during two field campaigns between January and March 2017, this study shows that the High-Resolution Rapid Refresh (HRRR) model is capable of distinguishing the SLD icing categories of freezing drizzle and freezing rain using a Dmax extracted from the rain category of the microphysics output. The extracted Dmax from the model correctly predicted the observed SLD icing category as much as 99% of the time when the HRRR accurately forecast SLD conditions; however, performance varied with the method used to define Dmax and with the field campaign dataset used for verification.
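A minimal sketch of the category thresholds quoted above (illustrative only; the paper's verification extracts Dmax from the HRRR rain-category size distribution, which is not reproduced here):

```python
def classify_sld(dmax_um):
    """Map a maximum supercooled drop diameter (microns) to an SLD icing
    category using the diameter thresholds quoted in the abstract."""
    if dmax_um is None or dmax_um < 100.0:
        return "no SLD (small-drop icing only)"
    if dmax_um <= 500.0:
        return "freezing drizzle (100-500 um)"
    return "freezing rain (> 500 um)"
```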

Restricted access
Shu-Ya Chen, Cheng-Peng Shih, Ching-Yuang Huang, and Wen-Hsin Teng

Abstract

Conventional soundings are rather limited over the western North Pacific, and this scarcity can be largely offset by GNSS radio occultation (RO) data. We utilize the GSI hybrid assimilation system to assimilate RO data and the multiresolution global Model for Prediction Across Scales (MPAS) to investigate the impact of RO data on the prediction of Typhoon Nepartak, which passed over southern Taiwan in 2016. In this study, the performance of assimilation with local RO refractivity and bending-angle operators is compared for the assimilation analysis and the typhoon forecast.

Assimilation of either RO observable produces similar and comparable temperature and moisture increments after cycling assimilation and, at later times, largely reduces the RMSEs relative to the forecast without RO data assimilation. The forecast results at 60–15-km resolution show that RO data assimilation markedly improves the typhoon track prediction compared to that without RO data assimilation, and that assimilation of bending angle performs better than assimilation of refractivity, in particular for the wind forecast. The improvement in the forecast track is mainly due to the improved simulation of the typhoon's translation. Diagnosis of the wavenumber-one potential vorticity (PV) tendency budget indicates that for bending-angle assimilation the northwestward typhoon translation, dominated by PV horizontal advection, is slowed by the southward tendency induced by the stronger differential diabatic heating south of the typhoon center. Simulations with the resolution enhanced to 3 km in the region of the storm track show further improvements in both typhoon track and intensity prediction with RO data assimilation. Positive RO impacts on track prediction are also illustrated for two other typhoons using the MPAS-GSI system.
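For context on the refractivity observable mentioned above, local RO refractivity is commonly modeled with the Smith–Weintraub relation; the sketch below uses those standard coefficients, which may differ slightly from the exact operator implemented in GSI.

```python
def refractivity(p_hpa, t_k, e_hpa):
    """Atmospheric refractivity N from pressure (hPa), temperature (K), and
    water-vapor pressure (hPa), using the common Smith-Weintraub form:
    N = 77.6 * p/T + 3.73e5 * e/T**2."""
    return 77.6 * p_hpa / t_k + 3.73e5 * e_hpa / t_k**2
```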

Restricted access
Matthew T. Bray, David D. Turner, and Gijs de Boer

Abstract

Despite a need for accurate weather forecasts for societal and economic interests in the U.S. Arctic, thorough evaluations of operational numerical weather prediction in the region have been limited. In particular, the Rapid Refresh (RAP) model, which plays a key role in short-term forecasting and decision-making, has seen very limited assessment in northern Alaska, with most evaluation efforts focused on lower latitudes. In the present study, we verify forecasts from version 4 of the RAP against radiosonde, surface meteorological, and radiative flux observations from two Arctic sites on the northern Alaskan coastline, with a focus on boundary-layer thermodynamic and dynamic biases, the model representation of surface inversions, and cloud characteristics. We find persistent seasonal thermodynamic biases near the surface that vary with wind direction and may be related to the RAP's handling of sea ice and ocean interactions. These biases appear to have diminished in the latest version of the RAP (version 5), which includes refined handling of sea ice, among other improvements. In addition, we find that despite capturing boundary-layer temperature profiles well overall, the RAP struggles to consistently represent strong, shallow surface inversions. Further, while the RAP generally forecasts the presence of clouds accurately, there are errors in the simulated characteristics of these clouds, which we hypothesize may be related to the RAP's treatment of mixed-phase clouds.
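One simple way to identify the strong, shallow surface-based inversions discussed above is sketched below (a schematic illustration, not the verification code used in the study; the 500-m search depth is a placeholder assumption).

```python
import numpy as np

def surface_inversion(z, t, max_depth=500.0):
    """Return (depth_m, strength_K) of a surface-based temperature inversion,
    or None if temperature falls immediately above the surface.
    z: height (m AGL), t: temperature (K), both sorted by increasing height."""
    if t[1] <= t[0]:
        return None
    top = 1
    # Follow the warming layer upward until temperature starts to decrease
    # or the search depth is exceeded.
    while top + 1 < len(z) and t[top + 1] > t[top] and z[top + 1] <= max_depth:
        top += 1
    return z[top] - z[0], t[top] - t[0]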

Restricted access
Evan A. Kalina, Isidora Jankov, Trevor Alcott, Joseph Olson, Jeffrey Beck, Judith Berner, David Dowell, and Curtis Alexander

Abstract

The High-Resolution Rapid Refresh Ensemble (HRRRE) is a 36-member ensemble analysis system with nine forecast members that utilizes the Advanced Research Weather Research and Forecasting (ARW-WRF) dynamical core and the physics suite from the operational Rapid Refresh/High-Resolution Rapid Refresh deterministic modeling system. A goal of HRRRE development is a system with sufficient spread among members, comparable in magnitude to the random error in the ensemble mean, to represent the range of possible future atmospheric states. HRRRE member diversity has traditionally been obtained by perturbing the initial and lateral boundary conditions of each member, but recent development has focused on implementing stochastic approaches in HRRRE to generate additional spread. These techniques were tested in retrospective experiments and in the May 2019 Hazardous Weather Testbed Spring Experiment (HWT-SE). Results show a 6–25% increase in ensemble spread in 2-m temperature, 2-m mixing ratio, and 10-m wind speed when stochastic parameter perturbations are used in HRRRE (HRRRE-SPP). Case studies from HWT-SE demonstrate that HRRRE-SPP performed similarly to or better than the operational High-Resolution Ensemble Forecast system version 2 (HREFv2) and the nonstochastic HRRRE. However, subjective evaluations provided by HWT-SE forecasters indicated that, overall, HRRRE-SPP predicted lower probabilities of severe weather (using updraft helicity as a proxy) than HREFv2. A statistical analysis of the performance of HRRRE-SPP and HREFv2 from the 2019 summer convective season supports this claim, but also demonstrates that the two systems have similar reliability for prediction of severe weather using updraft helicity.
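The spread-versus-error comparison described above can be illustrated with a small sketch (assumed array shapes; not the HRRRE verification code): the domain-averaged member spread about the ensemble mean is compared with the RMSE of the ensemble mean against observations.

```python
import numpy as np

def spread_and_rmse(forecasts, obs):
    """forecasts: (n_members, n_points) array of member forecasts at
    observation locations; obs: (n_points,) array of observations.
    Returns domain-averaged ensemble spread and ensemble-mean RMSE."""
    ens_mean = forecasts.mean(axis=0)
    spread = forecasts.std(axis=0, ddof=1).mean()
    rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
    return spread, rmse
```

In a well-calibrated ensemble these two quantities are comparable in magnitude, which is the design goal stated in the abstract.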

Open access
Cui Liu, Jianhua Sun, Xinlin Yang, Shuanglong Jin, and Shenming Fu

Abstract

Precipitation forecasts from the ECMWF model from March to September during 2015–2018 were evaluated against observed precipitation at 2411 stations from the China Meteorological Administration. To eliminate the influence of varying climatology across regions of China, the Stable Equitable Error in Probability Space (SEEPS) method was used to obtain thresholds for 3-h and 6-h accumulated precipitation at each station and to classify precipitation into light, medium, and heavy categories. The model was evaluated for these categories using categorical and continuous methods. The threat score and the equitable threat score showed that the model's rainfall forecasts were generally more accurate at shorter lead times, with the best performance occurring in the middle and lower reaches of the Yangtze River Basin. The miss ratio for heavy precipitation was higher in the northern region than in the southern region, while heavy-precipitation false alarms were more frequent in southwestern China. Overall, the miss ratio and false-alarm ratio for heavy precipitation were highest in northern China and western China, respectively. For light and medium precipitation, the model performed best in the middle and lower reaches of the Yangtze River Basin. The model predicted too much light and medium precipitation but too little heavy precipitation. Heavy precipitation was generally underestimated over all of China, especially in the western region of China, South China, and the Yungui Plateau; this systematic underestimation is attributed to the model resolution and the associated parameterization of convection.
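The categorical scores used above have standard contingency-table definitions; a minimal sketch (counts per station and per category, with the categories determined by the SEEPS-based thresholds described in the abstract) is:

```python
def threat_scores(hits, misses, false_alarms, correct_negatives):
    """Threat score (TS) and equitable threat score (ETS) from a 2x2
    contingency table of forecast/observed precipitation events."""
    total = hits + misses + false_alarms + correct_negatives
    ts = hits / (hits + misses + false_alarms)
    # Hits expected by chance, given the forecast and observed frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return ts, ets
```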

Restricted access
Andrew Hazelton, Zhan Zhang, Bin Liu, Jili Dong, Ghassan Alaka, Weiguo Wang, Tim Marchok, Avichal Mehra, Sundararaman Gopalakrishnan, Xuejin Zhang, Morris Bender, Vijay Tallapragada, and Frank Marks

Abstract

NOAA’s Hurricane Analysis and Forecast System (HAFS) is an evolving FV3-based hurricane modeling system that is expected to replace the operational hurricane models at the National Weather Service. Supported by the Hurricane Forecast Improvement Program (HFIP), global-nested and regional versions of HAFS were run in real time in 2019 to create a first baseline for HAFS advancement. In this study, forecasts from the global-nested configuration of HAFS (HAFS-globalnest) are evaluated and compared with other operational and experimental models. The forecasts by HAFS-globalnest covered the period from July through October during the 2019 hurricane season. Tropical cyclone (TC) track, intensity, and structure forecast verifications are examined. HAFS-globalnest showed track skill superior to several operational hurricane models and comparable intensity and structure skill, although its skill in predicting rapid intensification was slightly inferior to that of the operational models. HAFS-globalnest correctly predicted that Hurricane Dorian would slow and turn north in the Bahamas and also correctly predicted structural features in other TCs, such as a sting jet in Hurricane Humberto during extratropical transition. Humberto was also a case where HAFS-globalnest had better track forecasts than a regional version of HAFS (HAFS-SAR) due to a better representation of the large-scale flow. These examples and others are examined through comparisons with airborne tail Doppler radar from the NOAA WP-3D to provide a more detailed evaluation of TC structure prediction. The results from this real-time experiment motivate several future model improvements and highlight the promise of HAFS-globalnest for improved TC prediction.
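Track verification of the kind summarized above typically reduces to great-circle distances between forecast and best-track TC centers; a small haversine sketch (a generic illustration, not the verification package used for HAFS) is:

```python
import numpy as np

def track_error_km(lat_f, lon_f, lat_o, lon_o, radius_km=6371.0):
    """Great-circle distance (km) between a forecast TC center (lat_f, lon_f)
    and the best-track center (lat_o, lon_o), via the haversine formula."""
    phi1, phi2 = np.radians(lat_f), np.radians(lat_o)
    dphi = phi2 - phi1
    dlam = np.radians(lon_o - lon_f)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2.0 * radius_km * np.arcsin(np.sqrt(a))
```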

Restricted access
Theodore W. Letcher, Sandra L. LeGrand, and Christopher Polashenski

Abstract

Blowing snow presents substantial risk to human activities by causing severe visibility degradation and snow drifting. It also poses a forecasting challenge, since it is not generally simulated in operational weather forecast models. In this study, we apply a physically based blowing snow model as a diagnostic overlay to output from a reforecast WRF simulation of a significant blowing snow event that occurred over the northern Great Plains of the United States during the winter of 2019. The blowing snow model is coupled to an optics parameterization that estimates the visibility reduction caused by blowing snow. This overlay is qualitatively evaluated against false-color satellite imagery from the GOES-16 operational weather satellite and available surface visibility observations. The WRF-simulated visibility is substantially improved when blowing snow hydrometeors are incorporated. Furthermore, the model-simulated plume of blowing snow roughly corresponds to the blowing snow plumes visible in the satellite imagery. Overall, this study illustrates how a blowing snow diagnostic model can aid weather forecasters in making blowing snow visibility forecasts, and demonstrates how the model can be evaluated against satellite imagery.
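An optics parameterization of this kind converts blowing snow mass into an extinction coefficient and then into visibility; the sketch below shows only the final, standard Koschmieder step of that chain (the 2% contrast threshold is the conventional assumption and may differ from the parameterization used in the study).

```python
import math

def visibility_m(beta_ext_per_m, contrast_threshold=0.02):
    """Koschmieder relation: visibility (m) from an extinction coefficient
    (m^-1). With the usual 2% contrast threshold this is ~3.912 / beta."""
    return -math.log(contrast_threshold) / beta_ext_per_m
```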

Open access
Florian Dupuy, Olivier Mestre, Mathieu Serrurier, Valentin Kivachuk Burdá, Michaël Zamo, Naty Citlali Cabrera-Gutiérrez, Mohamed Chafik Bakkay, Jean-Christophe Jouhaud, Maud-Alix Mader, and Guillaume Oller

Abstract

Cloud cover provides crucial information for many applications, such as planning land-observation missions from space. It remains, however, a challenging variable to forecast, and numerical weather prediction (NWP) models suffer from significant biases, justifying the use of statistical postprocessing techniques. In this study, cloud cover from ARPEGE (the Météo-France global NWP model) is postprocessed using a convolutional neural network (CNN), the most popular machine learning tool for image data. In our case, the CNN allows the integration of the spatial information contained in NWP outputs. We use a gridded cloud cover product derived from satellite observations over Europe as ground truth, and the predictors are spatial fields of various variables produced by ARPEGE at the corresponding lead time. We show that a simple U-Net architecture (a particular type of CNN) produces significant improvements over Europe. Moreover, the U-Net outperforms more traditional machine learning methods used operationally, such as random forests and logistic quantile regression. When using a large number of predictors, a first step toward interpretation is to rank the predictors by importance. Traditional ranking methods (permutation importance, sequential selection, etc.) require substantial computational resources. We therefore introduce a predictor-weighting layer ahead of the traditional U-Net architecture to produce such a ranking. The small number of additional weights to train (one per predictor) has a negligible impact on computational time, a major advantage over traditional methods.
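The predictor-weighting idea can be sketched outside any deep learning framework: each predictor field is multiplied by its own trainable scalar before entering the U-Net, and the magnitudes of the learned scalars provide the ranking. The snippet below is a schematic numpy illustration (names, shapes, and weight values are placeholders; in the actual system the weights are learned jointly with the network during training).

```python
import numpy as np

def weight_predictors(x, w):
    """Scale each predictor field by its own weight before the U-Net.
    x: (batch, n_predictors, ny, nx) stack of NWP predictor fields.
    w: (n_predictors,) vector of per-predictor weights."""
    return x * w[None, :, None, None]

def rank_predictors(names, w):
    """Rank predictors by the magnitude of their learned weights."""
    order = np.argsort(-np.abs(w))
    return [(names[i], float(w[i])) for i in order]
```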

Open access