Weather and Forecasting
Browse: showing items 101–110 of 2,831 (all content).
Chanh Kieu, Cole Evans, Yi Jin, James D. Doyle, Hao Jin, and Jonathan Moskaitis

Abstract

This study examines the dependence of tropical cyclone (TC) intensity forecast errors on track forecast errors in the Coupled Ocean–Atmosphere Mesoscale Prediction System for Tropical Cyclones (COAMPS-TC) model. Using real-time forecasts and retrospective experiments during 2015–18, verification of TC intensity errors conditioned on different 5-day track error thresholds shows that reducing the 5-day track errors by 50%–70% can help reduce the absolute intensity errors by 18%–20% in the 2018 version of the COAMPS-TC model. Such impacts of track errors on the TC intensity errors are most persistent at 4–5-day lead times in all three major ocean basins, indicating a significant control of global models on the forecast skill of the COAMPS-TC model. Interestingly, however, lowering the 5-day track errors below 80 n mi (1 n mi = 1.852 km) does not reduce TC absolute intensity errors further. Instead, the 4–5-day intensity errors appear to be saturated at around 10–12 kt (1 kt ≈ 0.51 m s−1) for cases with small track errors, thus suggesting the existence of some inherent intensity errors in regional models. Additional idealized simulations under a perfect model scenario reveal that the COAMPS-TC model possesses an intrinsic intensity variation at the TC mature stage in the range of 4–5 kt, regardless of the large-scale environment. Such intrinsic intensity variability in the COAMPS-TC model highlights the importance of potential chaotic TC dynamics, rather than model deficiencies, in determining TC intensity errors at 4–5-day lead times. These results suggest a fundamental limit in the improvement of TC intensity forecasts by numerical models that one should consider in future model development and evaluation.
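The conditional verification described above can be sketched as follows. This is a minimal illustration with synthetic data: the sample, thresholds, and error behavior are stand-ins, not COAMPS-TC output.

```python
import numpy as np

def conditioned_intensity_error(track_err_nmi, abs_int_err_kt, thresholds_nmi):
    """Mean absolute intensity error (kt) over forecasts whose 5-day
    track error falls at or below each threshold (n mi)."""
    track = np.asarray(track_err_nmi, dtype=float)
    ierr = np.asarray(abs_int_err_kt, dtype=float)
    stats = {}
    for thr in thresholds_nmi:
        mask = track <= thr
        stats[thr] = float(ierr[mask].mean()) if mask.any() else float("nan")
    return stats

# Synthetic sample: intensity error grows with track error but saturates
# near 10 kt for small track errors, mimicking the behavior reported above.
rng = np.random.default_rng(0)
track = rng.uniform(20.0, 400.0, 500)
ierr = np.maximum(10.0, 5.0 + 0.05 * track) + rng.normal(0.0, 0.5, 500)
stats = conditioned_intensity_error(track, ierr, [80, 160, 320])
```

Binning this way makes the saturation visible: mean intensity error keeps falling as the track-error threshold tightens, until it flattens near the intrinsic floor.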

Restricted access
Guiting Song, Robert Huva, Yu Xing, and Xiaohui Zhong

Abstract

For most locations on Earth the ability of a numerical weather prediction (NWP) model to accurately simulate surface irradiance relies heavily on the NWP model being able to resolve cloud coverage and thickness. At horizontal resolutions at or below a few kilometers NWP models begin to explicitly resolve convection and the clouds that arise from convective processes. However, even at high resolutions, biases may remain in the model and result in under- or overprediction of surface irradiance. In this study we explore the correction of such systematic biases using a moisture adjustment method in tandem with the Weather Research and Forecasting (WRF) Model for a location in Xinjiang, China. After extensive optimization of the configuration of the WRF Model we show that systematic biases still exist—in particular for wintertime in Xinjiang. We then demonstrate the moisture adjustment method with cloudy days for January 2019. Adjusting the relative humidity by 12% through the vertical led to a root-mean-square error (RMSE) improvement of 57.8% and a 90.5% reduction in bias for surface irradiance.
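The headline numbers above are relative improvements in RMSE and bias; a minimal sketch of those two verification metrics and the percentage-reduction calculation, using synthetic irradiance values rather than the study's data:

```python
import numpy as np

def rmse(forecast, obs):
    f, o = np.asarray(forecast, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((f - o) ** 2)))

def bias(forecast, obs):
    f, o = np.asarray(forecast, float), np.asarray(obs, float)
    return float(np.mean(f - o))

def pct_improvement(before, after):
    """Percentage reduction in the magnitude of a verification score."""
    return 100.0 * (abs(before) - abs(after)) / abs(before)

obs = np.array([120.0, 300.0, 450.0, 200.0])      # W m^-2, synthetic
raw = obs + 80.0                                   # systematic overprediction
adj = obs + np.array([8.0, -5.0, 6.0, -4.0])       # after a moisture adjustment

rmse_gain = pct_improvement(rmse(raw, obs), rmse(adj, obs))
bias_gain = pct_improvement(bias(raw, obs), bias(adj, obs))
```

Note how a correction that removes a systematic offset reduces bias much more than RMSE, since random scatter remains in the adjusted forecast.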

Restricted access
Cameron J. Nixon and John T. Allen

Abstract

The paths of tornadoes have long been a subject of fascination, dating back to the meticulously drawn damage tracks of Dr. Tetsuya Theodore “Ted” Fujita. Though uncommon, some tornadoes have been noted to take sudden left turns from their previous path. This has the potential to present an extreme challenge to warning lead time and to the spread of timely, accurate information to broadcasters and emergency managers. While a few hypotheses exist as to why tornadoes deviate, none have been tested for their potential use in operational forecasting and nowcasting. As a result, such deviations go largely unanticipated by forecasters. A sample of 102 leftward deviant tornadic low-level mesocyclones was tracked via WSR-88D and assessed for their potential predictability. A simple hodograph technique is presented that shows promising skill in predicting the motion of deviant tornadoes, which, upon “occlusion,” detach from the parent storm’s updraft centroid and advect leftward or rearward by the low-level wind. This metric, a vector average of the parent storm motion and the mean wind in the lowest half-kilometer, proves effective at anticipating deviant tornado motion with a median error of less than 6 kt (1 kt ≈ 0.51 m s−1). With over 25% of analyzed low-level mesocyclones deviating completely out of the tornado warning polygon issued by their respective National Weather Service Weather Forecast Office, the adoption of this new technique could improve warning performance. Furthermore, with over 35% of tornadoes becoming “deviant” almost immediately upon formation, the ability to anticipate such events may inspire a new paradigm for tornado warnings that, when covering unpredictable behavior, are proactive instead of reactive.
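The hodograph technique described above reduces to a vector average; a minimal sketch with illustrative (not observed) wind and storm-motion values:

```python
import math

def vector_average(u1, v1, u2, v2):
    """Component-wise average of two motion vectors."""
    return (0.5 * (u1 + u2), 0.5 * (v1 + v2))

def speed_kt(u, v):
    return math.hypot(u, v)

# Assumed example values in kt (u = eastward, v = northward components):
storm_u, storm_v = 20.0, 5.0   # parent storm motion
low_u, low_v = 6.0, 15.0       # mean wind in the lowest half-kilometer

# Predicted deviant-tornado motion: slower and turned to the left of the storm.
dev_u, dev_v = vector_average(storm_u, storm_v, low_u, low_v)
```

The averaged vector captures the idea that an occluded tornado is no longer steered purely by the storm but is increasingly advected by the low-level wind.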

Restricted access
Lu Yang, Mingxuan Chen, Xiaoli Wang, Linye Song, Meilin Yang, Rui Qin, Conglan Cheng, and Siteng Li

Abstract

The ability to forecast thermodynamic conditions aloft and near the surface is critical to the accurate forecasting of precipitation type at the surface. This paper presents an experimental version of a new scheme for diagnosing precipitation type. The method considers the optimum surface temperature threshold associated with each precipitation type and combines model-based explicit fields of hydrometeors with higher-resolution modified thermodynamic and topographic information to determine precipitation types in North China. Based on over 60 years of precipitation-type samples from North China, this study explores the climatological characteristics of the five precipitation types—snow, rain, ice pellets (IP), rain/snow mix (R/S MIX), and freezing rain (FZ)—as well as the suitable air temperature (T_a) and wet-bulb temperature (T_w) thresholds for distinguishing different precipitation types. Direct output from numerical weather prediction (NWP) models, such as temperature and humidity, was modified by downscaling and bias correction, as well as by incorporating the latest surface observational data and high-resolution topographic data. Validation of the precipitation-type forecasts from this scheme was performed against observations from the 2016 to 2019 winter seasons, and two case studies were also analyzed. Compared with the similar diagnostic routine in the High-Resolution Rapid Refresh (HRRR) forecasting system used to predict precipitation type over North China, the skill of the method proposed here is similar for rain and better for snow, R/S MIX, and FZ. Furthermore, depiction of the diagnosed boundary between R/S MIX and snow is good in most areas. However, the number of misclassifications for R/S MIX is significantly larger than for rain and snow.
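A threshold-style diagnosis of this kind can be sketched as below. The thresholds and decision order are illustrative placeholders, not the values derived in the study, and ice pellets are omitted for brevity:

```python
def diagnose_ptype(t_air_c, t_wetbulb_c, warm_layer_aloft=False):
    """Classify surface precipitation type from 2-m air temperature,
    wet-bulb temperature (deg C), and a flag for a melting layer aloft.
    Thresholds here are hypothetical, for illustration only."""
    if warm_layer_aloft and t_air_c <= 0.0:
        return "FZ"        # melted aloft, refreezing at a subfreezing surface
    if t_wetbulb_c <= 0.0:
        return "snow"
    if t_wetbulb_c <= 1.0:
        return "R/S MIX"   # transitional wet-bulb range
    return "rain"

examples = [(-3.0, -4.0, False), (2.0, 0.5, False),
            (5.0, 3.0, False), (-1.0, -2.0, True)]
ptypes = [diagnose_ptype(*e) for e in examples]
```

The wet-bulb temperature, rather than the dry-bulb temperature, is the more discriminating surface predictor because it accounts for evaporative cooling of falling hydrometeors.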

Restricted access
Arlan Dirkson, Bertrand Denis, Michael Sigmond, and William J. Merryfield

Abstract

Dynamical forecasting systems are being used to skillfully predict deterministic ice-free and freeze-up date events in the Arctic. This paper extends such forecasts to a probabilistic framework and tests two calibration models to correct systematic biases and improve the statistical reliability of the event dates: trend-adjusted quantile mapping (TAQM) and nonhomogeneous censored Gaussian regression (NCGR). TAQM is a probability distribution mapping method that corrects the forecast for climatological biases, whereas NCGR relates the calibrated parametric forecast distribution to the raw ensemble forecast through a regression model framework. For NCGR, the observed event trend and ensemble-mean event date are used to predict the central tendency of the predictive distribution. For modeling forecast uncertainty, we find that the ensemble-mean event date, which is related to forecast lead time, performs better than the ensemble variance itself. Using a multidecadal hindcast record from the Canadian Seasonal to Interannual Prediction System (CanSIPS), TAQM and NCGR are applied to produce categorical forecasts quantifying the probabilities for early, normal, and late ice retreat and advance. While TAQM performs better than adjusting the raw forecast for mean and linear trend bias, NCGR is shown to outperform TAQM in terms of reliability, skill, and an improved tendency for forecast probabilities to be no worse than climatology. Testing various cross-validation setups, we find that NCGR remains useful when shorter hindcast records (~20 years) are available. By applying NCGR to operational forecasts, stakeholders can be more confident in using seasonal forecasts of sea ice event timing for planning purposes.
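The distribution-mapping step at the core of TAQM can be sketched as follows: a forecast value is assigned the observed climatological value at the same empirical quantile of the model climatology. The trend adjustment and the NCGR regression are omitted, and the event-date climatologies are synthetic:

```python
import numpy as np

def quantile_map(x, model_clim, obs_clim):
    """Map forecast value x into observation space via its empirical
    quantile in the model climatology."""
    model_sorted = np.sort(np.asarray(model_clim, float))
    obs_sorted = np.sort(np.asarray(obs_clim, float))
    # empirical quantile of x within the model climatology
    q = np.searchsorted(model_sorted, x, side="right") / model_sorted.size
    q = np.clip(q, 0.0, 1.0)
    return float(np.quantile(obs_sorted, q))

# Synthetic ice-retreat dates (day of year): the model runs ~10 days early.
model_clim = np.arange(150.0, 170.0)
obs_clim = np.arange(160.0, 180.0)
mapped = quantile_map(155.0, model_clim, obs_clim)
```

The mapping corrects the climatological bias while preserving where the forecast sits within its own distribution, which is why it can improve reliability without using the ensemble spread directly.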

Restricted access
Ezio L. Mauri and William A. Gallus Jr.

Abstract

Nocturnal bow echoes can produce wind damage, even in situations where elevated convection occurs. Accurate forecasts of wind potential for these systems tend to be more challenging for operational forecasters than those for daytime bows because of incomplete understanding of how elevated convection interacts with the stable boundary layer. The present study compares the differences in warm-season, nocturnal bow echo environments in which high intensity [>70 kt (1 kt ≈ 0.51 m s−1)] severe winds (HS), low intensity (50–55 kt) severe winds (LS), and nonsevere winds (NS) occurred. Using a sample of 132 events from 2010 to 2018, 43 forecast parameters from the Storm Prediction Center (SPC) mesoanalysis system were examined over a 120 km × 120 km region centered on the strongest storm report or most pronounced bowing convective segment. Severe composite parameters are found to be among the best discriminators between all severity types, especially the derecho composite parameter (DCP) and significant tornado parameter (STP). Shear parameters are significant discriminators only between severe and nonsevere cases, while convective available potential energy (CAPE) parameters are significant discriminators only between HS and LS/NS bow echoes. Convective inhibition (CIN) is among the worst discriminators for all severity types. The parameters providing the most predictive skill for HS bow echoes are STP and most unstable CAPE, and for LS bow echoes these are the V wind component at the best CAPE (VMXP) level, STP, and the supercell composite parameter. Combinations of two parameters are shown to improve forecasting skill further, with the combination of surface-based CAPE and the 0–6-km U shear component, and DCP and VMXP, providing the most skillful HS and LS forecasts, respectively.

Restricted access
Hui Wang, Arun Kumar, Alima Diawara, David DeWitt, and Jon Gottschalck

Abstract

A dynamical–statistical model is developed for forecasting week-2 severe weather (hail, tornadoes, and damaging winds) over the United States. The supercell composite parameter (SCP) is used as a predictor, which is derived from the 16-day dynamical forecasts of the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS) model and represents the large-scale convective environments influencing severe weather. The hybrid model forecast is based on the empirical relationship between GEFS hindcast SCP and observed weekly severe weather frequency during 1996–2012, the GEFS hindcast period. Cross validations suggest that the hybrid model has low skill for week-2 severe weather when applying a simple linear regression method to 0.5° × 0.5° (latitude × longitude) grid data. However, the forecast can be improved by using the 5° × 5° area-averaged data. The forecast skill can be further improved by using the empirical relationship depicted by the singular value decomposition method, which takes into account the spatial covariations of weekly severe weather. The hybrid model was tested operationally in spring 2019 and demonstrated skillful forecasts of week-2 severe weather frequency over the United States.
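The gain from moving to 5° × 5° data comes from simple area averaging of the 0.5° grid (10 × 10 blocks of points); a minimal sketch of that coarse-graining step, with a synthetic field standing in for weekly SCP:

```python
import numpy as np

def block_average(field, block=10):
    """Average a 2D grid into non-overlapping block x block areas,
    e.g. 0.5-deg boxes into 5-deg areas when block=10."""
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0, "grid must tile evenly"
    return field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

# Synthetic 20x20 field at 0.5-deg spacing (a 10 x 10 deg domain).
fine = np.arange(400.0).reshape(20, 20)
coarse = block_average(fine, block=10)   # 2x2 grid of 5-deg area means
```

Averaging over larger areas trades spatial precision for a stronger signal-to-noise ratio, which is why the regression becomes skillful at the coarser scale.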

Restricted access
Syo Yoshida, Ryohei Misumi, and Takeshi Maesaka

Abstract

Cumulonimbus clouds, which cause local heavy rainfall and urban floods, can develop within 20 min after being detected by operational centimeter-wavelength (X-, C-, or S-band) weather radars. To detect such clouds with greater lead times, Ka-band radars at a wavelength of 8.6 mm were used in this study together with operational X-band radars. Convective echoes detected by the Ka-band and X-band radars were defined as mesoscale cloud echoes (MCEs) and mesoscale precipitation echoes (MPEs), respectively, and characterized by their vertically averaged radar reflectivity (VAR). The time series of each echo was analyzed by an echo tracking algorithm. On average, MCEs that developed into MPEs (denoted as developed MCEs) were detected 17 min earlier than the MPEs and 33 min earlier than the peak time of the area-averaged VAR (VARa) for MPEs. Some MCEs dissipated without developing into MPEs (denoted as non-developed MCEs). There were statistically significant differences between the developed and non-developed MCEs in terms of the maximum VARa values, the maximum MCE areas, and the increases in the VARa values and MCE areas during the first 6–12 min after their detection. Among these indicators, the maximum VARa for the first 9 min showed the most significant differences. Therefore, an algorithm for predicting MCE development using this indicator is discussed.
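The proposed indicator, the maximum VARa over the first 9 min after detection, suggests a simple decision rule; a sketch with illustrative VARa series, an assumed scan interval, and an assumed threshold:

```python
def likely_to_develop(vara_series_dbz, minutes_per_scan=3,
                      window_min=9, threshold_dbz=10.0):
    """Flag an MCE as likely to develop into an MPE when its maximum
    area-averaged VAR over the first `window_min` minutes after detection
    meets an assumed threshold. All numeric values are illustrative."""
    n_scans = window_min // minutes_per_scan
    early = vara_series_dbz[:n_scans]
    return max(early) >= threshold_dbz

developed = likely_to_develop([6.0, 9.5, 12.0, 18.0])    # intensifies quickly
nondeveloped = likely_to_develop([4.0, 5.0, 5.5, 4.5])   # stays weak
```

In practice the threshold would be tuned on the tracked MCE sample to balance detection rate against false alarms.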

Open access
Jonathan Labriola, Youngsun Jung, Chengsi Liu, and Ming Xue

Abstract

In an effort to improve radar data assimilation configurations for potential operational implementation, GSI EnKF data assimilation experiments based on the operational system employed by the Center for Analysis and Prediction of Storms (CAPS) real-time Spring Forecast Experiments are performed. These experiments are followed by 6-h forecasts for a mesoscale convective system (MCS) on 28–29 May 2017. Configurations examined include data thinning, covariance localization radii and inflation, observation error settings, and data assimilation frequency for radar observations. The results show that experiments that assimilate radar observations more frequently (i.e., every 5–10 min) are initially better at suppressing spurious convection. However, assimilating observations every 5 min causes spurious convection to become more widespread with time, and modestly degrades forecast skill through the remainder of the forecast window. Ensembles that assimilate more observations with less thinning of data or use a larger horizontal covariance localization radius for radar data predict fewer spurious storms and better predict the location of observed storms. Optimized data thinning and horizontal covariance localization radii have positive impacts on forecast skill during the first forecast hour that are quickly lost due to the growth of forecast error. Forecast skill is less sensitive to the ensemble spread inflation factors and observation errors tested during this study. These results provide guidance toward optimizing the configuration of the GSI EnKF system. Among the DA configurations tested, the one employed by the CAPS Spring Forecast Experiment produces the most skillful forecasts while remaining computationally efficient for real-time use.
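One of the tuned ingredients above, covariance localization, is commonly implemented in EnKF systems with the Gaspari–Cohn fifth-order function, which tapers ensemble covariances smoothly to zero beyond twice a chosen half-width. A sketch of that standard function follows; the half-width value is illustrative, not a setting from this study:

```python
def gaspari_cohn(dist, c):
    """Gaspari-Cohn fifth-order localization weight for separation `dist`
    and half-width `c`; the weight reaches zero at dist = 2c."""
    r = dist / c
    if r <= 1.0:
        return (-0.25 * r**5 + 0.5 * r**4 + 0.625 * r**3
                - (5.0 / 3.0) * r**2 + 1.0)
    if r <= 2.0:
        return (r**5 / 12.0 - 0.5 * r**4 + 0.625 * r**3
                + (5.0 / 3.0) * r**2 - 5.0 * r + 4.0 - 2.0 / (3.0 * r))
    return 0.0

w_near = gaspari_cohn(0.0, 12.0)    # full weight at the observation location
w_mid = gaspari_cohn(12.0, 12.0)    # reduced weight at one half-width
w_far = gaspari_cohn(30.0, 12.0)    # zero beyond twice the half-width
```

Enlarging the half-width lets each radar observation update a wider area of the ensemble state, which is consistent with the reduced spurious convection reported above, at the cost of admitting more spurious long-range correlations.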

Restricted access
Burkely T. Gallo, Jamie K. Wolff, Adam J. Clark, Israel Jirak, Lindsay R. Blank, Brett Roberts, Yunheng Wang, Chunxi Zhang, Ming Xue, Tim Supinie, Lucas Harris, Linjiong Zhou, and Curtis Alexander

Abstract

Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs, and including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table–based verification techniques and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Probabilities computed using the UH threshold and smoothing level that give the best area under the receiver operating characteristic curve result in very high probabilities, while optimizing the parameters based on the Brier score reliability component results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
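The surrogate-severe construction described above amounts to thresholding UH and smoothing the resulting binary field into probabilities. In this sketch a crude box smoother stands in for the Gaussian smoother, and the UH field and threshold are synthetic:

```python
import numpy as np

def surrogate_severe(uh, threshold=75.0, half_width=1):
    """Turn a UH field into surrogate severe probabilities: mark threshold
    exceedances as hits, then spread them with a neighborhood (box) mean.
    The threshold and neighborhood size are the two tunable parameters."""
    hits = (uh >= threshold).astype(float)
    ny, nx = hits.shape
    prob = np.zeros_like(hits)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - half_width), min(ny, j + half_width + 1)
            i0, i1 = max(0, i - half_width), min(nx, i + half_width + 1)
            prob[j, i] = hits[j0:j1, i0:i1].mean()
    return prob

uh = np.zeros((5, 5))
uh[2, 2] = 120.0                         # one strong simulated rotation track
probs = surrogate_severe(uh, threshold=75.0)
```

Raising the UH threshold or the smoothing level changes the sharpness of these probabilities, which is why optimizing for ROC area and optimizing for Brier reliability pull the two parameters in different directions.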

Restricted access