Browse

Arlan Dirkson, Bertrand Denis, Michael Sigmond, and William J. Merryfield

Abstract

Dynamical forecasting systems are being used to skillfully predict deterministic ice-free and freeze-up date events in the Arctic. This paper extends such forecasts to a probabilistic framework and tests two calibration models to correct systematic biases and improve the statistical reliability of the event dates: trend-adjusted quantile mapping (TAQM) and nonhomogeneous censored Gaussian regression (NCGR). TAQM is a probability distribution mapping method that corrects the forecast for climatological biases, whereas NCGR relates the calibrated parametric forecast distribution to the raw ensemble forecast through a regression model framework. For NCGR, the observed event trend and ensemble-mean event date are used to predict the central tendency of the predictive distribution. For modeling forecast uncertainty, we find that the ensemble-mean event date, which is related to forecast lead time, performs better than the ensemble variance itself. Using a multidecadal hindcast record from the Canadian Seasonal to Interannual Prediction System (CanSIPS), TAQM and NCGR are applied to produce categorical forecasts quantifying the probabilities for early, normal, and late ice retreat and advance. While TAQM performs better than adjusting the raw forecast for mean and linear trend bias, NCGR is shown to outperform TAQM in terms of reliability, skill, and an improved tendency for forecast probabilities to be no worse than climatology. Testing various cross-validation setups, we find that NCGR remains useful when shorter hindcast records (~20 years) are available. By applying NCGR to operational forecasts, stakeholders can be more confident in using seasonal forecasts of sea ice event timing for planning purposes.
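A minimal sketch of the distribution-mapping idea behind TAQM, assuming synthetic stand-in data; the paper's trend adjustment and its handling of dates censored at the edges of the forecast period are omitted, and the function name and all values are illustrative only.

```python
import numpy as np

def quantile_map(fcst, hindcast_clim, obs_clim):
    """Map each forecast value through the hindcast climatology's
    empirical CDF, then onto the observed climatology's quantiles."""
    probs = np.searchsorted(np.sort(hindcast_clim), fcst) / len(hindcast_clim)
    probs = np.clip(probs, 0.01, 0.99)  # stay off the extreme tails
    return np.quantile(obs_clim, probs)

rng = np.random.default_rng(0)
hindcast_clim = rng.normal(175.0, 10.0, 200)  # model retreat dates (day of year)
obs_clim = rng.normal(180.0, 14.0, 40)        # observed retreat dates
raw_ens = rng.normal(170.0, 8.0, 20)          # one raw 20-member forecast
print(quantile_map(raw_ens, hindcast_clim, obs_clim).round(1))
```

NCGR, by contrast, fits a regression for the parameters of the calibrated predictive distribution rather than mapping quantiles climatologically.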

Restricted access
Ezio L. Mauri and William A. Gallus Jr.

Abstract

Nocturnal bow echoes can produce wind damage, even in situations where elevated convection occurs. Accurately forecasting their wind potential tends to be more challenging for operational forecasters than for daytime bows because of incomplete understanding of how elevated convection interacts with the stable boundary layer. The present study compares warm-season, nocturnal bow-echo environments in which high-intensity [>70 kt (1 kt ≈ 0.51 m s−1)] severe winds (HS), low-intensity (50–55 kt) severe winds (LS), and nonsevere winds (NS) occurred. Using a sample of 132 events from 2010 to 2018, 43 forecast parameters from the SPC mesoanalysis system were examined over a 120 km × 120 km region centered on the strongest storm report or most pronounced bowing convective segment. Severe composite parameters are found to be among the best discriminators between all severity types, especially the derecho composite parameter (DCP) and the significant tornado parameter (STP). Shear parameters are significant discriminators only between severe and nonsevere cases, while convective available potential energy (CAPE) parameters are significant discriminators only between HS and LS/NS bow echoes. Convective inhibition (CIN) is among the worst discriminators for all severity types. The parameters providing the most predictive skill for HS bow echoes are STP and most unstable CAPE; for LS bow echoes they are the V wind component at the best-CAPE level (VMXP), STP, and the supercell composite parameter. Combinations of two parameters are shown to improve forecasting skill further, with the combination of surface-based CAPE and the 0–6-km U shear component, and of DCP and VMXP, providing the most skillful HS and LS forecasts, respectively.
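As a sketch of how individual parameters might be ranked as discriminators (the abstract does not state the exact skill metric used), the following scores placeholder parameters by ROC area for separating HS from LS/NS events; all data are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 132  # number of bow-echo events, as in the study

# Synthetic stand-ins for three of the 43 SPC mesoanalysis parameters.
params = {
    "STP": rng.gamma(2.0, 0.5, n),
    "MUCAPE": rng.gamma(2.0, 800.0, n),
    "CIN": -rng.gamma(2.0, 30.0, n),
}
is_hs = rng.random(n) < 0.3  # synthetic HS (vs LS/NS) labels

# Rank each parameter by how well it separates HS from LS/NS events.
for name, values in params.items():
    print(f"{name}: ROC area = {roc_auc_score(is_hs, values):.2f}")
```

Two-parameter combinations could be scored the same way by, for example, fitting a logistic regression on the pair and scoring its predicted probabilities.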

Restricted access
Hui Wang, Arun Kumar, Alima Diawara, David DeWitt, and Jon Gottschalck

Abstract

A dynamical–statistical model is developed for forecasting week-2 severe weather (hail, tornadoes, and damaging winds) over the United States. The supercell composite parameter (SCP) is used as a predictor, which is derived from the 16-day dynamical forecasts of the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS) model and represents the large-scale convective environments influencing severe weather. The hybrid model forecast is based on the empirical relationship between GEFS hindcast SCP and observed weekly severe weather frequency during 1996–2012, the GEFS hindcast period. Cross validations suggest that the hybrid model has low skill for week-2 severe weather when a simple linear regression method is applied to 0.5° × 0.5° (latitude × longitude) grid data. However, the forecast can be improved by using 5° × 5° area-averaged data. The forecast skill can be further improved by using the empirical relationship depicted by the singular value decomposition (SVD) method, which takes into account the spatial covariations of weekly severe weather. The hybrid model was tested operationally in spring 2019 and demonstrated skillful forecasts of week-2 severe weather frequency over the United States.
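A rough sketch of the SVD-based step, assuming the method resembles standard maximum covariance analysis between hindcast SCP and observed severe weather frequency; the dimensions, mode count, and data below are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n_weeks, n_grid = 300, 150  # synthetic training sample (weeks x grid points)

scp = rng.standard_normal((n_weeks, n_grid))                  # SCP anomalies
severe = 0.5 * scp + rng.standard_normal((n_weeks, n_grid))   # severe frequency

# SVD of the cross-covariance matrix isolates spatially covarying modes.
C = scp.T @ severe / (n_weeks - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = 5  # retain the leading modes

# Regress predictand expansion coefficients on predictor coefficients.
a = scp @ U[:, :k]      # predictor (SCP) expansion coefficients
b = severe @ Vt[:k].T   # predictand (severe weather) expansion coefficients
coef = (a * b).sum(0) / (a * a).sum(0)  # per-mode least-squares slope

# Forecast: project a new SCP field onto the modes and reconstruct.
scp_new = rng.standard_normal(n_grid)
forecast = (scp_new @ U[:, :k] * coef) @ Vt[:k]
print(forecast.shape)
```

Because the regression happens mode by mode rather than grid point by grid point, noisy small-scale covariances are filtered out, which is consistent with the skill gain the abstract reports over point-wise linear regression.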

Restricted access
Syo Yoshida, Ryohei Misumi, and Takeshi Maesaka

Abstract

Cumulonimbus clouds, which cause local heavy rainfall and urban floods, can develop within 20 min after being detected by operational centimeter-wavelength (X-, C-, or S-band) weather radars. To detect such clouds with greater lead times, Ka-band radars at a wavelength of 8.6 mm were used in this study together with operational X-band radars. Convective echoes in the vertically averaged radar reflectivity (VAR) detected by the Ka-band and X-band radars were defined as mesoscale cloud echoes (MCEs) and mesoscale precipitation echoes (MPEs), respectively. The time series of each echo was analyzed with an echo-tracking algorithm. On average, MCEs that developed into MPEs (denoted as developed MCEs) were detected 17 min earlier than the MPEs and 33 min earlier than the peak time of the area-averaged VAR (VARa) for MPEs. Some MCEs dissipated without developing into MPEs (denoted as non-developed MCEs). There were statistically significant differences between the developed and non-developed MCEs in terms of the maximum VARa values, maximum MCE areas, and increases in the VARa values and MCE areas during the first 6–12 min after detection. Among these indicators, the maximum VARa for the first 9 min showed the most significant differences. Therefore, an algorithm for predicting MCE development using this indicator is discussed.
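A minimal sketch of the kind of decision rule the indicator suggests: flag an MCE as likely to develop if its maximum VARa within the first 9 min after detection exceeds a threshold. The threshold value and sampling interval below are invented for illustration, not taken from the paper.

```python
import numpy as np

def will_develop(vara_series, threshold_dbz=20.0):
    """Flag an MCE as likely to develop into an MPE if its maximum
    area-averaged reflectivity (VARa) in the early window exceeds a
    threshold (the 20-dBZ value here is a placeholder)."""
    return np.max(vara_series) >= threshold_dbz

# VARa sampled every 3 min over the first 9 min after Ka-band detection.
developing_mce = np.array([12.0, 18.0, 23.0, 27.0])
decaying_mce = np.array([11.0, 13.0, 12.0, 10.0])
print(will_develop(developing_mce))  # True
print(will_develop(decaying_mce))    # False
```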

Open access
Jonathan Labriola, Youngsun Jung, Chengsi Liu, and Ming Xue

Abstract

In an effort to improve radar data assimilation (DA) configurations for potential operational implementation, GSI EnKF DA experiments based on the operational system employed by the Center for Analysis and Prediction of Storms (CAPS) real-time Spring Forecast Experiments are performed. These experiments are followed by 6-h forecasts for an MCS on 28–29 May 2017. Configurations examined include data thinning, covariance localization radii and inflation, observation error settings, and assimilation frequency for radar observations. The results show that experiments assimilating radar observations more frequently (i.e., every 5–10 min) are initially better at suppressing spurious convection. However, assimilating observations every 5 min causes spurious convection to become more widespread with time and modestly degrades forecast skill through the remainder of the forecast window. Ensembles that assimilate more observations with less thinning of data, or that use a larger horizontal covariance localization radius for radar data, predict fewer spurious storms and better predict the location of observed storms. Optimized data thinning and horizontal covariance localization radii have positive impacts on forecast skill during the first forecast hour that are quickly lost due to the growth of forecast error. Forecast skill is less sensitive to the ensemble spread inflation factors and observation errors tested in this study. These results provide guidance toward optimizing the configuration of the GSI EnKF system. Among the DA configurations tested, the one employed by the CAPS Spring Forecast Experiment produces the most skillful forecasts while remaining computationally efficient for real-time use.
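One of the configuration choices examined, covariance localization, is commonly implemented with the Gaspari–Cohn taper; the sketch below applies that taper to synthetic ensemble covariances and is not claimed to reproduce GSI's implementation. The half-width and distances are placeholders.

```python
import numpy as np

def gaspari_cohn(dist, c):
    """Gaspari-Cohn (1999) fifth-order taper: 1 at zero separation,
    0 beyond 2c, where c is the localization half-width."""
    r = np.abs(dist) / c
    taper = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r < 2.0)
    ri = r[inner]
    taper[inner] = -0.25*ri**5 + 0.5*ri**4 + 0.625*ri**3 - 5/3*ri**2 + 1.0
    ro = r[outer]
    taper[outer] = (ro**5/12 - 0.5*ro**4 + 0.625*ro**3 + 5/3*ro**2
                    - 5.0*ro + 4.0 - 2.0/(3.0*ro))
    return taper

# Taper sample covariances between a radar observation and grid points
# 0-30 km away, with a 10-km half-width (placeholder values).
dists_km = np.linspace(0.0, 30.0, 7)
raw_cov = np.full_like(dists_km, 0.8)  # synthetic ensemble covariances
print(raw_cov * gaspari_cohn(dists_km, c=10.0))
```

A larger half-width lets each radar observation influence a wider area of the analysis, which is the trade-off behind the localization-radius sensitivity the abstract describes.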

Restricted access
Burkely T. Gallo, Jamie K. Wolff, Adam J. Clark, Israel Jirak, Lindsay R. Blank, Brett Roberts, Yunheng Wang, Chunxi Zhang, Ming Xue, Tim Supinie, Lucas Harris, Linjiong Zhou, and Curtis Alexander

Abstract

Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs, and including both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency table–based and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Tuning the UH threshold and smoothing level to maximize the area under the receiver operating characteristic curve results in very high probabilities, while optimizing the parameters based on the Brier score reliability component results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative for maintaining high forecast quality and informing future implementation decisions.
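A minimal sketch of the surrogate severe construction as described: binarize UH exceedance, then smooth with a Gaussian kernel. The threshold and smoothing values below are placeholders, not the tuned values from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
uh = rng.gamma(1.5, 20.0, size=(120, 120))  # synthetic UH swath field (m2 s-2)

# Surrogate severe probabilities: binarize UH exceedance, then smooth.
# The threshold and sigma are the two tunable parameters from the
# abstract; the values here are placeholders.
uh_threshold = 75.0    # m2 s-2
sigma_gridpts = 12.0   # Gaussian smoothing length in grid points

exceed = (uh >= uh_threshold).astype(float)
surrogate_prob = gaussian_filter(exceed, sigma=sigma_gridpts)
print(surrogate_prob.max().round(3))
```

Raising the threshold or widening the kernel lowers the resulting probabilities, which is the trade-off behind the ROC-area-versus-reliability tuning contrast noted above.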

Restricted access
Toshichika Iizumi, Yuhei Takaya, Wonsik Kim, Toshiyuki Nakaegawa, and Shuhei Maeda

Abstract

Weather and climate variability associated with major climate modes is a main driver of interannual yield variability of commodity crops in global cropland areas. A global crop forecasting service currently in the test-operation phase is based on temperature and precipitation forecasts, while recent literature suggests that such services may benefit from the use of climate index forecasts. However, no consistent comparison of prediction skill is available between yield models relying on temperature and precipitation forecasts and those relying on climate indices. Here, we present a global assessment of 26-yr (1983–2008) within-season yield anomaly hindcasts for maize, rice, wheat, and soybean derived using different types of statistical yield models. One type of model utilizes temperature and precipitation for individual cropping areas (the TP model type) to represent the current service, whereas the other type relies on large-scale climate indices (the CI model type). For the TP models, three specifications with different model complexities are compared. The results show that the CI model is characterized by a small reduction in skillful area from the reanalysis model to the hindcast model and shows the largest skillful areas for rice and soybean. Among the TP models, the skill of the simple model is comparable to that of the more complex models. Our findings suggest that using climate index forecasts in addition to temperature and precipitation forecasts likely increases the total number of crops and countries for which skillful yield anomaly prediction is feasible.
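To make the TP-versus-CI comparison concrete, here is a sketch of two linear yield models evaluated with leave-one-year-out hindcasts; the predictors, the ENSO-like index stand-in, and all data are synthetic assumptions, not the paper's model specifications.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
years = 26  # 1983-2008, as in the assessment

# Placeholder predictors: growing-season T and P for one cropping area
# (TP model) vs. a large-scale climate index (CI model).
T = rng.standard_normal(years)
P = rng.standard_normal(years)
index = rng.standard_normal(years)  # e.g., an ENSO-like index (synthetic)
yield_anom = 0.4 * index - 0.3 * T + 0.3 * rng.standard_normal(years)

def loo_skill(X, y):
    """Leave-one-year-out hindcast correlation for a linear yield model."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        preds[i] = LinearRegression().fit(X[train], y[train]).predict(X[[i]])[0]
    return np.corrcoef(preds, y)[0, 1]

print("TP model skill:", round(loo_skill(np.column_stack([T, P]), yield_anom), 2))
print("CI model skill:", round(loo_skill(index.reshape(-1, 1), yield_anom), 2))
```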

Restricted access
J. V. Ratnam, Takeshi Doi, Yushi Morioka, Pascal Oettli, Masami Nonaka, and Swadhin K. Behera

Abstract

The selective ensemble mean (SEM) technique is applied to late-spring and summer (May–August) surface air temperature anomaly predictions over Japan from the Scale Interaction Experiment–Frontier Research Center for Global Change, version 2 (SINTEX-F2), coupled general circulation model. Using the Köppen–Geiger climatic classification, we chose four regions over Japan for applying the SEM technique. The SINTEX-F2 ensemble members for the SEM are chosen based on the anomaly correlation coefficients (ACC) between the SINTEX-F2-predicted and observed surface air temperature anomalies. The SEM technique is applied to generate forecasts of the surface air temperature anomalies for the period 1983–2018 using the selected members. Analysis shows the ACC skill score of the SEM prediction to be higher than that of predictions obtained by averaging all 24 members of SINTEX-F2 (ENSMEAN). The SEM-predicted surface air temperature anomalies also have a higher hit rate and a lower false-alarm rate than the ENSMEAN-predicted anomalies over a range of temperature anomalies. The results indicate the SEM technique to be a simple, easy-to-apply method for improving the SINTEX-F2 predictions of surface air temperature anomalies over Japan. The better performance of the SEM can be partly attributed to realistic prediction of 850-hPa geopotential height anomalies over Japan.
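A minimal sketch of the SEM idea as described: rank members by their ACC against observations over a training period and average only the best ones. Array shapes, the number of retained members, and the data are placeholders; the paper's region-wise application and training setup are not reproduced.

```python
import numpy as np

def selective_ensemble_mean(train_fcst, train_obs, fcst, n_keep=8):
    """Average only the members with the highest anomaly correlation
    (ACC) against observations over the training period.

    train_fcst: (members, years, points); train_obs: (years, points);
    fcst: (members, points) current forecast anomalies."""
    accs = [np.corrcoef(m.ravel(), train_obs.ravel())[0, 1] for m in train_fcst]
    best = np.argsort(accs)[-n_keep:]  # indices of the best members
    return fcst[best].mean(axis=0)

rng = np.random.default_rng(5)
train_fcst = rng.standard_normal((24, 36, 50))  # 24 members, 36 years
train_obs = train_fcst[:8].mean(0) + rng.standard_normal((36, 50))
fcst = rng.standard_normal((24, 50))
print(selective_ensemble_mean(train_fcst, train_obs, fcst).shape)
```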

Restricted access
Gui-Ying Yang, Samantha Ferrett, Steve Woolnough, John Methven, and Chris Holloway

Abstract

A novel technique is developed to identify equatorial waves in analyses and forecasts. In a real-time operational context, it is not possible to apply a frequency filter based on a wide, centered time window because future data are unavailable. Therefore, equatorial wave identification is performed primarily by spatial projection onto wave-mode horizontal structures. Spatial projection alone cannot distinguish eastward- from westward-moving waves, so a broadband frequency filter is also applied. The novelty in the real-time technique is to off-center the time window needed for frequency filtering, using forecasts to extend the window beyond the current analysis. The quality of this equatorial wave diagnosis is evaluated. First, the "edge effect" arising because the analysis is near the end of the filter time window is assessed. Second, the impact of using forecasts to extend the window beyond the current date is quantified. Both impacts are shown to be small relative to wave diagnosis based on a centered time window of reanalysis data. The technique is used to evaluate the skill of the Met Office forecast system during 2015–18. Global forecasts exhibit substantial skill (correlation > 0.6) for equatorial waves out to at least day 4 for Kelvin waves and day 6 for westward mixed Rossby–gravity (WMRG) and meridional mode number n = 1 and n = 2 Rossby waves. A local wave phase diagram is introduced that is useful for visualizing and validating wave forecasts. It shows that Kelvin waves in the model systematically propagate too fast and that the amplitude of Kelvin and WMRG waves over the central Pacific is underestimated by 25%.
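A sketch of the frequency-filtering piece only (the primary spatial projection onto wave structures is not shown), assuming a simple FFT bandpass and an invented period range; the point is that appending forecast days lets the analysis time sit near, rather than at, the end of the filter window.

```python
import numpy as np

def bandpass(series, dt_days, period_lo, period_hi):
    """Broadband FFT filter retaining periods in [period_lo, period_hi] days."""
    freqs = np.fft.rfftfreq(series.size, d=dt_days)
    spec = np.fft.rfft(series)
    keep = (freqs >= 1.0 / period_hi) & (freqs <= 1.0 / period_lo)
    spec[~keep] = 0.0
    return np.fft.irfft(spec, n=series.size)

rng = np.random.default_rng(6)
n_past, n_fcst = 90, 7  # 90 analysis days plus 7 forecast days (placeholders)
series = np.concatenate([rng.standard_normal(n_past),
                         rng.standard_normal(n_fcst)])  # analyses + forecasts

# Off-centered window: the analysis time sits 7 days from the window's
# end rather than at its center, so no future observations are needed.
filtered = bandpass(series, dt_days=1.0, period_lo=3.0, period_hi=30.0)
print(round(float(filtered[n_past - 1]), 3))  # filtered value at analysis time
```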

Restricted access
George P. Pacey, David M. Schultz, and Luis Garcia-Carreras

Abstract

The frequency of European convective windstorms, the environments in which they form, and their convective organizational modes remain largely unknown. A climatology is produced using 10 233 severe convective wind reports from the European Severe Weather Database between 2009 and 2018. Severe convective wind days increased from 50 days yr−1 in 2009 to 117 days yr−1 in 2018, largely because of an increase in reporting. The highest frequency of reports occurred across central Europe, particularly Poland. Reporting was most frequent in summer, when a severe convective windstorm occurred every other day on average. The preconvective environment was assessed using 361 proximity soundings from 45 stations between 2006 and 2018, and a clustering technique applied to nine variables was used to distinguish different environments. Two environment types for severe convective windstorms emerged: Type 1, generally low-shear–high-CAPE (CAPE is convective available potential energy; mostly in the warm season), and Type 2, generally high-shear–low-CAPE (mostly in the cold season). Because convective organizational mode often relates to the type of weather hazard, it was studied for 185 windstorms that occurred between 2013 and 2018. In Type-1 environments, the most frequent convective mode was cells, accounting for 58.5% of events, followed by linear modes (29%) and the nonlinear noncellular mode (12.5%). In Type-2 environments, the most frequent convective mode was linear modes (55%), followed by cells (36%) and the nonlinear noncellular mode (9%). Only 10% of windstorms were associated with bow echoes, a lower percentage than in other studies, suggesting that forecasters should not necessarily wait to see a bow echo before issuing a warning for strong winds.
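The abstract says only that "a clustering technique" was used; assuming something like k-means on standardized sounding variables, a minimal sketch with synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_soundings, n_vars = 361, 9  # matches the sample described in the abstract

# Placeholder sounding-derived variables (e.g., CAPE, 0-6-km shear, ...).
X = rng.standard_normal((n_soundings, n_vars))

# Standardize so no single variable dominates, then split into two clusters
# (k-means is an assumption; the paper's method may differ).
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)
print(np.bincount(labels))  # soundings per environment type
```

Standardizing first matters because variables such as CAPE (J kg−1) and shear (m s−1) differ by orders of magnitude in scale.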

Open access