Browse

You are looking at 1 - 10 of 2,389 items for:

  • Weather and Forecasting
Callie McNicholas and Clifford F. Mass

Abstract

To examine the utility of smartphone pressure observations (SPOs), a climatology of mesoscale pressure features was developed to evaluate whether SPOs could better resolve mesoscale phenomena than existing surface pressure networks (MADIS). A comparison between MADIS and smartphone pressure analyses was performed by tracking and characterizing bandpass-filtered, mesoscale pressure features. Over the year 2018, nearly 3000 pressure features were tracked across the central and eastern United States. Pressure features identified by smartphone observations lasted, on average, 25 min longer, traveled 25 km farther, and exhibited larger amplitudes than features observed by MADIS. An examination of smartphone pressure feature tracks by season and location found that almost all pressure features propagated eastward. With over 87% of observed pressure features associated with convection, the climatology of surface pressure features largely reflects the geographic and seasonal variation of mesoscale convection. Phase relationships between pressure features and other surface variables were consistent with those expected for mesohighs and wake lows. These results suggest that SPOs could enhance convective analyses and forecasts compared to existing surface networks like MADIS by better resolving mesoscale structures and features, such as wake lows and mesohighs.
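The abstract describes isolating mesoscale pressure features with a bandpass filter but does not give the filter design or cutoff periods. Below is a minimal sketch of that preprocessing step, assuming a second-order Butterworth bandpass retaining periods between 30 min and 6 h (hypothetical cutoffs) applied to a regularly sampled station pressure series; the function name and synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_pressure(series_hpa, dt_minutes=5.0, short_min=30.0, long_min=360.0):
    """Isolate mesoscale pressure perturbations by removing both the
    synoptic/diurnal trend (periods > long_min) and short-period noise
    (periods < short_min) from a regularly sampled pressure series."""
    fs = 1.0 / (dt_minutes * 60.0)        # sampling frequency, Hz
    low = 1.0 / (long_min * 60.0)         # lowest retained frequency
    high = 1.0 / (short_min * 60.0)       # highest retained frequency
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, series_hpa)     # zero-phase filtering

# Synthetic example: diurnal trend + 2-h mesoscale wave
t = np.arange(0, 24 * 60, 5.0)                           # minutes over one day
trend = 1013.0 + 2.0 * np.sin(2 * np.pi * t / (24 * 60))
meso = 1.5 * np.sin(2 * np.pi * t / 120.0)               # 2-h period, 1.5-hPa amplitude
perturb = bandpass_pressure(trend + meso)
```

The filtered series retains the 2-h wave near its original 1.5-hPa amplitude while suppressing the mean pressure and the diurnal trend, which is the behavior a feature tracker would rely on.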

Significance Statement

While smartphone pressure networks provide unprecedented observation coverage and density, it was unclear whether they could add value to existing surface pressure networks. This study addresses this question by developing a yearlong record of mesoscale pressure features over the eastern and central United States. Analysis of this record revealed that smartphone analyses better resolved mesoscale pressure features, especially across the central United States where existing surface pressure networks are sparser. Nearly all observed pressure features were observed near precipitation, with five in six associated with convection. Relationships between mesoscale pressure features and other surface state variables were consistent with those expected for mesohighs and wake lows.

Restricted access
Jacob Coburn and Sara C. Pryor

Abstract

Wind gusts, and in particular intense gusts, are societally relevant but extremely challenging to forecast. This study systematically assesses the skill enhancement that can be achieved using artificial neural networks (ANNs) for forecasting of wind gust occurrence and magnitude. Geophysical predictors from the ERA5 reanalysis are used in conjunction with an autoregressive term in regression and ANN models with different predictors and varying model complexity. Models are derived and assessed for the warm (April–September) and cold (October–March) seasons for three high passenger volume airports in the United States. Model uncertainty is assessed by deriving models for 1000 different randomly selected training (70%) and testing (30%) subsets. Gust prediction fidelity in independent test samples is critically dependent on inclusion of an autoregressive term. Gust occurrence probabilities derived using five-layer ANNs exhibit consistently higher fidelity than those from regression models and shallower ANNs. Inclusion of the autoregressive term and increasing the number of hidden layers in ANNs from 1 to 5 also improve the model performance for gust magnitudes (lower RMSE, increased correlation, and model standard deviations that more closely approximate observed values). Deeper ANNs (e.g., 20 hidden layers) exhibit higher skill in forecasting strong (17–25.7 m s−1) and damaging (≥25.7 m s−1) wind gusts. However, such deep networks exhibit evidence of overfitting and still substantially underestimate (by 50%) the frequency of strong and damaging wind gusts at the three airports considered herein.
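The abstract evaluates gust-magnitude models using RMSE, correlation, and how closely the modeled standard deviation approximates the observed one. A minimal numpy sketch of those three verification metrics follows; the synthetic gust data and the 0.8 damping factor in the hypothetical forecast are illustrative assumptions, not the study's models.

```python
import numpy as np

def gust_magnitude_metrics(obs, fcst):
    """RMSE, Pearson correlation, and forecast/observed standard-deviation
    ratio: the three aspects of gust-magnitude skill cited in the abstract."""
    obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
    rmse = np.sqrt(np.mean((fcst - obs) ** 2))
    corr = np.corrcoef(obs, fcst)[0, 1]
    std_ratio = fcst.std() / obs.std()   # ratio near 1 means variability is reproduced
    return rmse, corr, std_ratio

rng = np.random.default_rng(0)
obs = rng.gamma(shape=4.0, scale=3.0, size=500)          # synthetic gusts (m/s)
fcst = 0.8 * obs + rng.normal(0.0, 2.0, size=500) + 2.0  # hypothetical damped model
rmse, corr, std_ratio = gust_magnitude_metrics(obs, fcst)
```

A std ratio below 1, as produced here, corresponds to the common underestimation of gust variability that the abstract reports for the strongest events.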

Significance Statement

Improved short-term forecasting of wind gusts will enhance aviation safety and logistics and may offer other societal benefits. Here we present a rigorous investigation of the relative skill of models of wind gust occurrence and magnitude that employ different statistical methods. It is shown that artificial neural networks (ANNs) offer considerable skill enhancement over regression methods, particularly for strong and damaging wind gusts. For wind gust magnitudes in particular, application of deeper learning networks (e.g., five or more hidden layers) offers tangible improvements in forecast accuracy. However, deeper networks are vulnerable to overfitting and exhibit substantial variability with the specific training and testing data subset used. Also, even deep ANNs reproduce only half of strong and damaging wind gusts. These results indicate the need for future work to elucidate the dynamical mechanisms of intense wind gusts and advance solutions to their prediction.

Restricted access
Gayatri Vani K., Greeshma M. Mohan, Anupam Hazra, S. D. Pawar, Samir Pokhrel, Hemantkumar S. Chaudhari, Mahen Konwar, Subodh K. Saha, Chandrima Mallick, Subrata K. Das, Sachin Deshpande, Sachin D. Ghude, Manoj Domkawale, Suryachandra A. Rao, Ravi S. Nanjundiah, and M. Rajeevan

Abstract

The evaluation and usefulness of lightning prediction for the Indian subcontinent are demonstrated. Lightning parameterizations based on storm parameters are implemented in the Weather Research and Forecasting (WRF) Model with different microphysics schemes. Using observed lightning measurements over Maharashtra from the lightning detection network (LDN), lightning cases were identified during the pre-monsoon seasons of 2016–18. A lightning parameterization based on cloud-top height, defined by a reflectivity threshold of 20 dBZ, is chosen. An initial analysis of 16 lightning events is carried out with four microphysical schemes to assess their usefulness for lightning prediction. Objective analysis is performed and quantitative model performance (skill scores) is assessed against observed data. The skills are evaluated for 10- and 50-km2 boxes from the 1-km domain. The probability of detection (POD) is good, at 0.86, 0.82, 0.85, and 0.84, with a false alarm ratio (FAR) of 0.28, 0.25, 0.29, and 0.26 for WSM6, Thompson, Morrison, and WDM6, respectively. Lightning flashes are overestimated, with a spatial and temporal shift. The fractional skill score is evaluated as a function of spatial scale with neighborhoods from 25 to 250 km. These high skill scores and the high degree of correlation between observations and model simulations give us confidence to use the system for real-time operational forecasts over India. The skills for the 2019 and 2020 pre-monsoon seasons are calculated to address the predictability of operational lightning prediction over India.
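The POD and FAR values quoted above follow directly from a 2x2 contingency table of forecast versus observed lightning events. The sketch below shows the standard definitions; the event counts are illustrative values chosen only to reproduce the WSM6 scores quoted in the abstract, not the study's actual counts.

```python
def pod_far(hits, misses, false_alarms):
    """Probability of detection and false alarm ratio from a 2x2
    contingency table of forecast vs. observed lightning events.
    POD = hits / (hits + misses); FAR = false alarms / (hits + false alarms)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# Illustrative counts that reproduce the quoted WSM6 scores (POD 0.86, FAR ~0.28)
pod, far = pod_far(hits=86, misses=14, false_alarms=33)
```

Note that FAR is conditioned on the forecasts (fraction of forecast events that did not verify), whereas POD is conditioned on the observations, so the two can move independently across microphysics schemes.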

Significance Statement

A high-resolution model, namely, the Weather Research and Forecasting (WRF) Model, with multiple microphysics parameterization schemes and lightning parameterization is used here. The objective analysis is carried out for the lightning cases over India and the quantitative performance is assessed. The results highlight a fairly good probability of detection (POD) of 0.86, 0.82, 0.85, and 0.84 and false alarm ratio (FAR) of 0.28, 0.25, 0.29, and 0.26 from four different microphysical schemes (WSM6, Thompson, Morrison, and WDM6, respectively). These high skill scores and the high degree of correlation between observations and model simulations give us confidence to use the system for real-time operational forecasts. The lightning forecast system deployed over India is validated in real time for five pre-monsoon months, giving a POD of 0.90, FAR of 0.64, hit rate of 0.57, and POFD of 0.50 for the whole Indian region.

Restricted access
Andrew R. Wade and Israel L. Jirak

Abstract

This study explored how forecasters can best use the two main forms of operational convection-allowing model guidance: the High-Resolution Ensemble Forecast (HREF) system and the hourly High-Resolution Rapid Refresh (HRRR). The former represents a wider range of possible outcomes, but the latter updates much more frequently and incorporates newer observations. HREF and time-lagged High-Resolution Rapid Refresh (HRRR-TL) probabilistic forecasts of reflectivity and updraft helicity, as well as two methods of combining HREF and HRRR into hourly updating blended guidance, were evaluated for the 2021 Spring Forecasting Experiment (SFE) period. In both objective skill and the subjective ratings of SFE participants, the 1200 UTC HREF proved difficult to outperform over this sample of events, even when incorporating HRRR initializations as late as 1800 UTC. It was usually better to use either of the experimental blending techniques than to simply discard the older HREF in favor of newer HRRR solutions. The greater model diversity and dispersion of solutions within the HREF is likely primarily responsible for this result. A possible bias in diurnal convection initiation timing and coverage in the newly upgraded HRRRv4 was also investigated, including on subdomains targeted to weakly forced diurnal initiation, and was found to have little or no systematic effect on HRRRv4’s operational utility.
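The abstract compares methods of combining HREF membership with time-lagged HRRR runs into hourly updating blended probabilities, but does not specify the blending arithmetic. A hypothetical sketch of one such blend follows: exceedance probabilities are computed separately from each source and linearly weighted. The function name, the 50/50 weight, and the dBZ values are assumptions for illustration.

```python
import numpy as np

def blended_exceedance_prob(href_members, hrrr_lagged, threshold, w_hrrr=0.5):
    """Hypothetical hourly-updating blend: probability that reflectivity
    exceeds `threshold`, computed from the HREF membership and from a
    time-lagged set of HRRR runs, then linearly weighted."""
    p_href = np.mean(np.asarray(href_members) >= threshold, axis=0)
    p_hrrr = np.mean(np.asarray(hrrr_lagged) >= threshold, axis=0)
    return (1.0 - w_hrrr) * p_href + w_hrrr * p_hrrr

# 8 HREF members and 3 time-lagged HRRR runs at a single grid point (dBZ)
href = [45.0, 38.0, 52.0, 20.0, 41.0, 35.0, 48.0, 15.0]
hrrr = [44.0, 47.0, 25.0]
p = blended_exceedance_prob(href, hrrr, threshold=40.0)
```

Because the HREF term is retained at every update, the blend preserves the ensemble's solution diversity (the property the abstract credits for HREF's strong performance) while still incorporating the newest HRRR initializations.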

Restricted access
Burkely T. Gallo, Katie A. Wilson, Jessica Choate, Kent Knopfmeier, Patrick Skinner, Brett Roberts, Pamela Heinselman, Israel Jirak, and Adam J. Clark

Abstract

During the 2019 Spring Forecasting Experiment in NOAA’s Hazardous Weather Testbed, two NWS forecasters issued experimental probabilistic forecasts of hail, tornadoes, and severe convective wind using NSSL’s Warn-on-Forecast System (WoFS). The aim was to explore forecast skill in the time frame between severe convective watches and severe convective warnings during the peak of the spring convective season. Hourly forecasts issued during 2100–0000 UTC, valid from 0100 to 0200 UTC, demonstrate how forecasts change with decreasing lead time. Across all 13 cases in this study, the descriptive outlook statistics (e.g., mean outlook area, number of contours) change slightly and the measures of outlook skill (e.g., fractions skill score, reliability) improve incrementally with decreasing lead time. WoFS updraft helicity (UH) probabilities also improve slightly and less consistently with decreasing lead time, though both the WoFS and the forecasters generated skillful forecasts throughout. Larger skill differences with lead time emerge on a case-by-case basis, illustrating cases where forecasters consistently improved upon WoFS guidance, cases where the guidance and the forecasters recognized small-scale features as lead time decreased, and cases where the forecasters issued small areas of high probabilities using guidance and observations. While forecasts generally “homed in” on the reports with slightly smaller contours and higher probabilities, increased confidence could include higher certainty that severe weather would not occur (e.g., lower probabilities). Long-range (1–5 h) WoFS UH probabilities were skillful, and where the guidance erred, forecasters could adjust for those errors and increase their forecasts’ skill as lead time decreased.
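The fractions skill score (FSS) mentioned above compares neighborhood event fractions of the forecast and observed fields rather than requiring gridpoint-exact matches. A minimal sketch follows, assuming a square uniform-filter neighborhood; the grid size, event frequency, and neighborhood width are illustrative choices, not the experiment's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(fcst_prob, obs_binary, neighborhood):
    """FSS over a 2-D grid: 1 - MSE(neighborhood fractions) / reference MSE.
    Equals 1 for a perfect forecast; 0 or below for a no-skill forecast."""
    pf = uniform_filter(np.asarray(fcst_prob, float), size=neighborhood, mode="constant")
    po = uniform_filter(np.asarray(obs_binary, float), size=neighborhood, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(1)
obs = (rng.random((50, 50)) < 0.1).astype(float)             # binary event field
fss_self = fractions_skill_score(obs, obs, neighborhood=5)   # perfect forecast -> 1.0
fss_rand = fractions_skill_score(rng.random((50, 50)), obs, neighborhood=5)
```

Evaluating FSS across several neighborhood widths is what lets a verification distinguish "right storm, slightly displaced" from a genuine miss, which matters when comparing human outlooks to raw UH probabilities.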

Significance Statement

Forecasts are often assumed to improve as an event approaches and uncertainties resolve. This work examines how experimental forecasts, issued using the Warn-on-Forecast System (WoFS) and valid over one hour, evolve with decreasing lead time. Because of its rapidly updating ensemble data assimilation, WoFS can help forecasters understand how thunderstorm hazards may evolve in the next 0–6 h. We found slight improvements in forecast and WoFS performance as a function of lead time over the full experiment; the first forecasts issued and the initial WoFS guidance performed well at long lead times, and good performance continued as the event approached. However, individual cases varied and forecasters frequently combined raw model output with observed mesoscale features to provide skillful small-scale forecasts.

Restricted access
Wei Sun, Zhiquan Liu, Guiting Song, Yangyang Zhao, Shan Guo, Feifei Shen, and Xiangming Sun

Abstract

To improve the wind speed forecasts at turbine locations and at hub height, this study develops the WRFDA system to assimilate the wind speed observations measured on the nacelles of turbines (hereafter referred to as turbine wind speed observations) with both 3DVAR and 4DVAR algorithms. Results show that the developed data assimilation (DA) system greatly improves the analysis and forecast of turbine wind speed. Among three experiments with no cycling DA, with 2-h cycling DA, and with 4-h cycling DA, the last experiment generates the best analysis, improving the averaged forecasts (from T + 9 to T + 24) of wind speed over all wind farms by 32.5% in the bias and 6.3% in the RMSE. After processing the turbine wind speed observations into superobs, even larger improvements are obtained when validating against either the original turbine wind speed observations or the superobs. Taking the results validated against the superobs as an example, the bias and RMSE of the forecasts (from T + 9 to T + 24) averaged over all wind farms are reduced by 38.8% and 12.0%, respectively. Compared to the best-performing 3DVAR experiment (4-h cycling and superobs), the experiment following the same DA strategy but using the 4DVAR algorithm exhibits further improvements, especially for the averaged bias in the forecasts of all wind farms and the amount of change in the forecasts of the enhanced wind farms. Compared to the control experiment, the 4DVAR experiment reduces the bias and RMSE in the forecasts (from T + 9 to T + 24) by 54.6% (0.66 m s−1) and 12.7% (0.34 m s−1).
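The abstract reports larger gains after processing turbine observations into superobs but does not give the averaging procedure. A common approach, sketched below as an assumption, is to average all observations falling in the same grid cell into one representative value, reducing correlated observation error from closely spaced turbines; the 5-km cell size and the function name are hypothetical.

```python
import numpy as np

def make_superobs(x, y, wind, cell_km=5.0):
    """Average individual turbine wind-speed observations within square
    grid cells of side `cell_km` to form one 'superob' per cell, thinning
    densely clustered turbines before assimilation."""
    ix = np.floor(np.asarray(x) / cell_km).astype(int)
    iy = np.floor(np.asarray(y) / cell_km).astype(int)
    cells = {}
    for i, j, w in zip(ix, iy, wind):
        cells.setdefault((i, j), []).append(w)
    return {cell: float(np.mean(vals)) for cell, vals in cells.items()}

# Four turbines: three clustered in one 5-km cell, one in a neighboring cell
obs = make_superobs(x=[1.0, 2.0, 3.0, 9.0], y=[1.0, 1.5, 2.0, 1.0],
                    wind=[8.0, 9.0, 10.0, 6.0])
```

Averaging within a cell also damps sub-grid variability that the model cannot represent, which is consistent with the improved validation scores the abstract reports for the superob experiments.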

Open access
David J. Wilke, Jeffrey D. Kepert, and Kevin J. Tory

Abstract

The meteorological conditions over the South Coast of New South Wales, Australia, are investigated on 18 March 2018, the day of the Tathra bushfire. We present an analysis of the event based on high-resolution (100- and 400-m grid-length) simulations with the Bureau of Meteorology’s ACCESS numerical weather prediction system and available observations. Through this analysis we find several mesoscale features that likely contributed to the extreme fire event. Key among these was the development of horizontal convective rolls, which emanated from inland and aided the fire’s spread toward Tathra. The rolls interacted with the terrain to produce complex regions of strongly ascending and descending air, likely accelerating the lofting of firebrands and potentially contributing to the significant lee-slope fire behavior observed. Mountain waves, specifically trapped lee waves, occurred on the day and are hypothesized to have contributed to the strong winds around the time the fire began. These waves may also have influenced conditions during the period of peak fire activity when the fire spotted across the Bega River and impacted Tathra. Finally, the passage of the cold front through the fireground was complex, with frontal regression observed at a nearby station and likely also through Tathra. We postulate that interactions between the strong prefrontal flow and the initially weak change resulted in highly variable and dangerous fire weather across the fireground for a significant period after the change initially occurred.

Significance Statement

The town of Tathra on the South Coast of New South Wales, Australia, was devastated on 18 March 2018, when a wildfire ignited in nearby bushland and quickly intensified to impact the town. Using high-resolution numerical weather simulations, we investigate the conditions that led to the extreme fire behavior. The simulations show that the fire ignited and intensified under highly variable conditions driven by complex interactions between the flow over nearby mountains and the passage of a strong cold front. This case study highlights the value of such models in understanding high-impact weather for the purpose of hazard preparedness and emergency response. Additionally, it contributes to a growing number of case studies that indicate the future direction of high-impact forecast services.

Restricted access
Lianglyu Chen, Chengsi Liu, Youngsun Jung, Patrick Skinner, Ming Xue, and Rong Kong

Abstract

The Center for Analysis and Prediction of Storms has recently developed capabilities to directly assimilate radar reflectivity and radial velocity data within the GSI-based ensemble Kalman filter (EnKF) and hybrid ensemble three-dimensional variational (En3DVar) system for initializing convective-scale forecasts. To assess the performance of EnKF and hybrid En3DVar with different hybrid weights (with 100%, 20%, and 0% of static background error covariance corresponding to pure 3DVar, hybrid En3DVar, and pure En3DVar) for assimilating radar data in a Warn-on-Forecast framework, a set of data assimilation and forecast experiments using the WRF Model are conducted for six convective storm cases from May 2017. Using an object-based verification approach, forecast objects of composite reflectivity and 30-min updraft helicity swaths are verified against reflectivity and rotation track objects in Multi-Radar Multi-Sensor data on space and time scales typical of National Weather Service warnings. Forecasts initialized by En3DVar or the best-performing EnKF ensemble member produce the highest object-based verification scores, while forecasts from 3DVar and the worst EnKF member produce the lowest scores. Averaged across six cases, hybrid En3DVar using 20% static background error covariance does not improve forecasts over pure En3DVar, although improvements are seen in some individual cases. The false alarm ratios of EnKF members for both composite reflectivity and updraft helicity at the initial time are lower than those from variational methods, suggesting that EnKF analysis reduces spurious reflectivity and mesocyclone objects more effectively.
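Object-based verification, as used above, matches forecast storm objects to observed ones before counting hits, misses, and false alarms. The sketch below shows one simple matching rule, greedy nearest-centroid pairing within a distance cap; the 40-km cap and the coordinates are hypothetical, and the study's actual matching criteria may differ.

```python
import math

def match_objects(fcst_centroids, obs_centroids, max_dist_km=40.0):
    """Greedily pair each forecast object with the nearest unmatched observed
    object within max_dist_km. Unmatched forecast objects are false alarms;
    unmatched observed objects are misses."""
    unmatched_obs = list(obs_centroids)
    hits = 0
    for fx, fy in fcst_centroids:
        best = None
        for k, (ox, oy) in enumerate(unmatched_obs):
            d = math.hypot(fx - ox, fy - oy)
            if d <= max_dist_km and (best is None or d < best[1]):
                best = (k, d)
        if best is not None:
            unmatched_obs.pop(best[0])
            hits += 1
    false_alarms = len(fcst_centroids) - hits
    misses = len(unmatched_obs)
    far = false_alarms / max(hits + false_alarms, 1)
    return hits, misses, false_alarms, far

# Three forecast mesocyclone objects vs. two observed rotation-track objects (km)
hits, misses, fas, far = match_objects(
    fcst_centroids=[(10, 10), (100, 50), (200, 200)],
    obs_centroids=[(15, 12), (95, 60)])
```

Counting whole objects rather than grid points is what makes the false-alarm-ratio comparison between EnKF members and the variational analyses meaningful at warning scales.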

Restricted access
Sergey Kravtsov, Paul Roebber, Thomas M. Hamill, and James Brown

Abstract

This paper utilizes statistical and statistical–dynamical methodologies to select, from the full observational record, a minimal subset of dates that would provide representative sampling of local precipitation distributions across the contiguous United States (CONUS). The CONUS region is characterized by a great diversity of precipitation-producing systems, mechanisms, and large-scale meteorological patterns (LSMPs), which can provide a favorable environment for local precipitation extremes. This diversity is unlikely to be adequately captured in methodologies that rely on grossly reducing the dimensionality of the data—by representing it in terms of a few patterns evolving in time—and thus requires data thinning techniques based on high-dimensional dynamical or statistical data modeling. We have built a novel high-dimensional empirical model of temperature and precipitation capable of producing statistically accurate surrogate realizations of the observed 1979–99 (training period) evolution of these fields. This model also provides skillful hindcasts of precipitation over the 2000–20 (validation) period. We devised a subsampling strategy based on the relative entropy of the empirical model’s precipitation (ensemble) forecasts over CONUS and demonstrated that it generates a set of dates that captures a majority of high-impact precipitation events, while substantially reducing a heavy-precipitation bias inherent in an alternative methodology based on the direct identification of large precipitation events in the Global Ensemble Forecast System (GEFS), version 12 reforecasts. The impacts of data thinning on the accuracy of precipitation statistical postprocessing, as well as on the calibration and validation of the Hydrologic Ensemble Forecast Service (HEFS) reforecasts, are yet to be established.
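The relative-entropy subsampling idea can be illustrated with a minimal sketch: score each date by the Kullback-Leibler divergence of its forecast precipitation distribution from climatology, then keep the highest-scoring dates. The three-category discretization (dry/light/heavy) and the probabilities below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def relative_entropy(p_fcst, p_clim):
    """KL divergence of a forecast probability distribution from climatology;
    large values flag dates whose forecast departs most from the
    climatological precipitation distribution."""
    p, q = np.asarray(p_fcst, float), np.asarray(p_clim, float)
    mask = p > 0                       # terms with p=0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def select_dates(forecasts_by_date, p_clim, n_keep):
    """Rank dates by relative entropy and keep the top n_keep."""
    scored = sorted(forecasts_by_date.items(),
                    key=lambda kv: relative_entropy(kv[1], p_clim),
                    reverse=True)
    return [date for date, _ in scored[:n_keep]]

clim = [0.7, 0.2, 0.1]                   # hypothetical dry / light / heavy climatology
fcsts = {"d1": [0.7, 0.2, 0.1],          # indistinguishable from climatology
         "d2": [0.1, 0.3, 0.6],          # strongly shifted toward heavy rain
         "d3": [0.5, 0.3, 0.2]}          # mildly shifted
keep = select_dates(fcsts, clim, n_keep=2)
```

Because the score measures departure from climatology rather than raw precipitation amount, this style of selection avoids the heavy-precipitation bias the abstract attributes to thinning by direct identification of large events.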

Significance Statement

High-impact weather events are usually associated with extreme precipitation, which is notoriously difficult to predict even using highly resolved state-of-the-art numerical weather prediction models based on first physical principles. The same is true for statistical models that use past data to anticipate the future behavior likely to stem from an observed initial state. Here we use both types of models to identify the occurrences of the states, over the historical climate record, which are likely to lead to extreme precipitation events. We show that the overall statistics of precipitation over the contiguous United States are encapsulated in a greatly reduced set of such states, which could substantially alleviate the computational burden associated with testing of hydrological forecast models used for decision support.

Restricted access
Bruce Ainslie, Rita So, and Jack Chen

Abstract

An evaluation of an operational wildfire air quality model (WFAQM) has been performed. Evaluation metrics were chosen through an analysis of interviews and a survey of professionals who use WFAQM forecasts as part of their daily responsibilities. The survey revealed that professional users generally focus on whether forecast air quality will exceed thresholds that trigger local air quality advisories (e.g., an event), their analysis scale is their region of responsibility, they are interested in short-term (≈24 h) guidance, missing an event is worse than issuing a false alarm, and there are two types of users—one that takes the forecast at face value, and the other that uses it as one of several information sources. Guided by these findings, model performance of Environment and Climate Change Canada’s current operational WFAQM (FireWork) was assessed over western Canada during three (2016–18) summer (May–September) wildfire seasons. Evaluation was performed at the geographic scale at which individual forecasts are issued (the forecast region) using gridded particulate matter 2.5 (PM2.5) fields developed from a machine learning–based downscaling of satellite and meteorological data. For the “at face value” user group, model performance was measured using the Peirce skill score. For the “as information source” user group, model performance was measured using the divergence skill score. For this metric, forecasts were first converted to event probabilities using binomial regression. We find that, when forecasts are taken at face value, FireWork cannot outperform a nearest-neighbor-based persistence model. However, when forecasts are considered as an information source, FireWork is superior to the persistence-based model.
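The Peirce skill score used for the "at face value" evaluation is the probability of detection minus the probability of false detection, computed from a 2x2 contingency table of advisory-threshold exceedances. A minimal sketch follows; the event counts are illustrative, not the study's data.

```python
def peirce_skill_score(hits, misses, false_alarms, correct_negatives):
    """PSS = POD - POFD: the detection rate minus the false-detection rate
    for binary (advisory-threshold exceedance) forecasts. Ranges from -1
    to 1, with 0 for a no-skill (random or constant) forecast."""
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod - pofd

# Illustrative counts for a season of daily threshold-exceedance forecasts
pss = peirce_skill_score(hits=40, misses=10, false_alarms=30, correct_negatives=120)
```

Because PSS references the non-event base rate through POFD, it is well suited to rare advisory events; it also rewards hedging toward detection, which matches the surveyed users' view that a miss is worse than a false alarm.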

Restricted access