Search Results

You are looking at 1–10 of 36 items for Author or Editor: Nusrat Yussouf.
Nusrat Yussouf and David J. Stensrud

Abstract

The conventional Weather Surveillance Radar-1988 Doppler (WSR-88D) scans a given weather phenomenon in approximately 5 min, and past results suggest that 30–60 min of data assimilation are needed to establish a storm in a model using an ensemble Kalman filter (EnKF) technique. Severe weather events, however, can develop and evolve very rapidly, so assimilating observations for a 30–60-min period before accurate analyses become available may not be feasible in an operational setting. A shorter assimilation period is also desired if forecasts are to be produced that increase the warning lead time. With the advent of emerging phased-array radar (PAR) technology, it is now possible to scan the same weather phenomenon in less than 1 min. It is therefore of interest to see whether the faster scanning rate of PAR can yield improved storm-scale analyses and forecasts from a shorter assimilation period. Observing system simulation experiments are conducted to evaluate the ability to quickly initialize a storm in a numerical model using PAR data in place of WSR-88D data. Synthetic PAR and WSR-88D observations of a splitting supercell storm are created from a storm-scale model run using a realistic volume-averaging technique in native radar coordinates. These synthetic reflectivity and radial velocity observations are assimilated into the same storm-scale model over a 15-min period using an EnKF data assimilation technique, followed by a 50-min ensemble forecast. Results indicate that assimilating PAR observations at 1-min intervals over a short 15-min period yields significantly better analyses and ensemble forecasts than those produced using WSR-88D observations. Additional experiments are conducted in which the adaptive scanning capability of PAR is used for thunderstorms that are either very close to or far from the radar location. Results show that adaptive scanning improves the analyses and forecasts compared with nonadaptive PAR data. These results highlight the potential of flexible, rapid-scanning PAR observations to quickly and accurately initialize storms in numerical models, yielding improved storm-scale analyses and very short range forecasts.
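The EnKF update at the heart of this kind of radar data assimilation can be sketched in a toy form. Everything below (state size, the linear observation operator `H`, the scalar observation-error variance) is illustrative, not the study's actual system:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H, seed=0):
    """Stochastic (perturbed-observation) EnKF analysis step.

    ensemble    : (n_members, n_state) array of prior model states
    obs         : (n_obs,) observation vector
    obs_err_var : scalar observation-error variance
    H           : (n_obs, n_state) linear observation operator
    """
    rng = np.random.default_rng(seed)
    n_members = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    X = ensemble - x_mean                        # state perturbations
    Y = X @ H.T                                  # perturbations in obs space
    Pxy = X.T @ Y / (n_members - 1)              # state-obs covariance
    Pyy = Y.T @ Y / (n_members - 1) + obs_err_var * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                 # Kalman gain
    analysis = np.empty_like(ensemble, dtype=float)
    for m in range(n_members):
        # Each member assimilates its own perturbed copy of the obs,
        # which keeps the analysis spread consistent with the obs error.
        y_pert = obs + rng.normal(0.0, np.sqrt(obs_err_var), len(obs))
        analysis[m] = ensemble[m] + K @ (y_pert - H @ ensemble[m])
    return analysis
```

Each cycle of rapid-scan assimilation would repeat this update as a new radar volume arrives, which is why a 1-min PAR scan rate allows many more updates in a 15-min window than a 5-min WSR-88D scan rate.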

Nusrat Yussouf and David J. Stensrud

Abstract

The ability of a multimodel short-range bias-corrected ensemble (BCE) forecasting system, created as part of NOAA’s New England High Resolution Temperature Program during the summer of 2004, to produce accurate predictions of near-surface variables at independent locations within the model domain is explored. The original BCE approach produces bias-corrected forecasts only at National Weather Service (NWS) surface observing station locations. To extend this approach to any given location, an extended BCE technique is developed and applied to the independent observations provided by the Oklahoma Mesonet. First, a Cressman weighting scheme is used to interpolate the bias values of 2-m temperature, 2-m dewpoint temperature, and 10-m wind speed, calculated from the original BCE approach at the NWS station locations, to the Oklahoma Mesonet locations. These bias values are then added to the raw numerical model forecasts, bilinearly interpolated to the same locations. This is done for each member within the ensemble and at each forecast time. It is found that the performance of the extended BCE is very competitive with the original BCE approach across the state of Oklahoma. Therefore, a simple postprocessing scheme like the extended BCE system can be used as part of an operational forecasting system to provide reasonably accurate predictions of near-surface variables at any location within the model domain.
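A minimal sketch of the interpolation step described above, using the classic Cressman weight w = (R² − d²)/(R² + d²). The influence radius and the flat x–y coordinates are illustrative; the real system works with station locations on the earth's surface:

```python
import numpy as np

def cressman_bias(target_xy, station_xy, station_bias, radius):
    """Cressman-weighted average of station bias values at a target
    point, with weight w = (R^2 - d^2) / (R^2 + d^2) for every station
    inside the influence radius R."""
    d2 = np.sum((np.asarray(station_xy, float)
                 - np.asarray(target_xy, float)) ** 2, axis=1)
    r2 = radius ** 2
    inside = d2 < r2
    if not inside.any():
        return 0.0  # no station nearby: apply no correction
    w = (r2 - d2[inside]) / (r2 + d2[inside])
    return float(np.sum(w * np.asarray(station_bias, float)[inside]) / np.sum(w))

# The corrected forecast at the target site is then the raw model
# forecast (bilinearly interpolated to the site) plus this bias value.
```

This is repeated per ensemble member and per forecast time, matching the abstract's description of the extended BCE.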

David J. Stensrud and Nusrat Yussouf

Abstract

A multimodel short-range ensemble forecasting system created as part of a National Oceanic and Atmospheric Administration pilot program on temperature and air quality forecasting over New England during the summer of 2002 is evaluated. A simple 7-day running mean bias correction is applied individually to each of the 23 ensemble members. Various measures of accuracy are used to compare these bias-corrected ensemble predictions of 2-m temperature and dewpoint temperature with those available from the nested grid model (NGM) model output statistics (MOS). Results indicate that the bias-corrected ensemble mean prediction is as accurate as the NGM MOS for temperature predictions, and is more accurate than the NGM MOS for dewpoint temperature predictions, for the 48 days studied during the warm season. When the additional probabilistic information from the ensemble is examined, results indicate that the ensemble clearly provides value above that of NGM MOS for both variables, especially as the events become more unlikely. Results also indicate that the ensemble has some ability to predict forecast skill for temperature with a correlation between ensemble spread and the error of the ensemble mean of greater than 0.7 for some forecast periods. The use of a multimodel ensemble clearly helps to improve the spread–skill relationship.
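The 7-day running mean bias correction is simple enough to sketch directly; the function below assumes one ensemble member, one station, and one forecast hour:

```python
import numpy as np

def running_mean_bias_correct(past_fcst, past_obs, today_fcst, window=7):
    """Subtract the mean (forecast - observation) error over the last
    `window` days from today's forecast; applied per ensemble member,
    per station, and per forecast hour."""
    f = np.asarray(past_fcst, dtype=float)[-window:]
    o = np.asarray(past_obs, dtype=float)[-window:]
    return today_fcst - np.mean(f - o)
```

Applying this independently to each of the 23 members removes each model's systematic near-surface error before the ensemble statistics are computed.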

Nusrat Yussouf and David J. Stensrud

Abstract

A simple binning technique developed to produce reliable probabilistic quantitative precipitation forecasts (PQPFs) from a multimodel short-range ensemble forecasting system is evaluated during the cool season of 2005/06. The technique uses forecasts and observations of 3-h accumulated precipitation amounts from the past 12 days to adjust the present day’s 3-h quantitative precipitation forecasts from each ensemble member for each 3-h forecast period. Results indicate that the PQPFs obtained from this simple binning technique are significantly more reliable than the raw (original) ensemble forecast probabilities. Brier skill scores and areas under the relative operating characteristic curve also reveal that this technique yields skillful probabilistic forecasts of rainfall amounts during the cool season. This holds true for accumulation periods of up to 48 h. The results obtained from this wintertime experiment parallel those obtained during the summer of 2004. In an attempt to reduce the effects of a small sample size on two-dimensional probability maps, the simple binning technique is modified by implementing 5- and 9-point smoothing schemes on the adjusted precipitation forecasts. Results indicate that the smoothed ensemble probabilities remain an improvement over the raw (original) ensemble forecast probabilities, although the smoothed probabilities are not as reliable as the unsmoothed adjusted probabilities. The skill of the PQPFs also is increased as the ensemble is expanded from 16 to 22 members during the period of study. These results reveal that simple postprocessing techniques have the potential to provide greatly improved probabilistic guidance of rainfall events for all seasons of the year.
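The 9-point smoothing of the two-dimensional probability maps can be sketched as a 3×3 neighborhood average; the edge handling (replication padding) is an assumption, since the abstract does not specify it:

```python
import numpy as np

def smooth9(prob):
    """9-point smoother: replace each grid value of a 2-D probability
    field with the average of its 3x3 neighborhood, padding the edges
    by replication so the grid shape is preserved."""
    n, m = prob.shape
    p = np.pad(prob, 1, mode="edge")
    out = np.zeros((n, m), dtype=float)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out += p[di:di + n, dj:dj + m]
    return out / 9.0
```

A 5-point version would use only the center point and its four edge neighbors; both spread isolated high probabilities over neighboring grid boxes, which is why the smoothed fields trade some reliability for spatial robustness.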

Nusrat Yussouf and David J. Stensrud

Abstract

Observational studies indicate that the densities and intercept parameters of hydrometeor distributions can vary widely among storms and even within a single storm. Therefore, assuming a fixed set of microphysical parameters within a given microphysics scheme can lead to significant errors in storm forecasts. To explore the impact of variations in microphysical parameters, observing system simulation experiments are conducted based on both perfect- and imperfect-model assumptions. Two sets of ensembles are designed using either fixed or variable parameters within the same single-moment microphysics scheme. Synthetic radar observations of a splitting supercell thunderstorm are assimilated into the ensembles over a 30-min period using an ensemble Kalman filter data assimilation technique, followed by 1-h ensemble forecasts. Results indicate that in the presence of model error, a multiparameter ensemble with a combination of different hydrometeor density and intercept parameters leads to improved analyses and forecasts and better captures the truth within the forecast envelope than single-parameter ensemble experiments that use a single set of constant, inaccurate hydrometeor intercept and density parameters. This conclusion holds when examining the general storm structure, the intensity of midlevel rotation, surface cold pool strength, and the extreme values of the model fields that are most helpful in identifying potential hazards. Under a perfect-model assumption, the single- and multiparameter ensembles perform similarly, as model error does not play a role in these experiments. This study highlights the potential of using a variety of realistic microphysical parameters across the ensemble members to improve the analyses and very short-range forecasts of severe weather events.
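Assigning each ensemble member its own fixed microphysical parameters might look like the sketch below; the parameter names and sampling ranges are illustrative placeholders, not the values used in the study:

```python
import numpy as np

# Illustrative parameter ranges only -- not the study's exact values.
RANGES = {
    "n0_rain":     (1e5, 1e7),      # rain intercept (m^-4)
    "n0_snow":     (1e6, 1e8),      # snow intercept (m^-4)
    "n0_graupel":  (1e3, 1e6),      # graupel/hail intercept (m^-4)
    "rho_graupel": (300.0, 900.0),  # graupel density (kg m^-3)
}

def draw_member_params(n_members, seed=42):
    """Give each ensemble member its own fixed microphysical parameters:
    intercepts sampled log-uniformly (they span orders of magnitude),
    density sampled uniformly."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        params = {}
        for name, (lo, hi) in RANGES.items():
            if name.startswith("n0_"):
                params[name] = 10.0 ** rng.uniform(np.log10(lo), np.log10(hi))
            else:
                params[name] = rng.uniform(lo, hi)
        members.append(params)
    return members
```

Each member then runs the same single-moment scheme with its own parameter set, so the ensemble spread reflects microphysical uncertainty as well as initial-condition uncertainty.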

David J. Stensrud and Nusrat Yussouf

Abstract

A simple binning technique is developed to produce reliable 3-h probabilistic quantitative precipitation forecasts (PQPFs) from the National Centers for Environmental Prediction (NCEP) multimodel short-range ensemble forecasting system during the summer of 2004. The past 12 days’ worth of forecast 3-h accumulated precipitation amounts and observed 3-h accumulated precipitation amounts from the NCEP stage-II multisensor analyses are used to adjust today’s 3-h precipitation forecasts. These adjustments are made individually for each of the ensemble members over the 95 days studied. The performance of the adjusted ensemble precipitation forecasts is compared with that of the raw (original) ensemble predictions. Results show that the simple binning technique provides significantly more skillful and reliable PQPFs of rainfall events than the raw forecast probabilities. This is true for the base 3-h accumulation period as well as for accumulation periods up to 48 h. Brier skill scores and the area under the relative operating characteristic curve also indicate that this technique yields skillful probabilistic forecasts. The performance of the adjusted forecasts also improves progressively with increasing accumulation period. In addition, the adjusted ensemble mean QPFs are very similar to the raw ensemble mean QPFs, suggesting that the method does not significantly alter the ensemble mean forecast. Therefore, this simple postprocessing scheme is very promising as a method to provide reliable PQPFs for rainfall events without degrading the ensemble mean forecast.
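One simplified reading of the binning idea is: bin the training-period forecasts by amount, compute the mean observed amount per bin, and map today's forecasts through that table. This is a hedged sketch of that reading, and the bin edges are illustrative:

```python
import numpy as np

def bin_adjust(train_fcst, train_obs, today_fcst, bin_edges):
    """For each forecast bin, compute the mean observed amount over the
    training period, then replace today's forecasts falling in that bin
    with it.  Forecasts outside every bin are left unchanged."""
    train_fcst = np.asarray(train_fcst, dtype=float)
    train_obs = np.asarray(train_obs, dtype=float)
    today = np.asarray(today_fcst, dtype=float)
    adjusted = today.copy()
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        trained = (train_fcst >= lo) & (train_fcst < hi)
        if trained.any():
            adjusted[(today >= lo) & (today < hi)] = train_obs[trained].mean()
    return adjusted
```

Applied per member, this pulls each model's precipitation amounts toward what was actually observed when that model forecast similar amounts, which is why the adjusted probabilities calibrate so much better than the raw ones.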

Nusrat Yussouf and David J. Stensrud

Abstract

A postprocessing method initially developed to improve near-surface forecasts from a summertime multimodel short-range ensemble forecasting system is evaluated during the cool season of 2005/06. The method, known as the bias-corrected ensemble (BCE) approach, uses the 12 most recent complete days of model forecasts and surface observations to remove the mean bias of near-surface variables from each ensemble member for each station location and forecast time. In addition, two other performance-based weighted-average BCE schemes, the exponential smoothing method BCE and the minimum variance estimate BCE, are implemented and evaluated. Root-mean-square errors of the 2-m temperature and dewpoint temperature forecasts indicate that the BCE approach outperforms the routinely available Global Forecast System (GFS) model output statistics (MOS) forecasts during the cool season by 9% and 8%, respectively. In contrast, the GFS MOS provides more accurate forecasts of 10-m wind speed than any of the BCE methods. The performance-weighted BCE schemes yield no significant improvement in forecast accuracy for 2-m temperature and 2-m dewpoint temperature when compared with the original BCE, although they are found to improve the forecast accuracy of the 10-m wind speed. The probabilistic forecast guidance provided by the BCE system is found to be more reliable than that of the raw ensemble forecasts. These results parallel those obtained during the summers of 2002–04 and indicate that the BCE method is a promising and inexpensive statistical postprocessing scheme that could be used in all seasons.
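The exponential smoothing variant can be sketched as a standard exponentially weighted bias estimate; the smoothing factor alpha is illustrative, since the abstract does not give the weighting details:

```python
def exp_smoothed_bias(daily_errors, alpha=0.25):
    """Exponentially smoothed estimate of the forecast bias: recent
    (forecast - observation) errors count more than older ones.
    The corrected forecast is then raw_forecast - bias."""
    bias = daily_errors[0]
    for err in daily_errors[1:]:
        bias = alpha * err + (1.0 - alpha) * bias
    return bias
```

Compared with the plain running-mean BCE, this weighting reacts faster when a model's bias shifts, at the cost of noisier estimates on quiet days.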

Nusrat Yussouf, David J. Stensrud, and S. Lakshmivarahan

Abstract

An ensemble of 48-h forecasts from 23 cases during the months of July and August 2002, which was created as part of a National Oceanic and Atmospheric Administration pilot program on temperature and air quality forecasting, is evaluated using a clustering method. The ensemble forecasting system consists of 23 total forecasts from four different models: the National Centers for Environmental Prediction (NCEP) Eta Model (ETA), the NCEP Regional Spectral Model (RSM), the Rapid Update Cycle (RUC) model, and the fifth-generation Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model (MM5). Forecasts of 2-m temperature, 850-hPa u-component wind speed, 500-hPa temperature, and 250-hPa u-component wind speed are bilinearly interpolated to a common grid, and a cluster analysis is conducted at each of the 17 output times for each of the case days using a hierarchical clustering approach.

Results from the clustering indicate that the forecasts largely cluster by model, with these intramodel clusters occurring quite often near the surface and less often at higher levels in the atmosphere. Results also indicate that model physics diversity plays a relatively larger role than initial condition diversity in producing distinct groupings of the forecasts. If the goal of ensemble forecasting is to have each model forecast represent an equally likely solution, then this goal remains distant as the model forecasts too often cluster based upon the model that produces the forecasts. Ensembles that contain both initial condition and model dynamics and physics uncertainty are recommended.
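The hierarchical clustering step can be illustrated with a tiny single-linkage implementation. The study's actual linkage rule and distance metric are not specified here, so treat this as a generic sketch over forecast fields flattened to vectors:

```python
import numpy as np

def single_linkage_clusters(forecasts, k):
    """Agglomerative single-linkage clustering: start with each forecast
    as its own cluster and repeatedly merge the two closest clusters
    until only k remain.  Returns a list of member-index lists."""
    pts = np.asarray(forecasts, dtype=float)
    clusters = [[i] for i in range(len(pts))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Single linkage: distance between the closest pair of
                # members drawn from the two clusters.
                d = min(np.linalg.norm(pts[i] - pts[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters[b])
        del clusters[b]
    return clusters
```

If the resulting clusters line up with which model produced each forecast, as the abstract reports, the ensemble members are not equally likely solutions but families of model-specific ones.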

Dustan M. Wheatley, Nusrat Yussouf, and David J. Stensrud

Abstract

A Weather Research and Forecasting Model (WRF)-based ensemble data assimilation system is used to produce storm-scale analyses and forecasts of the 4–5 July 2003 severe mesoscale convective system (MCS) over Indiana and Ohio, which produced numerous high wind reports across the two states. Single-Doppler observations are assimilated into a 36-member, storm-scale ensemble during the developing stage of the MCS with the ensemble Kalman filter (EnKF) approach encoded in the Data Assimilation Research Testbed (DART). The storm-scale ensemble is constructed from mesoscale EnKF analyses produced from the assimilation of routinely available observations from land and marine stations, rawinsondes, and aircraft, in an attempt to better represent the complex mesoscale environment for this event. Three EnKF simulations were performed using the National Severe Storms Laboratory (NSSL) one- and two-moment and Thompson microphysical schemes. All three experiments produce a linear convective segment at the final analysis time, similar to the observed system at 2300 UTC 4 July 2003. The higher-order schemes—in particular, the Thompson scheme—are better able to produce short-range forecasts of both the convective and stratiform components of the observed bowing MCS, and produce the smallest temperature errors when comparing surface observations and dropsonde data to corresponding model data. Only the higher-order microphysical schemes produce any appreciable rear-to-front flow in the stratiform precipitation region that trailed the simulated systems. Forecast performance by the three microphysics schemes is discussed in context of differences in microphysical composition produced in the stratiform precipitation regions of the rearward expanding MCSs.

Kenneth A. James, David J. Stensrud, and Nusrat Yussouf

Abstract

Near-real-time values of vegetation fraction are incorporated into a 2-km nested version of the Advanced Research Weather Research and Forecasting (ARW) model and compared to forecasts from a control run that uses climatological values of vegetation fraction for eight severe weather events during 2004. It is hypothesized that an improved partitioning of surface sensible and latent heat fluxes occurs when incorporating near-real-time values of the vegetation fraction into models, which may result in improved forecasts of the low-level environmental conditions that support convection and perhaps even lead to improved explicit convective forecasts. Five of the severe weather events occur in association with weak synoptic-scale forcing, while three of the events occur in association with moderate or strong synoptic-scale forcing.

Results show that using the near-real-time values of the vegetation fraction alters the values and structure of low-level temperature and dewpoint temperature fields compared to the forecasts using climatological vegetation fractions. The environmental forecasts that result from using the real-time vegetation fraction are more thermodynamically supportive of convection, including stronger and deeper frontogenetic circulations, and statistically significant improvements of most unstable CAPE forecasts compared to the control run. However, despite the improved environmental forecasts, the explicit convective forecasts using real-time vegetation fractions show little to no improvement over the control forecasts. The convective forecasts are generally poor under weak synoptic-scale forcing and generally good under strong synoptic-scale forcing. These results suggest that operational forecasters can best use high-resolution forecasts to help diagnose environmental conditions within an ingredients-based forecasting approach.
