Search Results

Items 11-20 of 26 for Author or Editor: Jun Du
Steven L. Mullen, Jun Du, and Frederick Sanders

Abstract

The impact of differences in analysis–forecast systems on dispersion of an ensemble forecast is examined for a case of cyclogenesis. Changes in the dispersion properties between two 25-member ensemble forecasts with different cumulus parameterization schemes and different initial analyses are compared. The statistical significance of the changes is assessed.

Error growth due to initial condition uncertainty depends significantly on the analysis–forecast system. Quantitative precipitation forecasts and probabilistic quantitative precipitation forecasts are extremely sensitive to the specification of physical parameterizations in the model. Regions of large variability tend to coincide with a high likelihood of parameterized convection. Analysis of other model fields suggests that those with relatively large energy in the mesoscale also exhibit highly significant differences in dispersion.

The results presented here provide evidence that the combined effect of uncertainties in model physics and the initial state provides a means to increase the dispersion of ensemble prediction systems, but care must be taken in the construction of mixed ensemble systems to ensure that other properties of the ensemble distribution are not overly degraded.
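A minimal sketch of how such a dispersion comparison might be carried out, assuming per-gridpoint member variances and a two-sided F-test; the synthetic data and the choice of test are illustrative, not the paper's stated method:

```python
# Sketch: compare the spread of two 25-member ensembles at each grid
# point and test whether their variances differ significantly.
# Synthetic fields and an F-test stand in for the paper's procedure.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
n_members, n_points = 25, 1000
ens_a = rng.normal(0.0, 1.0, (n_members, n_points))  # first analysis/physics configuration
ens_b = rng.normal(0.0, 1.3, (n_members, n_points))  # second configuration, larger spread

var_a = ens_a.var(axis=0, ddof=1)  # member variance at each grid point
var_b = ens_b.var(axis=0, ddof=1)
ratio = var_b / var_a              # F statistic with (24, 24) degrees of freedom
dof = n_members - 1
p = 2.0 * np.minimum(f.cdf(ratio, dof, dof), f.sf(ratio, dof, dof))  # two-sided p-value
print(f"points with significantly different dispersion (p < 0.05): {(p < 0.05).mean():.0%}")
```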

Weihong Qian, Jun Du, Xiaolong Shan, and Ning Jiang

Abstract

Properly incorporating moisture effects into a dynamical parameter can significantly increase the parameter’s ability to diagnose heavy rain locations. The relative humidity–based weighting approach used to extend the moist potential vorticity (MPV) to the generalized moist potential vorticity (GMPV) is analyzed and demonstrates such an improvement. Following the same approach, two new diagnostic parameters, moist vorticity (MV) and moist divergence (MD), are proposed in this study by incorporating moisture effects into the traditional vorticity and divergence. A regional heavy rain event that occurred along the Yangtze River on 1 July 1991 is used as a case study, and 41 daily regional heavy rain events during the notorious flooding year of 1998 in eastern China are used for a systematic evaluation. Results show that after moisture effects are properly incorporated, the improvement in the ability of all three parameters to capture a heavy rain area is statistically significant (at the 99% confidence level): in terms of the threat score (TS), the GMPV improves over the MPV by 194%, the MD over the divergence by 60%, and the MV over the vorticity by 34%. The average TS is 0.270 for the MD, 0.262 for the MV, and 0.188 for the GMPV. Applying the MV and MD to assess heavy rain potential is not intended to replace a complete, multiscale forecasting methodology; however, the results from this study suggest that the MV and MD could be used to postprocess a model forecast to potentially improve heavy rain location predictions.
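The threat score used for this comparison is the standard TS = hits / (hits + misses + false alarms). A minimal sketch, with a hypothetical diagnostic field and event threshold standing in for the MV, MD, and GMPV fields:

```python
# Sketch: threat score (TS) for binary heavy-rain detection, the metric
# used to compare the MV, MD, and GMPV above. All data are illustrative.
import numpy as np

def threat_score(forecast_event: np.ndarray, observed_event: np.ndarray) -> float:
    """TS = hits / (hits + misses + false alarms)."""
    hits = np.sum(forecast_event & observed_event)
    misses = np.sum(~forecast_event & observed_event)
    false_alarms = np.sum(forecast_event & ~observed_event)
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

rng = np.random.default_rng(1)
md = rng.random(10000)         # hypothetical normalized moist-divergence field
obs = rng.random(10000) > 0.9  # hypothetical observed heavy-rain mask
print(f"TS = {threat_score(md > 0.9, obs):.3f}")  # flag points where the diagnostic exceeds a threshold
```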

Daniel Gombos, James A. Hansen, Jun Du, and Jeff McQueen

Abstract

A minimum spanning tree (MST) rank histogram (RH) is a multidimensional ensemble reliability verification tool. The construction of debiased, decorrelated, and covariance-homogenized MST RHs is described. Experiments using Euclidean L2, variance, and Mahalanobis norms imply that, unless the number of ensemble members is less than or equal to the number of dimensions being verified, the Mahalanobis norm transforms the problem into a space where ensemble imperfections are most readily identified. Short-Range Ensemble Forecast Mahalanobis-normed MST RHs for a cluster of northeastern U.S. cities show that forecasts of the temperature–humidity index are the most reliable of those considered, followed by mean sea level pressure, 2-m temperature, and 10-m wind speed forecasts. MST RHs for a cluster of Southwest cities indicate that 2-m temperature forecasts are the most reliable in this region, followed by mean sea level pressure, 10-m wind speed, and the temperature–humidity index. Forecasts for the Southwest cluster are generally less reliable than those for the Northeast cluster.
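A sketch of one MST rank computation in the Mahalanobis norm, assuming the usual whitening by the ensemble covariance and a substitution procedure consistent with the description above; the toy data and the rank convention are illustrative, not the authors' exact code:

```python
# Sketch: minimum-spanning-tree (MST) rank histogram in the Mahalanobis norm.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(points: np.ndarray) -> float:
    """Total edge length of the MST over a set of points."""
    dists = squareform(pdist(points))
    return minimum_spanning_tree(dists).sum()

def mst_rank(ensemble: np.ndarray, obs: np.ndarray) -> int:
    """Rank of the ensemble-only MST length among the n+1 lengths obtained
    by substituting the observation for each member in turn."""
    # Mahalanobis norm: whiten with the inverse ensemble covariance so that
    # Euclidean distances in the whitened space are Mahalanobis distances.
    cov = np.cov(ensemble, rowvar=False)
    white = np.linalg.cholesky(np.linalg.inv(cov))
    ens_w, obs_w = ensemble @ white, obs @ white
    base = mst_length(ens_w)
    substituted = [
        mst_length(np.vstack([np.delete(ens_w, i, axis=0), obs_w]))
        for i in range(len(ens_w))
    ]
    return int(np.sum(base > np.array(substituted)))  # rank in 0..n

# Tallying ranks over many cases yields the MST RH; a flat histogram
# indicates a reliable ensemble.
rng = np.random.default_rng(2)
ranks = [mst_rank(rng.normal(size=(15, 3)), rng.normal(size=3)) for _ in range(500)]
print(np.bincount(ranks, minlength=16))
```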

David J. Stensrud, Harold E. Brooks, Jun Du, M. Steven Tracton, and Eric Rogers

Abstract

Numerical forecasts from a pilot program on short-range ensemble forecasting at the National Centers for Environmental Prediction are examined. The ensemble consists of 10 forecasts made using the 80-km Eta Model and 5 forecasts from the regional spectral model. Results indicate that the accuracy of the ensemble mean is comparable to that from the 29-km Meso Eta Model for both mandatory-level data and the 36-h forecast cyclone position. Calculations of spread indicate that at 36 and 48 h the spread from initial conditions created using the breeding of growing modes technique is larger than the spread from initial conditions created using different analyses. However, the accuracy of the forecast cyclone position from these two initialization techniques is nearly identical. Results further indicate that using two different numerical models helps increase the ensemble spread significantly.

There is little correlation between the spread of the ensemble members and the accuracy of the ensemble mean for the prediction of cyclone location. Since information on forecast uncertainty is needed in many applications, and is one of the reasons to use an ensemble approach, this lack of a spread–skill correlation presents a challenge to the production of short-range ensemble forecasts.

Even though the ensemble dispersion is not found to be an indication of forecast uncertainty, significant spread can occur within the forecasts over a relatively short time period. Examples are shown to illustrate how small uncertainties in the model initial conditions can lead to large differences in numerical forecasts from an identical numerical model.
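The spread–skill relation discussed above can be summarized as the correlation, across forecast cases, between ensemble spread and the error of the ensemble mean. A minimal sketch with synthetic placeholder data:

```python
# Sketch: spread-skill correlation across forecast cases. The scalar
# verified here (e.g., a cyclone-position error proxy) is hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_cases, n_members = 100, 15
# Member errors per case, with case-to-case variation in error magnitude.
errors = rng.normal(0.0, rng.gamma(2.0, 1.0, (n_cases, 1)), (n_cases, n_members))

spread = errors.std(axis=1, ddof=1)           # ensemble spread per case
mean_abs_error = np.abs(errors.mean(axis=1))  # error of the ensemble mean per case

r = np.corrcoef(spread, mean_abs_error)[0, 1]
print(f"spread-skill correlation: r = {r:.2f}")  # near zero implies little usable signal
```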

Huiling Yuan, Steven L. Mullen, Xiaogang Gao, Soroosh Sorooshian, Jun Du, and Hann-Ming Henry Juang

Abstract

The National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) is used to produce twice-daily (0000 and 1200 UTC), high-resolution ensemble forecasts to 24 h. The forecasts are performed at an equivalent horizontal grid spacing of 12 km for the period 1 November 2002 to 31 March 2003 over the southwest United States. The performance of 6-h accumulated precipitation is assessed for 32 U.S. Geological Survey hydrologic catchments. Multiple accuracy and skill measures are used to evaluate probabilistic quantitative precipitation forecasts. NCEP stage-IV precipitation analyses are used as “truth,” with verification performed on the stage-IV 4-km grid. The RSM ensemble exhibits a ubiquitous wet bias that manifests itself in areal coverage, frequency of occurrence, and total accumulated precipitation over every region and during every 6-h period. The biases become particularly acute starting with the 1800–0000 UTC interval, which produces a spurious diurnal cycle and affects the 1200 UTC cycle more adversely than the 0000 UTC cycle. Forecast quality and value exhibit marked variability over different hydrologic regions. The forecasts are highly skillful along coastal California and the windward slopes of the Sierra Nevada, but they generally lack skill over the Great Basin and the Colorado basin except over mountain peaks. The RSM ensemble is able to discriminate precipitation events and provide useful guidance to a wide range of users over most regions of California, which suggests that mitigating the conditional biases through statistical postprocessing would produce major improvements in skill.
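A wet bias of the kind described here is commonly quantified as a frequency bias, the ratio of forecast to observed event frequency above a threshold. A sketch with hypothetical 6-h accumulations; the fields and thresholds are illustrative:

```python
# Sketch: frequency bias of precipitation occurrence above a threshold.
# A ratio > 1 indicates the wet bias described in the abstract above.
import numpy as np

def frequency_bias(fcst: np.ndarray, obs: np.ndarray, thresh: float) -> float:
    """Ratio of forecast to observed event frequency at a threshold."""
    return np.mean(fcst >= thresh) / np.mean(obs >= thresh)

rng = np.random.default_rng(4)
fcst_6h = rng.gamma(0.5, 4.0, 100000)  # hypothetical 6-h member precipitation (mm)
obs_6h = rng.gamma(0.5, 3.0, 100000)   # hypothetical stage-IV analysis (mm)
for t in (1.0, 5.0, 10.0):
    print(f">= {t:4.1f} mm: bias = {frequency_bias(fcst_6h, obs_6h, t):.2f}")
```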

Huiling Yuan, Steven L. Mullen, Xiaogang Gao, Soroosh Sorooshian, Jun Du, and Hann-Ming Henry Juang

Abstract

The National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) is used to generate ensemble forecasts over the southwest United States during the 151 days of 1 November 2002 to 31 March 2003. RSM forecasts to 24 h on a 12-km grid are produced from 0000 and 1200 UTC initial conditions. Eleven ensemble members are run each forecast cycle from the NCEP Global Forecast System (GFS) ensemble analyses (one control and five pairs of bred modes) and forecast lateral boundary conditions. The model domain covers two NOAA River Forecast Centers: the California Nevada River Forecast Center (CNRFC) and the Colorado Basin River Forecast Center (CBRFC). Ensemble performance is evaluated for probabilistic forecasts of 24-h accumulated precipitation in terms of several accuracy and skill measures. Differences among several NCEP precipitation analyses are assessed along with their impact on model verification, with NCEP stage IV blended analyses selected to represent “truth.”

Forecast quality and potential value are found to depend strongly on the verification dataset, geographic region, and precipitation threshold. In general, the RSM forecasts are skillful over the CNRFC region for thresholds between 1 and 50 mm but are unskillful over the CBRFC region. The model exhibits a wet bias for all thresholds that is larger over Nevada and the CBRFC region than over California. Mitigation of such biases over the Southwest will pose serious challenges to the modeling community in view of the uncertainties inherent in verifying analyses.
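One of the skill measures referenced in these evaluations, the Brier skill score relative to sample climatology, can be sketched as follows; the probabilities and outcomes are synthetic stand-ins:

```python
# Sketch: Brier skill score (BSS) of probabilistic threshold-exceedance
# forecasts against the sample base rate; BSS > 0 indicates skill.
import numpy as np

def brier_skill_score(prob: np.ndarray, occurred: np.ndarray) -> float:
    """BSS = 1 - BS / BS_climatology, with climatology = sample base rate."""
    bs = np.mean((prob - occurred) ** 2)
    bs_clim = np.mean((occurred.mean() - occurred) ** 2)
    return 1.0 - bs / bs_clim

rng = np.random.default_rng(5)
occurred = (rng.random(5000) < 0.2).astype(float)  # observed exceedance of a threshold
# Forecast probabilities loosely correlated with the outcome.
prob = np.clip(0.2 + 0.5 * (occurred - 0.2) + rng.normal(0, 0.15, 5000), 0, 1)
print(f"BSS = {brier_skill_score(prob, occurred):.2f}")
```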

Guo Deng, Jun Du, Yushu Zhou, Ling Yan, Jing Chen, Fajing Chen, Hongqi Li, and Jingzhou Wang

Abstract

Using a 3-km regional ensemble prediction system (EPS), this study tested a three-dimensional (3D) rescaling mask for initial condition (IC) perturbations. Whether the 3D mask-based EPS improves ensemble forecasts over the current two-dimensional (2D) mask-based EPS was evaluated in three aspects: ensemble mean, spread, and probability. The forecasts of wind, temperature, geopotential height, sea level pressure, and precipitation were examined for a summer month (1–28 July 2018) and a winter month (1–27 February 2019) over a region in North China. The EPS was run twice per day (initialized at 0000 and 1200 UTC) to 36 h in forecast length, providing 56 warm-season forecast cases and 54 cold-season cases for verification. The warm and cold seasons are verified separately for comparison. The study found the following: 1) The vertical profile of the IC perturbations becomes closer to that of the analysis uncertainty with the 3D rescaling mask. 2) Ensemble performance is significantly improved in all three aspects: the biggest improvement is in the ensemble spread, followed by the probabilistic forecast, with the least improvement in the ensemble mean forecast. Larger improvements are seen in the warm season than in the cold season. 3) Improvement is greater at shorter ranges (<24 h) than at longer ranges. 4) Surface and lower-level variables are improved more than upper-level ones. 5) The underlying mechanism was investigated: convective instability is found to be responsible for the spread increment and, thus, the overall ensemble forecast improvement. Therefore, using a 3D rescaling mask is recommended for an EPS to increase its utility, especially at shorter time ranges and for surface weather elements.
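A minimal sketch of the idea behind a 3D rescaling mask as described above: scale the IC perturbation so its amplitude follows an analysis-uncertainty estimate that varies in all three dimensions rather than only horizontally. The mask construction, array shapes, and uncertainty profile are illustrative assumptions, not the operational algorithm:

```python
# Sketch: 2D vs 3D rescaling of an IC perturbation toward a target
# analysis-uncertainty field. All fields here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(6)
nz, ny, nx = 50, 40, 40
perturbation = rng.normal(0.0, 1.0, (nz, ny, nx))         # raw unit-amplitude perturbation
target_sigma = np.linspace(0.2, 1.0, nz)[:, None, None]    # uncertainty growing with height
target_sigma = target_sigma * (1.0 + 0.3 * rng.random((1, ny, nx)))  # horizontal variation

# 2D mask: one horizontal field of factors applied identically at all levels.
scale_2d = target_sigma.mean(axis=0)                       # column-mean uncertainty (ny, nx)
pert_2d = perturbation * scale_2d[None, :, :]

# 3D mask: factors vary with height too, so the perturbation's vertical
# profile tracks the analysis-uncertainty profile level by level.
pert_3d = perturbation * target_sigma

print("2D-masked std by level:", pert_2d.std(axis=(1, 2))[::16].round(2))
print("3D-masked std by level:", pert_3d.std(axis=(1, 2))[::16].round(2))
```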

Significant Statement

A weather prediction model is a complex system of nonlinear differential equations. Small errors in either its inputs or the model itself grow with time during model integration and contaminate a forecast. To quantify such contamination (“uncertainty”) of a forecast, the ensemble forecasting technique is used. An ensemble of forecasts is a set of model runs valid at the same time but starting from slightly “perturbed” inputs or using slightly different model versions. These small perturbations are meant to represent the true “uncertainty” in the inputs or in the model representation. This study proposed a technique that makes a perturbation’s vertical structure more closely resemble the real uncertainty (intrinsic error) in the input data, and confirmed that it can significantly improve ensemble forecast quality, especially at shorter time ranges and for lower-level weather elements. Convective instability is found to be responsible for the improvement.

Jingzhuo Wang, Jing Chen, Jun Du, Yutao Zhang, Yu Xia, and Guo Deng

Abstract

This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [the Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over a one-month period over China. Three variables (500-hPa and 2-m temperatures, and 250-hPa wind) are selected to represent “strong” and “weak” bias situations. Ensemble spread and probabilistic forecasts are compared before and after a bias correction. The results show that the conclusions drawn from ensemble verification are dramatically different with and without model bias, for both ensemble spread and probabilistic forecasts. The GRAPES-REPS is severely underdispersive before the bias correction but becomes well calibrated afterward, although the improvement in the spread’s spatial structure is much smaller; the spread–skill relation is also improved. The probabilities become much sharper and almost perfectly reliable after the bias is removed. It is therefore necessary to remove forecast bias before an EPS can be accurately evaluated, since an EPS deals only with random error, not systematic error; only when an EPS has little or no forecast bias can ensemble verification metrics reliably reveal its true quality. An implication is that EPS developers should not try to achieve reliability by dramatically increasing ensemble spread (either by perturbation method or by statistical calibration). Instead, the preferred solution is to reduce model bias through prediction system development and to focus on the quality, not the quantity, of spread. Forecast products should also be produced from the debiased ensemble rather than the raw one.
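A toy illustration of the point made above: with a systematic bias present, spread verification misdiagnoses a well-constructed ensemble as underdispersive, while a simple mean-bias removal (an illustrative stand-in for a full calibration) restores the spread-error agreement:

```python
# Sketch: how model bias distorts spread verification. Synthetic data;
# in practice the bias would be estimated on a separate training period.
import numpy as np

rng = np.random.default_rng(7)
n_cases, n_members = 200, 15
signal = rng.normal(0.0, 2.0, n_cases)                    # predictable part of the truth
truth = signal + rng.normal(0.0, 0.5, n_cases)            # truth drawn from the forecast pdf
bias = 1.5                                                # systematic model error
members = signal[:, None] + bias + rng.normal(0.0, 0.5, (n_cases, n_members))

spread = members.std(axis=1, ddof=1).mean()
rmse_raw = np.sqrt(((members.mean(axis=1) - truth) ** 2).mean())

est_bias = (members.mean(axis=1) - truth).mean()          # estimated systematic error
rmse_deb = np.sqrt((((members - est_bias).mean(axis=1) - truth) ** 2).mean())

# Raw verification misreads the ensemble as underdispersive (spread << RMSE);
# after the bias correction, spread and ensemble-mean RMSE agree.
print(f"spread {spread:.2f}, raw RMSE {rmse_raw:.2f}, debiased RMSE {rmse_deb:.2f}")
```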

Huiling Yuan, Xiaogang Gao, Steven L. Mullen, Soroosh Sorooshian, Jun Du, and Hann-Ming Henry Juang

Abstract

A feed-forward neural network is configured to calibrate the bias of high-resolution probabilistic quantitative precipitation forecasts (PQPFs) produced by a 12-km version of the NCEP Regional Spectral Model (RSM) ensemble forecast system. Twice-daily forecasts during the 2002–2003 cool season (1 November–31 March, inclusive) are run over four U.S. Geological Survey (USGS) hydrologic unit regions of the southwest United States. Calibration is performed via a cross-validation procedure in which four months are used for training and the excluded month for testing. The PQPFs before and after calibration over a hydrologic unit region are evaluated by comparing the joint probability distribution of forecasts and observations. Verification is performed on the 4-km stage IV grid, which is used as “truth.” The calibration procedure improves the Brier score (BrS), conditional bias (reliability), and forecast skill, such as the Brier skill score (BrSS) and the ranked probability skill score (RPSS), relative to the sample frequency for all geographic regions and most precipitation thresholds. However, the procedure degrades the resolution of the PQPFs by systematically producing more forecasts with low nonzero probabilities, which drives the forecast distribution closer to the climatology of the training sample. This loss of resolution is most severe over the Colorado River basin and the Great Basin at relatively high precipitation thresholds, where the sample of observed events is relatively small.
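A sketch of the leave-one-month-out calibration described above, with scikit-learn's MLPClassifier standing in for the paper's feed-forward network; the predictors, data, and network size are illustrative assumptions:

```python
# Sketch: leave-one-month-out calibration of threshold-exceedance
# probabilities with a small feed-forward network. Synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
n = 3000
months = rng.integers(0, 5, n)            # Nov..Mar coded 0..4
raw_prob = rng.random(n)                  # raw ensemble PQPF for one threshold
ens_mean = rng.gamma(0.5, 4.0, n)         # ensemble-mean precipitation (mm)
occurred = (rng.random(n) < 0.7 * raw_prob).astype(int)  # raw probabilities are biased high
X = np.column_stack([raw_prob, ens_mean])

calibrated = np.empty(n)
for m in range(5):                        # train on 4 months, test on the excluded one
    train, test = months != m, months == m
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X[train], occurred[train])
    calibrated[test] = net.predict_proba(X[test])[:, 1]

print(f"raw BS {np.mean((raw_prob - occurred) ** 2):.3f}, "
      f"calibrated BS {np.mean((calibrated - occurred) ** 2):.3f}")
```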
