Search Results

You are looking at 1–10 of 21 items for Author or Editor: Jun Du.
Jun Du and Binbin Zhou

Abstract

This study proposes a dynamical performance-ranking method (called the Du–Zhou ranking method) to predict the relative performance of individual ensemble members by assuming the ensemble mean is a good estimate of the truth. The results show that the method 1) generally works well, especially for shorter ranges such as a 1-day forecast; 2) has less error in predicting the extreme (best and worst) performers than the intermediate performers; 3) works better when the variation in performance among ensemble members is large; 4) works better when the model bias is small; 5) works better in a multimodel than in a single-model ensemble environment; and 6) works best when using the magnitude difference between a member and its ensemble mean as the “distance” measure in ranking members. The ensemble mean and median generally perform similarly to each other.
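
As a concrete illustration, the ranking idea can be sketched in a few lines: treat the ensemble mean as a proxy for the truth and order members by their domain-averaged magnitude (absolute) difference from it, the distance measure the abstract reports as working best. This is only a minimal sketch of the concept; the formulation in the paper may differ in details such as the averaging domain and the variable used.

```python
import numpy as np

def du_zhou_rank(members: np.ndarray) -> np.ndarray:
    """Predict the relative performance of ensemble members.

    members: array of shape (n_members, n_gridpoints).
    Returns member indices ordered from predicted best to predicted worst.
    """
    ens_mean = members.mean(axis=0)                     # ensemble mean as a proxy for the truth
    distance = np.abs(members - ens_mean).mean(axis=1)  # magnitude difference from the mean
    return np.argsort(distance)                         # small distance -> predicted good performer

# Toy example: a 5-member, 100-point ensemble
rng = np.random.default_rng(0)
print(du_zhou_rank(rng.normal(size=(5, 100))))
```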

This method was then applied to a weighted ensemble average to see whether it could improve the ensemble mean forecast over the commonly used, simple equally weighted ensemble averaging method. The results indicate that the weighted ensemble mean forecast has a smaller systematic error. This superiority of the weighted over the simple mean is especially pronounced for smaller ensembles, such as 5 and 11 members, but it decreases as ensemble size increases and almost vanishes at 21 members. There is, however, little impact on the random error and the spatial patterns of ensemble mean forecasts. These results imply that it might be difficult to improve the ensemble mean simply by weighting members once an ensemble reaches a certain size. Weighted averaging does, however, reduce the total forecast error more when the raw ensemble-mean forecast itself is less accurate. The effectiveness of weighted averaging should also improve as the ensemble spread or the ranking method itself improves, although such an improvement is not expected to be large (probably less than 10%, on average).
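
A weighted ensemble mean built on the predicted ranks could look like the sketch below. The abstract does not give the weighting function, so the inverse-rank weights here (and the reuse of the hypothetical du_zhou_rank above) are assumptions for illustration only.

```python
import numpy as np

def weighted_ensemble_mean(members: np.ndarray, order: np.ndarray) -> np.ndarray:
    """Weighted ensemble mean using a predicted performance order.

    order: member indices from predicted best to worst (e.g., from du_zhou_rank).
    Assumed inverse-rank weights: best member gets weight 1, second 1/2, and so on.
    """
    n = len(order)
    weights = np.zeros(n)
    weights[order] = 1.0 / np.arange(1, n + 1)  # rank 1 -> 1, rank 2 -> 1/2, ...
    weights /= weights.sum()                    # normalize to sum to one
    return np.average(members, axis=0, weights=weights)
```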

Full access
Binbin Zhou and Jun Du

Abstract

A new multivariable-based diagnostic fog-forecasting method has been developed at NCEP. The selection of these variables, their thresholds, and their influences on the fog forecast are discussed. With the inclusion of the algorithm in the model postprocessor, fog forecasts can now be provided centrally as direct NWP model guidance, and the method can be easily adapted to other NWP models. Knowledge of how well fog forecasts based on operational NWP models perform is currently lacking. To verify the new method, assess fog forecast skill, and account for forecast uncertainty, the fog-forecasting algorithm is applied to a multimodel-based Mesoscale Ensemble Prediction System (MEPS). MEPS consists of 10 members from two regional models [the NCEP Nonhydrostatic Mesoscale Model (NMM) version of the Weather Research and Forecasting (WRF) model and the NCAR Advanced Research version of WRF (ARW)] run at 15-km horizontal resolution. Each model provides five members (one control and four members whose initial conditions are perturbed with the breeding technique) and was run once per day out to 36 h over eastern China for seven months (February–September 2008). Both deterministic and probabilistic forecasts were produced based on individual members, a one-model ensemble, and two-model ensembles. A case study and statistical verification, using both deterministic and probabilistic scores, were performed against fog observations from 13 cities in eastern China. The verification focused on the 12- and 36-h forecasts.
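
The abstract does not list the variables or thresholds used in the multivariable fog algorithm, so the sketch below is only a hypothetical illustration of the structure of such a rule: several near-surface fields (assumed here to be cloud liquid water content, 2-m relative humidity, and 10-m wind speed, with made-up thresholds) are combined instead of relying on LWC alone.

```python
import numpy as np

def diagnose_fog(lwc, rh2m, wind10m,
                 lwc_thresh=0.015,   # g kg-1, assumed
                 rh_thresh=90.0,     # %, assumed
                 wind_thresh=2.0):   # m s-1, assumed
    """Hypothetical multivariable fog flag (not the paper's actual rule).

    Flags fog where there is enough near-surface condensate, or where the
    surface layer is nearly saturated and calm.
    """
    lwc, rh2m, wind10m = map(np.asarray, (lwc, rh2m, wind10m))
    condensate_rule = lwc >= lwc_thresh
    surface_rule = (rh2m >= rh_thresh) & (wind10m <= wind_thresh)
    return condensate_rule | surface_rule
```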

By successively applying the new fog detection scheme, the ensemble technique, the multimodel approach, and an increase in ensemble size, the fog forecast accuracy was steadily and dramatically improved: from essentially no skill [equitable threat score (ETS) = 0.063] to a skill level equivalent to that of warm-season precipitation forecasts from current NWP models (0.334). Specifically, 1) the multivariable-based fog diagnostic method has a much higher detection capability than the liquid water content (LWC)-only approach, and reasons why the multivariable approach works better are illustrated. 2) The ensemble-based forecasts are, in general, superior to a single control forecast when measured both deterministically and probabilistically. The case study also demonstrates that the ensemble approach can provide more societal value than a single forecast to end users, especially for low-probability significant events like fog; deterministically, a forecast close to the ensemble median is particularly helpful. 3) The reliability of probabilistic forecasts can be effectively improved by using a multimodel ensemble instead of a single-model ensemble. For a small ensemble such as the one in this study, increasing the ensemble size is also important for improving probabilistic forecasts, although this effect is expected to diminish as the ensemble size grows.
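
The equitable threat score quoted above is the standard 2 × 2 contingency-table measure; a minimal implementation is sketched below.

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score (ETS): threat score adjusted for chance hits."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Example with made-up counts:
# equitable_threat_score(30, 40, 50, 880) -> about 0.21
```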

Full access
Weihong Qian, Jun Du, and Yang Ai

Abstract

Comparisons between anomaly and full-field methods have been carried out in weather analysis and forecasting over the last decade. Evidence from these studies has demonstrated the superiority of the anomaly approach over the full-field approach in four aspects: depiction of weather systems, anomaly forecasts, diagnostic parameters, and model prediction. To promote the use and further discussion of the anomaly approach, this article summarizes those findings. After examining many types of weather events, anomaly weather maps show at least five advantages in weather system depiction: 1) less vagueness in visually connecting the location of an event with its associated meteorological conditions, 2) clearer and more complete depiction of the vertical structure of a disturbance, 3) easier observation of the temporal and spatial evolution of an event and its interaction or connection with other weather systems, 4) simplification of conceptual models by unifying different weather systems into one pattern, and 5) extension of model forecast length through earlier detection of predictors. Anomaly verification is also discussed. The anomaly forecast is useful for raising awareness of potential societal impact; combining the anomaly forecast with an ensemble is emphasized, and a societal impact index is discussed. For diagnostic parameters, two examples are given: an anomalous convective instability index for convection, and seven vorticity- and divergence-related parameters for heavy rain. Both show positive contributions from the anomalous fields. For model prediction, the anomaly version of the beta-advection model consistently outperformed its full-field version in predicting typhoon tracks, with a clearer physical explanation. Application of anomaly global models to seasonal forecasts is also reviewed.
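
The decomposition underlying the anomaly approach is simple enough to state in code: the anomaly is the departure of the total field from a climatological mean valid at the same time of year. The standardized form shown here is a common convention and an assumption of this sketch, not the paper's specific societal impact index.

```python
import numpy as np

def anomaly(total, clim_mean, clim_std=None):
    """Split a total field into climatology + anomaly.

    Returns the raw anomaly, and additionally the standardized anomaly
    (departure in units of climatological standard deviation) if clim_std
    is given.
    """
    anom = np.asarray(total) - np.asarray(clim_mean)
    if clim_std is None:
        return anom
    return anom, anom / np.asarray(clim_std)
```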

Full access
Jun Du, Binbin Zhou, and Jason Levit

Abstract

Responding to the call for new verification methods in a recent editorial in Weather and Forecasting, this study proposes two new verification metrics to quantify the forecast challenges that a user faces in decision-making when using ensemble models. The measure of forecast challenge (MFC) combines forecast error and uncertainty information into one score. It consists of four elements: ensemble mean error, spread, nonlinearity, and outliers. The cross correlation among the four elements indicates that each element contains independent information. The relative contribution of each element to the MFC is analyzed by calculating the correlation between each element and MFC; the biggest contributor is the ensemble mean error, followed by the ensemble spread, nonlinearity, and outliers. By applying MFC to the predictability horizon diagram of a forecast ensemble, a predictability horizon diagram index (PHDX) is defined to quantify how the ensemble evolves at a specific location as an event approaches. The value of PHDX varies between 1.0 and −1.0. A positive PHDX indicates that the forecast challenge decreases as an event nears (type I), providing credible forecast information to users. A negative PHDX indicates that the forecast challenge increases as an event nears (type II), providing misleading information to users. A near-zero PHDX indicates that the forecast challenge remains large as an event nears, providing largely uncertain information to users. Unlike current verification metrics that verify a forecast at a particular point in time, PHDX verifies the forecasting process over many forecast cycles. Forecasting-process-oriented verification could be a new direction in model verification. The sample ensemble forecasts used in this study are produced from the NCEP global and regional ensembles.
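
Since the abstract names the four ingredients of MFC but not how they are combined, and gives only the sign convention of PHDX, the sketch below is a toy interpretation: MFC as a weighted sum of the four elements, and PHDX as a normalized difference between the MFC of the earliest and latest forecast cycles, which is bounded by ±1. Both formulas are assumptions for illustration only.

```python
import numpy as np

def measure_of_forecast_challenge(mean_error, spread, nonlinearity, outliers,
                                  weights=(1.0, 1.0, 1.0, 1.0)):
    """Toy MFC: weighted sum of the four elements named in the abstract."""
    return float(np.dot(weights, (mean_error, spread, nonlinearity, outliers)))

def phdx(mfc_by_cycle):
    """Toy PHDX in [-1, 1] from MFC values ordered from the earliest
    forecast cycle (longest lead) to the cycle closest to the event.

    Positive: the challenge shrinks as the event nears (type I).
    Negative: the challenge grows as the event nears (type II).
    """
    m = np.asarray(mfc_by_cycle, dtype=float)
    early, late = m[0], m[-1]
    return 0.0 if early + late == 0 else float((early - late) / (early + late))
```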

Open access
Weihong Qian, Ning Jiang, and Jun Du

Abstract

Although the use of anomaly fields in the forecast process has been shown to be useful and has caught forecasters’ attention, current short-range (1–3 day) weather analyses and forecasts are still predominantly total-field based. This paper systematically examines the pros and cons of anomaly- versus total-field-based approaches to weather analysis using one heavy rain case from 1 July 1991 (showcase) and 41 heavy rain cases from 1998 (statistics), all of which occurred in China. The comparison is done for both basic atmospheric variables (height, temperature, wind, and humidity) and diagnostic parameters (divergence, vorticity, and potential vorticity). Generally, anomaly fields show a more enhanced and concentrated signal (pattern) directly related to anomalous surface weather events, while total fields can obscure the visualization of anomalous features because of the climatic background. The advantage is noticeable for basic atmospheric variables, marginal for nonconservative diagnostic parameters, and lost for conservative diagnostic parameters. Sometimes a mix of total and anomaly fields works best; for example, a moist vorticity that combines anomalous vorticity with total moisture depicts the heavy rain area better than either the purely total or purely anomalous moist vorticity (see the sketch below). Based on this study, it is recommended that anomaly-based weather analysis be used as a valuable supplement to the commonly used total-field-based approach. Anomalies can help a forecaster identify more quickly where an abnormal weather event might occur, and pinpoint possible meteorological causes more easily, than a total field. However, one should not use the anomaly approach alone to explain the underlying dynamics without the total field.
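
The “mix” of anomaly and total fields mentioned above can be sketched as follows: an anomalous vorticity field is weighted by a moisture factor built from the total relative humidity. The ramp-style weighting function and the 70% floor used here are assumed placeholders, not the paper's specific formulation.

```python
import numpy as np

def mixed_moist_vorticity(vort_anom, rh_total, rh_floor=0.7):
    """Hypothetical mixed parameter: anomalous vorticity weighted by total moisture.

    rh_total is relative humidity as a fraction (0-1).  The weight ramps from 0
    at rh_floor (assumed 70% RH) to 1 at saturation, so dry regions are suppressed
    while moist, anomalously cyclonic regions stand out.
    """
    weight = np.clip((np.asarray(rh_total) - rh_floor) / (1.0 - rh_floor), 0.0, 1.0)
    return np.asarray(vort_anom) * weight
```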

Full access
Jing Huang, Jun Du, and Weihong Qian

Abstract

A total of 163 tropical cyclones (TCs) occurred over the eastern China seas during 1979–2011, with four types of tracks: left turning, right turning, straight moving, and irregular. The left-turning type is unusual and hard to predict. In this paper, 133 TCs from the first three types are investigated. A generalized beta-advection model (GBAM) is derived by decomposing a meteorological field into climatic and anomalous components. The ability of the GBAM to predict tracks 1–2 days in advance is compared with that of three classical beta-advection models (BAMs). For both normal and unusual tracks, the GBAM outperformed the BAMs; its ability to predict unusual TC tracks is particularly encouraging, whereas the BAMs have no ability to predict left-turning and right-turning TC tracks. The GBAM was also used to understand unusual TC tracks because it can be separated into two forms: a climatic-flow BAM (CBAM) and an anomalous-flow BAM (ABAM). In the CBAM a TC vortex is steered by the large-scale climatic background flow, while in the ABAM a TC vortex interacts with the surrounding anomalous flows. This decomposition can be used to examine the climatic and anomalous flows separately. It is found that neither the climatic nor the anomalous flow alone can explain the unusual tracks. Sensitivity experiments show that two anomalous highs, as well as a nearby TC, played the major roles in the unusual left turn of Typhoon Aere (2004). This study demonstrates that a simple model can work well if the key factors are properly included.
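
A beta-advection model moves the TC as a point vortex with a steering flow plus a beta-drift correction; the GBAM applies the same idea after splitting the steering field into climatic and anomalous parts. The toy integrator below illustrates only the advection step: the steering-wind callables, time step, and constant beta-drift vector are assumptions, not the paper's GBAM coefficients.

```python
import numpy as np

def advect_track(lon0, lat0, steering_u, steering_v, hours,
                 dt_hours=6.0, beta_drift=(-1.0, 1.5)):
    """Integrate a TC track with steering wind + constant beta drift (m s-1).

    steering_u(lon, lat) and steering_v(lon, lat) return the steering wind
    (climatic, anomalous, or total, depending on which model is being mimicked).
    """
    deg_per_m = 1.0 / 111_000.0               # rough conversion, metres to degrees latitude
    lon, lat = float(lon0), float(lat0)
    track = [(lon, lat)]
    for _ in range(int(hours / dt_hours)):
        u = steering_u(lon, lat) + beta_drift[0]
        v = steering_v(lon, lat) + beta_drift[1]
        dt = dt_hours * 3600.0
        lat += v * dt * deg_per_m
        lon += u * dt * deg_per_m / np.cos(np.radians(lat))
        track.append((lon, lat))
    return track

# Example with a uniform 5 m s-1 easterly steering flow
print(advect_track(128.0, 22.0, lambda lo, la: -5.0, lambda lo, la: 0.0, hours=48))
```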

Full access
James D. Brown, Dong-Jun Seo, and Jun Du

Abstract

Precipitation forecasts from the Short-Range Ensemble Forecast (SREF) system of the National Centers for Environmental Prediction (NCEP) are verified for the period April 2006–August 2010. Verification is conducted for 10–20 hydrologic basins in each of the following: the middle Atlantic, the southern plains, the windward slopes of the Sierra Nevada, and the foothills of the Cascade Range in the Pacific Northwest. Mean areal precipitation is verified conditionally upon forecast lead time, amount of precipitation, season, forecast valid time, and accumulation period. The stationary block bootstrap is used to quantify the sampling uncertainties of the verification metrics. In general, the forecasts are more skillful for moderate precipitation amounts than either light or heavy precipitation. This originates from a threshold-dependent conditional bias in the ensemble mean forecast. Specifically, the forecasts overestimate low observed precipitation and underestimate high precipitation (a type-II conditional bias). Also, the forecast probabilities are generally overconfident (a type-I conditional bias), except for basins in the southern plains, where forecasts of moderate to high precipitation are reliable. Depending on location, different types of bias correction may be needed. Overall, the northwest basins show the greatest potential for statistical postprocessing, particularly during the cool season, when the type-I conditional bias and correlations are both high. The basins of the middle Atlantic and southern plains show less potential for statistical postprocessing, as the type-II conditional bias is larger and the correlations are weaker. In the Sierra Nevada, the greatest benefits of statistical postprocessing should be expected for light precipitation, specifically during the warm season, when the type-I conditional bias is large and the correlations are strong.
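
The stationary block bootstrap used to quantify sampling uncertainty resamples the verification series in blocks of random (geometrically distributed) length so that serial correlation is preserved. The sketch below bootstraps the sample mean; in practice any verification metric can be substituted, and the expected block length and resample count shown are illustrative choices.

```python
import numpy as np

def stationary_block_bootstrap(series, n_resamples=1000, mean_block=10, seed=0):
    """Bootstrap distribution of the mean of a serially correlated series."""
    rng = np.random.default_rng(seed)
    x = np.asarray(series, dtype=float)
    n = len(x)
    p = 1.0 / mean_block                     # geometric block-length parameter
    stats = np.empty(n_resamples)
    for k in range(n_resamples):
        idx = np.empty(n, dtype=int)
        i = 0
        while i < n:
            start = rng.integers(n)          # random block start
            length = min(rng.geometric(p), n - i)
            idx[i:i + length] = (start + np.arange(length)) % n  # wrap around the record
            i += length
        stats[k] = x[idx].mean()
    return stats

# 5th-95th percentile interval of the resampled means:
# np.percentile(stationary_block_bootstrap(my_series), [5, 95])
```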

Full access
Jun Du, Steven L. Mullen, and Frederick Sanders

Abstract

The impact of initial condition uncertainty (ICU) on quantitative precipitation forecasts (QPFs) is examined for a case of explosive cyclogenesis that occurred over the contiguous United States and produced widespread, substantial rainfall. The Pennsylvania State University–National Center for Atmospheric Research (NCAR) Mesoscale Model Version 4 (MM4), a limited-area model, is run at 80-km horizontal resolution and 15 layers to produce a 25-member, 36-h forecast ensemble. Lateral boundary conditions for MM4 are provided by ensemble forecasts from a global spectral model, the NCAR Community Climate Model Version 1 (CCM1). The initial perturbations of the ensemble members possess a magnitude and spatial decomposition that closely match estimates of global analysis error, but they are not dynamically conditioned. Results for the 80-km ensemble forecast are compared to forecasts from the then operational Nested Grid Model (NGM), a single 40-km/15-layer MM4 forecast, a single 80-km/29-layer MM4 forecast, and a second 25-member MM4 ensemble based on a different cumulus parameterization and slightly different unperturbed initial conditions.

Large sensitivity to ICU marks the ensemble QPF. Extrema in 6-h accumulations at individual grid points vary by as much as 3.00 in. Ensemble averaging reduces the root-mean-square error (rmse) of QPF, and nearly 90% of the improvement is obtainable with ensemble sizes as small as 8–10. Ensemble averaging can adversely affect the bias and equitable threat scores, however, because of its smoothing nature. Probabilistic forecasts for five mutually exclusive, collectively exhaustive categories are found to be skillful relative to a climatological forecast. Ensemble sizes of approximately 10 can account for 90% of the improvement in categorical forecasts relative to the average of the individual forecasts. The improvements due to short-range ensemble forecasting (SREF) techniques exceed any due to doubling the resolution, and the error growth due to ICU greatly exceeds that due to different resolutions.
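
The benefit of ensemble averaging as a function of ensemble size can be examined with a sketch like the one below, which computes the RMSE of the k-member ensemble mean for k = 1..N. Members are taken in their given order; a more careful calculation would average over random subsets.

```python
import numpy as np

def rmse(forecast, obs):
    return float(np.sqrt(np.mean((np.asarray(forecast) - np.asarray(obs)) ** 2)))

def ensemble_mean_rmse_by_size(members, obs):
    """RMSE of the k-member ensemble-mean forecast for k = 1..n_members."""
    members = np.asarray(members, dtype=float)
    return [rmse(members[:k].mean(axis=0), obs) for k in range(1, len(members) + 1)]
```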

If the authors’ results are representative, they indicate that SREF can now provide useful QPF guidance and increase the accuracy of QPF when used with current analysis–forecast systems.

Full access
Weihong Qian, Jun Du, Xiaolong Shan, and Ning Jiang

Abstract

Properly including moisture effects in a dynamical parameter can significantly increase the parameter’s ability to diagnose heavy rain locations. The relative humidity-based weighting approach used to extend the moist potential vorticity (MPV) to the generalized moist potential vorticity (GMPV) is analyzed and demonstrates such an improvement. Following the same approach, two new diagnostic parameters, moist vorticity (MV) and moist divergence (MD), are proposed in this study by incorporating moisture effects into the traditional vorticity and divergence. A regional heavy rain event that occurred along the Yangtze River on 1 July 1991 is used as a case study, and 41 daily regional heavy rain events during the notorious flooding year of 1998 in eastern China are used for a systematic evaluation. Results show that once moisture effects are properly incorporated, all three parameters capture the heavy rain area significantly better (statistically significant at the 99% confidence level): in terms of the threat score (TS), the GMPV improves on the MPV by 194%, the MD on the divergence by 60%, and the MV on the vorticity by 34%. The average TS is 0.270 for the MD, 0.262 for the MV, and 0.188 for the GMPV. Application of the MV and MD to assess heavy rain potential is not intended to replace a complete, multiscale forecasting methodology; however, the results suggest that the MV and MD could be used to postprocess a model forecast to improve heavy rain location predictions.
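
The threat score used to compare MV, MD, and GMPV is the standard contingency-table ratio sketched below.

```python
def threat_score(hits, misses, false_alarms):
    """Threat score (TS) = hits / (hits + misses + false alarms).

    1 means the diagnosed area matches the observed heavy-rain area exactly;
    0 means no overlap.
    """
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0
```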

Full access
Daniel Gombos, James A. Hansen, Jun Du, and Jeff McQueen

Abstract

A minimum spanning tree (MST) rank histogram (RH) is a multidimensional ensemble reliability verification tool. The construction of debiased, decorrelated, and covariance-homogenized MST RHs is described. Experiments using Euclidean L2, variance, and Mahalanobis norms imply that, unless the number of ensemble members is less than or equal to the number of dimensions being verified, the Mahalanobis norm transforms the problem into a space where ensemble imperfections are most readily identified. Short-Range Ensemble Forecast Mahalanobis-normed MST RHs for a cluster of northeastern U.S. cities show that forecasts of the temperature–humidity index are the most reliable of those considered, followed by mean sea level pressure, 2-m temperature, and 10-m wind speed forecasts. MST RHs for a cluster of Southwest cities show that 2-m temperature forecasts are the most reliable in this region, followed by mean sea level pressure, 10-m wind speed, and the temperature–humidity index. Forecasts for the Southwest cluster are generally less reliable than those for the Northeast cluster.
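
The core of an MST rank histogram can be sketched as follows: for each verification case, compare the total MST edge length of the ensemble alone with the lengths obtained when the observation replaces each member in turn, and tally the rank of the ensemble-only length. This sketch uses plain Euclidean distances; the Mahalanobis-normed version described in the abstract would first rescale the points with the (debiased) ensemble covariance.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree of points (n, d)."""
    return float(minimum_spanning_tree(squareform(pdist(points))).sum())

def mst_rank(ensemble, obs):
    """Rank (1..n_members+1) of the members-only MST length for one case.

    ensemble: array (n_members, n_dims); obs: array (n_dims,).
    Tallying this rank over many cases builds the MST rank histogram:
    a flat histogram indicates a reliable ensemble.
    """
    ensemble = np.asarray(ensemble, dtype=float)
    lengths = [mst_length(ensemble)]
    for i in range(len(ensemble)):
        swapped = ensemble.copy()
        swapped[i] = obs                      # observation substitutes member i
        lengths.append(mst_length(swapped))
    return 1 + sum(l < lengths[0] for l in lengths[1:])
```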

Full access