Search Results

Showing 1–10 of 25 items for Author or Editor: Jun Du (all content).
Binbin Zhou and Jun Du

Abstract

A new multivariable-based diagnostic fog-forecasting method has been developed at NCEP. The selection of the variables, their thresholds, and their influence on fog forecasting are discussed. With the inclusion of the algorithm in the model postprocessor, the fog forecast can now be provided centrally as direct NWP model guidance. The method can be easily adapted to other NWP models. Currently, knowledge of how well fog forecasts based on operational NWP models perform is lacking. To verify the new method and assess fog forecast skill, as well as to account for forecast uncertainty, this fog-forecasting algorithm is applied to a multimodel-based Mesoscale Ensemble Prediction System (MEPS). MEPS consists of 10 members using two regional models [the NCEP Nonhydrostatic Mesoscale Model (NMM) version of the Weather Research and Forecasting (WRF) model and the NCAR Advanced Research version of WRF (ARW)] with 15-km horizontal resolution. Each model has five members (one control and four perturbed members, with the breeding technique used to perturb the initial conditions) and was run once per day out to 36 h over eastern China for seven months (February–September 2008). Both deterministic and probabilistic forecasts were produced based on individual members, a one-model ensemble, and two-model ensembles. A case study and statistical verification, using both deterministic and probabilistic measuring scores, were performed against fog observations from 13 cities in eastern China. The verification focused on the 12- and 36-h forecasts.

By successively applying the new fog detection scheme, the ensemble technique, the multimodel approach, and an increase in ensemble size, fog forecast accuracy improved steadily and dramatically: from essentially no skill [equitable threat score (ETS) = 0.063] to a skill level equivalent to that of warm-season precipitation forecasts from current NWP models (0.334). Specifically, 1) the multivariable-based fog diagnostic method has a much higher detection capability than the approach based only on liquid water content (LWC). Reasons why the multivariable approach works better than the LWC-only method were also illustrated. 2) The ensemble-based forecasts are, in general, superior to a single control forecast, measured both deterministically and probabilistically. The case study also demonstrates that the ensemble approach could provide more societal value than a single forecast to end users, especially for low-probability significant events like fog. Deterministically, a forecast close to the ensemble median is particularly helpful. 3) The reliability of probabilistic forecasts can be effectively improved by using a multimodel ensemble instead of a single-model ensemble. For a small ensemble such as the one in this study, an increase in ensemble size is also important in improving probabilistic forecasts, although this effect is expected to diminish as the ensemble grows.
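The equitable threat score cited above is a standard contingency-table measure of forecast skill; a minimal sketch in Python (the function name is illustrative, not from the paper):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS: the threat score adjusted for hits expected by random chance.

    Inputs are counts from a 2x2 yes/no contingency table of forecast
    versus observed events (e.g. fog / no fog).
    """
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected from a random forecast with the same event frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    denom = hits + misses + false_alarms - hits_random
    if denom == 0:
        return 0.0
    return (hits - hits_random) / denom
```

ETS is 1 for a perfect forecast, 0 for a random one, and can be slightly negative for a forecast worse than chance, which is why 0.063 reads as "basically no skill."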

Full access
Jun Du and Binbin Zhou

Abstract

This study proposes a dynamical performance-ranking method (called the Du–Zhou ranking method) to predict the relative performance of individual ensemble members by assuming the ensemble mean is a good estimation of the truth. The results show that the method 1) generally works well, especially for shorter ranges such as a 1-day forecast; 2) has less error in predicting the extreme (best and worst) performers than the intermediate performers; 3) works better when the variation in performance among ensemble members is large; 4) works better when the model bias is small; 5) works better in a multimodel than in a single-model ensemble environment; and 6) works best when using the magnitude difference between a member and its ensemble mean as the “distance” measure in ranking members. The ensemble mean and median generally perform similarly to each other.
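Finding 6 above suggests a simple implementation: measure each member's distance to the ensemble mean and rank members by that distance. A sketch, assuming gridded forecasts stored as NumPy arrays and interpreting the "distance" as the RMS difference over the domain (the function name is illustrative):

```python
import numpy as np

def rank_members(members):
    """Predict relative member performance, Du-Zhou style.

    `members` has shape (n_members, ...). Assuming the ensemble mean is
    a good estimate of the truth, the member closest to the mean is
    predicted to be the best performer.
    """
    ens_mean = members.mean(axis=0)
    # One plausible reading of the "magnitude difference" distance:
    # RMS of the member-minus-mean field over all grid points.
    diffs = (members - ens_mean) ** 2
    distances = np.sqrt(diffs.reshape(len(members), -1).mean(axis=1))
    return np.argsort(distances)  # predicted best (closest to mean) first
```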

This method was applied to a weighted ensemble average to see if it can improve the ensemble mean forecast over a commonly used, simple equally weighted ensemble averaging method. The results indicate that the weighted ensemble mean forecast has a smaller systematic error. This superiority of the weighted over the simple mean is especially true for smaller-sized ensembles, such as 5 and 11 members, but it decreases with the increase in ensemble size and almost vanishes when the ensemble size increases to 21 members. There is, however, little impact on the random error and the spatial patterns of ensemble mean forecasts. These results imply that it might be difficult to improve the ensemble mean by just weighting members when an ensemble reaches a certain size. However, it is found that the weighted averaging can reduce the total forecast error more when a raw ensemble-mean forecast itself is less accurate. It is also expected that the effectiveness of weighted averaging should be improved when the ensemble spread is improved or when the ranking method itself is improved, although such an improvement should not be expected to be too big (probably less than 10%, on average).
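A weighted ensemble average driven by predicted ranks can be sketched as follows; the abstract does not give the actual weighting scheme, so the inverse-rank weights here are purely hypothetical:

```python
import numpy as np

def weighted_ensemble_mean(members, ranks):
    """Weighted ensemble average using predicted performance ranks.

    `members` has shape (n_members, ...); `ranks` lists member indices
    from predicted best to predicted worst. Hypothetical weights
    proportional to 1/(position + 1) let the predicted-best member
    contribute most; an equally weighted mean is the usual baseline.
    """
    n = len(members)
    weights = np.zeros(n)
    for position, member_index in enumerate(ranks):
        weights[member_index] = 1.0 / (position + 1)
    weights /= weights.sum()  # normalize so the weights sum to 1
    return np.tensordot(weights, np.asarray(members), axes=1)
```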

Full access
Jing Huang, Jun Du, and Weihong Qian

Abstract

A total of 163 tropical cyclones (TCs) occurred in the eastern China seas during 1979–2011 with four types of tracks: left turning, right turning, straight moving, and irregular. The left-turning type is unusual and hard to predict. In this paper, 133 TCs from the first three types have been investigated. A generalized beta–advection model (GBAM) is derived by decomposing a meteorological field into climatic and anomalous components. The ability of the GBAM to predict tracks 1–2 days in advance is compared with three classical beta–advection models (BAMs). For both normal and unusual tracks, the GBAM apparently outperformed the BAMs. The GBAM’s ability to predict unusual TC tracks is particularly encouraging, while the BAMs have no ability to predict the left-turning and right-turning TC tracks. The GBAM was also used to understand unusual TC tracks because it can be separated into two forms: a climatic-flow BAM (CBAM) and an anomalous-flow BAM (ABAM). In the CBAM a TC vortex is steered by the large-scale climatic background flow, while in the ABAM, a TC vortex interacts with the surrounding anomalous flows. This decomposition approach can be used to examine the climatic and anomalous flows separately. It is found that neither the climatic nor the anomalous flow alone can explain unusual tracks. Sensitivity experiments show that two anomalous highs as well as a nearby TC played the major roles in the unusual left turn of Typhoon Aere (2004). This study demonstrates that a simple model can work well if key factors are properly included.
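The decomposition underlying the GBAM, splitting a meteorological field into a climatic component and an anomaly, can be sketched as follows (a generic illustration, not the paper's code):

```python
import numpy as np

def decompose(fields):
    """Split a field into climatic and anomalous components.

    `fields` has shape (n_years, ...): the same calendar date sampled
    over many years. The climatic component is the multiyear mean and
    the anomaly is the departure from it, so at every grid point
    total = climate + anomaly.
    """
    climate = fields.mean(axis=0)
    anomaly = fields - climate
    return climate, anomaly
```

In the paper's terms, steering a TC vortex with the climatic flow alone gives the CBAM, and letting it interact with the anomalous flow alone gives the ABAM, which is what allows the two influences to be examined separately.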

Full access
Steven L. Mullen, Jun Du, and Frederick Sanders

Abstract

The impact of differences in analysis–forecast systems on dispersion of an ensemble forecast is examined for a case of cyclogenesis. Changes in the dispersion properties between two 25-member ensemble forecasts with different cumulus parameterization schemes and different initial analyses are compared. The statistical significance of the changes is assessed.

Error growth due to initial condition uncertainty depends significantly on the analysis–forecast system. Quantitative precipitation forecasts and probabilistic quantitative precipitation forecasts are extremely sensitive to the specification of physical parameterizations in the model. Regions of large variability tend to coincide with a high likelihood of parameterized convection. Analysis of other model fields suggests that those with relatively large energy in the mesoscale also exhibit highly significant differences in dispersion.

The results presented here provide evidence that the combined effect of uncertainties in model physics and the initial state provides a means to increase the dispersion of ensemble prediction systems, but care must be taken in the construction of mixed ensemble systems to ensure that other properties of the ensemble distribution are not overly degraded.

Full access
Jun Du, Steven L. Mullen, and Frederick Sanders

Abstract

Large errors developed by 24 h in a 25-member ensemble forecast of precipitation amount. The errors could be attributed to an insufficient northeastward motion of the area of precipitation and to excessive amounts. This was determined by partitioning the root-mean-square error into a distortion error (the sum of contributions from incorrect position and magnitude) and a residual error. The distortion error accounted for more than half of the total error. It occurs on the synoptic scale and can likely be somewhat ameliorated by future improvements in analysis–forecast systems. The residual error occurs at smaller, less predictable scales, and prospects for its deterministic improvement are not so sanguine.
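The partition described above can be illustrated with a toy 1-D example, where the position contribution is estimated by optimally shifting the forecast and the magnitude contribution by the mean-amplitude difference of the aligned fields; this is a simplified sketch of the idea, not the paper's exact algorithm:

```python
import numpy as np

def partition_error(forecast, observed, max_shift=5):
    """Toy partition of mean-square error into position, magnitude,
    and residual parts for 1-D periodic fields."""
    total = np.mean((forecast - observed) ** 2)
    # Position: the shift that best aligns the forecast with the
    # observation; the MSE reduction from shifting is the position part.
    shifts = range(-max_shift, max_shift + 1)
    best = min(shifts,
               key=lambda s: np.mean((np.roll(forecast, s) - observed) ** 2))
    aligned = np.roll(forecast, best)
    after_shift = np.mean((aligned - observed) ** 2)
    position = total - after_shift
    # Magnitude: squared mean-amplitude difference of the aligned fields.
    magnitude = (aligned.mean() - observed.mean()) ** 2
    # Residual: what shifting and rescaling cannot explain.
    residual = after_shift - magnitude
    return {"total": total, "position": position,
            "magnitude": magnitude, "residual": residual}
```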

Full access
Jun Du, Steven L. Mullen, and Frederick Sanders

Abstract

The impact of initial condition uncertainty (ICU) on quantitative precipitation forecasts (QPFs) is examined for a case of explosive cyclogenesis that occurred over the contiguous United States and produced widespread, substantial rainfall. The Pennsylvania State University–National Center for Atmospheric Research (NCAR) Mesoscale Model Version 4 (MM4), a limited-area model, is run at 80-km horizontal resolution and 15 layers to produce a 25-member, 36-h forecast ensemble. Lateral boundary conditions for MM4 are provided by ensemble forecasts from a global spectral model, the NCAR Community Climate Model Version 1 (CCM1). The initial perturbations of the ensemble members possess a magnitude and spatial decomposition that closely match estimates of global analysis error, but they are not dynamically conditioned. Results for the 80-km ensemble forecast are compared to forecasts from the then operational Nested Grid Model (NGM), a single 40-km/15-layer MM4 forecast, a single 80-km/29-layer MM4 forecast, and a second 25-member MM4 ensemble based on a different cumulus parameterization and slightly different unperturbed initial conditions.

Large sensitivity to ICU marks ensemble QPF. Extrema in 6-h accumulations at individual grid points vary by as much as 3.00". Ensemble averaging reduces the root-mean-square error (rmse) for QPF. Nearly 90% of the improvement is obtainable using ensemble sizes as small as 8–10. Ensemble averaging can adversely affect the bias and equitable threat scores, however, because of its smoothing nature. Probabilistic forecasts for five mutually exclusive, completely exhaustive categories are found to be skillful relative to a climatological forecast. Ensemble sizes of approximately 10 can account for 90% of improvement in categorical forecasts relative to that for the average of individual forecasts. The improvements due to short-range ensemble forecasting (SREF) techniques exceed any due to doubling the resolution, and the error growth due to ICU greatly exceeds that due to different resolutions.
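The diminishing return from ensemble size noted above can be illustrated with a toy model in which member errors are independent standard normal draws (an idealized assumption, not real QPF statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

def mean_rmse(n_members):
    """RMSE of an n-member ensemble-mean forecast when each member's
    error is an independent standard normal draw."""
    errors = rng.normal(0.0, 1.0, size=(n_trials, n_members))
    ens_mean_error = errors.mean(axis=1)
    return np.sqrt(np.mean(ens_mean_error ** 2))

# The ensemble-mean RMSE shrinks roughly like 1/sqrt(n), so most of
# the achievable reduction is already realized by ~10 members.
for n in (1, 2, 5, 10, 25):
    print(n, round(mean_rmse(n), 3))
```

Under these idealized assumptions, going from 1 to 10 members removes most of the reducible error, consistent with the ~90% figure quoted for ensemble sizes of 8–10.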

If the authors’ results are representative, they indicate that SREF can now provide useful QPF guidance and increase the accuracy of QPF when used with current analysis–forecast systems.

Full access
Weihong Qian, Ning Jiang, and Jun Du

Abstract

Mathematical derivation, meteorological justification, and comparison to model direct precipitation forecasts are the three main concerns recently raised by Schultz and Spengler about moist divergence (MD) and moist vorticity (MV), which were introduced in earlier work by Qian et al. That previous work demonstrated that MD (MV) can in principle be derived mathematically with a value-added empirical modification. MD (MV) has a solid meteorological basis. It combines ascent motion and high moisture: the two elements necessary for rainfall. However, precipitation efficiency is not considered in MD (MV). Given the omission of an advection term in the mathematical derivation and the lack of precipitation efficiency, MD (MV) might be suitable mainly for heavy rain events with large areal coverage and long duration caused by large-scale quasi-stationary weather systems, but not for local intense heavy rain events caused by small-scale convection. In addition, MD (MV) is not capable of describing precipitation intensity. MD (MV) worked reasonably well in predicting heavy rain locations from short to medium ranges as compared with the ECMWF model precipitation forecasts. MD (MV) was generally worse than (though sometimes similar to) the model heavy rain forecast at shorter ranges (about a week) but became comparable or even better at longer ranges (around 10 days). It should be reiterated that MD (MV) is not intended to be a primary tool for predicting heavy rain areas, especially in the short range, but is a useful parameter for calibrating model heavy precipitation forecasts, as stated in the original paper.

Full access
Weihong Qian, Jun Du, and Yang Ai

Abstract

Comparisons between anomaly and full-field methods have been carried out in weather analysis and forecasting over the last decade. Evidence from these studies has demonstrated the superiority of the anomaly approach over the full-field approach in four aspects: depiction of weather systems, anomaly forecasts, diagnostic parameters, and model prediction. To promote the use and further discussion of the anomaly approach, this article summarizes those findings.

After examining many types of weather events, anomaly weather maps show at least five advantages in weather system depiction: (1) less vagueness in visually connecting the location of an event with its associated meteorological conditions; (2) clearer and more complete depiction of the vertical structure of a disturbance; (3) easier observation of the temporal and spatial evolution of an event and its interaction or connection with other weather systems; (4) simplification of conceptual models by unifying different weather systems into one pattern; and (5) extension of model forecast length due to earlier detection of predictors. Anomaly verification is also discussed. The anomaly forecast is useful for raising awareness of potential societal impact; combining the anomaly forecast with an ensemble is emphasized, and a societal impact index is discussed. For diagnostic parameters, two examples are given: an anomalous convective instability index for convection, and seven vorticity- and divergence-related parameters for heavy rain. Both showed positive contributions from the anomalous fields. For model prediction, the anomaly version of the beta-advection model consistently outperformed its full-field version in predicting typhoon tracks, with a clearer physical explanation. Application of anomaly global models to seasonal forecasts is also reviewed.

Full access
Weihong Qian, Ning Jiang, and Jun Du

Abstract

Although the use of anomaly fields in the forecast process has been shown to be useful and has caught forecasters’ attention, current short-range (1–3 days) weather analyses and forecasts are still predominantly total-field based. This paper systematically examines the pros and cons of anomaly- versus total-field-based approaches in weather analysis using a case from 1 July 1991 (showcase) and 41 cases from 1998 (statistics) of heavy rain events that occurred in China. The comparison is done for both basic atmospheric variables (height, temperature, wind, and humidity) and diagnostic parameters (divergence, vorticity, and potential vorticity). Generally, anomaly fields show a more enhanced and concentrated signal (pattern) directly related to surface anomalous weather events, while total fields can obscure the visualization of anomalous features because of the climatic background. The advantage is noticeable in basic atmospheric variables, but is marginal in nonconservative diagnostic parameters and is lost in conservative diagnostic parameters. Sometimes a mix of total and anomaly fields works best; for example, when anomalous vorticity is combined with total moisture in the moist vorticity, it depicts the heavy rain area better than either the purely total or the purely anomalous moist vorticity. Based on this study, it is recommended that anomaly-based weather analysis be used as a valuable supplement to the commonly used total-field-based approach. Anomalies can help a forecaster to more quickly identify where an abnormal weather event might occur, and to more easily pinpoint possible meteorological causes, than a total field can. However, one should not use the anomaly approach alone to explain the underlying dynamics without the total field.

Full access
Jun Du, Binbin Zhou, and Jason Levit

Abstract

Responding to the call for new verification methods in a recent editorial in Weather and Forecasting, this study proposes two new verification metrics to quantify the forecast challenges that a user faces in decision-making when using ensemble models. The measure of forecast challenge (MFC) combines forecast error and uncertainty information into a single score. It consists of four elements: ensemble mean error, spread, nonlinearity, and outliers. The cross correlation among the four elements indicates that each element contains independent information. The relative contribution of each element to the MFC is analyzed by calculating the correlation between each element and MFC. The biggest contributor is the ensemble mean error, followed by the ensemble spread, nonlinearity, and outliers. By applying MFC to the predictability horizon diagram of a forecast ensemble, a predictability horizon diagram index (PHDX) is defined to quantify how the ensemble evolves at a specific location as an event approaches. The value of PHDX varies between 1.0 and −1.0. A positive PHDX indicates that the forecast challenge decreases as an event nears (type I), providing credible forecast information to users. A negative PHDX indicates that the forecast challenge increases as an event nears (type II), providing misleading information to users. A near-zero PHDX indicates that the forecast challenge remains large as an event nears, providing largely uncertain information to users. Unlike current verification metrics that verify a forecast at a particular point in time, PHDX verifies a forecasting process through many forecasting cycles. Forecasting-process-oriented verification could be a new direction in model verification. The sample ensemble forecasts used in this study are produced from the NCEP global and regional ensembles.
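Since the abstract names MFC's four elements but not how they are combined, the sketch below uses a purely hypothetical normalized weighted sum for illustration; the function name and weights are not from the paper:

```python
import numpy as np

def measure_of_forecast_challenge(mean_error, spread, nonlinearity, outliers,
                                  weights=(1.0, 1.0, 1.0, 1.0)):
    """Hypothetical MFC-style score.

    The paper combines ensemble mean error, spread, nonlinearity, and
    outliers into one number; this particular weighted average of the
    (assumed already-normalized) elements is illustrative only.
    """
    elements = np.array([mean_error, spread, nonlinearity, outliers],
                        dtype=float)
    w = np.array(weights, dtype=float)
    return float(np.dot(w, elements) / w.sum())
```

Tracking such a score across forecast cycles as an event approaches is what the PHDX then summarizes: falling challenge gives a positive index (type I), rising challenge a negative one (type II).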

Open access