Search Results

You are looking at 11–20 of 21 items for:

  • Author or Editor: Jun Du
David J. Stensrud, Harold E. Brooks, Jun Du, M. Steven Tracton, and Eric Rogers

Abstract

Numerical forecasts from a pilot program on short-range ensemble forecasting at the National Centers for Environmental Prediction are examined. The ensemble consists of 10 forecasts made using the 80-km Eta Model and 5 forecasts from the regional spectral model. Results indicate that the accuracy of the ensemble mean is comparable to that from the 29-km Meso Eta Model for both mandatory level data and the 36-h forecast cyclone position. Calculations of spread indicate that at 36 and 48 h the spread from initial conditions created using the breeding of growing modes technique is larger than the spread from initial conditions created using different analyses. However, the accuracy of the forecast cyclone position from these two initialization techniques is nearly identical. Results further indicate that using two different numerical models assists in increasing the ensemble spread significantly.

There is little correlation between the spread in the ensemble members and the accuracy of the ensemble mean for the prediction of cyclone location. Since information on forecast uncertainty is needed in many applications, and is one of the reasons to use an ensemble approach, the lack of a correlation between spread and forecast uncertainty presents a challenge to the production of short-range ensemble forecasts.

Even though the ensemble dispersion is not found to be an indication of forecast uncertainty, significant spread can occur within the forecasts over a relatively short time period. Examples are shown to illustrate how small uncertainties in the model initial conditions can lead to large differences in numerical forecasts from an identical numerical model.
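The spread–error relation discussed in this abstract can be illustrated with a minimal sketch (synthetic ensemble values and observations standing in for the pilot-program forecasts; this is not the authors' code):

```python
import numpy as np

# Synthetic example: 50 forecast cases, 15 ensemble members each
# (stand-ins for the 10 Eta + 5 RSM members; values are made up).
rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 15))          # ensemble forecasts of some scalar quantity
obs = rng.normal(size=50)                # matching verifying observations

ens_mean = ens.mean(axis=1)
spread = ens.std(axis=1, ddof=1)         # ensemble standard deviation per case
abs_error = np.abs(ens_mean - obs)       # ensemble-mean absolute error per case

# Spread–error correlation: a value near zero indicates spread is a poor
# predictor of ensemble-mean accuracy, as reported for cyclone position.
corr = np.corrcoef(spread, abs_error)[0, 1]
print(f"spread-error correlation: {corr:.2f}")
```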

Full access
Huiling Yuan, Steven L. Mullen, Xiaogang Gao, Soroosh Sorooshian, Jun Du, and Hann-Ming Henry Juang

Abstract

The National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) is used to produce twice-daily (0000 and 1200 UTC), high-resolution ensemble forecasts to 24 h. The forecasts are performed at an equivalent horizontal grid spacing of 12 km for the period 1 November 2002 to 31 March 2003 over the southwest United States. The performance of 6-h accumulated precipitation is assessed for 32 U.S. Geological Survey hydrologic catchments. Multiple accuracy and skill measures are used to evaluate probabilistic quantitative precipitation forecasts. NCEP stage-IV precipitation analyses are used as “truth,” with verification performed on the stage-IV 4-km grid. The RSM ensemble exhibits a ubiquitous wet bias. The bias manifests itself in areal coverage, frequency of occurrence, and total accumulated precipitation over every region and during every 6-h period. The biases become particularly acute starting with the 1800–0000 UTC interval, which leads to a spurious diurnal cycle, with the 1200 UTC forecast cycle more adversely affected than the 0000 UTC cycle. Forecast quality and value exhibit marked variability over different hydrologic regions. The forecasts are highly skillful along coastal California and the windward slopes of the Sierra Nevada Mountains, but they generally lack skill over the Great Basin and the Colorado River basin except over mountain peaks. The RSM ensemble is able to discriminate precipitation events and provide useful guidance to a wide range of users over most regions of California, which suggests that mitigation of the conditional biases through statistical postprocessing would produce major improvements in skill.
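The wet bias described here is the kind of quantity a simple coverage/volume check would expose. The sketch below uses made-up gamma-distributed fields in place of the RSM ensemble and stage-IV analyses, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 6-h accumulations on a shared verification grid (mm).
fcst = rng.gamma(shape=0.4, scale=5.0, size=(11, 200, 200))  # 11 members
obs = rng.gamma(shape=0.4, scale=4.0, size=(200, 200))       # "truth" analysis

threshold = 1.0  # mm per 6 h

# Frequency (areal-coverage) bias: ratio of forecast to observed exceedance area.
fcst_cover = (fcst >= threshold).mean()          # averaged over members and grid
obs_cover = (obs >= threshold).mean()
print("coverage bias:", fcst_cover / obs_cover)  # > 1 indicates a wet bias

# Total-accumulation (volume) bias over the domain.
print("volume bias:", fcst.mean() / obs.mean())
```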

Full access
Huiling Yuan, Steven L. Mullen, Xiaogang Gao, Soroosh Sorooshian, Jun Du, and Hann-Ming Henry Juang

Abstract

The National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM) is used to generate ensemble forecasts over the southwest United States during the 151 days of 1 November 2002 to 31 March 2003. RSM forecasts to 24 h on a 12-km grid are produced from 0000 and 1200 UTC initial conditions. Eleven ensemble members are run each forecast cycle from the NCEP Global Forecast System (GFS) ensemble analyses (one control and five pairs of bred modes) and forecast lateral boundary conditions. The model domain covers two NOAA River Forecast Centers: the California Nevada River Forecast Center (CNRFC) and the Colorado Basin River Forecast Center (CBRFC). Ensemble performance is evaluated for probabilistic forecasts of 24-h accumulated precipitation in terms of several accuracy and skill measures. Differences among several NCEP precipitation analyses are assessed along with their impact on model verification, with NCEP stage IV blended analyses selected to represent “truth.”

Forecast quality and potential value are found to depend strongly on the verification dataset, geographic region, and precipitation threshold. In general, the RSM forecasts are skillful over the CNRFC region for thresholds between 1 and 50 mm but are unskillful over the CBRFC region. The model exhibits a wet bias for all thresholds that is larger over Nevada and the CBRFC region than over California. Mitigation of such biases over the Southwest will pose serious challenges to the modeling community in view of the uncertainties inherent in verifying analyses.
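As a point of reference for the probabilistic scoring described above, a minimal Brier-score and Brier-skill-score computation against sample climatology might look like the following (synthetic 24-h precipitation values at a single point, not the study's verification code):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 24-h precipitation: 151 days x 11 members at one grid point (mm).
ens = rng.gamma(shape=0.5, scale=8.0, size=(151, 11))
obs = rng.gamma(shape=0.5, scale=6.0, size=151)

threshold = 10.0                                  # mm per 24 h
prob = (ens >= threshold).mean(axis=1)            # exceedance probability per day
event = (obs >= threshold).astype(float)          # observed occurrence (0/1)

brier = np.mean((prob - event) ** 2)              # Brier score of the ensemble
clim = event.mean()                               # sample climatological frequency
brier_ref = np.mean((clim - event) ** 2)          # reference (climatology) score
bss = 1.0 - brier / brier_ref                     # Brier skill score (>0 is skillful)
print(f"BS={brier:.3f}  BSS={bss:.3f}")
```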

Full access
Guo Deng, Jun Du, Yushu Zhou, Ling Yan, Jing Chen, Fajing Chen, Hongqi Li, and Jingzhou Wang

Abstract

Using a 3-km regional ensemble prediction system (EPS), this study tested a three-dimensional (3D) rescaling mask for initial condition (IC) perturbation. Whether the 3D mask-based EPS improves ensemble forecasts over the current two-dimensional (2D) mask-based EPS was evaluated in three aspects: ensemble mean, spread, and probability. The forecasts of wind, temperature, geopotential height, sea level pressure, and precipitation were examined for a summer month (1–28 July 2018) and a winter month (1–27 February 2019) over a region in North China. The EPS was run twice per day (initiated at 0000 and 1200 UTC) to 36 h in forecast length, providing 56 warm-season forecast cases and 54 cold-season cases for verification. The warm and cold seasons are verified separately for comparison. The study found the following: 1) The vertical profile of IC perturbation becomes closer to that of analysis uncertainty with the 3D rescaling mask. 2) Ensemble performance is significantly improved in all three aspects. The biggest improvement is in the ensemble spread, followed by the probabilistic forecast, with the smallest improvement in the ensemble mean forecast. Larger improvements are seen in the warm season than in the cold season. 3) The improvement is greater in the shorter time range (<24 h) than in the longer range. 4) Surface and lower-level variables are improved more than upper-level ones. 5) The underlying mechanism for the improvement has been investigated: convective instability is found to be responsible for the spread increment and, thus, the overall ensemble forecast improvement. Therefore, using a 3D rescaling mask is recommended for an EPS to increase its utility, especially for the shorter time range and for surface weather elements.

Significant Statement

A weather prediction model is a complex system that consists of nonlinear differential equations. Small errors in either its inputs or the model itself grow with time during model integration and contaminate a forecast. To quantify such contamination (“uncertainty”) of a forecast, the ensemble forecasting technique is used. An ensemble of forecasts is a set of model runs valid at the same time but with slightly “perturbed” inputs or model versions. These small perturbations are intended to represent the true “uncertainty” in the inputs or the model representation. This study proposed a technique that makes a perturbation’s vertical structure more closely resemble the real uncertainty (intrinsic error) in the input data and confirmed that it can significantly improve ensemble forecast quality, especially for shorter time ranges and lower-level weather elements. It is found that convective instability is responsible for the improvement.
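A heavily simplified sketch of the idea behind a 3D rescaling mask follows: the initial-condition perturbation is scaled level by level toward a target amplitude field, so its vertical profile follows an assumed analysis-uncertainty profile. The function name, array shapes, and the uncertainty profile are all hypothetical, not the operational implementation:

```python
import numpy as np

def rescale_perturbation(pert, target_amp, current_amp, eps=1e-12):
    """Scale an IC perturbation toward a target amplitude field.

    pert        : raw perturbation, shape (nlev, nlat, nlon)
    target_amp  : desired perturbation amplitude; 3D here (varies with level),
                  whereas a 2D mask would be constant in the vertical
    current_amp : current perturbation amplitude (same shape as target_amp)
    """
    factor = target_amp / np.maximum(current_amp, eps)
    return pert * factor

# Toy example with made-up fields.
rng = np.random.default_rng(3)
nlev, nlat, nlon = 20, 60, 80
pert = rng.normal(size=(nlev, nlat, nlon))

# Hypothetical analysis-uncertainty profile: larger near the surface.
profile = np.linspace(1.5, 0.5, nlev)[:, None, None]
target_amp = np.broadcast_to(profile, pert.shape)
current_amp = np.broadcast_to(pert.std(axis=(1, 2))[:, None, None], pert.shape)

pert_3d = rescale_perturbation(pert, target_amp, current_amp)
print(pert_3d.std(axis=(1, 2))[:3])   # vertical profile now follows the target
```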

Open access
Jingzhuo Wang, Jing Chen, Jun Du, Yutao Zhang, Yu Xia, and Guo Deng

Abstract

This study demonstrates how model bias can adversely affect the quality assessment of an ensemble prediction system (EPS) by verification metrics. A regional EPS [Global and Regional Assimilation and Prediction Enhanced System-Regional Ensemble Prediction System (GRAPES-REPS)] was verified over a period of one month over China. Three variables (500-hPa and 2-m temperatures, and 250-hPa wind) are selected to represent “strong” and “weak” bias situations. Ensemble spread and probabilistic forecasts are compared before and after a bias correction. The results show that the conclusions drawn from ensemble verification about the EPS are dramatically different with or without model bias. This is true for both ensemble spread and probabilistic forecasts. The GRAPES-REPS is severely underdispersive before the bias correction but becomes calibrated afterward, although the improvement in the spread’s spatial structure is much smaller; the spread–skill relation is also improved. The probabilities become much sharper and almost perfectly reliable after the bias is removed. Therefore, it is necessary to remove forecast biases before an EPS can be accurately evaluated, since an EPS deals only with random error, not systematic error. Only when an EPS has little or no forecast bias can ensemble verification metrics, applied without removing the bias first, reliably reveal its true quality. An implication is that EPS developers should not be expected to introduce methods that dramatically increase ensemble spread (whether by perturbation methods or statistical calibration) in order to achieve reliability. Instead, the preferred solution is to reduce model bias through prediction system development and to focus on the quality of the spread rather than its quantity. Forecast products should also be produced from the debiased ensemble rather than the raw ensemble.
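The effect described here, model bias masquerading as underdispersion, can be reproduced with a toy example. Everything below is synthetic (a shared 2.0 bias plus random member error); it is not the GRAPES-REPS verification:

```python
import numpy as np

rng = np.random.default_rng(4)
ncase, nmem, sigma, bias = 120, 15, 0.8, 2.0

state = rng.normal(size=ncase)                               # "true" atmospheric state
truth = state + rng.normal(scale=sigma, size=ncase)          # verifying analysis
ens = (state[:, None] + bias                                 # shared systematic error
       + rng.normal(scale=sigma, size=(ncase, nmem)))        # member-to-member (random) error

def spread_and_rmse(e, o):
    spread = e.std(axis=1, ddof=1).mean()                    # mean ensemble spread
    rmse = np.sqrt(np.mean((e.mean(axis=1) - o) ** 2))       # ensemble-mean RMSE
    return round(spread, 2), round(rmse, 2)

print("raw     :", spread_and_rmse(ens, truth))              # spread << RMSE: looks underdispersive
mean_bias = (ens.mean(axis=1) - truth).mean()                # estimated systematic error
print("debiased:", spread_and_rmse(ens - mean_bias, truth))  # spread ~ RMSE: near-calibrated
```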

Open access
Huiling Yuan, Xiaogang Gao, Steven L. Mullen, Soroosh Sorooshian, Jun Du, and Hann-Ming Henry Juang

Abstract

A feed-forward neural network is configured to calibrate the bias of a high-resolution probabilistic quantitative precipitation forecast (PQPF) produced by a 12-km version of the NCEP Regional Spectral Model (RSM) ensemble forecast system. Twice-daily forecasts during the 2002–2003 cool season (1 November–31 March, inclusive) are run over four U.S. Geological Survey (USGS) hydrologic unit regions of the southwest United States. Calibration is performed via a cross-validation procedure, where four months are used for training and the excluded month is used for testing. The PQPFs before and after the calibration over a hydrological unit region are evaluated by comparing the joint probability distribution of forecasts and observations. Verification is performed on the 4-km stage IV grid, which is used as “truth.” The calibration procedure improves the Brier score (BrS), conditional bias (reliability) and forecast skill, such as the Brier skill score (BrSS) and the ranked probability skill score (RPSS), relative to the sample frequency for all geographic regions and most precipitation thresholds. However, the procedure degrades the resolution of the PQPFs by systematically producing more forecasts with low nonzero forecast probabilities that drive the forecast distribution closer to the climatology of the training sample. The problem of degrading the resolution is most severe over the Colorado River basin and the Great Basin for relatively high precipitation thresholds where the sample of observed events is relatively small.
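A rough sketch of the leave-one-month-out calibration idea is given below, using scikit-learn's MLPClassifier as a generic stand-in for the paper's feed-forward network; the single predictor (raw ensemble probability), the synthetic wet-biased data, and the network size are all assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
# Synthetic training set: raw ensemble exceedance probability as the single
# predictor and observed event occurrence as the target (not the paper's setup).
n = 3000
raw_prob = rng.uniform(0.0, 1.0, size=n)
event = (rng.uniform(size=n) < 0.6 * raw_prob).astype(int)   # wet-biased forecasts
month = rng.integers(0, 5, size=n)                           # 5 cool-season months

# Leave-one-month-out cross-validation, as in the calibration procedure described.
calibrated = np.empty(n)
for m in range(5):
    train, test = month != m, month == m
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(raw_prob[train].reshape(-1, 1), event[train])
    calibrated[test] = net.predict_proba(raw_prob[test].reshape(-1, 1))[:, 1]

brier = lambda p: np.mean((p - event) ** 2)
print("Brier raw vs calibrated:", round(brier(raw_prob), 3), round(brier(calibrated), 3))
```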

Full access
Yu Xia, Jing Chen, Jun Du, Xiefei Zhi, Jingzhuo Wang, and Xiaoli Li

Abstract

This study experimented with a unified scheme of stochastic physics and bias correction within a regional ensemble model [Global and Regional Assimilation and Prediction System–Regional Ensemble Prediction System (GRAPES-REPS)]. It is intended to improve ensemble prediction skill by reducing both random and systematic errors at the same time. Three experiments were performed on top of GRAPES-REPS. The first experiment adds only the stochastic physics. The second experiment adds only the bias correction scheme. The third experiment adds both the stochastic physics and bias correction. The experimental period is one month from 1 to 31 July 2015 over the China domain. Using 850-hPa temperature as an example, the study reveals the following: 1) the stochastic physics can effectively increase the ensemble spread, while the bias correction cannot. Therefore, ensemble averaging of the stochastic physics runs can reduce more random error than the bias correction runs. 2) Bias correction can significantly reduce systematic error, while the stochastic physics cannot. As a result, the bias correction greatly improved the quality of ensemble mean forecasts but the stochastic physics did not. 3) The unified scheme can greatly reduce both random and systematic errors at the same time and performed the best of the three experiments. These results were further confirmed by verification of the ensemble mean, spread, and probabilistic forecasts of many other atmospheric fields for both upper air and the surface, including precipitation. Based on this study, we recommend that operational numerical weather prediction centers adopt this unified scheme approach in ensemble models to achieve the best forecasts.
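The division of labor described above can be caricatured in a few lines: a multiplicative stochastic perturbation of a physics tendency (an SPPT-like illustration, not the GRAPES-REPS scheme) targets random error and spread, while a mean-error subtraction estimated from past forecast/analysis pairs targets systematic error. All fields and numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(6)

def stochastic_physics(tendency, rng, amplitude=0.3):
    """Multiplicative perturbation of a physics tendency field (generic illustration)."""
    pattern = rng.normal(scale=amplitude, size=tendency.shape)
    return tendency * (1.0 + pattern)

def bias_correct(forecast, past_fcst, past_obs):
    """Subtract the mean error estimated from past forecast/analysis pairs."""
    return forecast - np.mean(past_fcst - past_obs)

# Toy usage with hypothetical fields: the two pieces target different error types.
tendency = np.full((10, 10), 2.0)                        # hypothetical T tendency (K/h)
perturbed = stochastic_physics(tendency, rng)            # adds random error -> more spread
raw_fcst = np.full((10, 10), 300.5)                      # hypothetical 850-hPa T forecast (K)
past_fcst, past_obs = np.full(30, 300.4), np.full(30, 299.9)
debiased = bias_correct(raw_fcst, past_fcst, past_obs)   # removes systematic error
print(debiased.mean())                                   # 300.0 K after the 0.5-K bias is removed
```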

Open access
Shang-Min Long, Shang-Ping Xie, Yan Du, Qinyu Liu, Xiao-Tong Zheng, Gang Huang, Kai-Ming Hu, and Jun Ying

Abstract

The 2015 Paris Agreement proposed targets to limit the global-mean surface temperature (GMST) rise to well below 2°C relative to the preindustrial level by 2100, requiring that the increase in radiative forcing (RF) cease in the near future. In response to changing RF, the deep ocean responds slowly (the ocean slow response), in contrast to the fast adjustment of the ocean mixed layer. The role of the ocean slow response under low warming targets is investigated using representative concentration pathway (RCP) 2.6 simulations from phase 5 of the Coupled Model Intercomparison Project. In RCP2.6, the deep ocean continues to warm while RF decreases after reaching a peak. The deep ocean warming helps to shape the trajectories of GMST and fuels persistent thermosteric sea level rise. A diagnostic method is used to decompose further changes after the RF peak into a slow warming component under constant peak RF and a cooling component due to the decreasing RF. Specifically, the slow warming component amounts to 0.2°C (0.6°C) by 2100 (2300), raising the hurdle for achieving the low warming targets. When RF declines, the deep ocean warming takes place in all basins but is most pronounced in the Southern Ocean and the Atlantic Ocean, where surface heat uptake is largest. The climatology and the change of the meridional overturning circulation are both important for the deep ocean warming. To keep the GMST rise at a low level, a substantial decrease in RF is required to offset the warming effect of the ocean slow response.
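The decomposition used here can be summarized as a simple bookkeeping identity, ΔT_total = ΔT_slow + ΔT_cool, with the cooling component obtained as a residual. The curves below are invented purely to show the arithmetic, not the CMIP5/RCP2.6 results or the paper's diagnostic method:

```python
import numpy as np

# Made-up GMST changes after a hypothetical RF peak (degrees C), for illustration only.
years = np.arange(2050, 2101)
dT_slow = 0.2 * (1 - np.exp(-(years - 2050) / 20.0))   # slow deep-ocean warming component
dT_total = dT_slow - 0.004 * (years - 2050)            # net further GMST change
dT_cool = dT_total - dT_slow                           # cooling from declining RF (residual)
print(f"2100: slow {dT_slow[-1]:+.2f} C, cooling {dT_cool[-1]:+.2f} C, net {dT_total[-1]:+.2f} C")
```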

Free access
Adam J. Clark, John S. Kain, David J. Stensrud, Ming Xue, Fanyou Kong, Michael C. Coniglio, Kevin W. Thomas, Yunheng Wang, Keith Brewster, Jidong Gao, Xuguang Wang, Steven J. Weiss, and Jun Du

Abstract

Probabilistic quantitative precipitation forecasts (PQPFs) from the storm-scale ensemble forecast system run by the Center for Analysis and Prediction of Storms during the spring of 2009 are evaluated using area under the relative operating characteristic curve (ROC area). ROC area, which measures discriminating ability, is examined for ensemble size n from 1 to 17 members and for spatial scales ranging from 4 to 200 km.

As expected, incremental gains in skill decrease with increasing n. Significance tests comparing ROC areas for each n to those of the full 17-member ensemble revealed that, as forecast lead time increases and spatial scale decreases, more members are required to reach PQPF skill statistically indistinguishable from that of the full ensemble. These results appear to reflect the broadening of the forecast probability distribution function (PDF) of future atmospheric states associated with decreasing spatial scale and increasing forecast lead time. They also illustrate that efficient allocation of computing resources for convection-allowing ensembles requires careful consideration of the spatial scale and forecast length desired.
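The ROC-area-versus-ensemble-size comparison can be sketched as follows, with synthetic members and events in place of the 2009 storm-scale ensemble and scikit-learn's roc_auc_score standing in for the ROC-area computation:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# Synthetic stand-in for a storm-scale ensemble: 17 members x 5000 cases,
# verified against binary precipitation occurrence (not the actual 2009 data).
nmem, npts = 17, 5000
signal = rng.normal(size=npts)                               # predictable part
obs_event = (signal + rng.normal(size=npts) > 1.0).astype(int)
members = signal[None, :] + rng.normal(scale=1.2, size=(nmem, npts))

# ROC area of the PQPF built from the first n members.
for n in (1, 5, 9, 13, 17):
    pqpf = (members[:n] > 1.0).mean(axis=0)                  # exceedance probability
    print(n, round(roc_auc_score(obs_event, pqpf), 3))
```

With this setup the ROC area rises with n but the gains shrink as n grows, mirroring the diminishing returns reported above.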

Full access
Yihong Duan, Jiandong Gong, Jun Du, Martin Charron, Jing Chen, Guo Deng, Geoff DiMego, Masahiro Hara, Masaru Kunii, Xiaoli Li, Yinglin Li, Kazuo Saito, Hiromu Seko, Yong Wang, and Christoph Wittmann

The Beijing 2008 Olympics Research and Development Project (B08RDP), initiated in 2004 under the World Meteorological Organization (WMO) World Weather Research Programme (WWRP), undertook the research and development of mesoscale ensemble prediction systems (MEPSs) and their application to weather forecast support during the Beijing Olympic Games. Six MEPSs from six countries, representing state-of-the-art regional EPSs with near-real-time capabilities and emphasizing the 6–36-h forecast lead times, participated in the project.

The background, objectives, and implementation of B08RDP, as well as the six MEPSs, are reviewed. The accomplishments are summarized, which include 1) providing value-added service to the Olympic Games, 2) advancing MEPS-related research, 3) accelerating the transition from research to operations, and 4) training forecasters in utilizing forecast uncertainty products. The B08RDP has fulfilled its research (MEPS development) and demonstration (value-added service) purposes. The research conducted covers the areas of verification, examining the value of MEPS relative to other numerical weather prediction (NWP) systems, combining multimodel or multicenter ensembles, bias correction, ensemble perturbations [initial condition (IC), lateral boundary condition (LBC), land surface IC, and model physics], downscaling, forecast applications, data assimilation, and storm-scale ensemble modeling. Seven scientific issues important to MEPS have been identified. It is recognized that the daily use of forecast uncertainty information by forecasters remains a challenge. Development of forecaster-friendly products and training activities should be a long-term effort and needs to be continuously enhanced.

The B08RDP dataset is also a valuable asset to the research community. The experience gained in international collaboration, organization, and implementation of a multination regional EPS for a common goal and to address common scientific issues can be shared by the ongoing projects The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble—Limited Area Models (TIGGE-LAM) and North American Ensemble Forecast System—Limited Area Models (NAEFS-LAM).

Full access