Search Results

You are looking at 1–10 of 69 items for:

  • Author or Editor: Xuguang Wang
  • Journal: Monthly Weather Review
  • Refine by Access: All Content
Xuguang Wang

Abstract

Gridpoint statistical interpolation (GSI), a three-dimensional variational (3DVAR) data assimilation method, has been widely used in operations and research in numerical weather prediction. The operational GSI uses a static background error covariance, which does not reflect flow-dependent error statistics. Incorporating ensemble covariance in GSI provides a natural way to estimate the background error covariance in a flow-dependent manner. Different from other 3DVAR-based hybrid data assimilation systems that are preconditioned on the square root of the background error covariance, the commonly used GSI minimization is preconditioned upon the full background error covariance matrix. A mathematical derivation is therefore provided to demonstrate how to incorporate the flow-dependent ensemble covariance in the GSI variational minimization.
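
To make the hybrid formulation concrete, the following is a generic sketch of the extended-control-variable form of a hybrid variational increment and cost function; the notation is illustrative and is not claimed to match the paper's derivation or the GSI preconditioning discussed above.

```latex
% Hybrid increment: static part plus an ensemble part built from extended
% control variables a_k (Schur product with ensemble perturbations x^e_k).
\[
  \mathbf{x}' = \mathbf{x}'_1 + \sum_{k=1}^{K} \mathbf{a}_k \circ \mathbf{x}^{e}_k
\]
% Corresponding incremental cost function, with beta_1 and beta_2 weighting
% the static covariance B_1 and the ensemble contribution, and A acting as
% the localization covariance for the extended control variables.
\[
  J(\mathbf{x}'_1,\mathbf{a}) =
    \frac{\beta_1}{2}\,\mathbf{x}'^{\mathrm{T}}_1 \mathbf{B}_1^{-1}\,\mathbf{x}'_1
  + \frac{\beta_2}{2}\,\mathbf{a}^{\mathrm{T}} \mathbf{A}^{-1}\,\mathbf{a}
  + \frac{1}{2}\,\big(\mathbf{y}'^{o} - \mathbf{H}\,\mathbf{x}'\big)^{\mathrm{T}}
      \mathbf{R}^{-1} \big(\mathbf{y}'^{o} - \mathbf{H}\,\mathbf{x}'\big)
\]
```

Here x'_1 is the increment associated with the static covariance B_1, x^e_k is the kth ensemble perturbation, the extended control variables a_k carry the covariance A that performs the localization, and the beta factors set the relative weights of the static and ensemble parts.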

Full access
Yongming Wang
and
Xuguang Wang

Abstract

Explicit forecasts of a tornado-like vortex (TLV) require subkilometer grid spacing because of its small size. Most previous TLV prediction studies started from initial conditions (ICs) interpolated from kilometer grid spacing rather than from subkilometer grid spacing ICs. The tornadoes embedded in the 8 May 2003 Oklahoma City tornadic supercell are used to understand the impact of IC resolution on TLV predictions. Two ICs at 500-m and 2-km grid spacings are produced, respectively, through an efficient dual-resolution (DR) and a single-coarse-resolution (SCR) EnVar ingesting a 2-km ensemble. Both experiments launch 1-h forecasts at 500-m grid spacing. Diagnostics of data assimilation (DA) cycling reveal that, through accumulated differences between the two DA algorithms, DR produces stronger and broader rear-flank cold pools, more intense downdrafts and updrafts with finer scales, and more hydrometeors at high altitudes. Relative to SCR, DR differs in integrating from higher-resolution analyses, updating higher-resolution backgrounds, and propagating ensemble perturbations along a higher-resolution model trajectory. Predictions of storm morphology and cold pools are more realistic in DR than in SCR. The DR-TLV tracks match the observed tornado tracks better than the SCR-TLV tracks in the timing of intensity variation and in duration. Additional experiments suggest that 1) the analyzed kinematic variables strongly influence the timing of intensity variation by affecting both the low-level rear-flank outflow and the midlevel updraft; 2) the potential temperature analysis by DR extends the second track's duration, consistent with enhanced low-level stretching, a delayed broadening of the large-scale downdraft, and (or) an increased near-surface baroclinic vorticity supply; and 3) the hydrometeor analyses have little impact on TLV predictions.
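
As an illustration of the dual-resolution idea described above, the sketch below shows one plausible way a DR EnVar scheme can build a fine-grid increment from a coarse-resolution ensemble; the function names, array shapes, and the interpolation step are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def dual_resolution_increment(alphas_coarse, ens_perts_coarse, interp_to_fine):
    """Sketch: form the ensemble part of an EnVar increment on the coarse
    (2-km) ensemble grid, then map only the resulting increment to the fine
    (500-m) analysis grid.

    alphas_coarse    : (K, n_coarse) extended control variables, one per member
    ens_perts_coarse : (K, n_coarse) ensemble perturbations on the coarse grid
    interp_to_fine   : callable mapping a coarse-grid field to the fine grid
                       (e.g., bilinear interpolation); assumed interface
    """
    increment_coarse = np.sum(alphas_coarse * ens_perts_coarse, axis=0)
    return interp_to_fine(increment_coarse)
```

By contrast, the SCR configuration described in the abstract performs the whole analysis at 2 km, and that coarse analysis is interpolated to 500 m before the forecast is launched.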

Free access
Yongming Wang
and
Xuguang Wang

Abstract

A convective-scale static background-error covariance (BEC) matrix is further developed to include the capability of direct reflectivity assimilation and evaluated within the GSI-based three-dimensional variational (3DVar) and hybrid ensemble–variational (EnVar) methods. Specific developments are summarized as follows: 1) Control variables (CVs) are extended to include reflectivity, vertical velocity, and all hydrometeor types. Various horizontal momentum and moisture CV options are included. 2) Cross correlations between all CVs are established. 3) A storm-intensity-dependent binning method is adopted to separately calculate static error matrices for clear air and for storms of varying intensities. The resultant static BEC matrices are simultaneously applied at proper locations guided by the observed reflectivity. 4) The EnVar is extended to adaptively incorporate static BECs based on the quality of ensemble covariances. Evaluation and examination of the new static BECs are first performed on the 8 May 2003 Oklahoma City supercell. Detailed diagnostics and 3DVar examinations suggest selecting zonal/meridional winds and pseudo–relative humidity as the horizontal momentum and moisture CVs for direct reflectivity assimilation, respectively; inclusion of cross correlations favors spinup and maintains the analyzed storms; application of binning improves characteristics and persistence of the simulated storm. Relative to an experiment using the full ensemble BECs (Exp-PureEnVar), incorporating static BECs in hybrid EnVar reduces spinup time and better analyzes reflectivity distributions when the background ensemble suffers from sampling errors. Compared to both pure 3DVar and Exp-PureEnVar, hybrid EnVar better predicts reflectivity distributions and better maintains a strong mesocyclone. Further examination through the 20 May 2013 Oklahoma supercells confirms these results and additionally demonstrates the effectiveness of adaptive hybridization.
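
The storm-intensity-dependent binning described above can be pictured with the following sketch, which selects one of several pre-computed static BEC matrices for a column based on observed composite reflectivity; the thresholds and the three-bin split are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical dBZ thresholds for the bins; the actual bin boundaries are not
# given in the abstract.
CLEAR_AIR_MAX_DBZ = 15.0
STRONG_STORM_MIN_DBZ = 45.0

def select_static_bec(obs_composite_dbz, bec_clear, bec_weak, bec_strong):
    """Pick which pre-computed static background-error covariance applies at a
    given column, guided by the observed composite reflectivity there."""
    if obs_composite_dbz < CLEAR_AIR_MAX_DBZ:
        return bec_clear
    if obs_composite_dbz < STRONG_STORM_MIN_DBZ:
        return bec_weak
    return bec_strong
```

The adaptive hybridization mentioned in the abstract would additionally adjust how much weight the static BEC receives relative to the ensemble BEC, depending on how trustworthy the ensemble covariance is locally.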

Full access
Yongming Wang
and
Xuguang Wang

Abstract

A GSI-based EnVar data assimilation system is extended to directly assimilate radar reflectivity to initialize convective-scale forecasts. When hydrometeor mixing ratios are used as state variables (method mixing ratio), large differences between the cost function gradients with respect to the small hydrometeor mixing ratios and those with respect to wind prevent efficient convergence. Using logarithmic mixing ratios as state variables (method logarithm) fixes this problem but generates spuriously large hydrometeor increments, partly owing to the transform to and from logarithmic space. The tangent linear of the reflectivity operator further contributes to the spuriously small and large hydrometeor increments in method mixing ratio and method logarithm, respectively. A new method is proposed that directly adds reflectivity as a state variable (method dBZ). Because it requires neither the tangent linear nor the adjoint of the nonlinear reflectivity operator, the new method avoids the aforementioned problems.

The newly proposed method is examined for the analysis and prediction of the 8 May 2003 Oklahoma City tornadic supercell storm. Both the probabilistic forecast of strong low-level vorticity and the maintenance of strong updraft and vorticity in method dBZ are more consistent with reality than in method logarithm and method mixing ratio. Detailed diagnostics suggest that a more realistic cold pool, resulting from the better-analyzed hydrometeors in method dBZ than in the other methods, leads to constructive interaction between the surface gust front and the updraft aloft associated with the midlevel mesocyclone. Similar low-level vorticity forecasts and storm maintenance are produced by the WSM6 and Thompson microphysics schemes in method dBZ. The Thompson scheme matches the observed reflectivity distribution better at all lead times but shows a larger southeastward track bias than the WSM6 scheme.
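
The sensitivity issue described in the first paragraph can be seen from a simplified power-law reflectivity operator; the coefficients below are illustrative and are not the operator used in the paper.

```python
import numpy as np

# Simplified rain-only relation of the form Ze = a * q**b (illustrative values).
A_COEF, B_EXP = 3.63e9, 1.75

def dbz_from_q(q):
    """Reflectivity (dBZ) from a rain mixing ratio q (kg/kg), simplified."""
    return 10.0 * np.log10(A_COEF * np.maximum(q, 1e-12) ** B_EXP)

def ddbz_dq(q):
    """Tangent-linear sensitivity d(dBZ)/dq, which grows like 1/q: the cost
    function gradient with respect to very small mixing ratios dwarfs the
    gradient with respect to wind, hampering minimization (the method
    mixing ratio issue)."""
    return 10.0 * B_EXP / (np.log(10.0) * np.maximum(q, 1e-12))

# The sensitivity at q = 1e-6 kg/kg is 1000 times that at q = 1e-3 kg/kg.
print(ddbz_dq(1e-6) / ddbz_dq(1e-3))
```

Treating reflectivity itself as the state variable (method dBZ) sidesteps this scaling problem, as well as the spuriously large increments produced by transforming to and from logarithmic mixing ratios, because no tangent linear or adjoint of the nonlinear operator is needed.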

Full access
Aaron Johnson
and
Xuguang Wang

Abstract

This study investigates how different methods of generating ensemble initial condition (IC) perturbations affect convection-permitting ensemble forecast performance in the context of simultaneous physics diversity among the ensemble members. A total of 10 convectively active cases are selected for a systematic comparison of different IC perturbation methods in 10-member convection-permitting ensembles, both with and without physics diversity. These IC perturbation methods include simple downscaling of coarse perturbations from a global model (LARGE), perturbations generated with ensemble data assimilation directly on the multiscale domain (MULTI), and, as a control, perturbations generated with each method but with the small scales filtered out. MULTI was found to be significantly more skillful than LARGE at early lead times in all ensemble physics configurations, with the advantage of MULTI gradually decreasing with increasing forecast lead time. The advantage of MULTI, relative to LARGE, was reduced but not eliminated by the presence of physics diversity because of the extra ensemble spread that the physics diversity provided. The advantage of MULTI, relative to LARGE, was also reduced by filtering the IC perturbations to a commonly resolved spatial scale in both ensembles, which highlights the importance of flow-dependent small-scale (<~10 km) IC perturbations in the ensemble design. The importance of the physics diversity, relative to the IC perturbation method, depended on the spatial scale of interest, the forecast lead time, and the meteorological characteristics of the forecast case. Such meteorological characteristics include the strength of synoptic-scale forcing, the role of cold pool interactions, and the occurrence of convective initiation or dissipation.
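
The "filtered" control experiments mentioned above can be illustrated with a simple spectral low-pass filter that removes structure below a chosen wavelength before the perturbations are compared; the cutoff value and function interface here are assumptions, not settings from the paper.

```python
import numpy as np

def filter_small_scales(pert2d, dx_km, cutoff_km):
    """Zero out spectral components of a 2D perturbation field whose
    wavelength is shorter than `cutoff_km`, so two ensembles can be compared
    at a commonly resolved spatial scale."""
    ny, nx = pert2d.shape
    kx = np.fft.fftfreq(nx, d=dx_km)                # cycles per km
    ky = np.fft.fftfreq(ny, d=dx_km)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spec = np.fft.fft2(pert2d)
    spec[kmag > 1.0 / cutoff_km] = 0.0              # wavelengths < cutoff removed
    return np.real(np.fft.ifft2(spec))
```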

Full access
Junkyung Kay
and
Xuguang Wang

Abstract

A multiresolution ensemble (MR-ENS) method is developed to resolve a wider range of scales of the background error covariance (BEC) in the hybrid four-dimensional ensemble–variational (4DEnVar) framework while reducing computational cost. MR-ENS is implemented in the NCEP Global Forecast System (GFS) gridpoint statistical interpolation (GSI) hybrid 4DEnVar. MR-ENS generates the analysis increment by incorporating a high-resolution static BEC and flow-dependent ensemble BECs from both high and low resolutions. MR-ENS is compared with three 4DEnVar update approaches: 1) the single-resolution (SR)-Low approach, where the analysis increments are generated from the ensemble BEC and the static BEC at the same low resolution; 2) the dual-resolution (DR) approach, where the analysis increment is generated using the high-resolution static BEC and the low-resolution ensemble BEC; and 3) the SR-High approach, which is the same as 1) except that all covariances are at high resolution. Experiments show that MR-ENS improves global and tropical cyclone track forecasts compared to SR-Low and DR. Inclusion of the high-resolution ensemble leads to increased background ensemble spread, a better fit of the background to observations, increased effective ranks, more accurate ensemble error correlations, and increased power of the analysis increment at small scales. The majority of the improvement of MR-ENS relative to SR-Low is due to the partial use of the high-resolution background ensemble. Compared to SR-High, MR-ENS decreases the overall cost by about 40% and shows comparable global and tropical cyclone track forecast performance. Diagnostics show that, particularly in the tropics, MR-ENS improves the analysis increment over a wide range of scales and increases the effective rank of the ensemble BEC to a degree comparable to SR-High.
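
One way to read the MR-ENS update described above is as an extended-control-variable increment with two ensemble terms, one per resolution; the notation below is a generic sketch, not the paper's exact formulation.

```latex
% Increment built from a high-resolution static part, K_H high-resolution
% ensemble perturbations, and K_L low-resolution perturbations mapped to the
% analysis grid by an interpolation operator S.
\[
  \mathbf{x}' = \mathbf{x}'_1
  + \sum_{k=1}^{K_H} \mathbf{a}^{H}_k \circ \mathbf{x}^{e,H}_k
  + \sum_{k=1}^{K_L} \mathbf{a}^{L}_k \circ \big( S\,\mathbf{x}^{e,L}_k \big)
\]
```

The relative weights of the static, high-resolution ensemble, and low-resolution ensemble covariances would then be carried by the corresponding terms of the cost function.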

Free access
Aaron Johnson
and
Xuguang Wang

Abstract

Neighborhood and object-based probabilistic precipitation forecasts from a convection-allowing ensemble are verified and calibrated. Calibration methods include logistic regression, one- and two-parameter reliability-based calibration, and cumulative distribution function (CDF)-based bias adjustment. Newly proposed object-based probabilistic forecasts for the occurrence of a forecast object are derived from the percentage of ensemble members with a matching object. Verification and calibration of single- and multimodel subensembles are performed to explore the effect of using multiple models.

The uncalibrated neighborhood-based probabilistic forecasts have skill minima during the afternoon convective maximum. Calibration generally improves the skill, especially during the skill minima, resulting in positive skill. In general, all calibration methods perform similarly, with a slight advantage of logistic regression (one-parameter reliability-based) calibration for 1-h (6-h) accumulations.

The uncalibrated object-based probabilistic forecasts are, in general, less skillful than the uncalibrated neighborhood-based probabilistic forecasts. Object-based calibration also results in positive skill at all lead times. For object-based calibration the skill is significantly different among the calibration methods, with the logistic regression performing the best and CDF-based bias adjustment performing the worst.

For both the neighborhood and object-based probabilistic forecasts, the impact of using 10 or 25 days of training data for calibration is generally small and is most significant for the two-parameter reliability-based method. An uncalibrated Advanced Research Weather Research and Forecasting Model (ARW-WRF) subensemble is significantly more skillful than an uncalibrated WRF Nonhydrostatic Mesoscale Model (NMM) subensemble. The difference is reduced by calibration. The multimodel subensemble only shows an advantage for the neighborhood-based forecasts beyond 1-day lead time and shows no advantage for the object-based forecasts.
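
As an example of the calibration step, the sketch below fits a logistic regression that maps a raw ensemble probability (for instance, the fraction of members exceeding a precipitation threshold in a neighborhood, or the fraction with a matching object) to a calibrated event probability over a training period; the single-predictor setup is a simplification of what the abstract describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_probabilities(raw_train, event_train, raw_new):
    """Fit P(event | raw ensemble probability) on training data, then apply
    the fitted relationship to new raw forecast probabilities.

    raw_train   : 1D array of raw ensemble probabilities from the training period
    event_train : 1D array of 0/1 observed occurrences matched to raw_train
    raw_new     : 1D array of raw probabilities to calibrate
    """
    model = LogisticRegression()
    model.fit(raw_train.reshape(-1, 1), event_train)
    return model.predict_proba(raw_new.reshape(-1, 1))[:, 1]
```

The 10- or 25-day training windows mentioned above would determine how many (raw probability, observed occurrence) pairs enter the fit.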

Full access
Aaron Johnson
and
Xuguang Wang

Abstract

The impacts of multiscale flow-dependent initial condition (IC) perturbations for storm-scale ensemble forecasts of midlatitude convection are investigated using perfect-model observing system simulation experiments. Several diverse cases are used to quantitatively and qualitatively understand the impacts of different IC perturbations on ensemble forecast skill. Scale dependence of the results is assessed by evaluating 2-h storm-scale reflectivity forecasts separately from hourly accumulated mesoscale precipitation forecasts.

Forecasts are initialized with different IC ensembles, including an ensemble of multiscale perturbations produced by a multiscale data assimilation system, mesoscale perturbations produced at a coarser resolution, and filtered multiscale perturbations. Mesoscale precipitation forecasts initialized with the multiscale perturbations are more skillful than the forecasts initialized with the mesoscale perturbations at several lead times. This multiscale advantage is due to greater consistency between the IC perturbations and IC uncertainty. This advantage also affects the short-term, smaller-scale forecasts. Reflectivity forecasts on very small scales and very short lead times are more skillful with the multiscale perturbations as a direct result of the smaller-scale IC perturbation energy. The small-scale IC perturbations also contribute to some improvements to the mesoscale precipitation forecasts after the ~5-h lead time. Altogether, these results suggest that the multiscale IC perturbations provided by ensemble data assimilation on the convection-permitting grid can improve storm-scale ensemble forecasts by improving the sampling of IC uncertainty, compared to downscaling of IC perturbations from a coarser-resolution ensemble.

Full access
Bo Huang
and
Xuguang Wang

Abstract

Valid-time-shifting (VTS) ensembles, either in the form of full ensemble members (VTSM) or ensemble perturbations (VTSP), were investigated as an inexpensive means to increase ensemble size in the NCEP Global Forecast System (GFS) hybrid four-dimensional ensemble–variational (4DEnVar) data assimilation system. VTSM is designed to sample timing and/or phase errors, while VTSP can eliminate spurious covariances through temporal smoothing. When applying a shifting time interval (τ = 1, 2, or 3 h), VTSM and VTSP triple the baseline background ensemble size from 80 (ENS80) to 240 (ENS240) in the EnVar variational update, while increasing the overall cost by only 23%–27%, depending on the selected τ. Experiments during a 10-week summer period show that the best-performing VTSP, with τ = 2 h, improves global temperature and wind forecasts out to 5 days over ENS80. This could be attributed to the improved background ensemble distribution, improved ensemble correlation accuracy, and increased effective rank of the populated background ensemble. VTSM generally degrades global forecasts in the troposphere. The improved global forecasts above 100 hPa from VTSM may benefit from the increased spread, which alleviates the underdispersiveness of the original background ensemble at those levels. Both VTSM and VTSP improve tropical cyclone track forecasts over ENS80. Although VTSM and VTSP are much less expensive than directly running a 240-member background ensemble, the best-performing VTSP, with τ = 1 h, performs comparably to or only slightly worse than ENS240, owing to the improved ensemble covariances. The best-performing VTSM, with τ = 3 h, even shows more accurate track forecasts than ENS240, likely owing to its better sampling of timing and/or phase errors for cases with small ensemble track spread.
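
The two valid-time-shifting variants can be sketched as follows: VTSM pools full members valid at t − τ, t, and t + τ, while VTSP keeps the valid-time ensemble mean and pools only the perturbations from the shifted times. This is one plausible reading of the abstract, with array shapes and interfaces assumed for illustration.

```python
import numpy as np

def vtsm(ens_prev, ens_valid, ens_next):
    """VTSM sketch: triple the ensemble by pooling full members valid at
    t - tau, t, and t + tau (each array shaped (n_members, n_state))."""
    return np.concatenate([ens_prev, ens_valid, ens_next], axis=0)

def vtsp(ens_prev, ens_valid, ens_next):
    """VTSP sketch: recenter perturbations from all three valid times on the
    valid-time ensemble mean, so timing information enters only through the
    ensemble covariance."""
    mean_valid = ens_valid.mean(axis=0)
    perts = np.concatenate(
        [ens - ens.mean(axis=0) for ens in (ens_prev, ens_valid, ens_next)],
        axis=0,
    )
    return mean_valid + perts
```

Either way, the variational update sees a 240-member ensemble built from an 80-member forecast, consistent with the abstract's point that the overall cost grows by only 23%–27% rather than by the cost of running 240 forecasts.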

Full access
Xu Lu
and
Xuguang Wang

Abstract

Short-term spinup for strong storms is a known difficulty for the operational Hurricane Weather Research and Forecasting (HWRF) Model after assimilating high-resolution inner-core observations. Our previous study associated this short-term intensity prediction issue with the incompatibility between the HWRF Model and the data assimilation (DA) analysis. While improving the physics and resolution of the model was found to be helpful, this study focuses on further improving the intensity predictions through the four-dimensional incremental analysis update (4DIAU). In the traditional 4DIAU, increments are predetermined by subtracting background forecasts from analyses. Such predetermined increments implicitly assume linear evolution during the update, an assumption that is hardly valid for rapidly evolving hurricanes. To confirm this hypothesis, a corresponding 4D analysis nudging (4DAN) method, which uses online increments, is first compared with 4DIAU in an oscillation model. Then, variants of 4DIAU are proposed to improve its application to nonlinear systems. Next, 4DIAU, 4DAN, and their proposed improvements are implemented in the HWRF 4DEnVar DA system and are investigated with Hurricane Patricia (2015). Results from both the oscillation model and the HWRF Model show that 1) the predetermined increments in 4DIAU can be detrimental when there are discrepancies between the updated and background forecasts during a nonlinear evolution; 2) 4DAN can improve the incremental update relative to 4DIAU, but its improvements are limited by overfiltering; 3) relocating the initial background before the incremental update can improve the corresponding traditional methods; and 4) the feature-relative 4DIAU method improves the incremental update the most and produces the best track and intensity predictions for Patricia among all experiments.
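
For readers unfamiliar with incremental analysis updates, the sketch below shows the basic 4DIAU mechanism the abstract builds on: precomputed increments are added gradually as a constant forcing while the model integrates across the assimilation window, rather than all at once at the initial time. The interface is an assumption for illustration, not the HWRF implementation.

```python
import numpy as np

def forecast_with_4diau(x_background, increments, step_model, steps_per_bin):
    """Integrate across the assimilation window while gradually adding the
    predetermined analysis increments (one per sub-window 'bin').

    x_background  : model state at the start of the window
    increments    : list of increment arrays, one per sub-window
    step_model    : callable advancing the state by one time step (assumed)
    steps_per_bin : number of model steps spanned by each sub-window
    """
    x = np.array(x_background, copy=True)
    for inc in increments:
        forcing = inc / steps_per_bin      # spread this increment over its bin
        for _ in range(steps_per_bin):
            x = step_model(x) + forcing
    return x
```

Because the increments are computed before this integration starts (analysis minus background), they can become inconsistent with a rapidly evolving, displaced hurricane vortex, which is the motivation for the online-increment 4DAN and the feature-relative variants discussed above.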

Full access