Search Results

You are looking at 1–10 of 14 items for:

  • Author or Editor: Wei Wang
  • Monthly Weather Review
Wei Wang and Nelson L. Seaman

Abstract

A comparison study of four cumulus parameterization schemes (CPSs), the Anthes–Kuo, Betts–Miller, Grell, and Kain–Fritsch schemes, is conducted using The Pennsylvania State University–National Center for Atmospheric Research mesoscale model. Performance of these CPSs is examined using six precipitation events over the continental United States for both cold and warm seasons. Grid resolutions of 36 and 12 km are chosen to represent current mesoscale research models and future operational models. The key parameters used to evaluate skill include precipitation, sea level pressure, wind, and temperature predictions. Precipitation is evaluated statistically using conventional skill scores (such as threat and bias scores) for different threshold values based on hourly rainfall observations. Rainfall and other mesoscale features are also evaluated by careful examination of analyzed and simulated fields, which are discussed in the context of timing, evolution, intensity, and structure of the precipitation systems.
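
As a rough illustration of the conventional skill scores mentioned above, the sketch below computes the threat score (critical success index) and bias score from a 2 x 2 contingency table for a single rainfall threshold; the function name, arrays, and sample values are illustrative assumptions rather than anything taken from the paper.

```python
import numpy as np

def threat_and_bias(forecast_mm, observed_mm, threshold_mm):
    """Threat (CSI) and bias scores for one rainfall threshold.

    forecast_mm, observed_mm: accumulated rainfall at verification points.
    """
    fcst_yes = forecast_mm >= threshold_mm
    obs_yes = observed_mm >= threshold_mm

    hits = np.sum(fcst_yes & obs_yes)            # forecast yes, observed yes
    false_alarms = np.sum(fcst_yes & ~obs_yes)   # forecast yes, observed no
    misses = np.sum(~fcst_yes & obs_yes)         # forecast no, observed yes

    threat = hits / (hits + misses + false_alarms)   # critical success index
    bias = (hits + false_alarms) / (hits + misses)   # forecast vs. observed event count
    return threat, bias

# Illustrative 6-h accumulations (mm) at a few verification points
fcst = np.array([0.2, 3.0, 12.5, 0.0, 25.0, 8.0])
obs = np.array([0.0, 5.1, 10.0, 1.0, 30.0, 0.5])
print(threat_and_bias(fcst, obs, threshold_mm=5.0))   # -> (0.5, 1.0)
```

A perfect forecast gives a threat score of 1 and a bias of 1; a bias above (below) 1 indicates the threshold is exceeded at more (fewer) forecast points than observed points.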

The general 6-h precipitation forecast skill of these schemes is found to be fairly good in four of the six cases examined in this study, even at higher thresholds. The forecast skill is generally higher for cold-season events than for warm-season events. Forecast skill increases in the 12-km model, and the gain is most obvious in predicting heavier rainfall amounts. The model's precipitation forecast skill is better for rainfall volume than for either areal coverage or peak amount. The scheme with the convective available potential energy–based closure assumption (the Kain–Fritsch scheme) appears to perform somewhat better than the others. Systematic behaviors associated with the various schemes are also noted wherever possible.

The partition of rainfall into subgrid scale and grid scale is sensitive to the particular parameterization scheme chosen, but relatively insensitive to either the model grid sizes or the convective environments.

In the warm-season cases, the prediction of mesoscale surface features, such as mesoscale pressure centers, wind-shift lines (gust fronts), and temperature fields, strongly suggests that CPSs with moist downdrafts predict these features more accurately.

Full access
Wei Wang and Thomas T. Warner

Abstract

The Penn State/NCAR mesoscale model has been used in a study of special static- and dynamic-initialization techniques that improve a very-short-range forecast of the heavy convective rainfall that occurred in Texas, Oklahoma and Kansas during 9–10 May 1979, the SESAME IV study period. In this study, the model is initialized during the precipitation event. Two types of four-dimensional data assimilation (FDDA) procedures are used in the dynamic-initialization experiments in order to incorporate data during a 12-hour preforecast period. With the first type, FDDA by Newtonian relaxation is used to incorporate sounding data during the preforecast period. With the second FDDA procedure, radar-based precipitation-rate estimates and hourly raingage data are used to define a three-dimensional latent-heating rate field that contributes to the diabatic heating term in the model's thermodynamic equation during the preforecast period. This latent-heating specification procedure is also employed in static-initialization experiments, where the observed rainfall rate and radar echo pattern near the initial time of the forecast are used to infer a latent-heating rate that is specified in the mesoscale model's thermodynamic equation during the early part of the actual forecast. The precipitation forecasts from these static- and dynamic-initialization experiments are compared with the forecast produced when only operational radiosonde data are used in a conventional static initialization.
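
As a rough illustration of the Newtonian-relaxation (nudging) idea used in the dynamic initialization, the sketch below adds a relaxation term that pulls a model variable toward an analysis value during a preforecast period; the toy tendency, relaxation coefficient, and time step are illustrative assumptions, not the mesoscale model's actual implementation.

```python
def nudged_step(x, x_analysis, dt, model_tendency, g_relax=3e-4):
    """One forward time step with Newtonian relaxation (nudging).

    dx/dt = F(x) + G * (x_analysis - x)
    F is the ordinary model tendency and G (s^-1) is the relaxation
    coefficient; the nudging term is active only while analyses or
    observations are available (the preforecast period).
    """
    return x + dt * (model_tendency(x) + g_relax * (x_analysis - x))

def toy_tendency(x):
    """Placeholder physical tendency (illustrative): a very weak damping."""
    return -1e-7 * x

# Relax a scalar "temperature" toward an analysis value over a 12-h
# preforecast period with 2-min time steps.
x, x_analysis, dt = 280.0, 284.0, 120.0
for _ in range(360):
    x = nudged_step(x, x_analysis, dt, toy_tendency)
print(round(x, 2))   # close to the analysis value by the end of the period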

The conventional (control) initialization procedure, which used only operational radiosonde data, produced a precipitation prediction for the first three to four hours of the forecast period that would have been inadequate in an operational setting. Although substantial convective precipitation was observed at the initial time of the forecast in a band near the edge of an elevated mixed layer, the model did not initiate the heavy rainfall until about the fourth hour of the forecast.

The use of the experimental static initialization with prescribed latent heating during the first forecast hour produced greatly improved rainfall rates during the first three to four hours. The success of the technique was shown to be not especially sensitive to moderate variations in the duration, intensity, and vertical distribution of the imposed heating. Application of the Newtonian-relaxation procedure during the preforecast period, which relaxed the model solution toward the initial large-scale analysis, produced a better precipitation forecast than the control, with a maximum in approximately the correct position, but the intensities were too weak. Combined use of either the preforecast or in-forecast latent-heat forcing with the Newtonian relaxation produced an improved forecast of rainfall intensity compared to use of the Newtonian relaxation alone. Although both the experimental static- and dynamic-initialization procedures produced considerably improved very-short-range precipitation forecasts compared to the control, the experimental static-initialization procedure that used latent-heat forcing during the first forecast hour did slightly better for this case.

Full access
Yansen Wang, Wei-Kuo Tao, and Joanne Simpson

Abstract

A two-dimensional cloud-resolving model is linked with a TOGA COARE flux algorithm to examine the impact of ocean surface fluxes on the development of a tropical squall line and its associated precipitation processes. The model results show that the 12-h total surface rainfall in the run excluding the surface fluxes is about 80% of that in the run including surface fluxes (domain-averaged rainfall of 3.4 mm). The model results also indicate that latent heat flux, or evaporation from the ocean, is the most influential of the three fluxes (latent heat, sensible heat, and momentum) for the development of the squall system. The average latent and sensible heat fluxes in the convective (disturbed) region are 60 and 11 W m−2 larger, respectively, than those in the nonconvective (clear) region, owing to gusty winds, a cold pool near the surface, and drier air from downdrafts associated with the convective activity. These results are in good agreement with observations.

In addition, sensitivity tests using a simple bulk aerodynamic approximation as well as a Blackadar-type surface flux formulation predicted much larger latent and sensible heat fluxes than those obtained using the TOGA COARE flux algorithm. Consequently, much more surface rainfall was simulated with either the simple bulk aerodynamic approximation or the Blackadar-type surface flux formulation. The results presented here also suggest that fine vertical resolution (at least for the lowest model grid point) is needed in order to study the interactive processes between the ocean and convection using a cloud-resolving model.
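
As a rough illustration of the simple bulk aerodynamic approximation mentioned above (the TOGA COARE algorithm itself is considerably more elaborate, with stability- and gustiness-dependent exchange coefficients), the sketch below evaluates sensible and latent heat fluxes from wind speed and air–sea differences; the constant exchange coefficients and sample values are illustrative assumptions.

```python
# Bulk aerodynamic surface fluxes (illustrative constants and inputs)
RHO_AIR = 1.2        # air density, kg m^-3
CP = 1004.0          # specific heat of air, J kg^-1 K^-1
LV = 2.5e6           # latent heat of vaporization, J kg^-1
CH = CE = 1.3e-3     # bulk exchange coefficients (assumed constant)

def bulk_fluxes(wind_speed, t_sea, t_air, q_sea, q_air):
    """Sensible and latent heat fluxes (W m^-2) from bulk formulas:
    SH = rho * cp * CH * U * (Ts - Ta)
    LH = rho * Lv * CE * U * (qs - qa)
    """
    sh = RHO_AIR * CP * CH * wind_speed * (t_sea - t_air)
    lh = RHO_AIR * LV * CE * wind_speed * (q_sea - q_air)
    return sh, lh

# Disturbed (gustier, cooler and drier air) vs. undisturbed conditions
print(bulk_fluxes(8.0, 302.0, 300.0, 0.022, 0.016))   # convective region
print(bulk_fluxes(4.0, 302.0, 301.0, 0.022, 0.018))   # clear region
```

The gustier winds and larger air–sea contrasts in the disturbed case yield larger fluxes there, which is the qualitative behavior described in the abstract.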

Full access
Sergey Sokolovskiy, Ying-Hwa Kuo, and Wei Wang

Abstract

Assimilating the Abel-retrieved refractivity from radio occultations into numerical weather models as the local refractivity at the ray tangent point may result in large errors in the presence of strong horizontal gradients (atmospheric fronts, strong convection). To reduce these errors, other authors have suggested modeling the Abel-retrieved refractivity as a nonlocal linear function of the 3D refractivity, which can be used as a linear observation operator for assimilation. The authors of this study introduce another approach to a nonlocal linear observation operator, which consists of modeling the excess phase path calculated along certain trajectories below the top of an atmospheric model. In this study (not aimed at development of an observation operator for any specific atmospheric model), both approaches are validated by assessing the accuracy of the linearized observation operators in numerical simulations with the high-resolution Weather Research and Forecasting (WRF) model and comparing them to the accuracy of interpreting the Abel-retrieved refractivity as local. An improvement in accuracy of about an order of magnitude is found with the nonlocal refractivity, and further improvement is found with the excess phase path. The effect of the horizontal resolution of an atmospheric model on the accuracy of modeling local and nonlocal linear observables is also investigated, and it is demonstrated that nonlocal linear modeling of radio occultation observables is especially important for weather prediction models with sufficiently high horizontal resolution, grid size <100 km (mesoscale models).
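
As a rough illustration of the nonlocal idea, the sketch below forms an excess-phase-like observable by integrating model refractivity along a straight line through the tangent point rather than evaluating refractivity locally; the straight-line path, the analytic refractivity field, and all names and values are illustrative assumptions, not the operator developed in the paper.

```python
import numpy as np

def excess_phase(refractivity_func, tangent_point, direction,
                 half_length_km=300.0, n_steps=601):
    """Nonlocal observable: excess phase path (km) along a straight ray.

    L = 1e-6 * integral of N(s) ds, with N in N-units and s in km.
    refractivity_func(x, y, z) returns refractivity at a point (km coords).
    """
    s = np.linspace(-half_length_km, half_length_km, n_steps)
    pts = tangent_point[None, :] + s[:, None] * direction[None, :]
    n_vals = np.array([refractivity_func(*p) for p in pts])
    ds = s[1] - s[0]
    return 1e-6 * np.sum(0.5 * (n_vals[:-1] + n_vals[1:]) * ds)  # trapezoid rule

# Toy refractivity field: exponential decay with height plus a weak horizontal gradient
n_field = lambda x, y, z: 300.0 * np.exp(-z / 7.0) * (1.0 + 0.001 * x)
tp = np.array([0.0, 0.0, 5.0])        # tangent point at 5 km height
ray = np.array([1.0, 0.0, 0.0])       # horizontal unit vector along the ray
print(excess_phase(n_field, tp, ray))
```

Because the observable averages refractivity along the ray, it is less sensitive to unresolved horizontal gradients near the tangent point than a strictly local interpretation.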

Full access
Xinrong Wu, Wei Li, Guijun Han, Shaoqing Zhang, and Xidong Wang

Abstract

While fixed covariance localization can greatly increase the reliability of the background error covariance in filtering by suppressing the long-distance spurious correlations estimated from a finite ensemble, it may degrade the assimilation quality in an ensemble Kalman filter (EnKF) by restricting longwave information. Tuning an optimal cutoff distance is usually very expensive and time consuming, especially for a general circulation model (GCM). Here the authors present an approach to compensate for this shortcoming of fixed localization. At each analysis step, after the standard EnKF is done, a multiple-scale analysis technique is used to extract longwave information from the observational residual (defined relative to the EnKF ensemble mean). The performance of the new method is examined within a biased twin-experiment framework consisting of a global barotropic spectral model and an idealized observing system. Compared to a standard EnKF, the hybrid method is superior when an overly small or overly large cutoff distance is used, and it depends less on the cutoff distance. The new scheme is also able to improve short-term weather forecasts, especially when an overly large cutoff distance is used. Sensitivity studies show that caution should be taken when the new scheme is applied to a dense observing system with an overly small cutoff distance in filtering. In addition, the new scheme has nearly the same computational cost as the standard EnKF; thus, it is particularly suitable for GCM applications.
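
As a rough illustration of fixed covariance localization in an EnKF, the sketch below tapers the ensemble-estimated covariance between each state point and one observed point with a Gaspari–Cohn function that vanishes beyond a cutoff-related distance; the toy ensemble, the function names, and the cutoff value are illustrative assumptions, and the paper's multiple-scale analysis step is not shown.

```python
import numpy as np

def gaspari_cohn(dist, cutoff):
    """Gaspari-Cohn taper: 1 at zero distance, 0 beyond twice the cutoff scale."""
    r = np.abs(dist) / cutoff
    taper = np.zeros_like(r)
    inner = r <= 1.0
    outer = (r > 1.0) & (r <= 2.0)
    taper[inner] = (-0.25 * r[inner]**5 + 0.5 * r[inner]**4 + 0.625 * r[inner]**3
                    - (5.0 / 3.0) * r[inner]**2 + 1.0)
    taper[outer] = ((1.0 / 12.0) * r[outer]**5 - 0.5 * r[outer]**4 + 0.625 * r[outer]**3
                    + (5.0 / 3.0) * r[outer]**2 - 5.0 * r[outer] + 4.0
                    - 2.0 / (3.0 * r[outer]))
    return taper

def localized_covariance(ensemble, obs_index, distances, cutoff_km):
    """Ensemble covariance between every state point and one observed point,
    tapered by distance from that observation (fixed localization)."""
    anomalies = ensemble - ensemble.mean(axis=0)          # (n_members, n_points)
    raw_cov = anomalies.T @ anomalies[:, obs_index] / (ensemble.shape[0] - 1)
    return raw_cov * gaspari_cohn(distances, cutoff_km)

# Toy example: 20 members, 100 grid points with 30-km spacing, observation at point 50
rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 100))
dist = np.abs(np.arange(100) - 50) * 30.0
print(localized_covariance(ens, 50, dist, cutoff_km=300.0)[45:55])
```

Covariances beyond the taper's support are set to zero, which removes long-range sampling noise but also discards any genuine longwave covariance, which is the trade-off the hybrid scheme is designed to ease.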

Full access
Wei Wang, Ying-Hwa Kuo, and Thomas T. Warner

Abstract

An analysis of a diabatically driven and long-lived midtropospheric vortex in the lee of the Tibetan Plateau during 24–27 June 1987 is presented. The large-scale conditions were characterized by the westward expansion of the 500-mb western Pacific subtropical high and the amplification of a trough in the lee of the plateau. Embedded within the lee trough, three mesoscale convective systems (MCSs) developed. A vortex emerged following the dissipation of one MCS, with its strongest circulation located in the 400–500-mb layer. Low-level warm advection and surface sensible and latent heating contributed to the convective initiation. Weak wind and weak ambient vorticity conditions inside the lee trough provided a favorable environment for these MCSs and the vortex to develop and evolve. The organized vortex circulation featured a coherent core of cyclonic vorticity extending from near the surface to 300 mb, with virtually no vertical tilt. The air in the vicinity of the vortex was very moist, and the temperature profile was nearly moist adiabatic, with moderate convective available potential energy. The wind near the vortex center was weak, with little vertical shear. These characteristics are similar to those of mesoscale convectively generated vortices found in the United States. The vortex circulation persisted in the same area for 3 days. The steadiness of the large-scale circulation in the region, that is, the presence of the stationary lee trough and a geopotential ridge that developed to the east of the trough, likely contributed to the persistence of the vortex over the same area.

Potential vorticity (PV) diagnosis suggests that the significant increase in the relative vorticity associated with the vortex development was largely a result of diabatic heating associated with the MCS. An elevated PV anomaly was found in situ near 400 mb after the dissipation of the MCS. The PV anomaly was distinctly separated from those associated with baroclinic disturbances located to the north of the Tibetan Plateau, and the region of the PV anomaly was nearly saturated (with relative humidity exceeding 80%). Further support for this hypothesis was provided by the estimated heating profile and the rate of PV generation due to diabatic heating. The heating peaked at 300 mb, while the diabatic generation of PV reached its maximum at 500 mb. The preexisting ambient vorticity contributed about 20% to the total PV generation near the mature stage of the MCS.
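
For reference, the diabatic source term of Ertel PV that underlies this kind of diagnosis can be written, in a commonly used approximate form with generic notation and friction neglected (a simplification rather than the paper's exact formulation), as

```latex
\frac{DP}{Dt} \;=\; \frac{1}{\rho}\,\boldsymbol{\eta}\cdot\nabla\dot{\theta}
\;\approx\; \frac{1}{\rho}\,(\zeta + f)\,\frac{\partial \dot{\theta}}{\partial z},
```

where P is the Ertel PV, the vector in the exact form is the absolute vorticity, and the dotted quantity is the diabatic heating rate. PV is thus generated below the level of maximum heating, where the heating rate increases with height, and destroyed above it, which is consistent with the heating peaking near 300 mb while the diabatic PV generation maximizes lower, near 500 mb.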

The vortex was also associated with heavy precipitation over the western Sichuan Basin of China. The persistent, heavy rainfall took place in the southeasterly flow associated with the vortex circulation, about 300 km north of the vortex center.

Full access
George Tai-Jen Chen, Chung-Chieh Wang, and David Ta-Wei Lin

Abstract

The present study investigates the characteristics of low-level jets (LLJs) (≥12.5 m s−1) below 600 hPa over northern Taiwan in the mei-yu season and their relationship to heavy rainfall events (≥50 mm in 24 h) through the use of 12-h sounding data, weather maps at 850 and 700 hPa, and hourly rainfall data at six surface stations during the period of May–June 1985–94. All LLJs are classified based on their height, appearance (single jet or double jet), and movement (migratory and nonmigratory). The frequency, vertical structure, and spatial and temporal distribution of LLJs relative to the onset of heavy precipitation are discussed.
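
As a rough illustration of the LLJ criterion stated above, the sketch below scans a single sounding for the wind maximum below 600 hPa and flags an LLJ when it reaches 12.5 m s−1; the data layout and sample values are illustrative assumptions, not the study's dataset.

```python
def find_llj(levels_hpa, wind_speed_ms, threshold_ms=12.5, top_hpa=600.0):
    """Return (level, speed) of the low-level wind maximum if it qualifies as an LLJ.

    Scans only levels below 600 hPa (i.e., pressure greater than top_hpa) and
    requires the maximum speed to reach the 12.5 m/s threshold.
    """
    low_levels = [(p, v) for p, v in zip(levels_hpa, wind_speed_ms) if p > top_hpa]
    if not low_levels:
        return None
    level, speed = max(low_levels, key=lambda pv: pv[1])
    return (level, speed) if speed >= threshold_ms else None

# Illustrative 12-h sounding: pressure (hPa) and wind speed (m/s)
pressures = [1000, 925, 850, 825, 700, 600, 500]
speeds = [6.0, 10.0, 14.5, 16.0, 12.0, 9.0, 18.0]
print(find_llj(pressures, speeds))   # -> (825, 16.0): an LLJ near 825 hPa
```

The stronger upper-level wind at 500 hPa is ignored because it lies above the 600-hPa cap, mirroring the height restriction in the LLJ definition.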

Results on the general characteristics of LLJs suggest that they occurred about 15% of the time in northern Taiwan, with a top speed below 40 m s−1. The level of maximum wind appeared mostly between 850 and 700 hPa, with highest frequency at 825–850 hPa. A single jet was observed more often (76%) than a double jet (24%), while in the latter case a barrier jet usually existed at 900–925 hPa as the lower branch.

Migratory and nonmigratory LLJs each constituted about half of all cases, and there existed no apparent relationship between their appearance and movement. Migratory LLJs tended to be larger in size, stronger over a thicker layer, more persistent, and were much more closely linked to heavy rainfall than nonmigratory jets. They often formed over southern China between 20° and 30°N and moved toward Taiwan presumably along with the mei-yu frontal system.

Before and near the onset of the more severe heavy-rain events (≥100 mm in 24 h) in northern Taiwan, there was a 94% chance that an LLJ would be present over an adjacent region at 850 hPa, and an 88% chance at 700 hPa, in agreement with earlier studies. Occurrence frequencies of LLJs for less severe events (50–100 mm in 24 h) were considerably lower, and the difference in accumulated rainfall amount was seemingly also affected by the morphology of the LLJs, including their strength, depth, elevation of maximum wind, persistence, proximity to northern Taiwan, source region of moisture, and relative timing of arrival before the rainfall. During the data period, about 40% of all migratory LLJs at 850 or 700 hPa passing over northern Taiwan were associated with heavy rainfall within the next 24 h. This figure, however, was much lower than in earlier studies, and some possible reasons are offered to account for the difference.

Full access
Tae-Kwon Wee, Ying-Hwa Kuo, Dong-Kyou Lee, Zhiquan Liu, Wei Wang, and Shu-Ya Chen

Abstract

The authors have discovered two sizable biases in the Weather Research and Forecasting (WRF) model: a negative bias in geopotential and a warm bias in temperature, appearing in both the initial condition and the forecast. The biases increase with height and thus manifest themselves in the upper part of the model domain. Both biases stem from a common root: the vertical structures of specific volume and potential temperature are convex functions. The geopotential bias is caused by the particular discrete hydrostatic equation used in WRF and is proportional to the square of the thickness of the model layers. For the vertical levels used in this study, the bias far exceeds the gross 1-day forecast bias from all other sources combined. The bias is fixed by revising the discrete hydrostatic equation. In generating the initial condition, WRF interpolates potential temperature from the grid of an external dataset to the WRF grid; in combination with the Exner function, this leads to the marked bias in temperature. The bias is removed by interpolating temperature to the WRF grid and then computing potential temperature. The bias corrections developed in this study are expected to reduce the disparity between the forecast and observations, and eventually to improve the quality of the analysis and forecast in subsequent data assimilation. The bias corrections might be especially beneficial for assimilating height-based observations (e.g., radio occultation data).
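
As a generic illustration of the mechanism behind the geopotential bias (a pressure-coordinate, midpoint-evaluation sketch, not the exact WRF discretization), consider the hydrostatic thickness of a layer bounded by pressures p_b (bottom) and p_t (top), with specific volume denoted by alpha:

```latex
\phi(p_t) - \phi(p_b) \;=\; \int_{p_t}^{p_b} \alpha(p)\,\mathrm{d}p
\;\approx\; \alpha\!\left(\tfrac{p_b + p_t}{2}\right)(p_b - p_t),
\qquad
\text{underestimate} \;\approx\; \frac{(p_b - p_t)^3}{24}\,
\frac{\partial^2 \alpha}{\partial p^2}.
```

Because the specific volume profile is convex, the midlevel value underestimates the layer mean, so each layer's thickness is underestimated by an amount cubic in the layer's pressure depth. The errors are one-signed and accumulate upward into a negative geopotential bias; since the number of layers spanning a given depth varies inversely with layer thickness, the accumulated bias scales with the square of the layer thickness, consistent with the dependence stated above.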

Full access
Christopher A. Davis, David A. Ahijevych, Wei Wang, and William C. Skamarock

Abstract

An evaluation of medium-range forecasts of tropical cyclones (TCs) is performed, covering the eastern North Pacific basin during the period 1 August–3 November 2014. Real-time forecasts from the Model for Prediction Across Scales (MPAS) and operational forecasts from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) are evaluated. A new TC-verification method is introduced that treats TC tracks as objects. The method identifies matching pairs of forecast and observed tracks, missed and false alarm tracks, and derives statistics using a multicategory contingency table methodology. The formalism includes track, intensity, and genesis.
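
As a rough illustration of the track-as-object matching idea, the sketch below pairs forecast and observed tracks by their mean separation at common times and labels the unmatched tracks as false alarms or misses; the matching rule, the distance conversion, and all names and values are illustrative assumptions rather than the paper's actual method.

```python
import numpy as np

def match_tracks(forecast_tracks, observed_tracks, max_dist_km=300.0):
    """Pair forecast and observed TC tracks by mean separation at common times.

    Each track is a dict mapping a time index to (lat, lon). Returns matched
    pairs, unmatched forecast tracks (false alarms), and unmatched observed
    tracks (misses).
    """
    def mean_separation(f, o):
        common = sorted(set(f) & set(o))
        if not common:
            return np.inf
        d = [np.hypot(f[t][0] - o[t][0], f[t][1] - o[t][1]) * 111.0 for t in common]
        return float(np.mean(d))   # crude degrees-to-km conversion, illustrative

    pairs, used_obs = [], set()
    for fi, f in enumerate(forecast_tracks):
        seps = [(mean_separation(f, o), oi) for oi, o in enumerate(observed_tracks)
                if oi not in used_obs]
        if seps:
            best_sep, best_oi = min(seps)
            if best_sep <= max_dist_km:
                pairs.append((fi, best_oi))
                used_obs.add(best_oi)
    matched_f = {p[0] for p in pairs}
    false_alarms = [fi for fi in range(len(forecast_tracks)) if fi not in matched_f]
    misses = [oi for oi in range(len(observed_tracks)) if oi not in used_obs]
    return pairs, false_alarms, misses

# Toy example with two forecast tracks and one observed track
f1 = {0: (12.0, -100.0), 1: (13.0, -102.0)}
f2 = {0: (20.0, -130.0), 1: (21.0, -131.0)}
o1 = {0: (12.5, -100.5), 1: (13.4, -102.6)}
print(match_tracks([f1, f2], [o1]))   # -> ([(0, 0)], [1], [])
```

Counts of matches, false alarms, and misses from such a pairing are what populate the contingency table from which the verification statistics are derived.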

Two configurations of MPAS, a uniform 15-km mesh and a variable-resolution mesh transitioning from 60 km globally to 15 km over the eastern Pacific, are compared with each other and with the operational GFS. The two configurations of MPAS reveal highly similar forecast skill and biases through at least day 7. This result supports the effectiveness of TC prediction using variable resolution.

Both MPAS and the GFS suffer from biases in predictions of genesis at longer time ranges; MPAS produces too many storms, whereas the GFS produces too few. MPAS discriminates hurricanes better than the GFS does, but the false alarms in MPAS lower its overall forecast skill in the medium range relative to the GFS. The biases in the MPAS forecasts are traced to errors in the parameterization of shallow convection south of the equator and the resulting erroneous invigoration of the intertropical convergence zone (ITCZ) over the eastern North Pacific.

Full access
Steven M. Cavallo, Ryan D. Torn, Chris Snyder, Christopher Davis, Wei Wang, and James Done

Abstract

Real-time analyses and forecasts using an ensemble Kalman filter (EnKF) and the Advanced Hurricane Weather Research and Forecasting Model (AHW) are evaluated for the 2009 North Atlantic hurricane season. The data assimilation system cycled observations, including conventional in situ data, tropical cyclone (TC) position and minimum sea level pressure (SLP), and synoptic dropsondes, every 6 h using a 96-member ensemble on a 36-km domain for three months. Similar to past studies, observation assimilation systematically reduces the TC position and minimum SLP errors, except for strong TCs, which are characterized by large biases due to grid resolution. At 48 different initialization times, an AHW forecast on 12-, 4-, and 1.33-km grids is produced with initial conditions drawn from a single analysis member. Whereas TC track analyses and forecasts exhibit a pronounced northward bias, intensity forecast errors are similar to (lower than) those of the NWS Hurricane Weather Research and Forecasting (HWRF) and GFDL forecasts for forecast lead times ≤60 h (>60 h), with the largest track errors associated with the weakest systems, such as Tropical Storm (TS) Erika. Several shortcomings of the data assimilation system are addressed through postseason sensitivity tests, including using the maximum 800-hPa circulation to identify the TC position during assimilation and turning off the quality control for the TC minimum SLP observation when the initial intensity is far too weak. In addition, the improved forecast of TS Erika relative to HWRF is shown to be related to having initial conditions that are more representative of a sheared TC and to not using a cumulus parameterization.
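
As a rough illustration of the postseason sensitivity test that identifies the TC position from the maximum 800-hPa circulation, the sketch below locates the center as the point maximizing neighborhood-averaged relative vorticity, a simple proxy for circulation; the grid, the analytic vortex, and all names and values are illustrative assumptions.

```python
import numpy as np

def tc_position_from_circulation(u800, v800, dx_m, radius_pts=5):
    """Locate a TC center as the point maximizing area-averaged 800-hPa vorticity.

    u800, v800: 2D wind components (m/s) on a uniform grid with spacing dx_m.
    Circulation around a box equals vorticity integrated over its area, so the
    box-averaged relative vorticity serves as a proxy for circulation.
    """
    dvdx = np.gradient(v800, dx_m, axis=1)
    dudy = np.gradient(u800, dx_m, axis=0)
    vort = dvdx - dudy

    best, best_ij = -np.inf, (0, 0)
    r = radius_pts
    for i in range(r, vort.shape[0] - r):
        for j in range(r, vort.shape[1] - r):
            avg = vort[i - r:i + r + 1, j - r:j + r + 1].mean()
            if avg > best:
                best, best_ij = avg, (i, j)
    return best_ij

# Toy cyclonic vortex centered at grid point (40, 60) on a 36-km grid
ny, nx, dx = 80, 120, 36e3
y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
r2 = ((y - 40.0)**2 + (x - 60.0)**2) * dx**2
u = -1e-4 * (y - 40.0) * dx * np.exp(-r2 / (200e3)**2)
v = 1e-4 * (x - 60.0) * dx * np.exp(-r2 / (200e3)**2)
print(tc_position_from_circulation(u, v, dx))   # -> approximately (40, 60)
```

Using an area-integrated quantity rather than, say, the minimum SLP makes the estimated center less sensitive to small-scale noise in individual ensemble members.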

Full access