Search Results

You are looking at 1 - 10 of 177 items for

  • Author or Editor: Ming Xue
  • All content
Ming Xue

Abstract

High-order numerical diffusion is commonly used in numerical models to provide scale-selective control over small-scale noise. Conventional high-order schemes have an undesirable side effect, however: they can introduce noise themselves. Two types of monotonic high-order diffusion schemes are proposed. The first applies flux correction/limiting to the corrective fluxes, defined as the difference between the fluxes of a high-order (fourth order and above) diffusion scheme and those of a lower-order (typically second order) one. The overshooting and undershooting found near sharp gradients in the solutions of high-order diffusion are prevented, while the highly scale-selective damping is retained.

The second, simpler (flux limited) scheme ensures that the diffusive fluxes are always downgradient; otherwise, the fluxes are set to zero. This much simpler scheme yields solutions as good as those of the first, more elaborate flux limiter in 1D cases and better solutions in 2D. The scheme also preserves monotonicity in the solutions and is computationally much more efficient.

The simple flux-limited fourth- and sixth-order diffusion schemes are also applied to thermal bubble convection. Overshooting and undershooting are consistently smaller when the flux-limited version of the high-order diffusion is used, regardless of whether the advection scheme is monotonic. This conclusion applies to both scalar and momentum fields. Higher-order monotonic diffusion performs better on its own, and more so when used together with monotonic advection.
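The simpler downgradient-enforcing limiter described above can be sketched in a few lines. This is an illustrative 1D reconstruction with periodic boundaries and made-up coefficients (`k4`, `dx`, `dt`), not the authors' code:

```python
import numpy as np

def limited_fourth_order_diffusion(u, k4, dx, dt):
    """One explicit step of flux-limited fourth-order diffusion on a 1D
    periodic grid.  The raw fourth-order interface flux can point
    upgradient near sharp gradients; the simple limiter zeroes any flux
    that is not strictly downgradient."""
    # Fourth-order diffusive flux at interface i+1/2: F = k4 * d3u/dx3.
    d3u = (np.roll(u, -2) - 3.0 * np.roll(u, -1) + 3.0 * u - np.roll(u, 1)) / dx**3
    flux = k4 * d3u
    # Keep a flux only if it opposes the local gradient (downgradient);
    # a second-order flux -k2*du/dx satisfies this by construction.
    grad = np.roll(u, -1) - u
    flux = np.where(flux * grad >= 0.0, 0.0, flux)
    # Conservative flux-divergence update.
    return u - dt / dx * (flux - np.roll(flux, 1))
```

Because the update is in flux form, the scheme conserves the mean of `u` exactly, and the limiter prevents new extrema from forming near a step.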

Full access
Ming Hu and Ming Xue

Abstract

Various configurations of the intermittent data assimilation procedure for Level-II Weather Surveillance Radar-1988 Doppler radar data are examined for the analysis and prediction of a tornadic thunderstorm that occurred on 8 May 2003 near Oklahoma City, Oklahoma. Several tornadoes were produced by this thunderstorm, causing extensive damage in the south Oklahoma City area. Within the rapidly cycled assimilation system, the Advanced Regional Prediction System three-dimensional variational data assimilation (ARPS 3DVAR) is employed to analyze conventional and radar radial velocity data, while the ARPS complex cloud analysis procedure is used to analyze cloud and hydrometeor fields and to adjust in-cloud temperature and moisture fields based on reflectivity observations and the preliminary analysis of the atmosphere. Forecasts of up to 2.5 h are made from the assimilated initial conditions. Two one-way nested grids at 9- and 3-km grid spacings are employed; the assimilation configuration experiments are conducted on the 3-km grid only, while the 9-km grid configuration is kept the same. Data from the Oklahoma City radar are used. Different combinations of the assimilation frequency, the in-cloud temperature adjustment scheme, and the length and coverage of the assimilation window are tested, and the results are discussed with respect to the length and evolution stage of the thunderstorm life cycle. It is found that even though the general assimilation method remains the same, the assimilation settings can significantly impact the results of the assimilation and the subsequent forecast. For this case, a 1-h-long assimilation window covering the entire initial stage of the storm, together with a 10-min spinup period before storm initiation, works best. The assimilation frequency and the in-cloud temperature adjustment scheme should be chosen carefully to add suitable amounts of potential energy during assimilation.
A high assimilation frequency does not necessarily lead to a better result, because of the significant adjustment during the initial forecast period. When a short assimilation window covering the later part of the storm's initial stage is used, a high assimilation frequency together with a temperature adjustment scheme based on latent heat release can quickly build up the storm and produce a reasonable analysis and forecast. The results also show that when the data from a single Doppler radar are assimilated with properly chosen assimilation configurations, the model is able to predict the evolution of the 8 May 2003 Oklahoma City tornadic thunderstorm well for up to 2.5 h. The implications of the choice of assimilation settings for real-time applications are discussed.
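As a generic illustration of the intermittent (cycled) assimilation procedure discussed above, the alternation of short forecasts and analyses can be sketched as follows; every callable here is a placeholder standing in for the real components (the forecast model, the 3DVAR/cloud analysis, an observation source), not part of ARPS:

```python
def cycled_assimilation(x0, analysis_times, forecast, analyze, obs_for):
    """Generic intermittent (cycled) assimilation: alternate short model
    forecasts with an analysis at each cycle time.  All callables are
    illustrative placeholders."""
    x, t_prev = x0, analysis_times[0]
    for t in analysis_times:
        x = forecast(x, t_prev, t)      # advance the model to the analysis time
        x = analyze(x, obs_for(t))      # combine background with observations
        t_prev = t
    return x                            # initial condition for the free forecast
```

With a drifting toy model and an analysis that nudges the state halfway toward the observations, each cycle pulls the state back toward the truth, which is the essence of the cycled window.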

Full access
Xiao-Ming Hu and Ming Xue

Abstract

When assessed using the difference between urban and rural air temperatures, the urban heat island (UHI) is most prominent during the nighttime. Typically, nocturnal UHI intensity is maintained throughout the night. The UHI intensity over Dallas–Fort Worth (DFW), Texas, however, experienced frequent “collapses” (sudden decreases) around midnight during August 2011, while the region was experiencing an intense heat wave. Observational and modeling studies were conducted to understand this unique phenomenon. Sea-breeze passage was found to be ultimately responsible for the collapses of the nocturnal UHI. Sea-breeze circulation developed along the coast of the Gulf of Mexico during the daytime. During the nighttime, the sea-breeze circulation was advected inland (as far as ~400 km) by the low-level jet-enhanced southerly flow, maintaining the characteristics of sea-breeze fronts, including the enhanced wind shear and vertical mixing. Ahead of the front, surface radiative cooling enhanced the near-surface temperature inversion in rural areas through the night with calm winds. During the frontal passage (around midnight at DFW), the enhanced vertical mixing at the leading edge of the fronts brought warmer air to the surface, leading to rural surface warming events. In contrast, urban effects led to a nearly neutral urban boundary layer. The enhanced mechanical mixing associated with sea-breeze fronts, therefore, did not increase urban surface temperature. The different responses to the sea-breeze frontal passages between rural (warming) and urban areas (no warming) led to the collapse of the UHI. The inland penetration of sea-breeze fronts at such large distances from the coast and their effects on UHI have not been documented in the literature.

Full access
Diandong Ren and Ming Xue

Abstract

To clarify the definition of the temperature toward which the soil skin temperature is restored, the prediction equations in the commonly used force–restore model for soil temperature are rederived from the heat conduction equation. The derivation leads to a deep-layer temperature, commonly denoted T2, that is defined as the soil temperature at depth πd plus a transient term, where d is the e-folding damping depth of soil temperature diurnal oscillations. The corresponding prediction equation for T2 has the same form as the commonly used one except for an additional term involving the lapse rate of the "seasonal mean" soil temperature and the damping depth d. A term involving the same quantities also appears in the skin temperature prediction equation, which also includes a transient term. In the literature, T2 was initially defined as the short-term (over several days) mean of the skin temperature, but in practice it is often used as the deep-layer temperature. Such inconsistent use can lead to drift in the T2 prediction over a several-day period, as is documented in this paper. When T2 is properly defined and initialized, large drift in the T2 prediction is avoided and the surface temperature prediction is usually improved. This is confirmed by four sets of experiments, one for a period in each season of 2000, that are initialized using and verified against measurements of the Oklahoma Atmospheric Surface-Layer Instrumentation System (OASIS) project.
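For context, a widely used textbook form of the force-restore equations (in the style of Noilhan and Planton) that the rederivation above refines is shown below only to fix notation; the paper's rederived equations contain the additional lapse-rate and transient terms described above. Here T_s is the skin temperature, T2 the deep-layer temperature, G the surface heat forcing, C_T a surface thermal coefficient, and tau the diurnal period:

```latex
\frac{\partial T_s}{\partial t} = C_T\,G - \frac{2\pi}{\tau}\left(T_s - T_2\right),
\qquad
\frac{\partial T_2}{\partial t} = \frac{1}{\tau}\left(T_s - T_2\right),
\qquad
d = \sqrt{\frac{2\kappa}{\omega}},
```

where κ is the soil thermal diffusivity and ω = 2π/τ is the diurnal frequency, so d is the e-folding damping depth of the diurnal temperature wave.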

Full access
Jidong Gao and Ming Xue

Abstract

A new efficient dual-resolution (DR) data assimilation algorithm is developed based on the ensemble Kalman filter (EnKF) method and tested using simulated radar radial velocity data for a supercell storm. Radar observations are assimilated on both high-resolution and lower-resolution grids using the EnKF algorithm with flow-dependent background error covariances estimated from the lower-resolution ensemble. It is shown that the flow-dependent and dynamically evolved background error covariances thus estimated are effective in producing quality analyses on the high-resolution grid.

The DR method has the advantage of significantly reducing the computational cost of the EnKF analysis. In the system, the lower-resolution ensemble provides the flow-dependent background error covariance, while the single high-resolution forecast and analysis provide the benefit of higher resolution, which is important for resolving the internal structure of thunderstorms. The relative smoothness of the covariance obtained from the lower-resolution 4-km ensemble does not appear to significantly degrade the quality of the analysis. This is because the cross covariance among different variables is of first-order importance for "retrieving" unobserved variables from the radar radial velocity data.

For the DR analysis, an ensemble size of 40 appears to be a reasonable choice with a 4-km horizontal resolution in the ensemble and a 1-km resolution in the high-resolution analysis. Several sensitivity tests show that the DR EnKF system is quite robust to different observation errors. Thinning the data to a 4-km resolution is an acceptable compromise under the constraint of real-time applications, while a data density of 8 km leads to significant degradation in the analysis.
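The mechanics of borrowing covariances from a coarse ensemble while updating a fine-grid state can be sketched for a single scalar observation; all names, shapes, and the interpolation matrix `P` are illustrative, not the paper's implementation:

```python
import numpy as np

def dual_res_enkf_update(x_hi, y, H_hi, X_lo, H_lo, obs_var, P):
    """Dual-resolution EnKF-style update for one scalar observation y.

    Background error covariances come from the low-resolution ensemble
    X_lo (n_lo x Ne); the resulting gain is interpolated by P
    (n_hi x n_lo) and applied to the single high-resolution state x_hi.
    """
    ne = X_lo.shape[1]
    Xp = X_lo - X_lo.mean(axis=1, keepdims=True)    # coarse ensemble perturbations
    hxp = H_lo @ Xp                                 # observation-space perturbations
    s = float(hxp @ hxp) / (ne - 1) + obs_var       # innovation variance HPH' + R
    k_lo = (Xp @ hxp) / (ne - 1) / s                # Kalman gain on the coarse grid
    k_hi = P @ k_lo                                 # interpolate the gain to the fine grid
    return x_hi + k_hi * (y - float(H_hi @ x_hi))   # innovation uses the fine-grid state
```

The gain at the observed location is a variance ratio in (0, 1), so the fine-grid analysis is pulled part of the way toward the observation while the spatial structure of the increment comes from the coarse ensemble.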

Full access
Brett Roberts and Ming Xue

Abstract

The idealized supercell simulations in a previous study by Roberts et al. are further analyzed to clarify the physical mechanisms leading to differences in mesocyclone intensification between an experiment with surface friction applied to the full wind (FWFRIC) and an experiment with friction applied to the environmental wind only (EnvFRIC). The low-level mesocyclone intensifies rapidly during the 3 min preceding tornadogenesis in FWFRIC, while the intensification during the same period is much weaker in EnvFRIC, which fails to produce a tornado. To quantify the mechanisms responsible for this discrepancy in mesocyclone evolution, material circuits enclosing the low-level mesocyclone are initialized and traced back in time, and circulation budgets for these circuits are analyzed. The results show that in FWFRIC, surface drag directly generates a substantial proportion of the final circulation around the mesocyclone, especially below 1 km AGL; in EnvFRIC, circulation budgets indicate the mesocyclone circulation is overwhelmingly barotropic. It is proposed that the import of near-ground, frictionally generated vorticity into the low-level mesocyclone in FWFRIC is a key factor causing the intensification and lowering of the mesocyclone toward the ground, creating a large upward vertical pressure gradient force that leads to tornadogenesis. Similar circulation analyses are also performed for circuits enclosing the tornado at its genesis stage. The frictionally generated circulation component is found to contribute more than half of the final circulation for circuits enclosing the tornado vortex below 400 m AGL, and the frictional contribution decreases monotonically with the height of the final circuit.
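The circulation budgets above rest on evaluating C = ∮ v·dl around a discretized material circuit; a minimal sketch of that line integral (midpoint rule over a closed polygon of circuit vertices, an illustration rather than the study's diagnostic code):

```python
import numpy as np

def circulation(vertices, velocity):
    """Circulation around a closed material circuit, midpoint rule.

    vertices: (N, 3) array of circuit points (the closing segment from the
    last point back to the first is implied); velocity: callable mapping a
    3D point to a 3D velocity vector.
    """
    pts = np.asarray(vertices, dtype=float)
    seg = np.roll(pts, -1, axis=0) - pts            # line elements dl
    mid = pts + 0.5 * seg                           # segment midpoints
    v = np.array([velocity(p) for p in mid])        # sampled velocities
    return float(np.einsum('ij,ij->i', v, seg).sum())
```

For solid-body rotation about the vertical axis the result approaches 2·omega times the enclosed area, i.e. vorticity times area, which is the consistency check a circulation budget relies on.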

Full access
Nathan Dahl and Ming Xue

Abstract

Prolonged heavy rainfall produced widespread flooding in the Oklahoma City area early on 14 June 2010. This event was poorly predicted by operational models; however, it was skillfully predicted by the Storm-Scale Ensemble Forecast produced by the Center for Analysis and Prediction of Storms as part of the Hazardous Weather Testbed 2010 Spring Experiment. In this study, the quantitative precipitation forecast skill of the ensemble members is assessed and ranked using a neighborhood-based threat score calculated against the stage IV precipitation data, and Oklahoma Mesonet observations are used to evaluate the forecast skill for surface conditions. Statistical correlations between skill metrics, together with qualitative comparisons of relevant features for higher- and lower-ranked members, are used to identify important processes. The results demonstrate that the development of a cold pool from previous convection and the movement and orientation of the associated outflow boundary played dominant roles in the event. Without assimilated radar data from this earlier convection, the modeled cold pool was too weak and too slow to develop. Furthermore, forecast skill was sensitive to the choice of microphysics parameterization; members that used the Thompson scheme produced initial cold pools that propagated too slowly, substantially increasing errors in the timing and placement of later precipitation. The results also suggest important roles played by finescale, transient features during the period of outflow boundary stalling and reorientation associated with the heaviest rainfall. The low likelihood that a deterministic forecast would reliably predict such features highlights the benefit of convection-allowing/convection-resolving ensemble forecast methods for events of this kind.
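A neighborhood-based threat score of the kind used for the ranking can be illustrated as follows; this is one common variant (an event verifies if a matching event occurs within a square neighborhood), not necessarily the exact metric of the study, and the periodic wrap at the edges is a simplification:

```python
import numpy as np

def neighborhood_any(mask, radius):
    """True where any True cell lies within a (2r+1)^2 square neighborhood
    (periodic wrap at the domain edges, for brevity)."""
    out = np.zeros_like(mask, dtype=bool)
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            out |= np.roll(np.roll(mask, di, axis=0), dj, axis=1)
    return out

def neighborhood_threat_score(fcst, obs, thresh, radius):
    """Threat score (CSI) with neighborhood matching: a forecast event is a
    hit if an observed event lies within `radius` cells, and vice versa."""
    fe, oe = fcst >= thresh, obs >= thresh
    hits = np.sum(fe & neighborhood_any(oe, radius))
    false_alarms = np.sum(fe & ~neighborhood_any(oe, radius))
    misses = np.sum(oe & ~neighborhood_any(fe, radius))
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan
```

A forecast displaced by one grid cell scores perfectly with a one-cell neighborhood but zero with a point-to-point comparison, which is exactly why neighborhood metrics are preferred for convection-allowing ensembles.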

Full access
Haixia Liu and Ming Xue

Abstract

The 12–13 June 2002 convective initiation case from the International H2O Project (IHOP_2002) field experiment over the central Great Plains of the United States is simulated numerically with the Advanced Regional Prediction System (ARPS) at 3-km horizontal resolution. The case involves a developing mesoscale cyclone, a dryline extending from a low center southwestward with a cold front closely behind, which intercepts the midsection of the dryline, and an outflow boundary stretching eastward from the low center resulting from earlier mesoscale convection. Convective initiation occurred in the afternoon at several locations along and near the dryline or near the outflow boundary, but was not captured by the most intensive deployment of observation instruments during the field experiment, which focused instead on the dryline–outflow boundary intersection point.

Standard and special surface and upper-air observations collected during the field experiment are assimilated into the ARPS at hourly intervals in a 6-h preforecast period in the control experiment. This experiment captured the initiation of four groups of convective cells rather well, with timing errors ranging between 10 and 100 min and location errors ranging between 5 and 60 km. The general processes of convective initiation are discussed. Interestingly, a secondary initiation of cells due to the collision between the main outflow boundary and the gust fronts developing out of model-predicted convection earlier is also captured accurately about 7 h into the prediction. The organization of cells into a squall line after 7 h is reproduced less well.

A set of sensitivity experiments is performed in which the impact of assimilating the nonstandard data gathered during IHOP_2002, as well as the length and interval of the data assimilation, is examined. Overall, the control experiment that assimilated the most data produced the best forecast, although some of the other experiments did better in some aspects, including the timing and location of the initiation of some of the cell groups. Possible reasons for the latter results are suggested. The lateral boundary locations are also found to have significant impacts on the initiation and subsequent evolution of convection, by affecting the interior flow response and/or feeding more accurate observation information in through the boundary, because available gridded analyses from a mesoscale operational model were used as the boundary condition. Another experiment examines the impact of the vertical correlation scale in the analysis scheme on the cold pool analysis and the subsequent forecast. A companion paper will analyze in more detail the process and mechanism of convective initiation, based on the results of a nested 1-km forecast.

Full access
Mingjing Tong and Ming Xue

Abstract

The ensemble Kalman filter method is applied to correct errors in five fundamental microphysical parameters that are closely involved in the definition of drop/particle size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, for a model-simulated supercell storm, using radar data. The five parameters include the intercept parameters for rain, snow, and hail/graupel and the bulk densities of hail/graupel and snow. The ensemble square root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation.

The five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. A data selection procedure based on correlation information is introduced, which, combined with variance inflation, effectively avoids the collapse of the parameter-ensemble spread and hence filter divergence. The parameter estimation results demonstrate, for the first time, that the ensemble-based method can be used to correct model errors in microphysical parameters through simultaneous state and parameter estimation using radar reflectivity observations. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is less reliable, mainly because the identifiability of the parameters becomes weaker and the problem may have no unique solution. The parameter estimation results are found to be very sensitive to the realization of the initial parameter ensemble, which is mainly related to the use of relatively small ensemble sizes. Increasing the ensemble size generally improves the parameter estimation. The quality of parameter estimation also depends on the quality of the observation data. It is also found that the results of state estimation are generally improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not very accurate.
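Simultaneous state and parameter estimation of the kind described above is commonly done by augmenting the state vector with the parameters and letting ensemble cross covariances carry observation information into the parameters. A toy scalar-observation EnSRF sketch follows; the Whitaker–Hamill square root factor and all names are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def ensrf_scalar_update(Z, hx, y, r):
    """One EnSRF analysis of an augmented ensemble Z (state rows plus
    parameter rows; shape n x Ne) with a single scalar observation y,
    observation-space ensemble hx (shape Ne), and obs error variance r."""
    ne = Z.shape[1]
    zm = Z.mean(axis=1, keepdims=True)
    Zp = Z - zm                                     # ensemble perturbations
    hxp = hx - hx.mean()
    s = float(hxp @ hxp) / (ne - 1) + r             # innovation variance HPH' + R
    k = (Zp @ hxp) / (ne - 1) / s                   # Kalman gain (augmented state)
    alpha = 1.0 / (1.0 + np.sqrt(r / s))            # square root reduction factor
    zm_a = zm[:, 0] + k * (y - hx.mean())           # mean update
    Zp_a = Zp - alpha * np.outer(k, hxp)            # perturbation update
    return zm_a[:, None] + Zp_a
```

Because the parameter is never observed directly, its update comes entirely from its sampled covariance with the observed state, which is why ensemble spread (and hence inflation) matters so much for avoiding filter divergence.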

Full access
Mingjing Tong and Ming Xue

Abstract

The possibility of estimating fundamental parameters common in single-moment ice microphysics schemes using radar observations is investigated for a model-simulated supercell storm by examining parameter sensitivity and identifiability. These parameters include the intercept parameters for rain, snow, and hail/graupel, and the bulk densities of snow and hail/graupel. These parameters are closely involved in the definition of drop/particle size distributions of microphysical species but often assume highly uncertain specified values.

The sensitivity of the model forecast within data assimilation cycles to the parameter values, and the issue of solution uniqueness of the estimation problem, are examined. The ensemble square root filter (EnSRF) is employed for model state estimation. Sensitivity experiments show that errors in the microphysical parameters have a larger impact on model microphysical fields than on wind fields; radar reflectivity observations are therefore preferred over those of radial velocity for microphysical parameter estimation. The model response time to errors in individual parameters is also investigated. The results suggest that radar data should be used at about 5-min intervals for parameter estimation. The response functions calculated from ensemble mean forecasts for all five individual parameters show concave shapes, with unique minima occurring at or very close to the true values; the true values of these parameters can therefore be retrieved, at least in cases where only one parameter contains error.

The identifiability of multiple parameters together is evaluated from their correlations with forecast reflectivity. Significant levels of correlation are found that can be interpreted physically. As the number of uncertain parameters increases, both the level and the area coverage of significant correlations decrease, implying increased difficulties with multiple-parameter estimation. The details of the estimation procedure and the results of a complete set of estimation experiments are presented in Part II of this paper.
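The concave response functions described above can be illustrated with a toy forward operator: scan trial values of a single parameter and measure the misfit of the resulting forecast against the observations. All names here are hypothetical, and the linear toy model stands in for the full forecast model:

```python
import numpy as np

def response_function(param_values, forecast, obs):
    """J(p): mean squared misfit between the forecast run with trial
    parameter value p and the (simulated) observations."""
    return np.array([np.mean((forecast(p) - obs) ** 2) for p in param_values])
```

When only one parameter is in error, J(p) is bowl shaped with its minimum at the true value, which is what makes single-parameter retrieval well posed; with several uncertain parameters the minimum can become shallow or non-unique.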

Full access