Search Results

You are looking at 1–10 of 24 items for

  • Author or Editor: Sheng Chen
  • All content
Yuxiao Chen, Jing Chen, Dehui Chen, Zhizhen Xu, Jie Sheng, and Fajing Chen

Abstract

The simulated radar reflectivity used by current mesoscale numerical weather prediction models reflects grid-scale precipitation but not the subgrid-scale precipitation generated by a cumulus parameterization scheme. To solve this problem, this study developed a new simulated radar reflectivity calculation method that produces reflectivity corresponding to both subgrid-scale and grid-scale precipitation, based on the mesoscale Global/Regional Assimilation and Prediction System (GRAPES) model of the China Meteorological Administration. Using this new method, two 15-day forecast experiments were carried out for two different time periods (11–25 April 2019 and 1–15 August 2019), and the radar reflectivity products obtained with the new and previous methods were compared. The results show that the radar reflectivity obtained with the new calculation method gives a clear indication of the subgrid-scale precipitation in the model. Verification results show that the threat scores of the improved experiments are generally better than those of the control experiments and that the reliability of the simulated radar reflectivity as an indicator of precipitation is improved. It is concluded that the new simulated radar reflectivity calculation method is effective and significantly improves the reflectivity products. This method has good prospects for providing more information about precipitation and convective activity in operational forecast models.
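
For readers unfamiliar with how a reflectivity field can be diagnosed from model precipitation, the sketch below shows one simple possibility: converting grid-scale and subgrid-scale (convective) rain rates to reflectivity with a Marshall–Palmer-type Z–R relation and summing the contributions before converting to dBZ. The function names, coefficients, and combination rule are illustrative assumptions, not the GRAPES formulation described in the abstract, which works from the model's hydrometeor and convection fields.

```python
import numpy as np

# Illustrative only: combine grid-scale and subgrid (convective) rain rates into a
# single reflectivity field using a Marshall-Palmer-type Z-R relation, Z = a * R**b.

def rain_rate_to_z(rain_rate_mm_h, a=200.0, b=1.6):
    """Rain rate (mm/h) -> reflectivity factor Z (mm^6 m^-3)."""
    return a * np.power(np.maximum(rain_rate_mm_h, 0.0), b)

def combined_dbz(grid_scale_rr, subgrid_rr):
    """Sum grid-scale and subgrid contributions in Z, then convert to dBZ."""
    z_total = rain_rate_to_z(grid_scale_rr) + rain_rate_to_z(subgrid_rr)
    return 10.0 * np.log10(np.maximum(z_total, 1e-3))  # floor (~ -30 dBZ) avoids log(0)
```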

Open access
Yin-Sheng Chen, Martin Ehrendorfer, and Allan H. Murphy

Abstract

This paper investigates the relationship between the quality and value of forecasts in the context of a generalized N-action, N-event model of the cost-loss ratio situation. The forecasts of interest are imperfect categorical forecasts, calibrated according to past performance and represented by multidimensional sets of conditional and predictive probabilities. Forecast quality is measured by the ranked probability score (RPS), a natural measure of the accuracy of forecasts in the context of this model. The measure of value is the difference between the expected expense associated with climatological information and the expected expense associated with imperfect forecasts. Thus, climatological and perfect information define lower and upper bounds, respectively, on the quality and value of the imperfect forecasts.
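
For reference, a standard (unnormalized) form of the ranked probability score for an N-event categorical forecast, together with the definition of value described above, can be written as follows; the paper may use scaled or otherwise modified variants.

\[
\mathrm{RPS} = \sum_{n=1}^{N} \Bigl( \sum_{k=1}^{n} p_k - \sum_{k=1}^{n} o_k \Bigr)^{2},
\qquad
V = E_{\text{clim}} - E_{\text{forecast}},
\]

where p_k is the forecast probability of event k, o_k equals 1 if event k occurs and 0 otherwise, and E denotes the expected expense under the corresponding source of information.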

Quality-value relationships are explored in the three-action, three-event situation, using brute-force and mathematical programming methods. Numerical results are presented for several specific cases. In all cases, the relationships are described by envelopes of values rather than by single-valued functions, indicating that a range of forecast value is generally associated with a given level of forecast quality (and vice versa). The existence of these envelopes reveals two important deficiencies in scalar (i.e., one-dimensional) measures of forecast quality, such as the RPS, when they are used as surrogates for measures of value: 1) these quality measures generally provide only imprecise estimates of forecast value, and 2) increases in forecast quality, as reflected by such measures, may actually be associated with decreases in forecast value.

Full access
Allan H. Murphy, Yin-Sheng Chen, and Robert T. Clemen

Abstract

In this paper we investigate the interrelationships between objective and subjective temperature forecasts. An information-content approach is adopted within the overall context of a general framework for forecast verification. This approach can be used to address questions such as whether the subjective forecasts contain information regarding the corresponding observed temperatures that is not included in the objective forecasts. Two methods of analysis are employed: 1) ordinary least squares regression analysis and 2) a Bayesian information-content analysis.

Maximum and minimum temperature forecasts formulated operationally for six National Weather Service offices during the period 1980–86 are analyzed. Results produced by the two methods are quite consistent and can be summarized as follows: 1) the subjective forecasts contain information not included in the objective forecasts for all cases (i.e., stratifications) considered and 2) the objective forecasts contain information not included in the subjective forecasts in a substantial majority of these cases. Generally, the incremental information content in the subjective forecasts considerably exceeds the incremental information content in the objective forecasts. The implications of these results for operational short-range temperature forecasting are briefly discussed.
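
The incremental-information question can be illustrated with a simple least-squares calculation (a sketch of the idea only; it is not the Bayesian analysis used in the paper, and the helper name is hypothetical): regress the observed temperatures on the objective forecasts alone, then on both forecast types, and compare the explained variance.

```python
import numpy as np

def incremental_r2(obs, objective_fc, subjective_fc):
    """Extra variance in the observations explained when the subjective forecast
    is added to an OLS model already containing the objective forecast.
    A positive value suggests incremental information content."""
    obs = np.asarray(obs, dtype=float)
    n = obs.size
    X1 = np.column_stack([np.ones(n), objective_fc])
    X2 = np.column_stack([np.ones(n), objective_fc, subjective_fc])

    def r2(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

    return r2(X2, obs) - r2(X1, obs)
```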

Full access
Sheng Chen, Fangli Qiao, Wenzheng Jiang, Jingsong Guo, and Dejun Dai

Abstract

The impact of ocean surface waves on wind stress at the air–sea interface under low to moderate wind conditions was systematically investigated based on a simple constant flux model and flux measurements obtained from two coastal towers in the East China Sea and South China Sea. It is first revealed that the swell-induced perturbations can reach a height of nearly 30 m above the mean sea surface, and these perturbations disturb the overlying airflow under low wind and strong swell conditions. The wind profiles severely depart from the classical logarithmic profiles, and the deviations increase with the peak wave phase speeds. At wind speeds of less than 4 m s−1, an upward momentum transfer from the wave to the atmosphere is predicted, which is consistent with previous studies. A comparison between the observations and model indicates that the wind stress calculated by the model is largely consistent with the observational wind stress when considering the effects of surface waves, which provides a solution for accurately calculating wind stress in ocean and climate models. Furthermore, the surface waves at the air–sea interface invalidate the traditional Monin–Obukhov similarity theory (MOST), and this invalidity decreases as observational height increases.
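
The "classical logarithmic profile" and the associated wind stress referred to above take, under neutral stratification, the standard forms

\[
U(z) = \frac{u_*}{\kappa} \ln\!\left(\frac{z}{z_0}\right),
\qquad
\tau = \rho_a u_*^{2},
\]

where u_* is the friction velocity, κ ≈ 0.4 the von Kármán constant, z_0 the roughness length, and ρ_a the air density; the swell-induced deviations documented in the paper are departures from these forms (MOST stability corrections are omitted here for brevity).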

Open access
Qiu-shi Chen, Le-sheng Bai, and David H. Bromwich

Abstract

In comparison to Tatsumi's spectral method, the harmonic-Fourier spectral method has two major advantages. 1) The semi-implicit scheme is quite efficient because the solutions of the Poisson and Helmholtz equations are readily derived. 2) The lateral boundary value problem of a limited-area model is easily solved. These advantages are the same as those of the spherical harmonics used in global models if the singularity at the pole points for a globe is considered to be the counterpart of the lateral boundary condition for a limited region.

If a limited-area model is nested in a global model, the prediction of the limited-area model at each time step is the sum of the inner part and the harmonic part predictions. The inner part prediction is solved by the double sine series from the inner part equations for the limited-area model. The harmonic part prediction is derived from the prediction of the global model. An external wind lateral boundary method is proposed based on the basic property of the wind separation in a limited region. The boundary values of a limited-area model in this method are not given at the closed boundary line but are always given by harmonic functions defined throughout the limited domain. The harmonic functions added to the inner parts at each time step represent the effects of the lateral boundary values on the prediction of the limited-area model, and they do not cause any discontinuity near the boundary.
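
The separation described above can be summarized schematically as follows (the paper's formulation of course includes the full prognostic equations): a variable F over the limited domain Ω is written as

\[
F = F_h + F_i,
\qquad
\nabla^2 F_h = 0, \quad F_h\big|_{\partial\Omega} = F\big|_{\partial\Omega},
\qquad
F_i\big|_{\partial\Omega} = 0,
\]

so the inner part F_i vanishes on the boundary and can be expanded in a double sine series, while the harmonic part F_h carries the lateral boundary information supplied by the global model.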

Tests show that predicted motion systems move smoothly in and out through the boundary, where the predicted variables remain very smooth without any additional boundary treatment. In addition, the boundary method can also be used even in the most complicated mountainous regions, where the boundary intersects high mountains. The tests also show that the adiabatic dynamical part of the limited-area model very accurately predicts the rapid development of a cyclone caused by dry baroclinic instability along the east coast of North America and a lee cyclogenesis case in East Asia. The predicted changes in intensity and location of both cyclones are close to those given by the observations.

Full access
Shou-Jun Chen, Le-Sheng Bai, and Ernest C. Kung

Abstract

To explicitly describe the energy exchange between mesoscale and synoptic-scale motions, a diagnostic scheme of kinetic energy has been developed. By using a horizontal filtering technique, meteorological variables are separated into synoptic-scale and mesoscale components. A set of budget equations is derived for the kinetic energy K̂ of the synoptic-scale motion V̂, the kinetic energy K′ of the mesoscale motion V′, and the scalar product V̂·V′.
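
The role of the scalar product follows from the pointwise decomposition of the kinetic energy of the filtered wind (a schematic identity, with K̂ and K′ as in the abstract):

\[
\tfrac{1}{2}\,\lvert \hat{\mathbf{V}} + \mathbf{V}' \rvert^{2}
= \underbrace{\tfrac{1}{2}\lvert \hat{\mathbf{V}} \rvert^{2}}_{\hat{K}}
+ \hat{\mathbf{V}} \cdot \mathbf{V}'
+ \underbrace{\tfrac{1}{2}\lvert \mathbf{V}' \rvert^{2}}_{K'},
\]

so budget equations for K̂, K′, and V̂·V′ together close the budget of the total kinetic energy.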

The scheme is applied to diagnose a severe rainstorm case over northern China during summer. The results show that the scale interactions between the wind and height fields produce V̂·V′, which transfers kinetic energy to K̂ and K′. The term V̂·V′ thus acts as a medium in the scale interactions, conveying energy between the meso- and synoptic-scale motions and the potential energy source residing in the mass field.

Full access
Shou-Jun Chen, Le-sheng Bai, and Stanley L. Barnes

Abstract

A real-time quasi-geostrophic omega diagnostic scheme, based on Hoskins' Q-vector analysis and developed by Barnes, was applied to a cold mesoscale vortex with severe convection over northeast China in summer. The limited-area model used at the Beijing Weather Center did not predict this event because the baroclinic forcing was rather weak, but the Q-vector analysis clearly indicated the forcing 12 h in advance. In addition to Barnes' diagnostics, we estimate the low-level divergence tendency through computation of the rotational component of the Q vector. Combined with the diagnosed stability tendency, moisture analysis, and low-level wind convergence zone, the convective area can be identified. This microcomputer diagnostic-graphics scheme, when coupled with intelligent use of conventional data, has potential as an aid for local short-range weather forecasting.
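
For reference, the Q-vector form of the quasi-geostrophic omega equation underlying such diagnostics can be written, in a common textbook convention (constants and sign conventions vary, and the β term is neglected here), as

\[
\sigma \nabla_p^{2} \omega + f_0^{2} \frac{\partial^{2} \omega}{\partial p^{2}}
= -2\, \nabla_p \cdot \mathbf{Q},
\qquad
\mathbf{Q} = -\frac{R}{p}
\left( \frac{\partial \mathbf{V}_g}{\partial x} \cdot \nabla_p T,\;
       \frac{\partial \mathbf{V}_g}{\partial y} \cdot \nabla_p T \right),
\]

so regions of Q-vector convergence (∇·Q < 0) indicate quasigeostrophic forcing for ascent.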

Full access
Allan H. Murphy, Barbara G. Brown, and Yin-Sheng Chen

Abstract

A diagnostic approach to forecast verification is described and illustrated. This approach is based on a general framework for forecast verification. It is “diagnostic” in the sense that it focuses on the fundamental characteristics of the forecasts, the corresponding observations, and their relationship.

Three classes of diagnostic verification methods are identified: 1) the joint distribution of forecasts and observations and conditional and marginal distributions associated with factorizations of this joint distribution; 2) summary measures of these joint, conditional, and marginal distributions; and 3) performance measures and their decompositions. Linear regression models that can be used to describe the relationship between forecasts and observations are also presented. Graphical displays are advanced as a means of enhancing the utility of this body of diagnostic verification methodology.
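
The factorizations referred to above are the standard calibration-refinement and likelihood-base rate factorizations of the joint distribution of forecasts f and observations x:

\[
p(f, x) = p(x \mid f)\, p(f) = p(f \mid x)\, p(x),
\]

where p(x | f) characterizes calibration, p(f) the refinement (frequency of use) of the forecasts, p(f | x) the discrimination, and p(x) the climatological base rate.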

A sample of National Weather Service maximum temperature forecasts (and observations) for Minneapolis, Minnesota, is analyzed to illustrate the use of this methodology. Graphical displays of the basic distributions and various summary measures are employed to obtain insights into distributional characteristics such as central tendency, variability, and asymmetry. The displays also facilitate the comparison of these characteristics among distributions: for example, between distributions involving forecasts and observations, among distributions involving different types of forecasts, and among distributions involving forecasts for different seasons or lead times. Performance measures and their decompositions are shown to provide quantitative information regarding basic dimensions of forecast quality such as bias, accuracy, calibration (or reliability), discrimination, and skill. Information regarding both distributional and performance characteristics is needed by modelers and forecasters concerned with improving forecast quality. Some implications of these diagnostic methods for verification procedures and practices are discussed.

Full access
Yu-Kun Hou, Hua Chen, Chong-Yu Xu, Jie Chen, and Sheng-Lian Guo

Abstract

Statistical downscaling is useful for managing scale and resolution problems in outputs from global climate models (GCMs) for climate change impact studies. To improve downscaling of precipitation occurrence, this study proposes a revised regression-based statistical downscaling method that couples a support vector classifier (SVC) and first-order two-state Markov chain to generate the occurrence and a support vector regression (SVR) to simulate the amount. The proposed method is compared to the Statistical Downscaling Model (SDSM) for reproducing the temporal and quantitative distribution of observed precipitation using 10 meteorological indicators. Two types of calibration and validation methods were compared. The first method used sequential split sampling of calibration and validation periods, while the second used odd years for calibration and even years for validation. The proposed coupled approach outperformed the other methods in downscaling daily precipitation in all study periods using both calibration methods. Using odd years for calibration and even years for validation can reduce the influence of possible climate change–induced nonstationary data series. The study shows that it is necessary to combine different types of precipitation state classifiers with a method of regression or distribution to improve the performance of traditional statistical downscaling. These methods were applied to simulate future precipitation change from 2031 to 2100 with the CMIP5 climate variables. The results indicated increasing tendencies in both mean and maximum future precipitation predicted using all the downscaling methods evaluated. However, the proposed method is an at-site statistical downscaling method, and therefore this method will need to be modified for extension into a multisite domain.
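
As an illustration of the occurrence/amount coupling described above (a minimal sketch using generic scikit-learn estimators, a hypothetical wet-day threshold, and a simplified Markov coupling, not the authors' exact formulation), first-order Markov dependence can be introduced by feeding the previous day's wet/dry state to the classifier, while the regression model is fitted to wet days only:

```python
import numpy as np
from sklearn.svm import SVC, SVR

def fit_downscaling(predictors, precip, wet_threshold=0.1):
    """Fit an occurrence classifier and an amount regressor (illustrative sketch).

    predictors: (n_days, n_indicators) array of large-scale predictor variables
    precip:     (n_days,) array of observed daily precipitation (mm)
    wet_threshold: hypothetical wet-day threshold (mm)
    """
    predictors = np.asarray(predictors, dtype=float)
    precip = np.asarray(precip, dtype=float)

    wet = (precip >= wet_threshold).astype(int)
    prev_wet = np.roll(wet, 1)   # yesterday's state -> first-order Markov dependence
    prev_wet[0] = 0

    occurrence_model = SVC().fit(np.column_stack([predictors, prev_wet]), wet)
    amount_model = SVR().fit(predictors[wet == 1], precip[wet == 1])
    return occurrence_model, amount_model
```

At prediction time the classifier would be applied day by day, carrying the simulated wet/dry state forward, and the regressor would supply amounts on days classified as wet.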

Full access
Jiepeng Chen, Xin Wang, Wen Zhou, Chunzai Wang, Qiang Xie, Gang Li, and Sheng Chen

Abstract

Previous research has suggested that the anomalous western North Pacific anticyclone (WNPAC) can generally persist from the mature winter of an El Niño to the subsequent summer, significantly influencing precipitation over southern China (which here includes the Yangtze River valley and South China). Since the late 1970s, three extreme El Niño events have been recorded: 1982/83, 1997/98, and 2015/16. The change in southern China rainfall and the corresponding atmospheric circulation during the August of the decaying year contrasted sharply between the 2015/16 extreme El Niño event and the earlier two extreme events. Enhanced rainfall in the middle and upper reaches of the Yangtze River and suppressed rainfall over South China resulted from basinwide warming of the tropical Indian Ocean induced by the extreme El Niño in August 1983 and 1998, consistent with previous studies. However, an anomalous western North Pacific cyclone emerged in August 2016 and caused positive rainfall anomalies over South China and negative rainfall anomalies from the Yangtze River to the middle and lower reaches of the Yellow River. Without considering the effect of the long-term global warming trend, in August 2016 the negative SST anomalies over the western Indian Ocean and cooling in the north tropical Atlantic contributed to the anomalous western North Pacific cyclone and to a rainfall anomaly pattern with opposite anomalies over South China and the Yangtze River region. Numerical experiments with the CAM5 model confirm that the cooler SST in the western Indian Ocean contributed more than the cooler SST in the north tropical Atlantic to the anomalous western North Pacific cyclone and the anomalous South China rainfall.

Full access