Search Results

Showing 1–10 of 143 items for

  • Author or Editor: S. Zhang
  • All content
S. Zhang

Abstract

A skillful decadal prediction that foretells varying regional climate conditions over seasonal–interannual to multidecadal time scales is of societal significance. However, predictions initialized from the climate-observing system tend to drift away from observed states toward the imperfect model climate because of model biases arising from imperfect model equations, numerical schemes, and physical parameterizations, as well as errors in the values of model parameters. Here, a simple coupled model that simulates the fundamental features of the real climate system and a “twin” experiment framework are designed to study the impact of initialization and parameter optimization on decadal predictions. One model simulation is treated as “truth” and sampled to produce “observations” that are assimilated into other simulations to produce observation-estimated states and parameters. The degree to which model forecasts based on different estimates recover the truth is an assessment of the impact of initial coupling shocks and parameter optimization on the climate predictions of interest. The results show that coupled model initialization through coupled data assimilation, in which all coupled model components are coherently adjusted by observations, minimizes initial coupling shocks and thereby reduces forecast errors on seasonal–interannual time scales. Model parameter optimization with observations effectively mitigates the model bias, thus constraining the model drift in long-time-scale predictions. The coupled model state–parameter optimization greatly enhances the model predictability. While the length of valid “atmospheric” forecasts is extended fivefold, the decadal predictability of the “deep ocean” is almost doubled. The coherence of optimized model parameters and states is critical for improving long-time-scale predictions.
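The twin-experiment logic described above can be illustrated with a toy scalar model (a hypothetical sketch; the model, nudging gains, and parameter values are illustrative, not those of the study). A "truth" run is observed with noise, and a biased model assimilates those observations while also nudging its parameter along the model's sensitivity to it:

```python
import numpy as np

# Minimal "twin" experiment sketch (toy model, not the paper's coupled model).
# Truth: x_{t+1} = x_t + dt*(-a_true*x_t + f). The assimilating model uses a
# biased parameter a_bias; observations of the truth constrain the state, and
# the parameter is optimized by nudging it along the innovation sensitivity.

rng = np.random.default_rng(0)
dt, f = 0.1, 1.0
a_true, a_bias = 0.5, 0.8           # biased damping parameter -> model drift

def step(x, a):
    return x + dt * (-a * x + f)

x_truth = 1.0
x_state_param, a_est = 1.0, a_bias  # state + parameter optimization

for t in range(2000):
    x_truth = step(x_truth, a_true)
    obs = x_truth + 0.01 * rng.standard_normal()

    x_state_param = step(x_state_param, a_est)
    innov = obs - x_state_param
    # parameter update: d(step)/da = -dt*x, so move a along that sensitivity
    a_est += 0.1 * innov * (-dt * x_state_param)
    # state update: simple nudging toward the observation
    x_state_param += 0.3 * innov

print(f"a_true={a_true}, a_bias={a_bias}, a_estimated={a_est:.3f}")
```

With the biased parameter alone, the model equilibrium (f/a) is wrong and forecasts drift; the parameter nudge recovers a value near a_true, which is the sense in which coherent state–parameter estimation constrains drift.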

Full access
S. Zhang and A. Rosati

Abstract

A “biased twin” experiment using two coupled general circulation models (CGCMs) that are biased with respect to each other is used to study the impact of deep-ocean bias on ensemble ocean data assimilation. “Observations” drawn from one CGCM at Argo network locations are assimilated into the other. Traditional ensemble filtering can successfully recover the upper-ocean temperature and salinity of the target model, but it usually fails to converge in the deep ocean, where the model bias is large compared to the ocean’s intrinsic variability. The inconsistency between the well-constrained upper ocean and poorly constrained deep ocean generates spurious assimilation currents. An adaptively inflated ensemble filter is designed to enhance the consistency of upper- and deep-ocean adjustments, based on “climatological” standard deviations that are adaptively updated by observations. The new algorithm greatly reduces deep-ocean errors, in particular reducing current errors by up to 70% and vertical motion errors by up to 50%. Specifically, the tropical circulation is greatly improved, with a better representation of the undercurrent, upwelling, and western boundary current systems. The structure of the subtropical gyre is also substantially improved. Consequently, the new algorithm leads to better estimates of important global hydrographic features such as global overturning and pycnocline depth. Based on these improved estimates, decadal trends of basin-scale heat content and salinity, as well as the seasonal–interannual variability of the tropical ocean, are constructed coherently. Interestingly, the Indian Ocean (especially the north Indian Ocean), which is associated with stronger atmospheric feedbacks, is the basin most sensitive to the covariance formulation used in the assimilation.
Also, while reconstruction of the local thermohaline structure plays a leading-order role in estimating the decadal trend of the Atlantic meridional overturning circulation (AMOC), more accurate estimates of the AMOC variability require coupled assimilation to produce coherently improved external forcings as well as internal heat and salt transport.
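A minimal sketch of the adaptive-inflation idea described above, assuming a simple scalar ensemble (the update rule and relaxation factor `alpha` are illustrative assumptions, not the algorithm's published form). Where bias has collapsed the ensemble spread, members are inflated toward a "climatological" standard deviation that is itself relaxed toward the observed innovation magnitude:

```python
import numpy as np

# Sketch of adaptively inflated ensemble adjustment (illustrative, not the
# operational implementation). An under-dispersed ensemble is inflated toward
# a "climatological" std that observations adaptively update.

rng = np.random.default_rng(1)

def adaptive_inflate(ens, sigma_clim, innov, alpha=0.05):
    """Inflate ensemble anomalies toward sigma_clim; update sigma_clim."""
    mean, spread = ens.mean(), ens.std()
    # adaptively update the climatological std with the observed innovation
    sigma_clim = (1 - alpha) * sigma_clim + alpha * abs(innov)
    if spread < sigma_clim:                       # under-dispersed ensemble
        ens = mean + (ens - mean) * (sigma_clim / max(spread, 1e-12))
    return ens, sigma_clim

ens = 0.1 * rng.standard_normal(20)               # collapsed spread (~0.1)
ens, sigma = adaptive_inflate(ens, sigma_clim=0.5, innov=1.2)
print(ens.std(), sigma)
```

The inflated spread lets deep-ocean observations retain influence in the filter instead of being rejected by an artificially confident prior.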

Full access
H. Zhang and C. S. Frederiksen

Abstract

Using a version of the Australian Bureau of Meteorology Research Centre (BMRC) atmospheric general circulation model, this study investigates the model's sensitivity to different soil moisture initial conditions in its dynamically extended seasonal forecasts of June–August 1998 climate anomalies, with a focus on the south and northeast China regions where severe floods occurred. The authors' primary aim is to understand the model's responses to different soil moisture initial conditions in terms of the physical and dynamical processes involved. Due to a lack of observed global soil moisture data, the efficacy of using soil moisture anomalies derived from the NCEP–NCAR reanalysis is assessed. Results show that imposing soil moisture percentile anomalies derived from the reanalysis data into the BMRC model initial condition modulates the regional features of the model's simulation of seasonal precipitation and temperature anomalies. Further analyses reveal that the impacts of soil moisture conditions on the model's surface temperature forecasts arise mainly from localized interactions between the land surface and the overlying atmosphere. In contrast, the model's sensitivity in its forecasts of rainfall anomalies is mainly due to the nonlocal impacts of the soil moisture conditions. Over the monsoon-dominated east Asian region, the contribution from local water recycling, through surface evaporation, to the model simulation of precipitation is limited. Rather, it is the horizontal moisture transport by the regional atmospheric circulation that is the dominant factor in controlling the model rainfall. The influence of different soil moisture conditions on the model forecasts of rainfall anomalies is the result of the response of the regional circulation to the anomalous soil moisture condition imposed.
Results from the BMRC model sensitivity study support similar findings from other model studies that have appeared in recent years and emphasize the importance of improving the land surface data assimilation and soil hydrological processes in dynamically extended GCM seasonal forecasts.

Full access
Jian Zhang and S. Trivikrama Rao

Abstract

Aircraft measurements taken during the North American Research Strategy for Tropospheric Ozone-Northeast field study reveal the presence of ozone concentration levels in excess of 80 ppb on a regional scale in the nocturnal residual layer during ozone episodes. The air mass containing increased concentrations of ozone is commonly found on a horizontal spatial scale of about 600 km over the eastern United States. The diurnal variation in ozone concentrations at different altitudes, ozone flux measurements, and vertical profiles of ozone suggest that ozone and its precursors trapped aloft in the nocturnal residual layer can influence the ground-level ozone concentrations on the following day as the surface-based inversion starts to break up. A simple one-dimensional model, treating both meteorological and chemical processes, has been applied to investigate the relative contributions of vertical mixing and photochemical reactions to the temporal evolution of the ground-level ozone concentration during the daytime. The results demonstrate that the vertical mixing process contributes significantly to the ozone buildup at ground level in the morning as the mixing layer starts to grow rapidly. When the top of the mixing layer reaches the ozone-rich layer aloft, high ozone concentrations are brought down into the mixing layer, rapidly increasing the ground-level ozone concentration because of fumigation. As the mixing layer grows further, it contributes to dilution while the chemical processes continue to contribute to ozone production. Model simulations were also performed for an urban site with varying reductions in ground-level emissions, as well as a 50% reduction in the concentrations of ozone and its precursors aloft. The results reveal that a greater reduction in the ground-level ozone concentration can be achieved by decreasing the concentrations of ozone and precursors aloft than by reducing local emissions.
Given the regional extent of the polluted dome aloft during a typical ozone episode in the northeastern United States, these results demonstrate the necessity and importance of implementing emission reduction strategies on the regional scale; such regionwide emission controls would effectively reduce the long-range transport of pollutants in the Northeast.
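The mixing-layer fumigation mechanism described above can be sketched with a single-column toy budget (all numbers are illustrative, not the study's model configuration). As the mixed layer grows into the ozone-rich residual layer, entrained air raises the surface concentration while a constant photochemical production term acts alongside:

```python
# Toy mixed-layer "fumigation" budget (illustrative numbers only).
# Entrainment mixes residual-layer ozone into a growing mixed layer.

o3_aloft = 80.0        # ppb, residual-layer ozone trapped aloft
o3 = 30.0              # ppb, surface ozone in the morning
h = 200.0              # m, initial mixing-layer depth
growth = 300.0         # m of mixed-layer growth per hour
prod = 4.0             # ppb per hour of photochemical production

for hour in range(6):
    dh = growth
    # entrainment: newly incorporated air at o3_aloft is mixed into the column
    o3 = (o3 * h + o3_aloft * dh) / (h + dh) + prod
    h += dh

print(f"surface ozone after 6 h of mixed-layer growth: {o3:.1f} ppb")
```

Early on, entrainment dominates the surface buildup; once the mixed-layer concentration approaches the value aloft, further growth dilutes, and continued production pushes the final value above the 80-ppb reservoir.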

Full access
Yongqiang Zhang, Francis H. S. Chiew, Lu Zhang, and Hongxia Li

Abstract

This paper explores the use of the Moderate Resolution Imaging Spectroradiometer (MODIS), mounted on the polar-orbiting Terra satellite, to determine leaf area index (LAI), and the use of actual evapotranspiration, estimated from MODIS LAI data combined with the Penman–Monteith equation [remote sensing evapotranspiration (E_RS)], in a lumped conceptual daily rainfall–runoff model. The model is SIMHYD, a simplified version of the HYDROLOG model, which is used to estimate runoff in ungauged catchments. Two applications were explored: (i) the calibration of SIMHYD against both the observed streamflow and E_RS, and (ii) the modification of SIMHYD to use MODIS LAI data directly. Data from 2001 to 2005 from 120 catchments in southeast Australia were used for the study. To assess the modeling results for ungauged catchments, optimized parameter values from the geographically nearest gauged catchment were used to model runoff in the ungauged catchment. The results indicate that the SIMHYD calibration against both the observed streamflow and E_RS produced better simulations of daily and monthly runoff in ungauged catchments than the SIMHYD calibration against only the observed streamflow data, despite the modeling results being assessed solely against the observed streamflow data. The runoff simulations were even better for the modified SIMHYD model that used the MODIS LAI directly. It is likely that the use of other remotely sensed data (such as soil moisture) and smarter modification of rainfall–runoff models to use remotely sensed data directly can further improve the prediction of runoff in ungauged catchments.
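The Penman–Monteith combination equation underlying the remote sensing evapotranspiration estimate can be sketched as follows, with a bulk surface resistance scaled by LAI (a common simplification; the constants and the resistance formulation are assumptions, not the paper's exact configuration):

```python
import math

# Sketch of the Penman-Monteith combination equation with an LAI-scaled
# surface resistance (illustrative constants; not the study's exact setup).
# Fluxes in W m^-2, vapor pressures in kPa.

def penman_monteith(Rn, G, T, vpd, ra, lai, rs_min=100.0):
    """Latent heat flux (W m^-2) from the Penman-Monteith equation."""
    cp, rho = 1013.0, 1.2           # J kg^-1 K^-1, kg m^-3 (moist air)
    gamma = 0.066                   # kPa K^-1, psychrometric constant
    # slope of the saturation vapor pressure curve (kPa K^-1), Tetens form
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))
    delta = 4098.0 * es / (T + 237.3) ** 2
    rs = rs_min / max(lai, 0.1)     # bulk surface resistance shrinks as LAI grows
    return (delta * (Rn - G) + rho * cp * vpd / ra) / (delta + gamma * (1 + rs / ra))

# a denser canopy (higher MODIS LAI) lowers rs and raises evapotranspiration
et_dense = penman_monteith(Rn=400, G=40, T=20, vpd=1.0, ra=50, lai=4.0)
et_sparse = penman_monteith(Rn=400, G=40, T=20, vpd=1.0, ra=50, lai=1.0)
print(et_dense, et_sparse)
```

This is the sense in which MODIS LAI enters the E_RS estimate: it modulates the canopy resistance term, so a seasonal LAI signal translates into a seasonal evapotranspiration signal.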

Full access
Lesya Borowska, Guifu Zhang, and Dusan S. Zrnić

Abstract

Sampling spectral moments in azimuth at spacings of less than a beamwidth is called oversampling. Superresolution is a type of oversampling that refers to sampling at half a beamwidth on the national network of Doppler weather radars [Weather Surveillance Radar-1988 Doppler (WSR-88D)]. Such close spacing is desirable because it extends the range at which small severe weather features, such as tornadoes or microbursts, can be resolved. This study examines oversampling for phased array radars. The goal of the study is to preserve the same effective beamwidth as on the WSR-88D while obtaining smaller spectral moment estimate errors at the same or faster volume update times. To that effect, a weighted average of autocorrelations of radar signals from three consecutive radials is proposed. Errors in three spectral moments obtained from these autocorrelations are evaluated theoretically. Methodologies on how to choose weights that preserve the desirable effective beamwidth are presented. The results are demonstrated on the fields of spectral moments obtained with the National Weather Radar Testbed (NWRT), a phased array weather radar at NOAA’s National Severe Storms Laboratory (NSSL).
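The proposed weighted average of autocorrelations can be sketched with a pulse-pair Doppler velocity estimator (the weights, wavelength, and pulse repetition time below are illustrative, not the values derived in the study):

```python
import numpy as np

# Pulse-pair sketch: the Doppler velocity is obtained from the phase of a
# weighted average of lag-1 autocorrelations over three consecutive radials
# (weights illustrative; the paper derives weights that preserve beamwidth).

def lag1_autocorr(iq):
    """Sample lag-1 autocorrelation of a complex I/Q time series."""
    return np.mean(iq[1:] * np.conj(iq[:-1]))

def pulse_pair_velocity(radials, weights, wavelength=0.10, prt=1e-3):
    """Doppler velocity (m s^-1) from weighted lag-1 autocorrelations."""
    r1 = sum(w * lag1_autocorr(iq) for w, iq in zip(weights, radials))
    return -wavelength / (4 * np.pi * prt) * np.angle(r1)

# synthetic I/Q from three radials sharing a 10 m/s Doppler velocity
rng = np.random.default_rng(2)
v, lam, prt = 10.0, 0.10, 1e-3
n = np.arange(64)
radials = [np.exp(-4j * np.pi * v * prt * n / lam)
           + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
           for _ in range(3)]
v_est = pulse_pair_velocity(radials, weights=(0.25, 0.5, 0.25))
print(v_est)
```

Averaging autocorrelations across radials reduces estimate variance relative to a single radial, at the cost of a broader effective beamwidth that the choice of weights must control.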

Full access
S. Zhang, X. Zou, and Jon E. Ahlquist

Abstract

The forward model solution and its functional (e.g., the cost function in 4DVAR) are discontinuous with respect to the model's control variables if the model contains discontinuous physical processes that occur during the assimilation window. In such a case, the tangent linear model (the first-order approximation of a finite perturbation) is unable to represent the sharp jumps of the nonlinear model solution. Also, the first-order approximation provided by the adjoint model is unable to represent a finite perturbation of the cost function when the introduced perturbation in the control variables crosses discontinuous points. Using an idealized simple model and the Arakawa–Schubert cumulus parameterization scheme, the authors examined the behavior of a cost function and its gradient obtained by the adjoint model with discontinuous model physics. Numerical results show that a cost function involving discontinuous physical processes is zeroth-order discontinuous, but piecewise differentiable. The maximum possible number of involved discontinuity points of a cost function increases exponentially as 2^(kn), where k is the total number of thresholds associated with on–off switches, and n is the total number of time steps in the assimilation window. A backward adjoint model integration with the proper forcings added at various time steps, similar to the backward adjoint model integration that provides the gradient of the cost function at a continuous point, produces a one-sided gradient (called a subgradient and denoted ∇_s J) at a discontinuous point. An accuracy check of the gradient shows that the adjoint-calculated gradient is computed exactly on either side of a discontinuous surface. While a cost function evaluated using a small interval in the control variable space oscillates, the distribution of the gradient calculated at the same resolution not only shows a rather smooth variation, but also is consistent with the general convexity of the original cost function.
The gradients of discontinuous cost functions appear roughly smooth because the adjoint integration correctly computes the one-sided gradient on either side of a discontinuous surface. This implies that, although (∇_s J)^T δx may not approximate δJ = J(x + δx) − J(x) well near the discontinuous surface, the subgradient calculated by the adjoint of discontinuous physics may still provide useful information for finding search directions in a minimization procedure. While this does not eliminate the possible need for a nondifferentiable optimization algorithm for 4DVAR with discontinuous physics, the consistency between the adjoint-computed gradient and the convexity of the cost function may explain why a differentiable limited-memory quasi-Newton algorithm has still worked well in many 4DVAR experiments that use a diabatic assimilation model with discontinuous physics.
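A toy cost function with a single on–off switch illustrates the zeroth-order discontinuity and the one-sided gradients discussed above (a hypothetical example, not the Arakawa–Schubert scheme):

```python
# Toy zeroth-order discontinuous cost function from on-off physics.
# The "model" adds an increment only when the state exceeds a threshold,
# so J jumps at x = X_C while remaining smooth and convex on each branch.

X_C = 1.0  # switch threshold

def forecast(x):
    # on-off physics: extra tendency only when the threshold is exceeded
    return x + (0.5 if x > X_C else 0.0)

def cost(x, obs=2.0):
    return 0.5 * (forecast(x) - obs) ** 2

def one_sided_gradient(x, side, eps=1e-6):
    """Finite-difference subgradient taken entirely within one branch."""
    h = eps if side == "+" else -eps
    return (cost(x + 2 * h) - cost(x + h)) / h

# the two one-sided gradients at the switch differ, but each matches the
# derivative of the smooth branch on its own side
print(one_sided_gradient(X_C, "-"), one_sided_gradient(X_C, "+"))
```

A finite difference taken *across* the switch would diverge because of the jump in J; the subgradient, like the adjoint-computed gradient in the abstract, stays consistent with one smooth branch and so can still steer a minimization.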

Full access
T. Zhang, K. Stamnes, and S. A. Bowling

Abstract

A comprehensive atmospheric radiative transfer model combined with the surface energy balance equation is applied to investigate the impact of clouds on surface radiative fluxes and snowmelt in the Arctic and subarctic. Results show that at the surface, the shortwave cloud-radiative forcing is negative, while the longwave forcing is positive and generally much larger than the shortwave forcing. Thus, the all-wave surface cloud-radiative forcing is positive, with clouds warming the lower atmosphere and enhancing snowmelt during the melting period in the Arctic and subarctic. These results agree with and explain observations and measurements over the past three decades showing that the onset of snowmelt occurs earlier under cloudy sky conditions than under clear sky conditions in the Arctic. Clouds could change the date of onset of snowmelt by as much as a month, which is of the order of the observed interannual variations in the timing of snowmelt in the Arctic and subarctic. The all-wave cloud radiative forcing during the period of snowmelt reaches a maximum at an equivalent cloud droplet radius (r_e) of about 9 μm and a cloud liquid water path of about 29 g m−2. For thin clouds, the impact of changes in liquid water path on all-wave cloud radiative forcing is greater than that of changes in equivalent cloud droplet size, while for thick clouds, the equivalent cloud droplet size becomes more important. Cloud-base temperature and, to a minor extent, cloud-base height also influence the surface radiative fluxes and snowmelt. This study indicates that the coupling between clouds and snowmelt could amplify the climate perturbation in the Arctic.

Full access
S. Zhang, A. Rosati, and T. Delworth

Abstract

The Atlantic meridional overturning circulation (AMOC) has an important influence on climate, and yet adequate observations of this circulation are lacking. Here, the authors assess the adequacy of past and current widely deployed routine observing systems for monitoring the AMOC and associated North Atlantic climate. To do so, this study draws on two independent simulations of the twentieth century using an Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) coupled climate model. One simulation is treated as “truth” and is sampled according to the observing system being evaluated. The authors then assimilate these synthetic “observations” into the second simulation within a fully coupled system that instantaneously exchanges information among all coupled components and produces a nearly balanced and coherent estimate for global climate states including the North Atlantic climate system. The degree to which the assimilation recovers the truth is an assessment of the adequacy of the observing system being evaluated. As the coupled system responds to the constraint of the atmosphere or ocean, the assessment of the recovery for climate quantities such as Labrador Sea Water (LSW) and the North Atlantic Oscillation increases the understanding of the factors that determine AMOC variability. For example, the low-frequency sea surface forcings provided by the atmospheric and sea surface temperature observations are found to excite a LSW variation that governs the long-time-scale variability of the AMOC. When the most complete modern observing system, consisting of atmospheric winds and temperature, is used along with Argo ocean temperature and salinity down to 2000 m, a skill estimate of AMOC reconstruction is 90% (out of 100% maximum). Similarly encouraging results hold for other quantities, such as the LSW. 
The past XBT observing system, in which deep-ocean temperature and salinity were not available, has a lesser ability to recover the truth AMOC (the skill is reduced to 52%). While these results raise concerns about the ability to properly characterize past variations of the AMOC, they also hold promise for future monitoring of the AMOC and for initializing prediction models.

Full access
T. Zhang, K. Stamnes, and S. A. Bowling

Abstract

Studies show that the energy available to melt snow at high latitudes is almost exclusively provided by radiation. Solar energy determines the period of possible snowmelt, while downwelling atmospheric longwave radiation modifies the timing and triggers the onset of snowmelt. Atmospheric thickness, defined as the vertical distance between the 500- and 1000-mb pressure surfaces, is directly related to the mean temperature and water vapor path of an atmospheric layer and thus has a direct influence on the downwelling longwave radiation and snowmelt. A comprehensive radiative transfer model was applied to calculate the downwelling longwave radiation to the snow surface over the period of snowmelt from 1980 through 1991 using radiosonde data obtained at Barrow and McGrath, Alaska, under clear-sky conditions. The results indicate that the atmospheric thickness has a positive impact on downwelling longwave radiation, which increases from about 130 W m−2 for an atmospheric thickness of 4850 m to about 280 W m−2 for a thickness of 5450 m. This study demonstrates that the atmospheric water vapor path has a greater impact on atmospheric downwelling longwave radiation to the snow surface than the mean atmospheric temperature. This study also indicates that significant errors can occur when the near-surface air temperature is used to infer downwelling longwave radiation. Thus, compared with the results obtained from the atmospheric radiative transfer model, the empirical formula of Parkinson and Washington underestimates the downwelling longwave radiation when the near-surface air temperature is relatively high and overestimates it when the near-surface air temperature is relatively low. The relationship between the atmospheric thickness and the snowmelt onset was also investigated.
Results indicate that for the period from 1980 through 1991, an atmospheric thickness of 5250 m at Barrow and 5200 m at McGrath in Alaska was sufficient to trigger the onset of snowmelt. The difference in the threshold values may be due to differences in atmospheric structure and in the contributions of other energy sources, such as sensible and latent heat, to melting snow. This study also demonstrates that snow cover disappears earlier during warm and wet springs (higher atmospheric temperature and precipitable water path, greater atmospheric thickness) and later during cold and dry springs (lower atmospheric temperature and precipitable water path, smaller atmospheric thickness). Atmospheric precipitable water path has a greater impact on snowmelt than the mean atmospheric temperature. Generally, higher atmospheric temperature is correlated with higher atmospheric water vapor path. Because atmospheric thickness is closely coupled to both temperature and water vapor path in the Arctic and subarctic, and because it can be obtained from routine numerical weather prediction models, it may serve as a reliable indicator of regional-scale snowmelt in the Arctic and subarctic.
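The link between thickness and layer-mean temperature follows from the hypsometric equation, dz = (R_d * T_mean / g) * ln(p1 / p2). A short check of the quoted thresholds, using standard constants and neglecting the virtual temperature correction (an assumption for illustration):

```python
import math

# Hypsometric relation between the 500-1000-mb thickness and the
# layer-mean temperature (dry-air constants; virtual temperature ignored).

R_D, G = 287.05, 9.80665   # J kg^-1 K^-1, m s^-2

def thickness(t_mean_k, p_bottom=1000.0, p_top=500.0):
    """500-1000-mb thickness (m) for a given layer-mean temperature (K)."""
    return R_D * t_mean_k / G * math.log(p_bottom / p_top)

def mean_temp(thickness_m, p_bottom=1000.0, p_top=500.0):
    """Layer-mean temperature (K) implied by a given thickness (m)."""
    return thickness_m * G / (R_D * math.log(p_bottom / p_top))

# layer-mean temperatures implied by the snowmelt-onset thresholds above
print(mean_temp(5250), mean_temp(5200))  # Barrow, McGrath
```

Both thresholds correspond to layer-mean temperatures a few degrees apart, consistent with thickness serving as a proxy for the warmth (and, through their correlation, the moisture) of the lower troposphere.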

Full access