Search Results

You are looking at 1–10 of 21 items for:

  • Author or Editor: Gerald R. North
  • Journal of Climate
  • Refine by Access: All Content
Gabriele C. Hegerl
and
Gerald R. North

Abstract

Three statistically optimal approaches, which have been proposed for detecting anthropogenic climate change, are intercompared. It is shown that the core of all three methods is identical. However, the different approaches help to better understand the properties of the optimal detection. Also, the analysis allows us to examine the problems in implementing these optimal techniques in a common framework. An overview of practical considerations necessary for applying such an optimal method for detection is given. Recent applications show that optimal methods present some basis for optimism toward progressively more significant detection of forced climate change. However, it is essential that good hypothesized signals and good information on climate variability be obtained since erroneous variability, especially on the timescale of decades to centuries, can lead to erroneous conclusions.
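
For orientation: the "identical core" of such optimal approaches is usually written as a generalized least squares (optimal fingerprint) estimate. With data vector d, hypothesized signal pattern f, and natural-variability covariance C (notation here is illustrative, not taken from the abstract), the estimated amplitude and its uncertainty are

\[
\hat{a} = \frac{f^{\mathsf T} C^{-1} \mathbf d}{f^{\mathsf T} C^{-1} f}, \qquad
\operatorname{Var}(\hat{a}) = \left(f^{\mathsf T} C^{-1} f\right)^{-1},
\]

and detection reduces to testing whether \(\hat{a}\) differs significantly from zero given that variance. This is why accurate signal patterns and accurate variability estimates matter: both enter the estimator directly.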

Full access
Gerald R. North
and
Mark J. Stevens

Abstract

Optimal signal detection theory has been applied in a search through 100 yr of surface temperature data for the climate response to four specific radiative forcings. The data used come from 36 boxes on the earth and were restricted to the frequency band 0.06–0.13 cycles yr⁻¹ (16.67–7.69 yr) in the analysis. Estimates were sought of the strengths of the climate response to solar variability, volcanic aerosols, greenhouse gases, and anthropogenic aerosols. The optimal filter was constructed with a signal waveform computed from a two-dimensional energy balance model (EBM). The optimal weights were computed from a 10 000-yr control run of a noise-forced EBM and from 1000-yr control runs from coupled ocean–atmosphere models at Geophysical Fluid Dynamics Laboratory (GFDL) and Max-Planck Institute; the authors also used a 1000-yr run from the GFDL mixed layer model. Results are reasonably consistent across these four separate model formulations. It was found that the component of the volcanic response perpendicular to the other signals was very robust and highly significant. Similarly, the component of the greenhouse gas response perpendicular to the others was very robust and highly significant. When the sum of all four climate forcings was used, the climate response was more than three standard deviations above the noise level. These findings are considered to be powerful evidence of anthropogenically induced climate change.
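
As a rough illustration of the multi-signal version of this kind of estimator (a minimal numpy sketch under assumed, synthetic inputs; the signal patterns, covariance, and dimensions below are stand-ins, not the authors' data or code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a flattened space-frequency data vector and four signals
n = 36 * 8        # e.g., 36 boxes times a handful of frequency bins (stand-in)
k = 4             # solar, volcanic, greenhouse gas, anthropogenic aerosol

F = rng.normal(size=(n, k))                      # hypothesized signal waveforms (columns)
idx = np.arange(n)
C = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 10.0)   # stand-in noise covariance
a_true = np.array([0.5, 1.0, 1.2, 0.3])
d = F @ a_true + rng.multivariate_normal(np.zeros(n), C)  # synthetic "observations"

# Generalized least squares: a_hat = (F' C^-1 F)^-1 F' C^-1 d
Ci_F = np.linalg.solve(C, F)
A = F.T @ Ci_F                          # information matrix
a_hat = np.linalg.solve(A, Ci_F.T @ d)
a_cov = np.linalg.inv(A)                # covariance of the amplitude estimates
snr = a_hat / np.sqrt(np.diag(a_cov))   # amplitude in units of its standard error
print(np.round(a_hat, 2), np.round(snr, 1))
```

Each entry of `snr` is an estimated amplitude in units of its standard error, the quantity behind statements like "more than three standard deviations above the noise level."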

Full access
Kwang-Y. Kim
and
Gerald R. North

Abstract

This study considers the theory of a general three-dimensional (space and time) statistical prediction/extrapolation algorithm. The predictor is in the form of a linear data filter. The prediction kernel is based on the minimization of prediction error and its construction requires the covariance statistics of a predictand field. The algorithm is formulated in terms of the spatiotemporal EOFs of the predictand field. This EOF representation facilitates the selection of useful physical modes for prediction. Limited tests have been conducted concerning the sensitivity of the prediction algorithm with respect to its construction parameters and the record length of available data for constructing a covariance matrix. Tests reveal that the performance of the predictor is fairly insensitive to a wide range of the construction parameters. The accuracy of the filter, however, depends strongly on the accuracy of the covariance matrix, which critically depends on the length of available data. This inaccuracy implies suboptimal performance of the prediction filter. Simple examples demonstrate the utility of the new algorithm.
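
A minimal sketch of a truncated-EOF linear predictor in this spirit (the construction of the actual algorithm differs in detail; the function names, truncation level, and least-squares step here are illustrative assumptions):

```python
import numpy as np

def eof_prediction_filter(X_past, X_future, n_modes=5):
    """Fit a linear filter predicting X_future from X_past in a truncated EOF basis.

    X_past:   (n_samples, p) predictor vectors (e.g., a space-time data window)
    X_future: (n_samples, q) predictand vectors
    Both are assumed to be anomalies (means removed).  Illustrative only.
    """
    # EOFs of the predictor domain via SVD of the anomaly matrix
    _, _, Vt = np.linalg.svd(X_past, full_matrices=False)
    V = Vt[:n_modes].T                       # leading EOFs, shape (p, n_modes)
    pcs = X_past @ V                         # principal-component amplitudes
    # Minimum-error-variance (least squares) map from retained PCs to the predictand
    G, *_ = np.linalg.lstsq(pcs, X_future, rcond=None)
    return V, G

def predict(X_new, V, G):
    """Apply the fitted filter: project onto the retained EOFs, then map forward."""
    return (X_new @ V) @ G
```

The truncation (`n_modes`) plays the role of the mode selection mentioned in the abstract, and the dependence on the training covariance is exactly where the sensitivity to record length enters.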

Full access
Gerald R. North
and
Qigang Wu

Abstract

Estimates of the amplitudes of the forced responses of the surface temperature field over the last century are provided by a signal processing scheme utilizing space–time empirical orthogonal functions for several combinations of station sites and record intervals taken from the last century. These century-long signal fingerprints come mainly from energy balance model calculations, which are shown to be very close to smoothed ensemble average runs from a coupled ocean–atmosphere model (Hadley Centre Model). The space–time lagged covariance matrices of natural variability come from 100-yr control runs from several well-known coupled ocean–atmosphere models as well as a 10 000-yr run from the stochastic energy balance climate model (EBCM). Evidence is found for robust, but weaker than expected signals from the greenhouse [amplitude ∼65% of that expected for a rather insensitive model (EBCM: ΔT(2×CO₂) ≈ 2.3°C)], volcanic (also about 65% expected amplitude), and even the 11-yr component of the solar signal (a most probable value of about 2.0 times that expected). In the analysis the anthropogenic aerosol signal is weak and the null hypothesis for this signal can only be rejected in a few sampling configurations involving the last 50 yr of the record. During the last 50 yr the full strength value (1.0) also lies within the 90% confidence interval. Some amplitude estimation results based upon the (temporally smoothed) Hadley fingerprints are included and the results are indistinguishable from those based on the EBCM. In addition, a geometrical derivation of the multiple regression formula from the filter point of view is provided, which shows how the signals “not of interest” are removed from the data stream in the estimation process. The criteria for truncating the EOF sequence are somewhat different from earlier analyses in that the amount of the signal variance accounted for at a given level of truncation is explicitly taken into account.
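
The geometrical reading of the multiple regression step can be summarized as follows (standard regression algebra; the symbols are illustrative, not the paper's notation). With signal matrix F = [f_1, ..., f_m], natural-variability covariance C, and data vector d,

\[
\hat{\mathbf a} = \left(F^{\mathsf T} C^{-1} F\right)^{-1} F^{\mathsf T} C^{-1} \mathbf d ,
\]

and the estimate of a single amplitude, say \(a_1\), can equivalently be written using only the part of \(f_1\) that is orthogonal (in the \(C^{-1}\) inner product) to the other signals,

\[
\hat{a}_1 = \frac{\tilde f_1^{\mathsf T} C^{-1} \mathbf d}{\tilde f_1^{\mathsf T} C^{-1} \tilde f_1},
\qquad \tilde f_1 = f_1 - P_{\{f_2,\dots,f_m\}}\, f_1 ,
\]

where \(P\) is the \(C^{-1}\)-orthogonal projection onto the span of the remaining signals. This is the sense in which the signals "not of interest" are removed from the data stream before the amplitude of interest is estimated.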

Full access
Lai-Yung Leung
and
Gerald R. North

Abstract

This paper introduces the use of information theory in characterizing climate predictability. Specifically, the concepts of entropy and transinformation are employed. Entropy measures the amount of uncertainty in our knowledge of the state of the climate system. Transinformation represents the information gained about an anomaly at any time t with knowledge of the size of the initial anomaly. It has many desirable properties that can be used as a measure of the predictability of the climate system. These concepts when applied to climate predictability are illustrated through a simple stochastic climate model (an energy balance model forced by noise). The transinformation is found to depict the degradation of information about an anomaly despite the fact that we have perfect knowledge of the initial state. Its usefulness, especially when generalized to other climate models, is discussed.
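
For a Gaussian anomaly of the kind produced by a noise-forced EBM (which behaves like red noise with decorrelation time \(\tau\)), both quantities take a simple closed form; this is a textbook special case added here for concreteness, not the paper's own derivation:

\[
H = \tfrac{1}{2}\ln\!\left(2\pi e\,\sigma^{2}\right), \qquad
T(t) = -\tfrac{1}{2}\ln\!\left(1-\rho(t)^{2}\right), \qquad \rho(t)=e^{-t/\tau},
\]

where \(H\) is the entropy of an anomaly with variance \(\sigma^{2}\) and \(T(t)\) is the transinformation between the initial anomaly and the anomaly at lead time \(t\). As \(\rho(t)\to 0\), \(T(t)\to 0\): the information supplied by a perfectly known initial state is progressively lost, which is the degradation the abstract describes.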

Full access
Gerald R. North
and
Kwang-Y. Kim

Abstract

This paper considers some tests of the procedures suggested in Part I on the detection of forced climate signals embedded in natural variability. The optimal filters are constructed from simulations of signals and natural variability in a noise-forced energy balance model that explicitly resolves land-sea geography and that has an upwelling-diffusion deep ocean. Filters are considered for the climate forcing of faint sunspot signals and for the greenhouse warming problem. In each case, the results are promising in that signal-to-noise ratios of unity or greater might be achievable. Rather than offering conclusive arguments, these exercises are meant to bring out the key aspects of the detection problem that deserve the most attention and the parts of the procedure that are most sensitive to assumptions.
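
For reference, the signal-to-noise ratio quoted here is commonly defined in the optimal-filter framework as \(\mathrm{SNR} = \sqrt{\mathbf s^{\mathsf T} C^{-1} \mathbf s}\) for a signal pattern \(\mathbf s\) and natural-variability covariance \(C\) (a standard formulation; the notation is illustrative rather than taken from the paper). "Unity or greater" thus means the expected signal amplitude is at least as large as the standard error of its optimal estimate.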

Full access
Kwang Y. Kim
and
Gerald R. North

Abstract

This study makes use of a simple stochastic energy balance climate model that resolves the land–sea distribution and that includes a crude upwelling-diffusion deep ocean to study the natural variability of the surface temperature in different frequency bands. This is done by computing the eigenfunctions of the space-time lagged covariance function. The resulting frequency-dependent theoretical orthogonal functions (fdTOFs) are compared with the corresponding frequency-dependent empirical orthogonal functions (fdEOFs) derived from 40 years of data. The computed and modeled eigenvalues are consistent, with the difference mainly explained by sampling error due to the short observational record. The magnitude of expected sampling errors is demonstrated by a series of Monte Carlo simulations with the model. The sampling error for the eigenvalues features a strong bias that appears in the simulations and apparently in the data. Component-by-component pattern correlations between the fdEOFs and the fdTOFs vary from 0.81 to 0.28 for the first ten components. Monte Carlo simulations show that the sampling error could be an important source of error, especially in the low (interannual) frequency band. However, sampling error alone cannot satisfactorily explain the difference between the model and observations. Rather, model inaccuracy and/or spatial bias of observations seem to be important sources of error. The fdTOFs are expected to be useful in estimation/prediction/detection studies.
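
The eigenvalue sampling bias mentioned above is easy to reproduce in a small Monte Carlo experiment of the same flavor (all numbers below are illustrative stand-ins, not the paper's model or data):

```python
import numpy as np

rng = np.random.default_rng(1)

# A "true" covariance with a prescribed, decaying eigenvalue spectrum
p = 20
true_eigs = 1.0 / np.arange(1, p + 1)                  # 1, 1/2, 1/3, ... (descending)
Q = np.linalg.qr(rng.normal(size=(p, p)))[0]           # random orthogonal basis
C_true = Q @ np.diag(true_eigs) @ Q.T

n_samples = 40                                         # a short record, e.g. ~40 samples
n_trials = 2000
sample_eigs = np.empty((n_trials, p))
for t in range(n_trials):
    X = rng.multivariate_normal(np.zeros(p), C_true, size=n_samples)
    sample_eigs[t] = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]

# Leading sample eigenvalues come out biased high and trailing ones biased low,
# with a spread of roughly lambda * sqrt(2/N) (the usual rule-of-thumb estimate).
print("true  :", np.round(true_eigs[:5], 3))
print("mean  :", np.round(sample_eigs.mean(axis=0)[:5], 3))
print("spread:", np.round(sample_eigs.std(axis=0)[:5], 3))
print("rule  :", np.round(true_eigs[:5] * np.sqrt(2.0 / n_samples), 3))
```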

Full access
Lai-Yung Leung
and
Gerald R. North

Abstract

Atmospheric variability on a zonally symmetric planet in the absence of external forcing anomalies is studied. With idealized boundary conditions, such as the absence of ocean and topography, and by using perpetual equinox solar forcing, a 15-year-long stationary time series of the atmosphere is simulated with the NCAR Community Climate Model (CCM0). This provides sufficient time samples for a realistic study of the properties of the atmosphere. Zonally averaged and space-time statistics for the surface air temperature field on this planet are presented. Such statistics can serve as noise climatologies for climate sensitivity experiments, allowing the effects of changes of external forcing on the atmosphere to be assessed.

In search of a simple statistical model for atmospheric variability, the space-time spectra obtained from the CCM simulation are fitted statistically with a stochastic energy balance model. The space-time spectra for three zonal wavenumbers are found to be fitted satisfactorily by the stochastic model with only five parameters (a heat diffusion coefficient, a constant zonal advection speed, a radiative damping constant, and two parameters for blue spatial noise amplitudes). The estimated parameters agree with previously obtained values. This suggests that useful statistics for large-scale atmospheric variability may be obtained from simple statistical models. With the method of analysis provided in this study, the ability of the stochastic model to describe atmospheric variability on a more realistic planet (including geography and the seasonal cycle) can be tested. This may involve comparing space-time statistics from the stochastic model with observed quantities and using empirical orthogonal functions as a basis set for expansion.
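
A minimal sketch of this kind of five-parameter fit (the Lorentzian functional form, the "blue" forcing amplitude a + b·k², and all numerical values are assumptions for illustration, not the paper's exact model):

```python
import numpy as np
from scipy.optimize import curve_fit

def ebm_spectrum(k, omega, D, U, lam, a, b):
    """Frequency spectrum of a damped, advected, noise-forced mode at zonal
    wavenumber k: a Lorentzian centered on the advective frequency U*k,
    broadened by the damping lam + D*k**2, forced with amplitude a + b*k**2."""
    return (a + b * k**2) / ((lam + D * k**2) ** 2 + (omega - U * k) ** 2)

def model(x, D, U, lam, a, b):
    k, w = x
    return ebm_spectrum(k, w, D, U, lam, a, b)

# Stand-in "observed" space-time spectra for three zonal wavenumbers
rng = np.random.default_rng(2)
omega = np.linspace(-3.0, 3.0, 200)
ks = np.array([1, 2, 3])
K, W = np.meshgrid(ks, omega, indexing="ij")
truth = ebm_spectrum(K, W, D=0.05, U=0.3, lam=0.4, a=1.0, b=0.2)
S_obs = truth * (1.0 + 0.1 * rng.normal(size=truth.shape))

popt, _ = curve_fit(model, (K.ravel(), W.ravel()), S_obs.ravel(),
                    p0=[0.1, 0.1, 0.5, 1.0, 0.1])
print(dict(zip(["D", "U", "lam", "a", "b"], np.round(popt, 3))))
```

Fitting all wavenumbers jointly is what makes the diffusion and damping constants separately identifiable, since at any single wavenumber they enter only through the combination lam + D·k².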

Full access
Kwang-Y. Kim
and
Gerald R. North

Abstract

Considered here are examples of statistical prediction based on the algorithm developed by Kim and North. The predictor is constructed in terms of space–time EOFs of the data and prediction domains. These EOFs are essentially a different representation of the covariance matrix, which is derived from past observational data. The two sets of EOFs contain information on how to extend the data domain into the prediction domain (i.e., statistical prediction) with minimum error variance. The performance of the predictor is similar to that of an optimal autoregressive model since both methods are based on the minimization of prediction error variance. Four different prediction techniques—canonical correlation analysis (CCA), maximum covariance analysis (MCA), principal component regression (PCR), and principal oscillation pattern (POP)—have been compared with the present method. A comparison shows that oscillation patterns in a dataset can faithfully be extended in terms of temporal EOFs, resulting in a slightly better performance of the present method than that of the predictors based on maximum pattern correlations (CCA, MCA, and PCR) or the POP predictor. One-dimensional applications demonstrate the usefulness of the predictor. The NINO3 and NINO3.4 sea surface temperature time series (3-month moving average) were forecasted reasonably well up to a lead time of about 6 months. The prediction skill seems to be comparable to that of other, more elaborate statistical methods. Two-dimensional prediction examples also demonstrate the utility of the new algorithm. The spatial patterns of the SST anomaly field (3-month moving average) were forecasted reasonably well up to about 6 months ahead. All these examples illustrate that the prediction algorithm is useful and computationally efficient for routine prediction practices.
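
A self-contained one-dimensional stand-in for this kind of forecast (a truncated-EOF regression on a lag window; the window length, lead time, truncation, and the synthetic red-noise "index" are illustrative assumptions, not the NINO3 setup used in the paper):

```python
import numpy as np

def lag_window_predictor(series, window=12, lead=6, n_modes=3):
    """Fit a linear forecast of `series` `lead` steps ahead from the previous
    `window` values, regressing on the leading EOFs of the lag window."""
    X = np.array([series[i:i + window] for i in range(len(series) - window - lead + 1)])
    y = series[window + lead - 1:]
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_modes].T                                   # leading EOFs of the lag window
    coeffs, *_ = np.linalg.lstsq(Xc @ V, yc, rcond=None)

    def forecast(recent_window):
        z = (np.asarray(recent_window) - x_mean) @ V
        return float(z @ coeffs + y_mean)

    return forecast

# Example on a synthetic red-noise index (illustrative only)
rng = np.random.default_rng(3)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.9 * x[t - 1] + rng.normal()
f = lag_window_predictor(x[:500], window=12, lead=6)
print(f(x[488:500]))        # 6-step-ahead forecast from the 12 most recent values
```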

Full access
Kuor-Jier Joseph Yip
and
Gerald R. North

Abstract

Tropical wave phenomena have been examined in the last 520 days of two 15-year runs of a low-resolution general circulation model (CCM0). The model boundary conditions were simplified to all-land, perpetual equinox, and no topography. The two runs were for fixed soil moisture at 75% and 0%, the so-called “wet” and “dry” models. Both models develop well-defined ITCZs with low-level convergence erratically concentrated along the equator. Highly organized eastward-propagating waves are detectable in both models, with different wave speeds depending on the presence of moisture. The wave amplitudes (in, e.g., vertical velocity) are many orders of magnitude stronger in the wet model. The waves have a definite transverse nature, as precipitation (low-level convergence) patches tend to move systematically north and south across the equator. In the wet model the waves are distinctly nondispersive and the transit time for passage around the earth is about 50 days, consistent with the Madden–Julian frequency. The authors are also able to see most of the expected linear wave modes in spectral density plots in the frequency–wavenumber plane and compare them for the wet and dry cases.
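
A minimal sketch of the space-time spectral analysis behind such frequency–wavenumber plots (the function below is an assumed, generic implementation, not the authors' diagnostic code):

```python
import numpy as np

def space_time_spectrum(field, dt=1.0):
    """Wavenumber-frequency power spectrum of a (time, longitude) anomaly field.

    Returns shifted frequency and zonal-wavenumber axes and the power array;
    the eastward/westward split depends on the FFT sign convention used."""
    nt, nx = field.shape
    anom = field - field.mean(axis=0)             # remove the time mean at each longitude
    coeffs = np.fft.fft2(anom) / (nt * nx)        # 2-D FFT over time and longitude
    power = np.abs(coeffs) ** 2
    freqs = np.fft.fftfreq(nt, d=dt)              # cycles per time unit
    wavenumbers = np.fft.fftfreq(nx, d=1.0 / nx)  # integer zonal wavenumbers
    return np.fft.fftshift(freqs), np.fft.fftshift(wavenumbers), np.fft.fftshift(power)
```

A nondispersive eastward wave of the kind described (a fixed transit time of about 50 days around the earth) appears in such a plot as power concentrated along a straight line through the origin of the frequency–wavenumber plane.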

Full access