Search Results

You are looking at 1–10 of 53 items for:

  • Author or Editor: Gerald R. North
Gerald R. North

Abstract

Simple climate models employing diffusive heat transport and ice cap albedo feedback have equilibrium solutions with no stable ice cap smaller than a certain finite size. For the usual parameters used in these models the minimum cap has a radius of about 20 degrees on a great circle. Although it is traditional to remove this peculiar feature by various ad hoc mechanisms, it is of interest because of its relevance to ice age theories. This paper explains why the phenomenon occurs in these models by solving them in a physically appealing way. If an ice-free solution has a thermal minimum and if the minimum temperature is just above the critical value for formation of ice, then the artificial addition of a patch of ice leads to a widespread depression of the temperature below the critical freezing temperature; therefore, a second stable solution will exist whose spatial extent is determined by the range of the influence function of a point sink of heat, due to the albedo shift in the patch. The range of influence is determined by the characteristic length in the problem which in turn is determined by the distance a heat anomaly can be displaced by random walk during the characteristic time scale for radiative relaxation; this length is typically 20–30 degrees on a great circle. Mathematical detail is provided as well as a discussion of why the various mechanisms previously introduced to eliminate the phenomenon work. Finally, a discussion of the relevance of these results to nature is presented.
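The characteristic-length argument above can be checked with a one-line computation. The parameter values below are typical North-type EBM numbers assumed for illustration, not taken from this paper:

```python
import math

# Illustrative EBM parameters (assumed typical values, not from the paper):
D = 0.3   # thermal diffusion coefficient, W m^-2 K^-1, nondimensionalized on the sphere
B = 1.9   # longwave radiative damping coefficient, W m^-2 K^-1

# Characteristic length: the distance a heat anomaly random-walks during one
# radiative relaxation time, l = sqrt(D/B), in radians of great-circle arc.
l_deg = math.degrees(math.sqrt(D / B))
print(f"characteristic length of about {l_deg:.1f} degrees of great-circle arc")
```

With these assumed values the length comes out near 23°, inside the 20–30° range quoted in the abstract.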

Full access
Gerald R. North

Abstract

An attempt to provide physical insight into the empirical orthogonal function (EOF) representation of data fields by the study of fields generated by linear stochastic models is presented in this paper. In a large class of these models, the EOFs at individual Fourier frequencies coincide with the orthogonal mechanical modes of the system, provided they exist. The precise mathematical criteria for this coincidence are derived and a physical interpretation is provided. A scheme possibly useful in forecasting is formally constructed for representing any stochastic field by a linear Hermitian model forced by noise.
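The mode–EOF coincidence can be illustrated numerically. The sketch below (a symmetric, noise-driven linear system with all numbers invented for the example) builds a field whose mechanical modes are known, then recovers them as EOFs of the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 6, 20000
modes = np.linalg.qr(rng.standard_normal((n, n)))[0]   # orthogonal mode basis
decay = np.linspace(0.5, 0.9, n)                       # per-mode damping rates
A = modes @ np.diag(decay) @ modes.T                   # symmetric (Hermitian) dynamics

# Drive the linear system x_{t+1} = A x_t + white noise and collect samples:
x = np.zeros(n)
samples = np.empty((T, n))
for t in range(T):
    x = A @ x + rng.standard_normal(n)
    samples[t] = x

# EOFs are the eigenvectors of the sample covariance matrix:
eofs = np.linalg.eigh(np.cov(samples, rowvar=False))[1]

# For a Hermitian system forced by white noise, each EOF aligns (up to sign)
# with one mechanical mode, so this overlap matrix is near a permutation:
overlap = np.abs(modes.T @ eofs)
print(np.round(overlap.max(axis=0), 2))
```

Each column of the overlap matrix has one entry near 1, showing that the statistical EOFs recover the dynamical modes in this Hermitian case.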

Full access
Gerald R. North

Abstract

A class of mean annual, zonally averaged energy-balance climate models of the Budyko-Sellers type are studied by a spectral (expansion in Legendre polynomials) method. Models with constant thermal diffusion coefficient can be solved exactly. The solution is approached by a rapidly converging sequence with each succeeding approximant taking into account information from ever smaller space and time scales. The first two modes represent a good approximation to the exact solution as well as to the present climate. The two-mode approximations to a number of more general models are shown to be either formally or approximately equivalent to the same truncation in the constant diffusion case. In particular, the transport parameterization used by Budyko is precisely equivalent to the two-mode truncation of thermal diffusion. Details of the dynamics do not influence the first two modes which fortunately seem adequate for the study of global climate change. Estimated ice age temperatures and ice line latitude agree well with the model if the solar constant is reduced by 1.3%.
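As a sketch of the two-mode truncation, the snippet below solves the decoupled Legendre-mode equations of a constant-diffusivity EBM. The parameter values are typical North-style numbers assumed for illustration, not taken verbatim from this paper, and small P2·P2 coupling terms in the coalbedo–insolation product are neglected:

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Assumed illustrative parameters for a Budyko-Sellers-type EBM:
#   outgoing longwave: I = A + B*T (T in deg C); x = sin(latitude)
Q  = 340.0               # solar constant / 4, W m^-2
A, B = 203.3, 2.09       # OLR coefficients, W m^-2 and W m^-2 K^-1
D  = 0.649               # diffusion coefficient, W m^-2 K^-1
S2 = -0.482              # P2 coefficient of annual-mean insolation
a0, a2 = 0.697, -0.0779  # P0, P2 coefficients of the coalbedo

# Each Legendre mode decouples: [n(n+1) D + B] T_n = H_n (forcing coefficient).
H0 = Q * a0 - A                 # mode n = 0 forcing
H2 = Q * (a0 * S2 + a2)         # mode n = 2 forcing, P2*P2 couplings neglected
T0 = H0 / B                     # roughly the global-mean temperature
T2 = H2 / (6.0 * D + B)         # pole-to-equator structure

x = np.linspace(0.0, 1.0, 5)    # x = sin(latitude), equator to pole
T = legval(x, [T0, 0.0, T2])    # two-mode temperature profile, deg C
print(f"T0 = {T0:.1f} C, equator = {T[0]:.1f} C, pole = {T[-1]:.1f} C")
```

With these assumed numbers the two modes already give a plausible climate: a global mean near 16°C, a warm equator, and a sub-freezing pole.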

Full access
Gerald R. North

Abstract

A simple radiative balance climate model is presented which includes the ice feedback mechanism, zonal averaging, constant homogeneous cloudiness, and ordinary diffusive thermal heat transfer. The simplest version of the model with only one free parameter is solved explicitly in terms of hypergeometric functions and is used to study ice sheet latitude as a function of solar constant. A multiple branch structure of this function is found and discussed along with comparison to earlier results. A stability analysis about the equilibrium solutions shows that the present climate as well as an ice-covered earth are stable while an intermediate solution is unstable for small perturbations away from equilibrium.
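The multiple-branch and stability structure can be illustrated with a zero-dimensional analogue (assumed for illustration, not the paper's hypergeometric solution): a global energy balance with a smooth ice-albedo ramp has two stable equilibria separated by an unstable one, diagnosed by the sign of dF/dT at each root:

```python
import numpy as np

# Illustrative 0-D energy balance: C dT/dt = F(T) = Q*a(T) - (A + B*T),
# with an assumed smooth ice-albedo ramp a(T) near T = -10 C.
Q, A, B = 342.0, 203.3, 2.09

def coalbedo(T):   # ice-covered coalbedo 0.4, warm coalbedo 0.7
    return 0.4 + 0.3 / (1.0 + np.exp(-(T + 10.0)))

def imbalance(T):
    return Q * coalbedo(T) - (A + B * T)

# Scan for equilibria (sign changes of F) and test stability (sign of dF/dT):
T_grid = np.linspace(-60.0, 60.0, 120001)
F = imbalance(T_grid)
crossings = T_grid[:-1][np.sign(F[:-1]) != np.sign(F[1:])]

labels = []
for Teq in crossings:
    dF = (imbalance(Teq + 1e-3) - imbalance(Teq - 1e-3)) / 2e-3
    labels.append("stable" if dF < 0 else "unstable")
    print(f"equilibrium near {Teq:6.1f} C: {labels[-1]}")
```

The scan finds three equilibria with the pattern described in the abstract: a stable present-like climate, a stable ice-covered state, and an unstable intermediate solution between them.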

Full access
Gregory R. Markowski
and
Gerald R. North

Abstract

Using a combination of statistical methods and monthly SST anomalies (SSTAs) from one or two ocean regions, relatively strong SSTA–precipitation relationships are found during much of the year in the United States: hindcast-bias-corrected correlation coefficients of 0.2–0.4 and 0.3–0.6 on monthly and seasonal timescales, respectively. Improved rigor is central to these results: the most crucial procedure was a transform giving regression residuals that meet statistical validity requirements. Tests on 1994–99 out-of-sample data gave better results than expected: semiquantitative (mapped) predictions and quantitative Heidke skill scores are shown. Correlations are large enough to suggest that substantial skill can be obtained for one to several months' precipitation and climate forecasts using ocean circulation models or statistical methods alone. Although this study was limited to the United States for simplicity, the methodology is intended as generally applicable. Previous work suggests that similar or better skills should be obtainable over much of earth's continental area. Ways likely to improve skills are noted.

Pacific SSTAs outside the Tropics showed substantial precipitation influence, but the main area of North Pacific variability, that along the subarctic front, did not. Instead, the east–west position of SSTAs appears important. The main variability is likely due to north–south changes in the front's position and will likely produce artifacts in principal component (PC) analyses. SSTAs from some regions, the Gulf of Mexico in particular, gave very strong correlations over large U.S. areas. Tests indicated that these correlations are likely caused by atmospheric forcing. Because they are unusually strong, they should be useful for testing coupled ocean–atmosphere GCMs. Investigation of differences between ENSO events noted by others showed that they are likely attributable to differing SSTA patterns.
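For reference, the Heidke skill quoted for the out-of-sample tests is a standard categorical-forecast score computed from a forecast/observation contingency table. The counts below are synthetic, invented for the example:

```python
import numpy as np

def heidke_skill(table):
    """Heidke skill score: table[i, j] = # cases forecast category i, observed j."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    hits = np.trace(table)
    # Expected hits for a no-skill (random) forecast with the same marginals:
    expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n
    return (hits - expected) / (n - expected)

# Synthetic 3-category (below / near / above normal) example:
counts = [[30, 10,  5],
          [10, 25, 10],
          [ 5, 10, 30]]
hss = heidke_skill(counts)
print(f"Heidke skill score = {hss:.3f}")
```

A score of 0 means no skill beyond chance and 1 means a perfect categorical forecast; the synthetic table above scores about 0.44.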

Full access
Kyung-Sup Shin
and
Gerald R. North

Abstract

A parameter study of satellite orbits was performed to estimate sampling errors of area-time averaged rain rate due to temporal sampling by satellites. The sampling characteristics were investigated by accounting for varying visiting intervals and varying fractions of averaging area on each visit as a function of the latitude of the grid box for a range of satellite orbital parameters. The sampling errors were estimated by a simple model based on the first-order Markov process of the time series of area averaged rain rates.

For a satellite in a nominal TRMM orbit (30° inclination and 300 km altitude) carrying an ideal scanning microwave radiometer for direct precipitation measurements, the sampling error would be about 8 to 12% of the estimated monthly mean rain rate over a 5° × 5° grid box. The effect of uneven sampling intervals with latitude tends to be offset by increasing sampling areas with latitude; therefore, the latitude dependence of the sampling error was not important. Nomograms for sampling errors are presented for a range of orbital parameters centered on the nominal TRMM orbit. An observing system combining a low-inclination satellite with a sun-synchronous satellite would be especially promising for precipitation measurements from space. For this idealized system, sampling errors well below 10% can be achieved for monthly rain rate estimates over 5° × 5° boxes.
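The first-order Markov sampling-error idea can be sketched directly: simulate an AR(1) rain-rate proxy, subsample it at a fixed revisit interval, and compare the subsampled monthly mean with the true one. The correlation time and revisit interval below are illustrative values, not TRMM-specific numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

tau = 12.0          # assumed rain-rate autocorrelation time, hours
dt = 1.0            # time step, hours
phi = np.exp(-dt / tau)
n = 720             # one 30-day month of hourly values
visit_every = 12    # assumed satellite revisit interval, hours

errs = []
for _ in range(400):
    r = np.empty(n)
    r[0] = rng.standard_normal()
    for t in range(1, n):   # AR(1) (first-order Markov) rain-rate proxy
        r[t] = phi * r[t - 1] + np.sqrt(1.0 - phi**2) * rng.standard_normal()
    errs.append(r[::visit_every].mean() - r.mean())   # sampled minus true mean

print(f"rms sampling error = {np.std(errs):.3f} rain-rate standard deviations")
```

Because neighboring visits are correlated, the error of the sampled mean shrinks more slowly than 1/sqrt(number of visits), which is exactly what the Markov-process error model captures.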

Full access
Gabriele C. Hegerl
and
Gerald R. North

Abstract

Three statistically optimal approaches, which have been proposed for detecting anthropogenic climate change, are intercompared. It is shown that the core of all three methods is identical. However, the different approaches help to better understand the properties of the optimal detection. Also, the analysis allows us to examine the problems in implementing these optimal techniques in a common framework. An overview of practical considerations necessary for applying such an optimal method for detection is given. Recent applications show that optimal methods present some basis for optimism toward progressively more significant detection of forced climate change. However, it is essential that good hypothesized signals and good information on climate variability be obtained since erroneous variability, especially on the timescale of decades to centuries, can lead to erroneous conclusions.
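The identical core of the three methods is the optimal filter: project the data onto the hypothesized signal with weights set by the inverse noise covariance, w ∝ C⁻¹f, which maximizes the signal-to-noise ratio. A synthetic sketch, with all quantities invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

m = 8                                  # number of spatial boxes (invented)
f = rng.standard_normal(m)             # hypothesized signal fingerprint
L = rng.standard_normal((m, m))
C = L @ L.T + 0.1 * np.eye(m)          # natural-variability covariance

w_opt = np.linalg.solve(C, f)          # optimal weights, w = C^-1 f
w_naive = f                            # plain projection onto the fingerprint

def snr(w):                            # signal-to-noise ratio of the statistic w.d
    return (w @ f) / np.sqrt(w @ C @ w)

print(f"naive SNR = {snr(w_naive):.2f}, optimal SNR = {snr(w_opt):.2f}")
```

The optimal weights down-weight directions where natural variability is large, so the optimal SNR is never smaller than that of the plain projection; this is why good covariance information is essential, as the abstract stresses.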

Full access
Gerald R. North
and
Mark J. Stevens

Abstract

Optimal signal detection theory has been applied in a search through 100 yr of surface temperature data for the climate response to four specific radiative forcings. The data used comes from 36 boxes on the earth and was restricted to the frequency band 0.06–0.13 cycles yr⁻¹ (16.67–7.69 yr) in the analysis. Estimates were sought of the strengths of the climate response to solar variability, volcanic aerosols, greenhouse gases, and anthropogenic aerosols. The optimal filter was constructed with a signal waveform computed from a two-dimensional energy balance model (EBM). The optimal weights were computed from a 10000-yr control run of a noise-forced EBM and from 1000-yr control runs from coupled ocean–atmosphere models at Geophysical Fluid Dynamics Laboratory (GFDL) and Max-Planck Institute; the authors also used a 1000-yr run using the GFDL mixed layer model. Results are reasonably consistent across these four separate model formulations. It was found that the component of the volcanic response perpendicular to the other signals was very robust and highly significant. Similarly, the component of the greenhouse gas response perpendicular to the others was very robust and highly significant. When the sum of all four climate forcings was used, the climate response was more than three standard deviations above the noise level. These findings are considered to be powerful evidence of anthropogenically induced climate change.
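The "component perpendicular to the other signals" amounts to removing from one waveform everything expressible as a combination of the others, leaving the part whose detection cannot be confused with them. A sketch with random stand-ins for the model-computed waveforms:

```python
import numpy as np

rng = np.random.default_rng(3)

n = 50                                  # sample points in the filtered band (invented)
volcanic = rng.standard_normal(n)       # stand-in for the volcanic waveform
others = rng.standard_normal((n, 3))    # stand-ins for solar, greenhouse, aerosol

# Least-squares projection of the volcanic waveform onto span(others),
# then subtraction leaves the component orthogonal to every other signal:
coef, *_ = np.linalg.lstsq(others, volcanic, rcond=None)
perp = volcanic - others @ coef

print(np.allclose(others.T @ perp, 0.0, atol=1e-8))
```

Detection based on this perpendicular component is conservative: any amplitude it finds cannot be attributed to the competing signals.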

Full access
Kwang-Y. Kim
and
Gerald R. North

Abstract

This study considers the theory of a general three-dimensional (space and time) statistical prediction/extrapolation algorithm. The predictor is in the form of a linear data filter. The prediction kernel is based on the minimization of prediction error and its construction requires the covariance statistics of a predictand field. The algorithm is formulated in terms of the spatiotemporal EOFs of the predictand field. This EOF representation facilitates the selection of useful physical modes for prediction. Limited tests have been conducted concerning the sensitivity of the prediction algorithm with respect to its construction parameters and the record length of available data for constructing a covariance matrix. Tests reveal that the performance of the predictor is fairly insensitive to a wide range of the construction parameters. The accuracy of the filter, however, depends strongly on the accuracy of the covariance matrix, which critically depends on the length of available data. This inaccuracy implies suboptimal performance of the prediction filter. Simple examples demonstrate the utility of the new algorithm.
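The minimum-error linear filter built from covariance statistics can be sketched in its simplest one-dimensional form, where it reduces to K = C_qp / C_pp and, for an AR(1) predictand, recovers the AR coefficient. The data here are synthetic, invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic AR(1) process standing in for the predictand field (phi invented):
phi, T = 0.8, 50000
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = phi * x[t - 1] + np.sqrt(1.0 - phi**2) * rng.standard_normal()

past, future = x[:-1], x[1:]   # predict the next value from the current one

# Minimum-error linear filter from covariance statistics, K = C_qp / C_pp:
Cpp = np.mean(past * past)
Cqp = np.mean(future * past)
K = Cqp / Cpp
print(f"estimated filter K = {K:.3f} (true AR coefficient {phi})")
```

The dependence on record length noted in the abstract shows up here too: with a short series the covariance estimates, and hence K, become noisy and the predictor is suboptimal.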

Full access
Gerald R. North
and
Qigang Wu

Abstract

Estimates of the amplitudes of the forced responses of the surface temperature field over the last century are provided by a signal processing scheme utilizing space–time empirical orthogonal functions for several combinations of station sites and record intervals taken from the last century. These century-long signal fingerprints come mainly from energy balance model calculations, which are shown to be very close to smoothed ensemble average runs from a coupled ocean–atmosphere model (Hadley Centre Model). The space–time lagged covariance matrices of natural variability come from 100-yr control runs from several well-known coupled ocean–atmosphere models as well as a 10 000-yr run from the stochastic energy balance climate model (EBCM). Evidence is found for robust, but weaker than expected signals from the greenhouse [amplitude ∼65% of that expected for a rather insensitive model (EBCM: ΔT(2×CO₂) ≈ 2.3°C)], volcanic (also about 65% expected amplitude), and even the 11-yr component of the solar signal (a most probable value of about 2.0 times that expected). In the analysis the anthropogenic aerosol signal is weak and the null hypothesis for this signal can only be rejected in a few sampling configurations involving the last 50 yr of the record. During the last 50 yr the full strength value (1.0) also lies within the 90% confidence interval. Some amplitude estimation results based upon the (temporally smoothed) Hadley fingerprints are included and the results are indistinguishable from those based on the EBCM. In addition, a geometrical derivation of the multiple regression formula from the filter point of view is provided, which shows how the signals “not of interest” are removed from the data stream in the estimation process. The criteria for truncating the EOF sequence are somewhat different from earlier analyses in that the amount of the signal variance accounted for at a given level of truncation is explicitly taken into account.
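The multiple-regression amplitude estimation described above has the generalized-least-squares form â = (FᵀC⁻¹F)⁻¹FᵀC⁻¹d. A synthetic sketch, with fingerprints, covariance, and "true" amplitudes all invented for illustration (the amplitude values merely echo those quoted in the abstract):

```python
import numpy as np

rng = np.random.default_rng(5)

n, k = 40, 4                               # EOF-space dimension, number of signals (invented)
F = rng.standard_normal((n, k))            # fingerprints: greenhouse, volcanic, solar, aerosol
a_true = np.array([0.65, 0.65, 2.0, 0.3])  # assumed amplitudes, loosely echoing the abstract
L = rng.standard_normal((n, n))
C = L @ L.T / n + np.eye(n)                # natural-variability covariance
d = F @ a_true + np.linalg.cholesky(C) @ rng.standard_normal(n)   # synthetic data

# Generalized least squares: a_hat = (F^T C^-1 F)^-1 F^T C^-1 d
Ci_F = np.linalg.solve(C, F)
a_hat = np.linalg.solve(F.T @ Ci_F, Ci_F.T @ d)
sigma = np.sqrt(np.diag(np.linalg.inv(F.T @ Ci_F)))   # 1-sigma error bars
print("amplitudes:", np.round(a_hat, 2), "+/-", np.round(sigma, 2))
```

Geometrically, each estimated amplitude is the projection of the data onto the part of that fingerprint orthogonal (in the C⁻¹ metric) to the others, which is how the signals "not of interest" are removed in the estimation.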

Full access