# Search Results

## You are looking at 21–30 of 62 items for

- Author or Editor: Gerald R. North


## Abstract

Low-frequency (<20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large compared with, or comparable to, the important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal (“mixed” here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field-of-view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
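The beam-filling bias described above can be demonstrated with a short Monte Carlo sketch. The brightness-temperature curve, footprint size, and mixed-lognormal parameters below are illustrative assumptions, not the retrieval or field statistics of the paper; the point is only that averaging brightness temperature before inverting a nonlinear (here concave) relation biases the retrieved rain rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical saturating brightness-temperature relation Tb(R) and its
# inverse -- illustrative only, not the paper's retrieval formula.
def tb(rain):
    return 280.0 - 60.0 * np.exp(-0.2 * rain)

def tb_inv(temp):
    return -5.0 * np.log((280.0 - temp) / 60.0)

# Mixed-lognormal rain field inside one footprint: finite probability of
# no rain, lognormal rates where it rains (assumed parameters).
n_pixels, p_rain = 10_000, 0.3
raining = rng.random(n_pixels) < p_rain
rain = np.where(raining, rng.lognormal(mean=0.5, sigma=1.0, size=n_pixels), 0.0)

true_mean = rain.mean()               # true area-average rain rate
retrieved = tb_inv(tb(rain).mean())   # rate inferred from the mean Tb

print(f"true {true_mean:.3f}  retrieved {retrieved:.3f}  "
      f"beam-filling bias {retrieved - true_mean:.3f}")
```

Because Tb(R) is concave here, Jensen's inequality forces the retrieval below the true area average; the size of the bias grows with the heterogeneity of the footprint rain field.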

## Abstract

In this paper, point gauges are used in an analysis of hypothetical ground-validation experiments for satellite-based estimates of precipitation rates. The ground and satellite measurements are fundamentally different: the gauge samples continuously in time but at a discrete point, while the satellite samples an area average (typically 20 km across) but only a snapshot in time. The design consists of comparing a sequence of pairs of measurements taken from the ground and from space. Since real rain has a large nonzero contribution at zero rain rate, the following ground-truth designs are proposed: design 1 uses all pairs, design 2 uses the pairs only when the satellite field-of-view average has rain, and design 3 uses the pairs only when the gauge has rain. The error distribution of each design is derived theoretically for a Bernoulli spatial random field with different horizontal resolutions. It is found that design 3 cannot be used as a ground-truth design because of its large design bias. The mean-square error is used as an index of accuracy in estimating the ground measurement by the satellite measurement. It is shown that there is a relationship between the mean-square errors of designs 1 and 2 for the Bernoulli random field. Using this technique, the authors derive the number of satellite overpasses necessary to detect a satellite retrieval bias as large as 10% of the natural variability.
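A minimal simulation of the three designs for a Bernoulli field makes the design-3 bias concrete. The footprint size, rain probability, and unit rain rate below are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

n_visits, n_pixels, p = 20_000, 25, 0.2   # assumed footprint size and rain probability
field = (rng.random((n_visits, n_pixels)) < p).astype(float)  # Bernoulli rain field

gauge = field[:, 0]          # point gauge = one pixel
sat = field.mean(axis=1)     # satellite = footprint area average

def design_stats(mask):
    diff = gauge[mask] - sat[mask]
    return diff.mean(), (diff ** 2).mean()   # design bias and mean-square error

bias1, mse1 = design_stats(np.ones(n_visits, bool))  # design 1: all pairs
bias2, mse2 = design_stats(sat > 0)                  # design 2: satellite sees rain
bias3, mse3 = design_stats(gauge > 0)                # design 3: gauge sees rain
print(f"biases: {bias1:.3f}  {bias2:.3f}  {bias3:.3f}")
```

Conditioning on the gauge seeing rain (design 3) selects visits in which the gauge pixel reads well above the footprint mean, producing the large design bias noted in the abstract, while designs 1 and 2 remain essentially unbiased for this field.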

## Abstract

The perception of the hypothesized greenhouse effect will differ dramatically depending upon the location on the earth at which the effect is analyzed. This is due mainly to two causes: 1) the warming signal depends upon the position on the earth, and 2) the natural variability of the warming has a strong position dependence. To demonstrate these phenomena, simulations were conducted of the surface temperature field with a simple stochastic climate model that has enough geographical resolution to see the geographic dependence. The model was tuned to reproduce the geographical distribution of the present climate, including its natural variability in both the variance and the space–time correlation structure. While such effects have been discussed elsewhere with even more realistic climate models, it is instructive to actually see simulations of time series laid side by side in order to easily compare their differences and similarities. Because of the model's simplicity, the causes of the variations are easy to analyze. Not surprisingly, some realizations of the temperature for some local areas show countertrends for a period of several decades in the presence of the greenhouse warming.
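The countertrend behavior can be illustrated with a toy stochastic sketch: a linear warming signal plus red (AR(1)) noise for a single grid box. The trend and noise parameters are purely illustrative, not the model's tuned values.

```python
import numpy as np

rng = np.random.default_rng(2)

# One "grid box": linear greenhouse trend plus AR(1) natural variability.
years = 100
trend = 0.02 * np.arange(years)          # 2 C/century warming signal (assumed)
phi, sigma = 0.6, 0.25                   # AR(1) persistence and noise amplitude (assumed)
noise = np.zeros(years)
for t in range(1, years):
    noise[t] = phi * noise[t - 1] + sigma * rng.standard_normal()
temp = trend + noise

# Count 30-yr windows whose local least-squares slope is negative:
# decades-long countertrends despite the imposed warming.
slopes = [np.polyfit(np.arange(30), temp[i:i + 30], 1)[0]
          for i in range(years - 30)]
n_counter = sum(s < 0 for s in slopes)
print(f"{n_counter} of {len(slopes)} 30-yr windows show a local cooling trend")
```

With stronger or more persistent noise (as in high-variability regions), negative local trends become more frequent, which is the position dependence the abstract emphasizes.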

## Abstract

Optimal signal detection theory has been applied in a search through 100 yr of surface temperature data for the climate response to four specific radiative forcings. The data come from 36 boxes on the earth and were restricted to the frequency band 0.06–0.13 cycles yr^{−1} (16.67–7.69 yr) in the analysis. Estimates were sought of the strengths of the climate response to solar variability, volcanic aerosols, greenhouse gases, and anthropogenic aerosols. The optimal filter was constructed with a signal waveform computed from a two-dimensional energy balance model (EBM). The optimal weights were computed from a 10 000-yr control run of a noise-forced EBM and from 1000-yr control runs from coupled ocean–atmosphere models at the Geophysical Fluid Dynamics Laboratory (GFDL) and the Max Planck Institute; the authors also used a 1000-yr run of the GFDL mixed layer model. Results are reasonably consistent across these four separate model formulations. It was found that the component of the volcanic response perpendicular to the other signals was very robust and highly significant. Similarly, the component of the greenhouse gas response perpendicular to the others was very robust and highly significant. When the sum of all four climate forcings was used, the climate response was more than three standard deviations above the noise level. These findings are considered to be powerful evidence of anthropogenically induced climate change.
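The core of the optimal detection estimate can be sketched with synthetic data. Everything below (signal patterns, control-run noise, amplitudes) is fabricated for illustration; only the estimator itself, generalized least squares with weights from the inverse noise covariance, reflects the method described.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: two signal waveforms (columns of S) over 36 "boxes",
# noise covariance C estimated from a long control run (white noise here).
n, k = 36, 2
S = rng.standard_normal((n, k))              # hypothesized signal patterns
control = rng.standard_normal((5000, n))     # control-run samples
C = np.cov(control, rowvar=False)

a_true = np.array([1.0, 0.5])                # invented forcing amplitudes
d = S @ a_true + rng.multivariate_normal(np.zeros(n), C)

# Optimal (generalized least squares) amplitude estimates and their covariance:
#   a_hat = (S^T C^-1 S)^-1 S^T C^-1 d
Cinv_S = np.linalg.solve(C, S)
cov_a = np.linalg.inv(S.T @ Cinv_S)
a_hat = cov_a @ (Cinv_S.T @ d)
snr = a_hat / np.sqrt(np.diag(cov_a))        # detection strength in sigma units
print("estimated amplitudes:", a_hat, "sigma units:", snr)
```

Weighting by the inverse noise covariance is what makes the filter "optimal": it downweights directions in which natural variability is large, which is why good covariance information from control runs matters so much.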

## Abstract

Three statistically optimal approaches that have been proposed for detecting anthropogenic climate change are intercompared. It is shown that the core of all three methods is identical; however, the different approaches help to better understand the properties of optimal detection. The analysis also allows the problems of implementing these optimal techniques to be examined in a common framework. An overview of the practical considerations necessary for applying such an optimal detection method is given. Recent applications give some basis for optimism about progressively more significant detection of forced climate change. However, it is essential that good hypothesized signals and good information on climate variability be obtained, since erroneous variability, especially on the timescale of decades to centuries, can lead to erroneous conclusions.

## Abstract

Considered here are examples of statistical prediction based on the algorithm developed by Kim and North. The predictor is constructed in terms of space–time EOFs of the data and prediction domains. These EOFs are essentially a different representation of the covariance matrix, which is derived from past observational data. The two sets of EOFs contain information on how to extend the data domain into the prediction domain (i.e., statistical prediction) with minimum error variance. The performance of the predictor is similar to that of an optimal autoregressive model since both methods are based on the minimization of prediction error variance. Four different prediction techniques—canonical correlation analysis (CCA), maximum covariance analysis (MCA), principal component regression (PCR), and principal oscillation pattern (POP)—have been compared with the present method. The comparison shows that oscillation patterns in a dataset can faithfully be extended in terms of temporal EOFs, resulting in a slightly better performance of the present method than that of the predictors based on maximum pattern correlations (CCA, MCA, and PCR) or the POP predictor. One-dimensional applications demonstrate the usefulness of the predictor. The NINO3 and NINO3.4 sea surface temperature time series (3-month moving average) were forecast reasonably well up to a lead time of about 6 months. The prediction skill appears comparable to that of other, more elaborate statistical methods. Two-dimensional prediction examples also demonstrate the utility of the new algorithm. The spatial patterns of the SST anomaly field (3-month moving average) were forecast reasonably well up to about 6 months ahead. All these examples illustrate that the prediction algorithm is useful and computationally efficient for routine prediction practice.

## Abstract

Estimates of the amplitudes of the forced responses of the surface temperature field over the last century are provided by a signal processing scheme utilizing space–time empirical orthogonal functions for several combinations of station sites and record intervals taken from the last century. These century-long signal fingerprints come mainly from energy balance model calculations, which are shown to be very close to smoothed ensemble average runs from a coupled ocean–atmosphere model (Hadley Centre Model). The space–time lagged covariance matrices of natural variability come from 100-yr control runs from several well-known coupled ocean–atmosphere models as well as a 10 000-yr run from the stochastic energy balance climate model (EBCM). Evidence is found for robust, but weaker than expected signals from the greenhouse [amplitude ∼65% of that expected for a rather insensitive model (EBCM: *T*_{2×CO2} …)]

## Abstract

A parameter study of satellite orbits was performed to estimate sampling errors of area-time averaged rain rate due to temporal sampling by satellites. The sampling characteristics were investigated by accounting for varying visiting intervals and varying fractions of averaging area on each visit as a function of the latitude of the grid box for a range of satellite orbital parameters. The sampling errors were estimated by a simple model based on the first-order Markov process of the time series of area averaged rain rates.

For a satellite in the nominal TRMM orbit (30° inclination and 300 km altitude) carrying an ideal scanning microwave radiometer for direct precipitation measurements, the sampling error would be about 8 to 12% of estimated monthly mean rain rates over a 5° × 5° grid box. The effect of uneven sampling intervals with latitude tends to be offset by increasing sampling areas with latitude; therefore, the latitude dependence of the sampling error was not important. Nomograms for sampling errors are presented for a range of orbital parameters centered on the nominal TRMM orbit. An observation system combining a low-inclination satellite with a sun-synchronous satellite would be especially promising for precipitation measurements from space. For this idealized system, sampling errors well below 10% can be achieved for monthly rain rate estimates over 5° × 5° boxes.
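The first-order Markov error model lends itself to a short Monte Carlo check: sample an AR(1) proxy for the area-averaged rain rate at a fixed revisit interval and compare the sampled monthly mean with the true one. The decorrelation time and revisit interval below are illustrative, not TRMM values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hourly AR(1) proxy for area-averaged rain rate; decorrelation time assumed.
dt_hours, month_hours, tau = 1.0, 720, 10.0
phi = np.exp(-dt_hours / tau)
n_steps = int(month_hours / dt_hours)

n_months = 2000
x = np.zeros((n_months, n_steps))
for t in range(1, n_steps):
    x[:, t] = phi * x[:, t - 1] + np.sqrt(1 - phi ** 2) * rng.standard_normal(n_months)

revisit = 12                               # satellite revisit every 12 h (assumed)
true_mean = x.mean(axis=1)                 # "true" monthly mean
sampled_mean = x[:, ::revisit].mean(axis=1)

rms_error = np.sqrt(np.mean((sampled_mean - true_mean) ** 2))
rms_signal = np.std(true_mean)
print(f"sampling error / variability of the monthly mean: {rms_error / rms_signal:.2f}")
```

Shortening the revisit interval relative to the decorrelation time drives the ratio toward zero, which is the trade-off the nomograms in the paper map out over orbital parameters.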

## Abstract

A simple Budyko-Sellers mean annual energy balance climate model with diffusive transport (North, 1975b) is extended to include a seasonal cycle. In the model the latitudinal distribution of the zonal average surface temperature is represented by a series of Legendre polynomials, while its time-dependence is represented by a Fourier sine-cosine series. The model has three parameters which are adjusted so that the observed amplitudes of the Northern Hemisphere's zonal mean surface temperature are recovered. In order to obtain the correct amplitude and phase of the surface temperature's seasonal oscillation, allowance must be made for the disparity between the thermal inertia of the atmosphere over continents and that of the ocean's mixed layer. Although the model parameters are adjusted to recover the surface temperature fields of the Northern Hemisphere, a test of the model's ability to produce the fields of the Southern Hemisphere indicates that the model responds properly to changes in boundary conditions.

The seasonal model is used to reveal how the annual mean climate and its sensitivity to changes in incident radiation differ from the predictions obtained with the corresponding mean annual model. Although the zonal temperatures obtained with the seasonal model are 1–3°C higher than those obtained with the mean annual model, the changes in the global average annual mean surface temperatures calculated with the two models are practically identical for a 1% decrease in solar constant. Furthermore, because the albedo changes in them are linked mainly to changes in surface temperature, both models respond in the same manner to changes in the incident solar radiation caused by changes in the earth's orbit. The distribution of the incident solar radiation in the models is shown to be insensitive to changes in the eccentricity and the longitude of perihelion and sensitive only to changes in the obliquity of the earth. For past orbital changes, both the seasonal and the mean annual model fail to produce glacial advances of the magnitude that are thought to have occurred.
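For the mean-annual limit of such a model, the Legendre-mode solution can be written down directly. The sketch below solves a North-type diffusive EBM with frequently quoted parameter values, used here only to illustrate the solution method, not to reproduce the paper's tuned seasonal model.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

# Mean-annual diffusive EBM, solved by projecting onto even Legendre
# polynomials P_n(x), x = sin(latitude):
#   [n(n+1) D + B] T_n = H_n - A * (n == 0),
# where H_n is the Legendre coefficient of Q * S(x) * a(x).
# Parameter values are commonly quoted ones, used illustratively.
Q, A, B, D = 340.0, 203.3, 2.09, 0.649     # W m^-2; A, B, D in usual EBM units

x, w = leggauss(64)                         # Gauss-Legendre nodes/weights on [-1, 1]
P2 = 0.5 * (3 * x ** 2 - 1)
S = 1.0 - 0.482 * P2                        # normalized annual-mean insolation
coalbedo = 0.697 - 0.0779 * P2              # 1 - albedo, smooth approximation
forcing = Q * S * coalbedo

T = np.zeros_like(x)
for n in (0, 2, 4):
    Pn = legval(x, [0.0] * n + [1.0])
    Hn = (2 * n + 1) / 2 * np.sum(w * forcing * Pn)       # Legendre coefficient
    Tn = (Hn - (A if n == 0 else 0.0)) / (n * (n + 1) * D + B)
    T += Tn * Pn

T_global = 0.5 * np.sum(w * T)              # global mean = n = 0 coefficient
print(f"global mean {T_global:.1f} C; equator-pole contrast "
      f"{T[np.argmin(np.abs(x))] - T[-1]:.1f} C")
```

Larger D flattens the equator-to-pole gradient while leaving the global mean fixed, which is the one-parameter knob that the transport term contributes to the sensitivity studies described above.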

## Abstract

Tropical wave phenomena have been examined in the last 520 days of two 15-year runs of a low-resolution general circulation model (CCM0). The model boundary conditions were simplified to all-land, perpetual equinox, and no topography. The two runs were for fixed soil moisture at 75% and 0%, the so-called “wet” and “dry” models. Both models develop well-defined ITCZs with low-level convergence erratically concentrated along the equator. Highly organized eastward-propagating waves are detectable in both models with different wave speeds depending on the presence of moisture. The wave amplitudes (in, e.g., vertical velocity) are many orders of magnitude stronger in the wet model. The waves have a definite transverse nature, as precipitation (low-level convergence) patches tend to move systematically north and south across the equator. In the wet model the waves are distinctly nondispersive, and the transit time for passage around the earth is about 50 days, consistent with the Madden–Julian frequency. The authors are also able to see most of the expected linear wave modes in spectral density plots in the frequency–wavenumber plane and compare them for the wet and dry cases.
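A wavenumber–frequency analysis of the kind described can be sketched on a synthetic field: an eastward-propagating, nondispersive wave with a ~50-day transit time plus noise, whose spectral peak a 2D FFT recovers. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic equatorial field: zonal wavenumber-1 eastward wave with a
# ~50-day period (transit time around the earth), plus white noise.
n_lon, n_days = 128, 520
lon = 2 * np.pi * np.arange(n_lon) / n_lon
t = np.arange(n_days)
field = (np.cos(1 * lon[None, :] - 2 * np.pi * t[:, None] / 50.0)
         + 0.5 * rng.standard_normal((n_days, n_lon)))

# Wavenumber-frequency power spectrum via a 2D FFT. With numpy's sign
# convention this cos(kx - wt) wave puts its power in bins where frequency
# and zonal wavenumber have opposite signs, so magnitudes are reported.
spec = np.abs(np.fft.fft2(field)) ** 2
freq = np.fft.fftfreq(n_days)            # cycles per day
wavenum = np.fft.fftfreq(n_lon) * n_lon  # zonal wavenumber

it, ik = np.unravel_index(int(np.argmax(spec)), spec.shape)
f_peak, k_peak = abs(freq[it]), abs(wavenum[ik])
print(f"spectral peak: wavenumber {k_peak:.0f}, period {1 / f_peak:.1f} days")
```

In the model analysis, peaks like this one are compared against the dispersion curves of the expected linear equatorial modes; a nondispersive signal shows up as power lying on a straight line through the origin of the frequency–wavenumber plane.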
