## 1. Introduction

Obtaining skillful climate predictions is a very difficult problem, but the potential benefits are enormous. These benefits are truly triple bottom line in nature as climate impacts society, the environment, and economies around the globe. Climate prediction therefore rightly receives considerable scientific attention. At its heart, climate prediction is about understanding the physical forces that influence the global climate system. However, it has thus far proven to be impossible to derive reliable predictions based purely on our understanding of physics. Indeed, because of the chaotic nature of the atmosphere, uncertainty is an essential feature of the climate system. This means that the science of climate prediction is at times an uneasy alliance of deterministic modeling based on physics and statistical modeling based on observations. Ideally, we are able to learn about the climate system through physical thinking applied to the analysis of observed data.

The question naturally arises then as to which statistical models might be appropriate, and a brief survey of the climate literature is enough to extract the key themes. There are many articles discussing the nonlinear behavior of climate, such as Bell (1994), Corti et al. (1999), Graf and Castanheira (2001), Hsieh et al. (1999), Kawamurra et al. (1998), Latif et al. (1998), Palmer (1999), Rajagopalan et al. (2000), Selvam and Fadnavis (1998), Timmerman (1999), and Zwiers and von Storch (1990), to name a few. A fundamental observation here is that applying linear statistical methods for analysis, such as regression models and correlations, can be misleading in the presence of nonlinear dynamics. We cite specifically Palmer (1999) and Graf and Castanheira (2001) in this regard.

Graf and Castanheira (2001) examine modes of variability of the climate system, in the context of climate change in the Northern Hemisphere. It is pointed out that “tropospheric planetary waves can propagate vertically only in westwind below a critical value which depends on latitude and wavenumber.” That is, there is a threshold response in the system. Ignoring the threshold will essentially collapse the data over different regimes, yielding a seemingly very noisy set of observations. Palmer (1999) also makes this point, while examining some simple dynamical systems. Fundamentally, the climate system is dynamic, and changes in time are commonplace (e.g., Handorf et al. 1999; Thompson 1999).

The clear conclusion is that for statistical models to be useful in extending our knowledge of the climate system, they must be able to capture regime-dependent and other nonlinear behaviors. In this paper, we examine the contribution made by the widespread use of anomalies, coupled with linear statistical models, in the climate prediction literature. Our conclusion is that anomalies are a useful descriptive device but are not suitable in general for modeling purposes.

In the next section, we examine the motivation for using anomalies in statistical modeling work and consider the implications. In section 3, we explore the impact of anomalies in two simple nonlinear models. We first develop a simple regime-dependent model and then investigate anomalies in the context of the Lorenz system with periodic forcing. Section 3 is concluded with a consideration of alternatives to modeling using anomalies, arguing that a way forward is provided by so-called physical–statistical models, which couple physical and statistical models together. The focus is on modeling forcing factors, rather than applying empirical corrections for them with the intention that standard statistical methods may be applied. The approach is developed and illustrated in section 4 using a nonlinear conceptual model for ENSO. The paper is completed with a discussion and some conclusions.

## 2. Why anomalies?

Suppose that the vector of climate observations is represented by **Y**, the corresponding vector of climatological averages is represented by **C**, and 𝗩 is a diagonal matrix of variances used to standardize the anomalies, so that the standardized anomalies are *Ỹ* = 𝗩^{−1/2}(**Y** − **C**). The anomaly series would then be modeled in terms of other predictors using statistical methods allowing for noise *N*:

*Ỹ* = *f*(𝗫) + *N*,    (1)

where *f* is some function of the matrix of predictors 𝗫. Typically, in the literature *f* would be assumed to follow some linear form, such as for correlation analysis, but this is not essential. In general, we should allow for nonlinear predictor effects. Rearranging (1), the implied model for the original observations is

**Y** = **C** + 𝗩^{1/2}[*f*(𝗫) + *N*].    (2)

By implication the absolute magnitude of the original observations **Y** is considered irrelevant; only the size of the anomaly is important. That is, there is no state dependence.
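As a concrete sketch of this definition (the function name and data layout are my own, not from the paper), standardized monthly anomalies corresponding to 𝗩^{−1/2}(**Y** − **C**) could be computed as follows:

```python
import numpy as np

def standardized_anomalies(y, period=12):
    """Standardized anomalies of a regularly sampled series y with a
    seasonal cycle of the given period: subtract the climatological mean
    for each phase (month) and divide by that phase's climatological
    standard deviation, i.e. V^(-1/2)(Y - C)."""
    y = np.asarray(y, dtype=float)
    phase = np.arange(len(y)) % period
    clim_mean = np.array([y[phase == m].mean() for m in range(period)])
    clim_std = np.array([y[phase == m].std() for m in range(period)])
    return (y - clim_mean[phase]) / clim_std[phase]
```

By construction these anomalies have zero mean and unit variance within every month, which makes the "no state dependence" assumption explicit: an anomaly of a given size is treated identically whatever the season or the absolute level of the series.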

Implicitly, this model also assumes the presence of a stationary seasonal cycle, at least in the way it is normally estimated via seasonal averages. As noted above, this is a dubious assumption in general. Huybers and Wunsch (2003) consider the influence of Milankovitch cycles on paleoclimate records, concluding that “seasonal-cycle rectifiers” are required to allow for variations in the earth’s obliquity. That is, it is necessary to measure and model the seasonal cycle to correctly interpret climate data. A model-based approach would be to fit (2) directly, developing a suitable model for the seasonal cycle **C**. Christiansen (2003), for example, opts simply not to remove long-term trends in a study of large-scale stratospheric flow in the boreal winter.

We have worked from the definition of anomalies to identify an underlying model and have shown that an additive and stationary seasonal cycle is required. In the next section, we explore two nonlinear systems and examine the performance of anomalies.

## 3. Anomalies in nonlinear systems

### a. A simple nonstationary model

Consider a simple regime-switching model for seasonal climate data. Suppose there are two seasons *s* ∈ {1, 2}, and we assume that there are *k* climate regimes with different means of the climate series of interest **Y**. The regime in force at time *t* is *U*_{t} ∈ {1, 2, . . . , *k*}, which is not observable. Assume that the regime occupation probabilities are

τ_{l}^{(s)} = Pr(*U*_{t} = *l* | season *s*),

so that Σ_{l} τ_{l}^{(s)} = 1 for *s* = 1, 2. We model the regime-dependent climatology as follows:

μ_{l}^{(s)} = α_{s}β_{l}μ,    (3)

where *μ* is a baseline mean level. That is, simple scaling rules apply between regimes (defined by the coefficients *β*, subscripted by regime) and seasons (defined by *α*). The overall mean in season *s* is then

μ^{(s)} = Σ_{l} τ_{l}^{(s)}μ_{l}^{(s)}.    (4)

If the regime in force is *m*, the seasonal biases are given by the differences μ_{m}^{(s)} − μ^{(s)} between the regime-*m* climatology (3) and the overall mean (4). This result is illustrated in Fig. 1 for season 1 assuming there are two regimes, with the overall mean calculated according to (4). A similar figure would follow for season 2.

We note that if *β*_{m} ≡ 1 or τ_{l}^{(s)} ≡ *k*^{−1}, then the bias in each season will disappear. In the first case, this is a degenerate, single-regime system. The second case corresponds to regime-dependent behavior with equal mixing of regimes. This demonstrates analytically, for a simple representation of a nonlinear system, a problem of ignoring nonlinear behavior in calculating anomalies. In practical terms, we would expect this bias to introduce additional noise into any predictive scheme, limiting potential predictability. An anomaly approach may be reasonable if seasonal effects are additive within regimes—anomalies would be calculated within each regime to eliminate the bias. This would require that each observation first be allocated to a regime, which may be problematic and a source of uncertainty in itself.
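To make the effect tangible, here is a small simulation (my own illustrative sketch, not the paper's code; the regime means, occupation probabilities, and noise level are arbitrary choices): two regimes occur with unequal probabilities, so the pooled climatology sits between the regime means and every conventional anomaly inherits a regime-dependent offset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two regimes occupied with unequal probabilities (tau_1, tau_2)
tau = np.array([0.7, 0.3])
regime_means = np.array([1.0, 3.0])   # regime-dependent climatology
regime = rng.choice(2, size=n, p=tau)
y = regime_means[regime] + rng.normal(0.0, 0.1, size=n)

# Conventional anomalies subtract a single pooled climatology,
# which sits between the regime means (about 0.7*1.0 + 0.3*3.0 = 1.6)
anom = y - y.mean()

# Within each regime the anomalies carry a systematic offset
bias1 = anom[regime == 0].mean()   # about 1.0 - 1.6 = -0.6
bias2 = anom[regime == 1].mean()   # about 3.0 - 1.6 = +1.4
```

Computing anomalies within each regime would remove these offsets, but, as noted above, it requires first allocating each observation to a regime, which is itself a source of uncertainty.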

### b. The Lorenz system

The Lorenz (1963) system, augmented with periodic forcing, is given by

d*X*/d*t* = −*sX* + *sY* + *a*_{X}F(*t*),
d*Y*/d*t* = −*XZ* + *rX* − *Y* + *a*_{Y}F(*t*),    (7)
d*Z*/d*t* = *XY* − *bZ*,

where F(*t*) is a periodic forcing function and *a*_{X}, *a*_{Y} are forcing amplitudes; setting *a*_{X} = *a*_{Y} = 0 recovers the classical Lorenz system. A realization of the *X* variable (*r* = 28, *b* = 8/3, and *s* = 10) is shown in Fig. 2. There are complex behaviors evident in this time series, with apparent regime and amplitude phase dependence.

The nonlinearity of the Lorenz system arises because of the interaction terms *XZ* and *XY*; without these interactions, the system becomes trivially linear and analytically tractable. Interactions are common to many well-known nonlinear systems, such as the Rössler system with one interaction term (Middleton et al. 1995, 38–39) and the van der Pol oscillator (Tong 1983, p. 25). Interactions are also pervasive in the climate system.

These equations have been solved numerically with amplitudes *a*_{X} = *a*_{Y} = 20 for time *t* = 1, . . . , 480, representing 40 yr of monthly data. Time series and autocorrelation plots for *X* are shown in Fig. 3; the effect of the periodic forcing is immediately obvious. Without periodic forcing, there would be no significant autocorrelations present.

The attractor reconstructed from the “data” without periodic forcing (*a*_{X} = *a*_{Y} = 0) is shown in Fig. 4, and the familiar butterfly shape is evident, as well as a clustering of points along a 45° line through the origin. The attractor constructed from the data with periodic forcing is shown in Fig. 5, and the fundamental characteristics are similar but with less well defined detail. The attractor reconstructed from monthly anomalies is shown in Fig. 6, and clearly this is not a good reconstruction of the Lorenz system without periodic forcing. Note that the attractor reconstructed from normalized monthly anomalies is very similar.

The irresistible conclusion is that modeling using anomalies from periodically forced nonlinear systems is likely to be ineffective. Anomalies are ineffective for the Lorenz system because the periodic forcing does not result in a simple additive periodic effect on the scale of the system variables *X*, *Y*, and *Z*.
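For readers wishing to reproduce the experiment in outline, the following sketch integrates the Lorenz system with additive sinusoidal forcing in the *X* and *Y* equations (the sinusoidal forcing form, step size, initial condition, and forcing period here are my assumptions for illustration, not the paper's exact configuration):

```python
import numpy as np

def lorenz_forced(n_steps, dt=0.01, r=28.0, b=8.0 / 3.0, s=10.0,
                  ax=0.0, ay=0.0, period=12.0):
    """Integrate the Lorenz system with additive periodic forcing of
    amplitudes ax, ay in the X and Y equations, using classical
    4th-order Runge-Kutta; returns the (X, Y, Z) trajectory."""
    def rhs(t, u):
        x, y, z = u
        force = np.sin(2.0 * np.pi * t / period)
        return np.array([s * (y - x) + ax * force,
                         r * x - y - x * z + ay * force,
                         x * y - b * z])

    u = np.array([1.0, 1.0, 1.0])   # arbitrary initial condition
    out = np.empty((n_steps, 3))
    t = 0.0
    for i in range(n_steps):
        k1 = rhs(t, u)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1)
        k3 = rhs(t + dt / 2, u + dt / 2 * k2)
        k4 = rhs(t + dt, u + dt * k3)
        u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out[i] = u
    return out

# Unforced vs. forced realizations of the X variable
unforced = lorenz_forced(5000)[:, 0]
forced = lorenz_forced(5000, ax=20.0, ay=20.0)[:, 0]
```

Plotting `unforced` against its delayed copies reproduces the familiar butterfly attractor; repeating the exercise with `forced`, and then with monthly anomalies of `forced`, illustrates the degradation discussed above.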

### c. Modeling without anomalies

There is a key philosophical point from the above. The dynamical system driven by the Lorenz equations is governed by the physical laws described by (7). It is not the case that different physics apply for different segments in time. The system is also clearly state dependent. In contrast, the empirical model (1) using anomaly data is built on the implicit assumption of no state dependence because we may simply subtract the seasonal cycle. Thus an anomaly of a given magnitude is assumed to have the same influence regardless of the time of year. In practice, this is typically mitigated by building models for each season of interest. This approach could be viewed as a simple way of incorporating state dependence. However, this approach is not supported by the evidence of even simple dynamical systems, where physical laws do not change with the time of year, although external forcing (such as solar radiation) clearly does vary in time. The risk of building models separately for each season is that we may be able to reproduce some local features of the data, but we will not explain the key physical mechanisms driving the system.

If we transform the data without knowledge of the underlying functional relationship *h*[·], we risk destroying the relationship between the series *Y*(·) and the potential predictors **X**(·). The general data-based climate prediction problem may be summarized by

*Y* = *h*(𝗫) + *N*,    (8)

and we may build on this as follows. If we assume that each predictor in 𝗫 is made up of a seasonal component with a predictor effect superimposed, then the *i*th predictor may be written as *X*_{i} = *C*_{i} ∘ *Z*_{i} (*i* = 1, . . . , *p*, say), where “∘” denotes an unknown operator. Thus (8) becomes

*Y* = *h*(*C*_{1} ∘ *Z*_{1}, . . . , *C*_{p} ∘ *Z*_{p}) + *N*.    (9)

The aim in calculating anomalies is to isolate the effects {*Z*_{i}} additional to the seasonal cycle {*C*_{i}} for each predictor, assuming the operator ∘ to be a simple addition. Calculating anomalies of *Y* (denoted *Ỹ*) and of the {*X*_{i}} (denoted {*Z̃*_{i}}), the intended model is

*Ỹ* = *h*(*Z̃*_{1}, . . . , *Z̃*_{p}) + *N*,    (10)

with the noise term *N* suitably rescaled. To proceed beyond this point, we need to make some reasonable assumptions about the form of *h* and the nature of any forcing factors. The more physical insight we can apply the better. The work cited in the introduction suggests that we should at least consider the possibility of nonlinear behavior.

In general, it is not at all clear that the dynamical properties of {*X*_{i}} and {*Z*_{i}} will even be similar. An approach based on anomalies only seems reasonable if we can make this assumption. The evidence supplied by our analysis of the Lorenz system casts significant doubt on this. An approach inspired by dynamical systems thinking is to ensure that statistical models are built using variables that measure the seasonal cycle directly (Huybers and Wunsch 2003). An objection that could be raised to this is that we will be building models using correlated predictors, which could lead to poor predictive skill. This would be a concern if regression methods were used, but time series models incorporate correlation through time, thus avoiding these difficulties.

It may reasonably be argued that statistical modeling should be focused on identifying the most important interactions and nonlinear behaviors in a climate dataset. We would then seek to understand the physical principles underlying these interactions, using a variety of statistical models and physical reasoning. We are led to the general conclusion that we require models with both physical and statistical components. The statistical component is required to model the observation process and capture uncertainties, but ideally physical models are used to capture nonlinear behaviors and forcing factors, including seasonal forcing. In the absence of a suitable physical model, nonlinear statistical methods could be used (e.g., Lewis and Stevens 1991) to estimate the unknown (possibly nonlinear) dynamics.

## 4. Physical–statistical models

An exciting recent development is the use of Bayesian hierarchical methods to develop hybrid physical–statistical models, which provide for a sophisticated balance between physical and statistical modeling. The idea driving these methods is that there are many sources of information available to aid understanding of physical systems. We may make use of observations of various kinds, as well as models of various subsystems. The Bayesian hierarchical approach allows us to integrate these sources of information, including the uncertainty in each component. For a general introduction, see Wikle (2003). Some examples of applying this thinking to physical processes may be found in Berliner et al. (2003), Berliner (2003), and Berliner et al. (2000).

Consider a physical process *P*, which may be a collection of subprocesses, with physical parameters *η*. In observing the process *P*, we generate data *D* and hence statistical parameters *θ*. We assume that all of these elements are subject to uncertainty and seek to develop a model for the joint probability distribution denoted [*D*, *P*, *η*, *θ*]. We may apply Bayes’ theorem (Bernardo and Smith 1994, p. 2) to factorize this joint probability model as

[*D*, *P*, *η*, *θ*] = [*D* | *P*, *η*, *θ*][*P* | *η*, *θ*][*η*, *θ*].    (11)

For the first term on the right-hand side, it is reasonable to assume that, given the process *P* and *θ*, there is no further information in the physical parameters *η* about the data *D*. Similarly for the second term, given *η*, there is no further information in the statistical parameters *θ* on the physical process *P*. We may therefore simplify (11) to

[*D*, *P*, *η*, *θ*] = [*D* | *P*, *θ*][*P* | *η*][*η*, *θ*].    (12)

We see that the joint probability model is the product of a model for the data, a process model, and a prior parameters model. The prior parameters model captures available information on the parameters before the data are collected. For more details see Berliner (2003). A key point to note about this so-called physical–statistical model is the interconnection between the data and process models. The physical and statistical components are coupled by conditioning the data model on the physical process *P*. Bayes’ theorem then yields the posterior distribution

[*P*, *η*, *θ* | *D*] ∝ [*D* | *P*, *θ*][*P* | *η*][*η*, *θ*].    (13)

Algorithms for fitting physical–statistical models represent an active area of research. We use the importance-sampling Monte Carlo approach of Berliner et al. (2003). This requires us to generate a relatively small ensemble from the prior parameters model and to pass each member of the ensemble through the physical process model. This physical process ensemble is then resampled so that a much larger sample drawn approximately from the posterior distribution (13) is obtained. This is done by assigning probabilities to each member of the ensemble, calculated using the observed data, and then sampling from them with replacement. Ensemble members close to the observed data will be assigned a relatively high probability. In our example, we use 100 members, which are resampled 1000 times.
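The importance-sampling scheme can be sketched generically as follows (my own illustration: a trivial sinusoid stands in for the physical process model, and all parameter values are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Prior parameters ensemble: 100 draws of an unknown amplitude
prior_ensemble = rng.normal(1.0, 0.3, size=100)

# Pass each member through a (stand-in) physical process model
t = np.arange(60)
process_ensemble = np.array([a * np.sin(2 * np.pi * t / 12)
                             for a in prior_ensemble])

# Observed data: truth (amplitude 1.2) plus measurement noise
obs = 1.2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 60)

# Importance weights: Gaussian likelihood of the data given each member
sigma = 0.5
log_w = -0.5 * ((obs - process_ensemble) ** 2).sum(axis=1) / sigma ** 2
w = np.exp(log_w - log_w.max())   # stabilize before exponentiating
w /= w.sum()

# Resample 1000 times with replacement: members close to the data are
# drawn often, giving an approximate posterior ensemble
idx = rng.choice(100, size=1000, p=w)
posterior = prior_ensemble[idx]
```

The posterior ensemble contains at most as many distinct values as the prior ensemble, which is why a badly misspecified prior can leave only a handful of distinct members, a point returned to in the results below.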

### a. A physical–statistical conceptual model for ENSO

To illustrate physical–statistical modeling, we examine the nonlinear conceptual model for ENSO developed by Suarez and Schopf (1988), incorporating periodic forcing (Minobe and Jin 2004). We assume that the periodic forcing has unknown amplitude and period, so an approach based on anomalies is not available. The modeling will provide estimates of these parameters, as well as joint uncertainty statements with the other parameters. This enables us to model periodic effects, rather than attempting to remove them via the calculation of anomalies.

#### 1) The physical process model

The physical process model is the delayed action oscillator of Suarez and Schopf (1988), with periodic forcing added (Minobe and Jin 2004):

d*f*/d*t* = *f* − *f*³ − *αf*(*t* − *τ*) + *A*_{f} sin(2π*t*/*T*_{f}),

where *f* represents the amplitude of a growing ENSO disturbance. The periodic forcing has amplitude *A*_{f} and period *T*_{f}; the delayed damping term *f*(*t* − *τ*) has amplitude *α*. We assume that the delay term *τ* is known (if this parameter is incorrect, it tends to be very obvious), but the other parameters are unknown. We do not incorporate any further sources of error, but this could be done (e.g., Royle et al. 1999). It is also possible to incorporate an unknown number of subharmonics, to be determined by available data. Note in particular that it is possible to incorporate an uncertain initial condition, which becomes another physical parameter in the model. Wikle et al. (2003) examine the general problem of incorporating stochastic boundary conditions within a hierarchical framework. A detailed case study using these methods is provided by Berliner et al. (2003).
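A minimal numerical sketch of the forced delayed oscillator is given below (my own Euler discretization; the constant pre-history, step size, and initial value are assumptions, while the parameter values follow those quoted for the priors in this section):

```python
import numpy as np

def delayed_oscillator(n_steps, dt=0.1, alpha=0.75, a_f=1.0,
                       t_f=4.0, tau=6.0, f0=0.1):
    """Euler integration of the forced delayed oscillator
    df/dt = f - f^3 - alpha * f(t - tau) + a_f * sin(2*pi*t/t_f).
    The history f(t) for t < 0 is held constant at f0."""
    lag = int(round(tau / dt))         # delay expressed in time steps
    f = np.full(n_steps + lag, f0)     # prepend a constant history
    for i in range(lag, n_steps + lag - 1):
        t = (i - lag) * dt
        dfdt = (f[i] - f[i] ** 3 - alpha * f[i - lag]
                + a_f * np.sin(2 * np.pi * t / t_f))
        f[i + 1] = f[i] + dt * dfdt
    return f[lag:]

trajectory = delayed_oscillator(600)
```

The cubic damping keeps the disturbance bounded while the delayed term and the periodic forcing drive oscillations; trajectories like `trajectory` play the role of "truth" in the simulation study that follows.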

#### 2) The data model

We assume that only measurement error applies, so for ease of illustration we use data simulated as truth + noise, with noise modeled as independent normally distributed errors having a known standard deviation of 0.5. If it were unreasonable to assume the noise standard deviation to be known (e.g., when new recording equipment is introduced), this parameter could be estimated as well. We could readily model autocorrelated errors and account for more sophisticated features such as known trends, but this is not our present objective.

#### 3) The prior parameters model

We assume a priori independence of all parameters, which is not essential. This does not imply posterior independence because the degree of parameter correlation will be influenced by the data. The prior uncertainties are modeled using normal distributions for each parameter, for convenience. The prior means were set to the true values of 0.75, 1.0, and 4.0 for *α*, *A*_{f}, and *T*_{f}, respectively, with standard deviations of 0.05, 0.05, and 0.2. The delay parameter *τ* was taken to be 6.

### b. Results

The simulated data are shown in Fig. 7a up to time 60, representing 5 yr of monthly data, with the physical process ensemble in Fig. 7b. Recall that the physical process ensemble is derived by passing each member of the prior parameters ensemble through the physical model and then resampling from these. We see that the posterior ensemble passes through the center of the physical ensemble. It is worth noting that without the physical process model, it would be a difficult task to develop a meaningful model using these data via purely statistical means.

The posterior distribution for each of the unknown parameters is summarized using a boxplot in Fig. 8. In each case, it is clear that there is less posterior uncertainty, but the uncertainty reduction in the forcing period *T _{f}* is particularly noteworthy.

We have experimented with a variety of parameter combinations, including cases where the prior specifications are very inaccurate compared to the true values. In such cases, the prior information is downweighted in favor of the data, which may leave only a small number of distinct ensemble members to explore the posterior distribution when using the importance sampling approach. Note that we may also explore parameter correlations, or any other feature of interest, using the posterior ensemble.

### c. Forecasting

To obtain a forecast, we simply pass each member of the posterior ensemble through the physical model and run time forward, summarizing the resulting probability distributions at each time step of interest. The results for times 61 to 63 are shown in Fig. 9. An interesting feature here is the decline in uncertainty as the lead time is increased, which seems counterintuitive. However, the process is about to enter a steep increasing phase before a peak (see Fig. 7), so while there is some uncertainty as to the path the process will take to the next peak, the timing and magnitude of the peak are much less uncertain. The influence of the physical model is very clear.

In practice, using all members of the posterior ensemble to form a forecast may not be computationally feasible, in which case we would draw a representative sample from the posterior ensemble to project forward.
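Schematically, the forecasting step might look like this (a sketch only: the posterior ensemble values and the sinusoidal stand-in for the physical model are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Posterior ensemble of an amplitude parameter (placeholder values)
posterior = rng.normal(1.2, 0.05, size=1000)

# Draw a representative subsample and run the (stand-in) model forward
sample = rng.choice(posterior, size=200, replace=False)
lead_times = np.arange(61, 64)
paths = np.array([a * np.sin(2 * np.pi * lead_times / 12)
                  for a in sample])

# Summarize the forecast distribution at each lead time
median = np.median(paths, axis=0)
lo, hi = np.percentile(paths, [5, 95], axis=0)
```

Each row of `paths` is one ensemble member's forecast trajectory, and the per-lead-time quantiles summarize the forecast distribution, mirroring the construction of Fig. 9.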

## 5. Discussion and conclusions

If the seasonal cycle may be considered additive and stationary on the scale of the observed variables, then our results show that a conventional approach based on anomalies is justified. It is well established in the literature that climate is not stationary (e.g., Thompson 1999), but if the seasonal cycle is additive, then a more sophisticated approach, such as complex demodulation (Meyers and O’Brien 1994; Thompson 1999), could be used to subtract the seasonal cycle. The resulting anomalies can be modeled using either linear or nonlinear statistical models. It is good statistical practice to assume that nonlinear relationships exist and to assess whether simpler (linear) relationships are better supported by the available data. It is, however, very common in the climate literature to simply seek linear relationships, using correlation analysis, for example. This means that potentially skillful relationships will be missed.

In examining the literature, we see that statistical modeling of climate processes should be preceded by consideration of nonlinearity of the processes involved. If we assume simple linearity when this is not true, then we can expect to produce little skill in climate prediction. At the least, we should look for evidence of regime-dependent behavior in the data, if physical understanding is not sufficient. We developed a very simple regime-switching model, which shows that conventional anomalies introduce a bias when there is in fact regime dependence. We were able to quantify this bias, and show that it will be present unless the regimes occur in equal proportions. If it is reasonable to assume an additive seasonal cycle within regimes, then perhaps modeling using anomalies calculated within each regime will not distort the underlying climate process unduly. This does not explain the mechanism for switching between regimes, however, which is arguably most fundamental.

In many cases, it seems unlikely that the assumption of an additive, stationary seasonal cycle will be justified, and an approach that incorporates modeling of seasonal forcing is to be preferred. We have seen this explicitly in relation to the Lorenz system. A potentially insightful modeling approach is suggested by (9). In this representation, we allow the potential predictors of the system to carry the seasonal cycle into the model and use appropriate techniques to reconstruct the functional relationship between the climate response of interest and the potential predictors. There are many data-based approaches available that could be used. Spline methods are potentially very useful (e.g., Lewis and Stevens 1991), as are threshold models (Campbell 2004) for exploring regime-dependent behavior. Splines offer a natural means to explore for key interactions in a climate dataset, an important feature we identified in relation to some simple dynamical systems. Even if we are not able to identify the underlying dynamics of a system, we will be able to identify nonlinear behavior and key interactions. This may at least inspire some interesting physical questions.

Anomalies have a useful role to play in graphical depictions of climate phenomena, attracting attention to deviations from climatology. They may also be useful in exploring natural variability of climate variables, and in some cases may have a physical interpretation that renders them useful from a modeling perspective. An example of this is provided by Berliner et al. (2000), where sea surface temperature anomalies are used within a dynamical framework. Where anomalies are used, the implicit assumptions should be made explicit and tested. Fundamentally, we seek to build statistical models that adhere as closely as possible to the physical principles of the system being studied.

We have explored so-called physical–statistical modeling as a means to develop models using data from nonlinear systems, illustrating the approach using a nonlinear conceptual model for ENSO. While we have focused on a univariate time series, it is possible to build such models for multivariate data and to incorporate dimension reduction (Berliner et al. 2000). We used simulated data so that the true system is known, and we found that the physical–statistical model was able to recover the underlying system. The forecasts from the system were shown to borrow strength from both the physical and statistical components of the model. Research is in progress on methods to combine conceptual models with observations to model and forecast climate processes. There are the foundations here for a new way of modeling, bringing together physical and statistical modeling in a single framework. This is essential if we are to learn about nonlinear systems using observations.

## Acknowledgments

The Western Australian State Government provided funding for this research through the Indian Ocean Climate Initiative. The author is grateful to Bryson Bates for helpful discussions during the development of the ideas described in this paper, to Brent Henderson for comments on an early draft of this paper, and to Lenny Smith for a stimulating early conversation. I am indebted to Andreas Hense and an anonymous referee for very thoughtful comments on earlier versions, which led to a substantial improvement in both the content and presentation of the paper.

## REFERENCES

Bell, G. D., 1994: Midtropospheric closed cyclone formation over the southwestern United States, the eastern United States, and the Alps. *Mon. Wea. Rev.*, **122**, 791–813.

Berliner, L. M., 2003: Physical–statistical modeling in geophysics. *J. Geophys. Res.*, **108**, 8776, doi:10.1029/2002JD002865.

Berliner, L. M., C. K. Wikle, and N. Cressie, 2000: Long-lead prediction of Pacific SST via Bayesian dynamic modeling. *J. Climate*, **13**, 3953–3968.

Berliner, L. M., R. F. Milliff, and C. K. Wikle, 2003: Bayesian hierarchical modeling of air–sea interaction. *J. Geophys. Res.*, **108**, 3104, doi:10.1029/2002JC001413.

Bernardo, J. M., and A. F. M. Smith, 1994: *Bayesian Theory*. John Wiley & Sons, 586 pp.

Campbell, E. P., 2004: Bayesian selection of threshold autoregressive models. *J. Time Series Anal.*, **25**, 467–482.

Christiansen, B., 2003: Evidence for nonlinear climate change: Two stratospheric regimes and a regime shift. *J. Climate*, **16**, 3681–3690.

Corti, S., F. Molteni, and T. N. Palmer, 1999: Signature of recent climate change in frequencies of natural atmospheric circulation regimes. *Nature*, **398**, 799–802.

Graf, H-F., and J. M. Castanheira, 2001: Structural changes of climate variability. Max-Planck-Institut für Meteorologie Rep. 330, 12 pp.

Handorf, D., V. K. Petoukhov, K. Dethloff, A. V. Eliseev, A. Weisheimer, and I. I. Mokhov, 1999: Decadal climate variability in a coupled atmosphere–ocean climate model of moderate complexity. *J. Geophys. Res.*, **104**, 27253–27275.

Hsieh, W. W., B. Tang, and E. R. Garnett, 1999: Teleconnections between Pacific sea surface temperatures and Canadian prairie wheat yield. *Agric. For. Meteor.*, **96**, 209–217.

Huybers, P., and C. Wunsch, 2003: Rectification and precession signals in the climate system. *Geophys. Res. Lett.*, **30**, 1–4.

Kawamurra, A., A. I. McKerchar, R. H. Spigel, and K. Jinno, 1998: Chaotic characteristics of the Southern Oscillation index time series. *J. Hydrol.*, **204**, 168–181.

Latif, M., and Coauthors, 1998: A review of the predictability and prediction of ENSO. *J. Geophys. Res.*, **103**, 14375–14393.

Lewis, P. A. W., and J. G. Stevens, 1991: Nonlinear modeling of time series using Multivariate Adaptive Regression Splines (MARS). *J. Amer. Stat. Assoc.*, **86**, 864–877.

Lorenz, E. N., 1963: Deterministic nonperiodic flow. *J. Atmos. Sci.*, **20**, 130–141.

Meyers, S. D., and J. J. O’Brien, 1994: Spatial and temporal 26-day SST variations in the equatorial Indian Ocean using wavelet analysis. *Geophys. Res. Lett.*, **21**, 777–780.

Middleton, G. V., R. E. Plotnick, and D. M. Rubin, 1995: Introduction to nonlinear models. *Nonlinear Dynamics and Fractals—New Techniques for Sedimentary Data*, Short Course No. 36, SEPM, 29–46.

Minobe, S., and F. F. Jin, 2004: Generation of interannual and interdecadal climate oscillations through nonlinear subharmonic resonance in delayed oscillators. *Geophys. Res. Lett.*, **31**, L16206, doi:10.1029/2004GL019776.

Palmer, T. N., 1999: A nonlinear dynamical perspective on climate prediction. *J. Climate*, **12**, 575–591.

Rajagopalan, B., E. Cook, U. Lall, and B. K. Ray, 2000: Spatiotemporal variability of ENSO and SST teleconnections to summer drought over the United States during the twentieth century. *J. Climate*, **13**, 4244–4255.

Royle, J. A., L. M. Berliner, C. K. Wikle, and R. F. Milliff, 1999: A hierarchical spatial model for constructing wind fields from scatterometer data in the Labrador Sea. *Case Studies in Bayesian Statistics IV*, C. Gatsonis, Ed., Springer-Verlag, 367–382.

Selvam, A. M., and S. Fadnavis, 1998: Signatures of a universal spectrum for atmospheric interannual variability in some disparate climatic regimes. *Meteor. Atmos. Phys.*, **66**, 87–112.

Suarez, M. J., and P. S. Schopf, 1988: A delayed action oscillator for ENSO. *J. Atmos. Sci.*, **45**, 3283–3287.

Thompson, R., 1999: A time-series analysis of the changing seasonality of precipitation in the British Isles and neighbouring areas. *J. Hydrol.*, **224**, 169–183.

Timmerman, A., 1999: Detecting the nonstationary response of ENSO to greenhouse warming. *J. Atmos. Sci.*, **56**, 2313–2325.

Tong, H., 1983: *Threshold Models in Non-Linear Time Series Analysis*. Vol. 21, *Lecture Notes in Statistics*, Springer-Verlag, 323 pp.

Wikle, C. K., 2003: Hierarchical models in environmental science. *Int. Stat. Rev.*, **71**, 181–199.

Wikle, C. K., L. M. Berliner, and R. F. Milliff, 2003: Hierarchical Bayesian approach to boundary value problems with stochastic boundary conditions. *Mon. Wea. Rev.*, **131**, 1051–1062.

Zwiers, F., and H. von Storch, 1990: Regime-dependent autoregressive time series modeling of the Southern Oscillation. *J. Climate*, **3**, 1347–1363.