Search Results

You are looking at 1 - 10 of 82 items for

  • Author or Editor: Michael K. Tippett
Full access
Michael K. Tippett

Abstract

The Madden–Julian oscillation (MJO) is the leading mode of tropical variability on subseasonal time scales and has predictable impacts in the extratropics. Whether the MJO has a discernible influence on U.S. tornado occurrence has important implications for the feasibility of extended-range forecasting of tornado activity. Interpretation and comparison of previous studies are difficult because of differing data periods, methods, and tornado activity metrics. Here, a previously described modulation of the frequency of violent tornado outbreaks (days with six or more reported tornadoes rated EF2 or greater) by the MJO is shown to be fairly robust to the addition or removal of years in the analysis period and to changes in the number of tornadoes used to define outbreak days, but less robust to the choice of MJO index. Earlier findings of a statistically significant MJO signal in the frequency of days with at least one tornado report are shown to be incorrect. The reduction in the frequency of days with tornadoes rated EF1 and greater when MJO convection is present over the Maritime Continent and western Pacific is statistically significant in April and robust across varying thresholds of reliably reported tornado numbers and MJO indices.
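The kind of phase-dependence and significance test described above can be illustrated with a minimal Python sketch. Everything here is synthetic and hypothetical: random MJO phases and tornado-day flags with a built-in suppression during phases 5–6 (convection over the Maritime Continent and western Pacific), and a simple permutation test. It is not the study's data or its actual test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data: MJO phase (1-8) and a binary tornado-day flag,
# with a built-in suppression of tornado days during phases 5-6.
n_days = 4000
phase = rng.integers(1, 9, size=n_days)
p = np.where(np.isin(phase, [5, 6]), 0.08, 0.22)
tornado_day = rng.random(n_days) < p

def phase_frequencies(phase, event):
    """Fraction of days with an event in each MJO phase 1-8."""
    return np.array([event[phase == k].mean() for k in range(1, 9)])

obs = phase_frequencies(phase, tornado_day)

# Permutation test: shuffling the event series breaks any phase link, so
# the spread of frequencies across phases under shuffling gives a null
# distribution for the observed spread.
stat = np.ptp(obs)
n_perm = 500
null = np.array([np.ptp(phase_frequencies(phase, rng.permutation(tornado_day)))
                 for _ in range(n_perm)])
p_value = (null >= stat).mean()
```

A robustness check in the spirit of the abstract would repeat this while varying the event definition (tornado count threshold, rating threshold) and the phase index.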

Open access
Mathew A. Barlow
and
Michael K. Tippett

Abstract

Warm season river flows in central Asia, which play an important role in local water resources and agriculture, are shown to be closely related to the regional-scale climate variability of the preceding cold season. The peak river flows occur in the warm season (April–August) and are highly correlated with the regional patterns of precipitation, moisture transport, and jet-level winds of the preceding cold season (November–March), demonstrating the importance of regional-scale variability in determining the snowpack that eventually drives the rivers. This regional variability is, in turn, strongly linked to large-scale climate variability and tropical sea surface temperatures (SSTs), with the circulation anomalies influencing precipitation through changes in moisture transport. The leading pattern of regional climate variability, as resolved in the operationally updated NCEP–NCAR reanalysis, can be used to make a skillful seasonal forecast for individual river flow stations. This ability to make predictions based on regional-scale climate data is of particular use in this data-sparse area of the world.

The river flow is considered in terms of 24 stations in Uzbekistan and Tajikistan available for 1950–85, with two additional stations available for 1958–2003. These stations encompass the headwaters of the Amu Darya and Syr Darya, two of the main rivers of central Asia and the primary feeders of the catastrophically shrinking Aral Sea. Canonical correlation analysis (CCA) is used to forecast April–August flows based on the period 1950–85; cross-validated correlations exceed 0.5 for 10 of the stations, with a maximum of 0.71. Skill remains high even after 1985 for two stations withheld from the CCA: the correlation for 1986–2002 for the Syr Darya at Chinaz is 0.71, and the correlation for the Amu Darya at Kerki is 0.77. The forecast is also correlated to the normalized difference vegetation index (NDVI); maximum values exceed 0.8 at 8-km resolution, confirming the strong connection between hydrology and growing season vegetation in the region and further validating the forecast methodology.
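The CCA-based forecast and its cross-validation can be sketched with synthetic data. The sketch below is hypothetical: a common "snowpack" signal is shared by cold-season predictors and warm-season station flows, CCA is implemented via whitened SVDs, and skill is measured by leave-one-out cross-validated correlation. It is not the study's configuration, predictor set, or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: cold-season climate predictors (e.g. leading regional
# PCs) and warm-season river flows at stations, sharing one common signal.
n_years, n_pred, n_sta = 36, 5, 10
signal = rng.standard_normal(n_years)
X = np.outer(signal, rng.standard_normal(n_pred)) + 0.3 * rng.standard_normal((n_years, n_pred))
Y = np.outer(signal, rng.standard_normal(n_sta)) + 0.3 * rng.standard_normal((n_years, n_sta))

def cca_forecast(X_train, Y_train, X_test, n_modes=1):
    """Forecast Y from X via CCA, retaining n_modes canonical pairs."""
    Xm, Ym = X_train.mean(0), Y_train.mean(0)
    Xc, Yc = X_train - Xm, Y_train - Ym
    # Whiten each field with its SVD, then SVD the whitened cross-covariance.
    Ux, sx, Vx = np.linalg.svd(Xc, full_matrices=False)
    Uy, sy, Vy = np.linalg.svd(Yc, full_matrices=False)
    U, s, Vt = np.linalg.svd(Ux.T @ Uy)
    # Map: X anomalies -> canonical X scores -> canonical Y scores -> Y anomalies
    A = Vx.T @ np.diag(1 / sx) @ U[:, :n_modes]
    B = np.diag(s[:n_modes]) @ Vt[:n_modes] @ np.diag(sy) @ Vy
    return (X_test - Xm) @ A @ B + Ym

# Leave-one-out cross-validation, then per-station correlation skill.
pred = np.array([cca_forecast(np.delete(X, i, 0), np.delete(Y, i, 0), X[i])
                 for i in range(n_years)])
skill = np.array([np.corrcoef(pred[:, j], Y[:, j])[0, 1] for j in range(n_sta)])
```

Withholding whole stations from the CCA fit, as done for the two post-1985 stations, is the same idea applied across space rather than time.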

Full access
Michael K. Tippett
and
Anthony G. Barnston

Abstract

The cross-validated hindcast skills of various multimodel ensemble combination strategies are compared for probabilistic predictions of monthly SST anomalies in the ENSO-related Niño-3.4 region of the tropical Pacific Ocean. Forecast data from seven individual models of the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) project are used, spanning the 22-yr period of 1980–2001. Skill of the probabilistic forecasts is measured using the ranked probability skill score and rate of return, the latter being an information theory–based measure. Although skill is generally low during boreal summer relative to other times of the year, the advantage of the model forecasts over simple historical frequencies is greatest at this time. Multimodel ensemble predictions, even those using simple combination methods, generally have higher skill than single model predictions, and this advantage is greater than that expected as a result of an increase in ensemble size. Overall, slightly better performance was obtained using combination methods based on individual model skill relative to methods based on the complete joint behavior of the models. This finding is attributed to the comparatively large expected sampling error in the estimation of the relations between model errors based on the short history. A practical conclusion is that, unless some models have grossly low skill relative to the others, and until the history is much longer than two to three decades, equal, independent, or constrained joint weighting are reasonable courses.
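The ranked probability skill score used above compares the squared error of cumulative category probabilities against a climatological reference. A minimal sketch with a hypothetical ensemble and tercile categories (not the DEMETER models or the actual Niño-3.4 verification) is:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical verification: 22 years of SST anomalies and a skillful
# 9-member ensemble centered near the truth.
n_years, n_members = 22, 9
truth = rng.standard_normal(n_years)
ens = truth[:, None] + 0.7 * rng.standard_normal((n_years, n_members))

# Tercile boundaries from the observed record.
lo, hi = np.quantile(truth, [1 / 3, 2 / 3])

def tercile_probs(x, lo, hi):
    """Forecast probabilities of below/near/above normal from member counts."""
    below = (x < lo).mean(-1)
    above = (x > hi).mean(-1)
    return np.stack([below, 1 - below - above, above], -1)

def rps(probs, obs_cat):
    """Ranked probability score: squared error of cumulative probabilities."""
    cum_f = np.cumsum(probs, -1)
    cum_o = np.cumsum(np.eye(3)[obs_cat], -1)
    return ((cum_f - cum_o) ** 2).sum(-1)

obs_cat = np.digitize(truth, [lo, hi])  # 0 = below, 1 = near, 2 = above
rps_fcst = rps(tercile_probs(ens, lo, hi), obs_cat).mean()
rps_clim = rps(np.full((n_years, 3), 1 / 3), obs_cat).mean()
rpss = 1 - rps_fcst / rps_clim  # > 0 means better than climatological odds
```

A multimodel combination would replace `ens` with pooled or weighted members from several models before forming the probabilities.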

Full access
Timothy DelSole
and
Michael K. Tippett

Abstract

This paper shows that if a measure of predictability is invariant to affine transformations and monotonically related to forecast uncertainty, then the component that maximizes this measure for normally distributed variables is independent of the detailed form of the measure. This result explains why different measures of predictability such as anomaly correlation, signal-to-noise ratio, predictive information, and the Mahalanobis error are each maximized by the same components. These components can be determined by applying principal component analysis to a transformed forecast ensemble, a procedure called predictable component analysis (PrCA). The resulting vectors define a complete set of components that can be ordered such that the first maximizes predictability, the second maximizes predictability subject to being uncorrelated with the first, and so on. The transformation in question, called the whitening transformation, can be interpreted as changing the norm in principal component analysis. The resulting norm renders noise variance analysis equivalent to signal variance analysis, whereas these two analyses lead to inconsistent results if other norms are chosen to define variance. Predictable components also can be determined by applying singular value decomposition to a whitened propagator in linear models. The whitening transformation is tantamount to changing the initial and final norms in the singular vector calculation. The norm for measuring forecast uncertainty has not appeared in prior predictability studies. Nevertheless, the norms that emerge from this framework have several attractive properties that make their use compelling. This framework generalizes singular vector methods to models with both stochastic forcing and initial condition error. These and other components of interest to predictability are illustrated with an empirical model for sea surface temperature.
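The PrCA procedure described here (whiten with the climatological covariance, then apply principal component analysis to the whitened forecast ensemble) can be sketched on synthetic data. The covariance and the predictability structure below are hypothetical: the ensemble is built so that its spread is reduced in some directions, and PrCA recovers exactly those directions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: a climatological sample and a forecast ensemble with
# reduced spread (high predictability) in some directions of state space.
n_state, n_members, n_clim = 6, 200, 2000
L = rng.standard_normal((n_state, n_state))
clim = rng.standard_normal((n_clim, n_state)) @ L.T
scale = np.array([0.2, 0.4, 0.9, 1.0, 1.0, 1.0])  # whitened noise std per direction
ens = (rng.standard_normal((n_members, n_state)) * scale) @ L.T

def prca(ens, clim):
    """Predictable component analysis: PCA of the whitened ensemble.

    Whitening with the climatological covariance makes noise-variance and
    signal-variance analyses equivalent; directions of small whitened noise
    variance are the most predictable, independent of the specific measure.
    """
    C = np.cov(clim, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # symmetric whitening transform
    Sigma = np.cov(ens @ W, rowvar=False)         # whitened noise covariance
    noise_var, q = np.linalg.eigh(Sigma)          # ascending noise variance
    # Columns of W @ q: predictable components, most predictable first.
    return noise_var, W @ q

noise_var, components = prca(ens, clim)
```

For a linear model, the same components follow from a singular value decomposition of the whitened propagator, as the abstract notes.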

Full access
Timothy DelSole
and
Michael K. Tippett

Abstract

This paper introduces the average predictability time (APT) for characterizing the overall predictability of a system. APT is the integral of a predictability measure over all lead times. The underlying predictability measure is based on the Mahalanobis metric, which is invariant to linear transformation of the prediction variables and hence gives results that are independent of the (arbitrary) basis set used to represent the state. The APT is superior to some integral time scales used to characterize the time scale of a random process because the latter vanishes in situations when it should not, whereas the APT converges to reasonable values. The APT also can be written in terms of the power spectrum, thereby clarifying the connection between predictability and the power spectrum. In essence, predictability is related to the width of spectral peaks, with strong, narrow peaks associated with high predictability and nearly flat spectra associated with low predictability. Closed form expressions for the APT for linear stochastic models are derived. For a given dynamical operator, the stochastic forcing that minimizes APT is one that allows transformation of the original stochastic model into a set of uncoupled, independent stochastic models. Loosely speaking, coupling enhances predictability. A rigorous upper bound on the predictability of linear stochastic models is derived, which clarifies the connection between predictability at short and long lead times, as well as the choice of norm for measuring error growth. Surprisingly, APT can itself be interpreted as the “total variance” of an alternative stochastic model, which means that generalized stability theory and dynamical systems theory can be used to understand APT. The APT can be decomposed into an uncorrelated set of components that maximize predictability time, analogous to the way principal component analysis decomposes variance.
Part II of this paper develops a practical method for performing this decomposition and applies it to meteorological data.
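A concrete special case helps fix ideas: for a univariate AR(1) process x(t+1) = φ·x(t) + noise, the Mahalanobis-based predictability at lead τ reduces to the squared lag correlation φ^(2τ), and the APT has the closed form 2φ²/(1 − φ²). The sketch below checks a discrete-lead sum against that expression; the AR(1) reduction is an illustration, not the paper's general matrix results.

```python
import numpy as np

# APT of a univariate AR(1) process x_{t+1} = phi * x_t + unit noise.
# Predictability at lead tau = 1 - (forecast error variance / climatological
# variance) = phi^(2*tau); APT is twice its sum over all positive leads.

def apt_numeric(phi, max_lead=500):
    """Sum the lead-time predictability measure out to max_lead."""
    leads = np.arange(1, max_lead + 1)
    clim_var = 1.0 / (1.0 - phi**2)                       # stationary variance
    fcst_var = (1.0 - phi**(2 * leads)) / (1.0 - phi**2)  # error variance at lead tau
    return 2.0 * np.sum(1.0 - fcst_var / clim_var)

phi = 0.8
apt_closed = 2 * phi**2 / (1 - phi**2)  # closed form for the AR(1) case
```

The spectral interpretation is visible here too: larger φ gives a redder spectrum with a stronger low-frequency peak, and a larger APT.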

Full access
Timothy DelSole
and
Michael K. Tippett

Abstract

This paper proposes a new method for diagnosing predictability on multiple time scales without time averaging. The method finds components that maximize the average predictability time (APT) of a system, where APT is defined as the integral of the average predictability over all lead times. Basing the predictability measure on the Mahalanobis metric leads to a complete, uncorrelated set of components that can be ordered by their contribution to APT, analogous to the way principal components decompose variance. The components and associated APTs are invariant to nonsingular linear transformations, allowing variables with different units and natural variability to be considered in a single state vector without normalization. For prediction models derived from linear regression, maximizing APT is equivalent to maximizing the sum of squared multiple correlations between the component and the time-lagged state vector. The new method is used to diagnose predictability of 1000-hPa zonal velocity on time scales from 6 h to decades. The leading predictable component is dominated by a linear trend and presumably identifies a climate change signal. The next component is strongly correlated with ENSO indices and hence is identified with seasonal-to-interannual predictability. The third component is related to annular modes and exhibits decadal variability as well as a trend. The next few components have APTs exceeding 10 days. A reconstruction of the tropical zonal wind field based on the leading seven components reveals eastward propagation of anomalies with time scales consistent with the Madden–Julian oscillation. The remaining components have time scales less than a week and hence are identified with weather predictability.
Detecting predictability on these time scales without time averaging is possible because predictability on different time scales is characterized by different spatial structures, which can be optimally extracted by suitable projections.

Full access
Anthony G. Barnston
and
Michael K. Tippett

Abstract

Canonical correlation analysis (CCA)-based statistical corrections are applied to seasonal mean precipitation and temperature hindcasts of the individual models from the North American Multimodel Ensemble project to correct biases in the positions and amplitudes of the predicted large-scale anomaly patterns. Corrections are applied in 15 individual regions and then merged into globally corrected forecasts. The CCA correction dramatically improves the RMS error skill score, demonstrating that model predictions contain correctable systematic biases in mean and amplitude. However, the corrections do not materially improve the anomaly correlation skills of the individual models for most regions, seasons, and lead times, with the exception of October–December precipitation in Indonesia and eastern Africa. Models with lower uncorrected correlation skill tend to benefit more from the correction, suggesting that their lower skills may be due to correctable systematic errors. Unexpectedly, corrections for the globe as a single region tend to improve the anomaly correlation at least as much as the merged corrections to the individual regions for temperature, and more so for precipitation, perhaps due to better noise filtering. The lack of overall improvement in correlation may imply relatively mild errors in large-scale anomaly patterns. Alternatively, there may be such errors, but the period of record is too short to identify them effectively but long enough to find local biases in mean and amplitude. Therefore, statistical correction methods treating individual locations (e.g., multiple regression or principal component regression) may be recommended for today’s coupled climate model forecasts. The findings highlight that the performance of statistical postprocessing can be grossly overestimated without thorough cross validation or evaluation on independent data.

Full access
Timothy DelSole
and
Michael K. Tippett

Abstract

This paper proposes a new method for representing data in a general domain on a sphere. The method is based on the eigenfunctions of the Laplace operator, which form an orthogonal basis set that can be ordered by a measure of length scale. Representing data with Laplacian eigenfunctions is attractive if one wants to reduce the dimension of a dataset by filtering out small-scale variability. Although Laplacian eigenfunctions are ubiquitous in climate modeling, their use in arbitrary domains, such as over continents, is not common because of the numerical difficulties associated with irregular boundaries. Recent advances in machine learning and computational sciences are exploited to derive eigenfunctions of the Laplace operator over an arbitrary domain on a sphere. The eigenfunctions depend only on the geometry of the domain and hence require no training data from models or observations, a feature that is especially useful when sample sizes are small. Another novel feature is that the method produces reasonable eigenfunctions even if the domain is disconnected, such as a land domain comprising isolated continents and islands. The eigenfunctions are illustrated by quantifying variability of monthly mean temperature and precipitation in climate models and observations. This analysis extends previous studies by showing that climate models have significant biases not only in global-scale spatial averages but also in global-scale dipoles and other physically important structures. MATLAB and R codes for deriving Laplacian eigenfunctions are available upon request.
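A low-dimensional stand-in for the construction: eigenfunctions of a discrete graph Laplacian on an irregular, disconnected subset of a grid, ordered from large to small scales. This flat-grid sketch is hypothetical; it ignores spherical geometry and the paper's numerical treatment of boundaries, and only illustrates that the basis depends solely on the domain's geometry and that a disconnected domain contributes one zero eigenvalue (a constant mode) per connected component.

```python
import numpy as np

# An irregular, disconnected domain: two "continents" on a small grid.
ny, nx = 10, 20
mask = np.zeros((ny, nx), bool)
mask[1:5, 2:8] = True
mask[6:9, 12:18] = True

pts = np.argwhere(mask)
idx = -np.ones((ny, nx), int)
idx[mask] = np.arange(len(pts))

# Graph Laplacian with 4-neighbor connectivity, restricted to the domain.
n = len(pts)
Lap = np.zeros((n, n))
for k, (i, j) in enumerate(pts):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ii, jj = i + di, j + dj
        if 0 <= ii < ny and 0 <= jj < nx and mask[ii, jj]:
            Lap[k, k] += 1
            Lap[k, idx[ii, jj]] -= 1

# Ascending eigenvalues: large scales first; one near-zero (constant) mode
# per connected component, here two.
evals, evecs = np.linalg.eigh(Lap)
```

Projecting a masked field onto the leading columns of `evecs` then acts as a large-scale filter over the irregular domain, analogous to truncating a spherical-harmonic expansion on the full sphere.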

Full access
Timothy Hall
and
Michael K. Tippett

Abstract

A statistical model of northeastern Pacific Ocean tropical cyclones (TCs) is developed and used to estimate hurricane landfall rates along the coast of Mexico. Mean annual landfall rates for 1971–2014 are compared with mean rates for the extremely high northeastern Pacific sea surface temperature (SST) of 2015. Over the full coast, the mean rate and 5%–95% uncertainty range (in parentheses) for TCs that are category 1 and higher on the Saffir–Simpson scale (C1+ TCs) are 1.24 (1.05, 1.33) yr−1 for 1971–2014 and 1.69 (0.89, 2.08) yr−1 for 2015—a difference that is not significant. The increase for the most intense landfalls (category-5 TCs) is significant: 0.009 (0.006, 0.011) yr−1 for 1971–2014 and 0.031 (0.016, 0.036) yr−1 for 2015. The SST impact on the rate of category-5 TC landfalls is largest on the northern Mexican coast. The increased landfall rates for category-5 TCs are consistent with independent analysis showing that SST has its greatest impact on the formation rates of the most intense northeastern Pacific TCs. Landfall rates on Hawaii [0.033 (0.019, 0.045) yr−1 for C1+ TCs and 0.010 (0.005, 0.016) yr−1 for C3+ TCs for 1971–2014] show increases in the best estimates for 2015 conditions, but the changes are statistically insignificant.
