Search Results
1–10 of 83 items for Author or Editor: Michael K. Tippett
Abstract
The Madden–Julian oscillation (MJO) is the leading mode of tropical variability on subseasonal time scales and has predictable impacts in the extratropics. Whether or not the MJO has a discernible influence on U.S. tornado occurrence has important implications for the feasibility of extended-range forecasting of tornado activity. Interpretation and comparison of previous studies are difficult because of differing data periods, methods, and tornado activity metrics. Here, a previously described modulation by the MJO of the frequency of violent tornado outbreaks (days with six or more reported tornadoes rated EF2 or greater) is shown to be fairly robust to the addition of years to, or removal of years from, the analysis period and to changes in the number of tornadoes used to define outbreak days, but is less robust to the choice of MJO index. Earlier findings of a statistically significant MJO signal in the frequency of days with at least one tornado report are shown to be incorrect. The reduction in the frequency of days with tornadoes rated EF1 and greater when MJO convection is present over the Maritime Continent and western Pacific is statistically significant in April and is robust across varying thresholds of reliably reported tornado numbers and MJO indices.
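The modulation tests above amount to comparing phase-conditional tornado-day frequencies against a phase-independent null. Below is a minimal Python sketch of such a resampling test; the daily table and its column names ('rmm_phase' for the active-MJO phase, 'tornado_day' for a day with at least one qualifying report) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
import pandas as pd

def mjo_phase_rates(days: pd.DataFrame, n_boot: int = 10000, seed: int = 0):
    """days: one row per active-MJO day with columns 'rmm_phase' (1-8) and
    'tornado_day' (bool, e.g., at least one EF1+ tornado reported)."""
    rng = np.random.default_rng(seed)
    phase = days["rmm_phase"].to_numpy()
    hit = days["tornado_day"].to_numpy(dtype=float)
    obs = pd.Series(hit).groupby(phase).mean()           # tornado-day rate per phase
    null = np.empty((n_boot, obs.size))
    for b in range(n_boot):                              # shuffling labels enforces a
        null[b] = pd.Series(rng.permutation(hit)).groupby(phase).mean()
    dev = np.abs(obs.to_numpy() - hit.mean())            # phase-independent null
    p = (np.abs(null - hit.mean()) >= dev).mean(axis=0)  # two-sided p-value per phase
    return obs, pd.Series(p, index=obs.index)
```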
Abstract
Warm season river flows in central Asia, which play an important role in local water resources and agriculture, are shown to be closely related to the regional-scale climate variability of the preceding cold season. The peak river flows occur in the warm season (April–August) and are highly correlated with the regional patterns of precipitation, moisture transport, and jet-level winds of the preceding cold season (November–March), demonstrating the importance of regional-scale variability in determining the snowpack that eventually drives the rivers. This regional variability is, in turn, strongly linked to large-scale climate variability and tropical sea surface temperatures (SSTs), with the circulation anomalies influencing precipitation through changes in moisture transport. The leading pattern of regional climate variability, as resolved in the operationally updated NCEP–NCAR reanalysis, can be used to make a skillful seasonal forecast for individual river flow stations. This ability to make predictions based on regional-scale climate data is of particular use in this data-sparse area of the world.
River flow is analyzed at 24 stations in Uzbekistan and Tajikistan with records available for 1950–85, plus two additional stations with records for 1958–2003. These stations encompass the headwaters of the Amu Darya and Syr Darya, two of the main rivers of central Asia and the primary feeders of the catastrophically shrinking Aral Sea. Canonical correlation analysis (CCA) is used to forecast April–August flows based on the period 1950–85; cross-validated correlations exceed 0.5 for 10 of the stations, with a maximum of 0.71. Skill remains high even after 1985 for two stations withheld from the CCA: the 1986–2002 correlation is 0.71 for the Syr Darya at Chinaz and 0.77 for the Amu Darya at Kerki. The forecast is also correlated with the normalized difference vegetation index (NDVI); maximum values exceed 0.8 at 8-km resolution, confirming the strong connection between hydrology and growing-season vegetation in the region and further validating the forecast methodology.
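A minimal sketch of the cross-validated CCA hindcast described above, assuming the cold-season predictors and warm-season station flows are already assembled as (years x variables) arrays; the single retained mode and the scikit-learn implementation are illustrative choices, not necessarily the paper's.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_loo_forecast(X, Y, n_modes=1):
    """X: (years, Nov-Mar climate predictors); Y: (years, Apr-Aug station
    flows). Returns leave-one-year-out hindcasts of Y."""
    n = X.shape[0]
    Yhat = np.empty_like(Y, dtype=float)
    for i in range(n):
        train = np.arange(n) != i                 # withhold the verified year
        model = CCA(n_components=n_modes).fit(X[train], Y[train])
        Yhat[i] = model.predict(X[i:i + 1])
    return Yhat

# Cross-validated skill per station (cf. the >0.5 correlations quoted above):
# r = [np.corrcoef(Y[:, j], Yhat[:, j])[0, 1] for j in range(Y.shape[1])]
```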
Abstract
The cross-validated hindcast skills of various multimodel ensemble combination strategies are compared for probabilistic predictions of monthly SST anomalies in the ENSO-related Niño-3.4 region of the tropical Pacific Ocean. Forecast data from seven individual models of the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction (DEMETER) project are used, spanning the 22-yr period of 1980–2001. Skill of the probabilistic forecasts is measured using the ranked probability skill score and rate of return, the latter being an information theory–based measure. Although skill is generally low during boreal summer relative to other times of the year, the advantage of the model forecasts over simple historical frequencies is greatest at this time. Multimodel ensemble predictions, even those using simple combination methods, generally have higher skill than single model predictions, and this advantage is greater than that expected as a result of an increase in ensemble size. Overall, slightly better performance was obtained using combination methods based on individual model skill relative to methods based on the complete joint behavior of the models. This finding is attributed to the comparatively large expected sampling error in the estimation of the relations between model errors based on the short history. A practical conclusion is that, unless some models have grossly low skill relative to the others, and until the history is much longer than two to three decades, equal, independent, or constrained joint weighting are reasonable courses.
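For reference, the ranked probability skill score used above accumulates squared differences of cumulative category probabilities and compares the result with a reference forecast. A generic sketch, with tercile categories and a climatological reference as assumptions (this is not DEMETER-specific code):

```python
import numpy as np

def rps(prob, obs_cat):
    """Ranked probability score for one forecast: prob is a vector of
    category probabilities (e.g., terciles); obs_cat is the observed category."""
    obs = np.zeros_like(prob, dtype=float)
    obs[obs_cat] = 1.0
    return np.sum((np.cumsum(prob) - np.cumsum(obs)) ** 2)

def rpss(probs, obs_cats, clim=None):
    """Skill relative to a reference; climatological frequencies by default."""
    probs = np.asarray(probs, dtype=float)
    if clim is None:
        clim = np.full(probs.shape[1], 1.0 / probs.shape[1])
    f = np.mean([rps(p, o) for p, o in zip(probs, obs_cats)])
    c = np.mean([rps(clim, o) for o in obs_cats])
    return 1.0 - f / c          # 1 = perfect, 0 = no better than reference
```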
Abstract
The effect of the El Niño–Southern Oscillation (ENSO) teleconnection and climate change trends on observed North American wintertime daily 2-m temperature is investigated for 1960–2022 with a quantile regression model, which represents the variability of the full distribution of daily temperature, including extremes and changes in spread. Climate change trends are included as a predictor in the regression model to avoid their potentially confounding effect on the estimated ENSO teleconnections. Based on prior evidence of asymmetric impacts from El Niño and La Niña, the ENSO response is taken to be piecewise linear, and the regression model contains separate predictors for warm and cool ENSO. The relationships between these predictors and shifts in the median, interquartile range, skewness, and kurtosis of daily 2-m temperature are summarized through Legendre polynomials. Warm ENSO conditions result in significant warming shifts in the median and contraction of the interquartile range in central-northern North America, while no opposite effect is found for cool ENSO conditions in this region. In the southern United States, cool ENSO conditions produce a warming shift in the median, while warm ENSO conditions have little impact on the median but contract the interquartile range. Climate change trends are present as a near-uniform warming in the median and across quantiles and have no discernible impact on the interquartile range or higher-order moments. Trends and ENSO together explain a substantial fraction of the interannual variability of daily temperature distribution shifts across much of North America and, to a lesser extent, changes in the interquartile range.
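A minimal sketch of the piecewise-linear ENSO quantile regression described above, for one location and omitting the Legendre-polynomial seasonal dependence of the coefficients; the variable names and the statsmodels implementation are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def fit_quantiles(t2m, oni, years, qs=(0.05, 0.25, 0.5, 0.75, 0.95)):
    """t2m: daily wintertime 2-m temperature at one location; oni: matching
    ENSO index values; years: calendar year of each day."""
    warm = np.maximum(oni, 0.0)        # separate warm- and cool-ENSO predictors
    cool = np.minimum(oni, 0.0)        # -> piecewise-linear ENSO response
    trend = years - years.mean()       # climate-change trend predictor
    X = sm.add_constant(np.column_stack([warm, cool, trend]))
    return {q: sm.QuantReg(t2m, X).fit(q=q).params for q in qs}
```

In this setup the q = 0.5 coefficients give the median shifts, and the difference between the q = 0.75 and q = 0.25 responses gives the change in interquartile range.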
Abstract
Tornado outbreaks—when multiple tornadoes occur within a short period of time—are rare yet impactful events. Here we developed a two-part stochastic tornado outbreak index for the contiguous United States (CONUS). The first component produces a probability map for outbreak tornado occurrence based on spatially resolved values of convective precipitation, storm relative helicity (SRH), and convective available potential energy. The second part of the index provides a probability distribution for the total number of tornadoes given the outbreak tornado probability map. Together these two components allow stochastic simulation of location and number of tornadoes that is consistent with environmental conditions. Storm report data from the Storm Prediction Center for the 1979–2021 period are used to train the model and evaluate its performance. In the first component, the probability of an outbreak-level tornado is most sensitive to SRH changes. In the second component, the total number of CONUS tornadoes depends on the sum and gridpoint maximum of the probability map. Overall, the tornado outbreak index represents the climatology, seasonal cycle, and interannual variability of tornado outbreak activity well, particularly over regions and seasons when tornado outbreaks occur most often. We found that El Niño–Southern Oscillation (ENSO) modulates the tornado outbreak index such that La Niña is associated with enhanced U.S. tornado outbreak activity over the Ohio River Valley and Tennessee River Valley regions during January–March, similar to the behavior seen in storm report data. We also found an upward trend in U.S. tornado outbreak activity during winter and spring for the 1979–2021 period using both observations and the index.
Significance Statement
Tornado outbreaks occur when multiple tornadoes happen in a short time span. Because of the rare, sporadic nature of tornadoes, it can be challenging to use observational tornado reports directly to assess how climate affects tornado and tornado outbreak activity. Here, we developed a statistical model that produces a U.S. map of the likelihood that an outbreak-level tornado would occur based on environmental conditions. In addition, using that likelihood map, the model predicts a range of how many tornadoes could occur in these events. We found that “storm relative helicity” (a proxy for potential rotation in a storm’s updraft) is especially important for predicting outbreak tornado likelihood, and that the sum and maximum value of the likelihood map are important for predicting total numbers for an event. Overall, this model can represent the typical behavior and fluctuations in tornado outbreak activity well. Both the tornado outbreak model and observations agree that the state of sea surface temperature in the tropical Pacific (El Niño–Southern Oscillation) is linked to tornado outbreak activity over the Ohio River Valley and Tennessee River Valley in winter through early spring and that there are upward trends in tornado outbreak activity.
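A toy sketch of a two-part stochastic index of the kind described above; the logistic form, the coefficient values, and the Poisson count model are illustrative stand-ins for the paper's fitted statistical models.

```python
import numpy as np

def outbreak_prob_map(cprcp, srh, cape, beta=(-8.0, 0.5, 1.5, 0.5)):
    """Part 1: per-gridpoint probability of an outbreak-level tornado from
    standardized environments (convective precipitation, SRH, CAPE)."""
    b0, b1, b2, b3 = beta
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * cprcp + b2 * srh + b3 * cape)))

def simulate_event(pmap, rng, gamma=(0.2, 1.0, 0.5)):
    """Part 2: draw a CONUS-total tornado count whose mean depends on the
    sum and maximum of the probability map, then place the tornadoes."""
    g0, g1, g2 = gamma
    mean_count = np.exp(g0 + g1 * np.log1p(pmap.sum()) + g2 * pmap.max())
    total = rng.poisson(mean_count)
    cells = rng.choice(pmap.size, size=total, p=pmap.ravel() / pmap.sum())
    return total, np.unravel_index(cells, pmap.shape)
```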
Abstract
This paper shows that if a measure of predictability is invariant to affine transformations and monotonically related to forecast uncertainty, then the component that maximizes this measure for normally distributed variables is independent of the detailed form of the measure. This result explains why different measures of predictability such as anomaly correlation, signal-to-noise ratio, predictive information, and the Mahalanobis error are each maximized by the same components. These components can be determined by applying principal component analysis to a transformed forecast ensemble, a procedure called predictable component analysis (PrCA). The resulting vectors define a complete set of components that can be ordered such that the first maximizes predictability, the second maximizes predictability subject to being uncorrelated with the first, and so on. The transformation in question, called the whitening transformation, can be interpreted as changing the norm in principal component analysis. The resulting norm renders noise variance analysis equivalent to signal variance analysis, whereas these two analyses lead to inconsistent results if other norms are chosen to define variance. Predictable components also can be determined by applying singular value decomposition to a whitened propagator in linear models. The whitening transformation is tantamount to changing the initial and final norms in the singular vector calculation. The norm for measuring forecast uncertainty has not appeared in prior predictability studies. Nevertheless, the norms that emerge from this framework have several attractive properties that make their use compelling. This framework generalizes singular vector methods to models with both stochastic forcing and initial condition error. These and other components of interest to predictability are illustrated with an empirical model for sea surface temperature.
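A minimal sketch of PrCA as described above: whiten with the climatological covariance, then apply principal component analysis in the whitened coordinates. Treating the ensemble-mean anomalies as the "signal" input is an assumption made for illustration.

```python
import numpy as np

def prca(signal, clim_cov):
    """signal: (samples, dim) ensemble-mean anomalies; clim_cov: (dim, dim)
    climatological covariance, assumed positive definite. Returns
    predictability-ordered variances, projection (filter) vectors, and
    physical-space patterns."""
    w, V = np.linalg.eigh(clim_cov)
    whiten = V @ np.diag(w ** -0.5) @ V.T    # C^{-1/2}: the whitening transform
    unwhiten = V @ np.diag(w ** 0.5) @ V.T   # C^{1/2}: maps back to physical space
    s = signal @ whiten                      # PCA in whitened coordinates
    lam, U = np.linalg.eigh(s.T @ s / s.shape[0])
    order = np.argsort(lam)[::-1]            # most predictable component first
    return lam[order], (whiten @ U)[:, order], (unwhiten @ U)[:, order]
```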
Abstract
This paper introduces the average predictability time (APT) for characterizing the overall predictability of a system. APT is the integral of a predictability measure over all lead times. The underlying predictability measure is based on the Mahalanobis metric, which is invariant to linear transformation of the prediction variables and hence gives results that are independent of the (arbitrary) basis set used to represent the state. The APT is superior to some integral time scales used to characterize the time scale of a random process because the latter vanishes in situations when it should not, whereas the APT converges to reasonable values. The APT also can be written in terms of the power spectrum, thereby clarifying the connection between predictability and the power spectrum. In essence, predictability is related to the width of spectral peaks, with strong, narrow peaks associated with high predictability and nearly flat spectra associated with low predictability. Closed-form expressions for the APT for linear stochastic models are derived. For a given dynamical operator, the stochastic forcing that minimizes APT is one that allows transformation of the original stochastic model into a set of uncoupled, independent stochastic models. Loosely speaking, coupling enhances predictability. A rigorous upper bound on the predictability of linear stochastic models is derived, which clarifies the connection between predictability at short and long lead times, as well as the choice of norm for measuring error growth. Surprisingly, APT can itself be interpreted as the “total variance” of an alternative stochastic model, which means that generalized stability theory and dynamical systems theory can be used to understand APT. The APT can be decomposed into an uncorrelated set of components that maximize predictability time, analogous to the way principal component analysis decomposes variance. Part II of this paper develops a practical method for performing this decomposition and applies it to meteorological data.
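As a worked form of the definition sketched above, the APT of a K-dimensional state with forecast covariance Σ_τ and climatological covariance Σ_∞ can be written as follows (a hedged reconstruction in the authors' usual notation; the factor of 2 is what makes APT equal the decorrelation time T for a univariate red-noise process):

```latex
\[
  \mathrm{APT} \;=\; 2 \int_{0}^{\infty}
    \left( 1 - \frac{\operatorname{tr}\!\left[\,\Sigma_\tau \Sigma_\infty^{-1}\right]}{K} \right)
  \, d\tau ,
  \qquad
  \text{AR(1) check:}\quad
  \frac{\sigma_\tau^2}{\sigma_\infty^2} = 1 - e^{-2\tau/T}
  \;\Rightarrow\; \mathrm{APT} = T .
\]
```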
Abstract
This paper proposes a new method for diagnosing predictability on multiple time scales without time averaging. The method finds components that maximize the average predictability time (APT) of a system, where APT is defined as the integral of the average predictability over all lead times. Basing the predictability measure on the Mahalanobis metric leads to a complete, uncorrelated set of components that can be ordered by their contribution to APT, analogous to the way principal components decompose variance. The components and associated APTs are invariant to nonsingular linear transformations, allowing variables with different units and natural variability to be considered in a single state vector without normalization. For prediction models derived from linear regression, maximizing APT is equivalent to maximizing the sum of squared multiple correlations between the component and the time-lagged state vector. The new method is used to diagnose predictability of 1000-hPa zonal velocity on time scales from 6 h to decades. The leading predictable component is dominated by a linear trend and presumably identifies a climate change signal. The next component is strongly correlated with ENSO indices and hence is identified with seasonal-to-interannual predictability. The third component is related to annular modes and exhibits decadal variability as well as a trend. The next few components have APTs exceeding 10 days. A reconstruction of the tropical zonal wind field based on the leading seven components reveals eastward propagation of anomalies with time scales consistent with the Madden–Julian oscillation. The remaining components have time scales less than a week and hence are identified with weather predictability. The detection of predictability on these time scales without time averaging is possible by virtue of the fact that predictability on different time scales is characterized by different spatial structures, which can be optimally extracted by suitable projections.
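Under the linear-regression interpretation described above, the APT decomposition can be computed as a generalized eigenvalue problem built from lag covariances. A hedged Python sketch under those assumptions (the paper's exact estimator details may differ):

```python
import numpy as np
from scipy.linalg import eigh

def apt_components(x, max_lag):
    """x: (time, dim) anomaly time series. Solves G q = lambda C0 q, where
    G accumulates the signal covariances of the lag-tau regression forecasts;
    returns APT-ordered eigenvalues and projection vectors."""
    n, d = x.shape
    C0 = x.T @ x / n                               # climatological covariance
    G = np.zeros((d, d))
    for tau in range(1, max_lag + 1):
        Ct = x[tau:].T @ x[:-tau] / (n - tau)      # lag-tau covariance
        G += 2.0 * Ct @ np.linalg.solve(C0, Ct.T)  # regression signal covariance
    lam, Q = eigh(G, C0)                           # generalized eigenproblem
    order = np.argsort(lam)[::-1]                  # largest APT first
    return lam[order], Q[:, order]
```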
Abstract
Canonical correlation analysis (CCA)-based statistical corrections are applied to seasonal mean precipitation and temperature hindcasts of the individual models from the North American Multimodel Ensemble project to correct biases in the positions and amplitudes of the predicted large-scale anomaly patterns. Corrections are applied in 15 individual regions and then merged into globally corrected forecasts. The CCA correction dramatically improves the RMS error skill score, demonstrating that model predictions contain correctable systematic biases in mean and amplitude. However, the corrections do not materially improve the anomaly correlation skills of the individual models for most regions, seasons, and lead times, with the exception of October–December precipitation in Indonesia and eastern Africa. Models with lower uncorrected correlation skill tend to benefit more from the correction, suggesting that their lower skills may be due to correctable systematic errors. Unexpectedly, corrections for the globe as a single region tend to improve the anomaly correlation at least as much as the merged corrections to the individual regions for temperature, and more so for precipitation, perhaps due to better noise filtering. The lack of overall improvement in correlation may imply relatively mild errors in large-scale anomaly patterns. Alternatively, there may be such errors, but the period of record is too short to identify them effectively but long enough to find local biases in mean and amplitude. Therefore, statistical correction methods treating individual locations (e.g., multiple regression or principal component regression) may be recommended for today’s coupled climate model forecasts. The findings highlight that the performance of statistical postprocessing can be grossly overestimated without thorough cross validation or evaluation on independent data.
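The closing caution above is about validation protocol: any trained correction must be refit with the verification year withheld before skill is measured. A generic leave-one-year-out wrapper; the fit/apply interface is an assumption for illustration, not any particular package's API.

```python
import numpy as np

def loo_corrected_hindcasts(fit, apply_corr, F, O):
    """F: (years, ...) model hindcasts; O: matching observations.
    fit(F_train, O_train) returns a correction; apply_corr(corr, F_test)
    applies it. Each year's correction is trained without that year."""
    n = F.shape[0]
    out = np.empty_like(O, dtype=float)
    for i in range(n):
        keep = np.arange(n) != i
        corr = fit(F[keep], O[keep])
        out[i] = apply_corr(corr, F[i:i + 1])[0]
    return out

def anomaly_correlation(a, b):
    """Pointwise correlation of anomalies over the years axis."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
```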