Search Results

Showing 1–8 of 8 items for:

  • Author or Editor: M. K. Tippett
  • All content
N. Vigaud, M. K. Tippett, and A. W. Robertson

Abstract

The skill of submonthly forecasts of rainfall over the East Africa–West Asia sector is examined for starts during the extended boreal winter season (September–April) using three ensemble prediction systems (EPSs) from the Subseasonal-to-Seasonal (S2S) project. Forecasts of tercile category probabilities over the common period 1999–2010 are constructed using extended logistic regression (ELR), and a multimodel forecast is formed by averaging individual model probabilities. The calibration of each model separately produces reliable probabilistic weekly forecasts, but these lack sharpness beyond a one-week lead time. Multimodel ensembling generally improves skill by removing negative skill scores present in individual models. In addition, the multimodel ensemble week-3–4 forecasts have a higher ranked probability skill score and reliability compared to week-3 or week-4 forecasts for starts in February–April, while the skill gain is less pronounced for other seasons. During the 1999–2010 period, skill over continental subregions is highest for starts in February–April and for starts during El Niño conditions and MJO phase 7, which coincide with enhanced forecast probabilities of above-normal rainfall. Overall, these results indicate notable opportunities for the application of skillful subseasonal predictions over the East Africa–West Asia sector during the extended boreal winter season.
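The ELR calibration described in this abstract can be sketched in a few lines. The sketch below uses synthetic hindcast data and illustrative variable names (it is not the authors' code): the threshold enters the logistic regression as an extra predictor, so a single fit yields cumulative probabilities at both tercile boundaries, from which the three category probabilities follow.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic hindcasts: ensemble-mean rainfall (predictor) and observations.
n = 300
x = rng.gamma(2.0, 2.0, size=n)              # ensemble-mean forecast
y = 0.6 * x + rng.gamma(2.0, 1.0, size=n)    # observed rainfall

# Tercile boundaries of the observed climatology.
q1, q2 = np.quantile(y, [1 / 3, 2 / 3])

# Extended logistic regression: one model for all thresholds, with the
# threshold itself as an extra predictor; the fitted CDF is then
# non-decreasing across thresholds whenever the threshold coefficient
# comes out positive (as it does here).
X_ext = np.array([[xi, q] for xi in x for q in (q1, q2)])
y_ext = np.array([yi <= q for yi in y for q in (q1, q2)], dtype=int)
elr = LogisticRegression().fit(X_ext, y_ext)

def tercile_probs(x_new):
    """Below/near/above-normal probabilities for a new ensemble mean."""
    p1 = elr.predict_proba([[x_new, q1]])[0, 1]   # P(y <= q1)
    p2 = elr.predict_proba([[x_new, q2]])[0, 1]   # P(y <= q2)
    return np.array([p1, p2 - p1, 1.0 - p2])

print(tercile_probs(1.0))   # dry ensemble mean favors below normal
print(tercile_probs(8.0))   # wet ensemble mean favors above normal
```

A multimodel forecast, as in the paper, would simply average the three category probabilities produced by each model's own ELR fit.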

Full access
N. Vigaud, A. W. Robertson, and M. K. Tippett

Abstract

Four recurrent weather regimes are identified over North America from October to March through a k-means clustering applied to MERRA daily 500-hPa geopotential heights over the 1982–2014 period. Three regimes resemble Rossby wave train patterns with some baroclinicity, while one is related to an NAO-like meridional pressure gradient between eastern North America and western regions of the North Atlantic. All regimes are associated with distinct rainfall and surface temperature anomalies over North America. The four-cluster partition is well reproduced by ECMWF week-1 reforecasts over the 1995–2014 period in terms of spatial structures, daily regime occurrences, and seasonal regime counts. The skill in forecasting daily regime sequences and weekly regime counts is largely limited to 2 weeks. However, skill relationships with the MJO, ENSO, and SST variability in the Atlantic and Indian Oceans suggest further potential for subseasonal predictability based on wintertime large-scale weather regimes.
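The regime identification step is standard k-means on flattened daily anomaly maps. A minimal sketch, using random fields as a stand-in for MERRA 500-hPa height anomalies (grid size and day count are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for daily 500-hPa geopotential height anomaly maps:
# n_days samples on a small lat-lon grid, flattened to feature vectors.
n_days, nlat, nlon = 1000, 10, 20
z500 = rng.normal(size=(n_days, nlat * nlon))

# Four-cluster partition, matching the paper's regime count.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(z500)

regime_of_day = km.labels_                               # daily regime sequence
centroids = km.cluster_centers_.reshape(4, nlat, nlon)   # regime patterns
counts = np.bincount(regime_of_day, minlength=4)         # regime counts
print(counts)
```

With real data, the daily regime sequence and the per-season counts are exactly the quantities whose predictability the abstract evaluates in the ECMWF reforecasts.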

Full access
N. Vigaud, A. W. Robertson, and M. K. Tippett

Abstract

Probabilistic forecasts of weekly and week 3–4 averages of precipitation are constructed using extended logistic regression (ELR) applied to three models (ECMWF, NCEP, and CMA) from the Subseasonal-to-Seasonal (S2S) project. Individual and multimodel ensemble (MME) forecasts are verified over the common period 1999–2010. The regression parameters are fitted separately at each grid point and lead time for the three ensemble prediction system (EPS) reforecasts with starts during January–March and July–September. The ELR produces tercile category probabilities for each model that are then averaged with equal weighting. The resulting MME forecasts are characterized by good reliability but low sharpness. A clear benefit of multimodel ensembling is to largely remove negative skill scores present in individual forecasts. The forecast skill of weekly averages is higher in winter than summer and decreases with lead time, with steep decreases after one and two weeks. Week 3–4 forecasts have more skill along the U.S. East Coast and the southwestern United States in winter, as well as over west/central U.S. regions and the Intra-Americas Sea/east Pacific during summer. Skill is also enhanced when the regression parameters are fit using spatially smoothed observations and forecasts. The skill of week 3–4 precipitation outlooks has a modest, but statistically significant, relation with ENSO and the MJO, particularly in winter over the southwestern United States.
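The verification metric behind these skill statements is the ranked probability skill score (RPSS), and the MME step is an equal-weight average of each model's category probabilities. A small self-contained sketch (illustrative numbers, not results from the paper):

```python
import numpy as np

def rps(prob, obs_cat):
    """Ranked probability score for one tercile forecast.
    prob: (3,) category probabilities; obs_cat: observed category 0/1/2."""
    cum_f = np.cumsum(prob)
    cum_o = np.cumsum(np.eye(3)[obs_cat])
    return np.sum((cum_f - cum_o) ** 2)

def rpss(probs, obs_cats):
    """Skill relative to the climatological (1/3, 1/3, 1/3) forecast."""
    clim = np.full(3, 1 / 3)
    rps_f = np.mean([rps(p, o) for p, o in zip(probs, obs_cats)])
    rps_c = np.mean([rps(clim, o) for o in obs_cats])
    return 1.0 - rps_f / rps_c

# Equal-weight MME: average the three models' tercile probabilities.
model_probs = np.array([
    [[0.5, 0.3, 0.2]],
    [[0.6, 0.3, 0.1]],
    [[0.3, 0.4, 0.3]],
])                                   # (n_models, n_forecasts, 3)
mme_probs = model_probs.mean(axis=0)
obs = np.array([0])                  # below normal verified
print(rpss(mme_probs, obs))          # positive: beats climatology here
```

Averaging probabilities in this way cannot remove a shared bias, but it does damp the large negative scores a single poorly calibrated model can produce, which is the benefit the abstract highlights.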

Full access
N. Vigaud, M. K. Tippett, J. Yuan, A. W. Robertson, and N. Acharya

Abstract

The skill of surface temperature forecasts up to 4 weeks ahead is examined for weekly tercile category probabilities constructed using extended logistic regression (ELR) applied to three ensemble prediction systems (EPSs) from the Subseasonal-to-Seasonal (S2S) project (ECMWF, NCEP, and CMA), which are verified over the common period 1999–2010 and averaged with equal weighting to form a multimodel ensemble (MME). Over North America, the resulting forecasts are characterized by good reliability and varying degrees of sharpness. Skill decreases after two weeks and from winter to summer. Multimodel ensembling damps negative skill that is present in individual forecast systems, but overall, does not lead to substantial skill improvement compared to the best (ECMWF) model. Spatial pattern correction is implemented by projecting the ensemble mean temperatures neighboring each grid point onto Laplacian eigenfunctions, and then using those amplitudes as new predictors in the ELR. Forecasts and skill improve beyond week 2, when the ELR model is trained on spatially averaged temperature (i.e., the amplitude of the first Laplacian eigenfunction) rather than the gridpoint ensemble mean, but not at shorter leads. Forecasts are degraded when adding more Laplacian eigenfunctions that encode additional spatial details as predictors, likely due to the short reforecast sample size. Forecast skill variations with ENSO are limited, but MJO relationships are more pronounced, with the highest skill during MJO phase 3 up to week 3, coinciding with enhanced forecast probabilities of above-normal temperatures in winter.
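The spatial pattern correction rests on projecting a field onto Laplacian eigenfunctions, whose leading mode is spatially constant, so its amplitude is the spatial mean. A minimal sketch with a simple 4-neighbor graph Laplacian on a small grid (an illustrative stand-in for the paper's construction):

```python
import numpy as np

def grid_laplacian(nlat, nlon):
    """Graph Laplacian of a regular lat-lon grid with 4-neighbor links."""
    n = nlat * nlon
    L = np.zeros((n, n))
    for i in range(nlat):
        for j in range(nlon):
            k = i * nlon + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < nlat and 0 <= jj < nlon:
                    L[k, k] += 1.0
                    L[k, ii * nlon + jj] -= 1.0
    return L

nlat, nlon = 6, 8
w, V = np.linalg.eigh(grid_laplacian(nlat, nlon))   # eigenvalues ascending

# Project an ensemble-mean field onto the eigenfunctions; the leading
# amplitudes would replace the raw gridpoint value as ELR predictors.
rng = np.random.default_rng(1)
field = rng.normal(size=nlat * nlon) + 3.0
amps = V.T @ field

# The first eigenfunction (eigenvalue ~ 0) is constant, so its amplitude
# recovers the spatial mean up to normalization -- the single predictor
# that improved week-3+ skill in the paper.
print(abs(amps[0]) / np.sqrt(nlat * nlon), field.mean())
```

Adding further eigenfunctions adds spatial detail but also parameters to fit, which is why, with only a 1999–2010 training sample, the abstract reports degraded forecasts for higher-order modes.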

Full access
Michael K. Tippett, Jeffrey L. Anderson, Craig H. Bishop, Thomas M. Hamill, and Jeffrey S. Whitaker

Abstract

Ensemble data assimilation methods assimilate observations using state-space estimation methods and low-rank representations of forecast and analysis error covariances. A key element of such methods is the transformation of the forecast ensemble into an analysis ensemble with appropriate statistics. This transformation may be performed stochastically by treating observations as random variables, or deterministically by requiring that the updated analysis perturbations satisfy the Kalman filter analysis error covariance equation. Deterministic analysis ensemble updates are implementations of Kalman square root filters. The nonuniqueness of the deterministic transformation used in square root Kalman filters provides a framework to compare three recently proposed ensemble data assimilation methods.
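The deterministic update the abstract describes can be illustrated with the ensemble transform form of the square root filter. In this sketch (random toy system, not any operational configuration), the forecast perturbations are post-multiplied by a symmetric transform so the updated ensemble reproduces the Kalman analysis covariance exactly, without sampled observation noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 3, 10, 2           # state dim, ensemble size, obs dim

# Forecast ensemble, its perturbations, and sample covariance.
Xf = rng.normal(size=(n, m))
Xp = Xf - Xf.mean(axis=1, keepdims=True)
Pf = Xp @ Xp.T / (m - 1)

H = rng.normal(size=(p, n))  # linear observation operator
R = np.diag([0.5, 0.8])      # observation-error covariance

# Square root analysis-perturbation update:
# Xa = Xp T with T = (I + S^T R^-1 S)^(-1/2), S = H Xp / sqrt(m-1).
S = H @ Xp / np.sqrt(m - 1)
C = S.T @ np.linalg.inv(R) @ S
wC, VC = np.linalg.eigh(C)
T = VC @ np.diag(1.0 / np.sqrt(1.0 + np.clip(wC, 0.0, None))) @ VC.T
Xa_pert = Xp @ T

# The updated perturbations satisfy Pa = (I - K H) Pf exactly.
Pa_ens = Xa_pert @ Xa_pert.T / (m - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
Pa_kf = (np.eye(n) - K @ H) @ Pf
print(np.allclose(Pa_ens, Pa_kf))   # True
```

The nonuniqueness the abstract exploits is visible here: any orthogonal rotation U with U1 = 1 applied as T U yields another valid square root, which is the freedom that lets the paper place several proposed methods in one framework.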

Full access
Anthony G. Barnston, Michael K. Tippett, Huug M. van den Dool, and David A. Unger

Abstract

Since 2002, the International Research Institute for Climate and Society, later in partnership with the Climate Prediction Center, has issued an ENSO prediction product informally called the ENSO prediction plume. Here, measures to improve the reliability and usability of this product are investigated, including bias and amplitude corrections, the multimodel ensembling method, formulation of a probability distribution, and the format of the issued product. Analyses using a subset of the current set of plume models demonstrate the necessity to correct individual models for mean bias and, less urgently, for amplitude bias, before combining their predictions. The individual ensemble members of all models are weighted equally in combining them to form a multimodel ensemble mean forecast, because apparent model skill differences, when not extreme, are indistinguishable from sampling error when based on a sample of 30 cases or fewer. This option results in models with larger ensemble numbers being weighted relatively more heavily. Last, a decision is made to use the historical hindcast skill to determine the forecast uncertainty distribution rather than the models' ensemble spreads, as the spreads may not always reproduce the skill-based uncertainty closely enough to create a probabilistically reliable uncertainty distribution. Thus, the individual model ensemble members are used only for forming the models' ensemble means and the multimodel forecast mean. In other situations, the multimodel member spread may be used directly. The study also leads to some new formats in which to more effectively show both the mean ENSO prediction and its probability distribution.
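The correction-and-combination recipe can be sketched directly: remove each model's mean bias, rescale its anomalies so the hindcast standard deviation matches the observed one, then pool all members with equal weight, so a model with more members counts more. The data and bias values below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic Nino-3.4 hindcasts: observations plus per-model mean and
# amplitude biases (numbers are illustrative, not from the paper).
n_years = 30
obs = rng.normal(0.0, 1.0, size=n_years)
models = {
    "A": 0.5 + 1.4 * obs + rng.normal(0, 0.3, (8, n_years)),   # 8 members
    "B": -0.3 + 0.7 * obs + rng.normal(0, 0.3, (4, n_years)),  # 4 members
}

corrected = []
for name, ens in models.items():
    ens = ens - (ens.mean() - obs.mean())               # mean bias removal
    anom = ens - ens.mean()
    ens = ens.mean() + anom * (obs.std() / anom.std())  # amplitude correction
    corrected.append(ens)

# Equal weighting of individual members: pooling gives models with more
# members relatively more weight, as in the plume combination.
pooled = np.vstack(corrected)                # (12, n_years)
mme_mean = pooled.mean(axis=0)               # multimodel ensemble mean
print(pooled.shape)
```

Consistent with the paper's choice, the pooled members here would feed only the multimodel mean; the forecast uncertainty distribution would come from historical hindcast skill rather than from the member spread.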

Full access
N. Vigaud, M. K. Tippett, J. Yuan, A. W. Robertson, and N. Acharya

Abstract

The extent to which submonthly forecast skill can be increased by spatial pattern correction is examined in probabilistic rainfall forecasts of weekly and week-3–4 averages, constructed with extended logistic regression (ELR) applied to three ensemble prediction systems from the Subseasonal-to-Seasonal (S2S) project database. The new spatial correction method projects the ensemble-mean rainfall neighboring each grid point onto Laplacian eigenfunctions and then uses those amplitudes as predictors in the ELR. Over North America, individual and multimodel ensemble (MME) forecasts that are based on spatially averaged rainfall (e.g., first Laplacian eigenfunction) are characterized by good reliability, better sharpness, and higher skill than those using the gridpoint ensemble mean. The skill gain is greater for week-3–4 averages than week-3 leads and is largest for MME week-3–4 outlooks that are almost 2 times as skillful as MME week-3 forecasts over land. Skill decreases when using more Laplacian eigenfunctions as predictors, likely because of the difficulty in fitting additional parameters from the relatively short common reforecast period. Higher skill when increasing reforecast length indicates potential for further improvements. However, the current design of most subseasonal forecast experiments may prove to be a limit on the complexity of correction methods. Relatively high skill for week-3–4 outlooks with winter starts during El Niño and MJO phases 2–3 and 6–7 reflects particular opportunities for skillful predictions.

Free access
Ben P. Kirtman, Dughong Min, Johnna M. Infanti, James L. Kinter III, Daniel A. Paolino, Qin Zhang, Huug van den Dool, Suranjana Saha, Malaquias Pena Mendez, Emily Becker, Peitao Peng, Patrick Tripp, Jin Huang, David G. DeWitt, Michael K. Tippett, Anthony G. Barnston, Shuhua Li, Anthony Rosati, Siegfried D. Schubert, Michele Rienecker, Max Suarez, Zhao E. Li, Jelena Marshak, Young-Kwon Lim, Joseph Tribbia, Kathleen Pegion, William J. Merryfield, Bertrand Denis, and Eric F. Wood

The recent U.S. National Academies report, Assessment of Intraseasonal to Interannual Climate Prediction and Predictability, was unequivocal in recommending the need for the development of a North American Multimodel Ensemble (NMME) operational predictive capability. Indeed, this effort is required to meet the specific tailored regional prediction and decision support needs of a large community of climate information users.

The multimodel ensemble approach has proven extremely effective at quantifying prediction uncertainty due to uncertainty in model formulation and has proven to produce better prediction quality (on average) than any single model ensemble. This multimodel approach is the basis for several international collaborative prediction research efforts and an operational European system, and there are numerous examples of how this multimodel ensemble approach yields superior forecasts compared to any single model.

Based on two NOAA Climate Test Bed (CTB) NMME workshops (18 February and 8 April 2011), a collaborative and coordinated implementation strategy for an NMME prediction system has been developed and is currently delivering real-time seasonal-to-interannual predictions on the NOAA Climate Prediction Center (CPC) operational schedule. The hindcast and real-time prediction data are readily available (e.g., http://iridl.ldeo.columbia.edu/SOURCES/.Models/.NMME/) and in graphical format from CPC (www.cpc.ncep.noaa.gov/products/NMME/). Moreover, the NMME forecast is already being used as guidance for operational forecasters. This paper describes the new NMME effort and presents an overview of the multimodel forecast quality and the complementary skill associated with individual models.

Full access