Search Results

Showing items 21–28 of 28 for Author or Editor: Simon Mason
Michael K. Tippett, Timothy DelSole, Simon J. Mason, and Anthony G. Barnston

Abstract

There are a variety of multivariate statistical methods for analyzing the relations between two datasets. Two commonly used methods are canonical correlation analysis (CCA) and maximum covariance analysis (MCA), which find the projections of the data onto coupled patterns with maximum correlation and covariance, respectively. These projections are often used in linear prediction models. Redundancy analysis and principal predictor analysis construct projections that maximize the explained variance and the sum of squared correlations of regression models. This paper shows that the above pattern methods are equivalent to different diagonalizations of the regression between the two datasets. The different diagonalizations are computed using the singular value decomposition of the regression matrix developed using data that are suitably transformed for each method. This common framework for the pattern methods permits easy comparison of their properties. Principal component regression is shown to be a special case of CCA-based regression. A commonly used linear prediction model constructed from MCA patterns does not give a least squares estimate since correlations among MCA predictors are neglected. A variation, denoted least squares estimate (LSE)-MCA, is suggested that uses the same patterns but minimizes squared error. Since the different pattern methods correspond to diagonalizations of the same regression matrix, they all produce the same regression model when a complete set of patterns is used. Different prediction models are obtained when an incomplete set of patterns is used, with each method optimizing different properties of the regression. Some key points are illustrated in two idealized examples, and the methods are applied to statistical downscaling of rainfall over the northeast of Brazil.
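The equivalence described in this abstract can be illustrated numerically. The sketch below uses synthetic data (not from the paper; all variable names are illustrative): whitening both datasets makes the singular values of the cross-covariance the canonical correlations, and a CCA-based regression built from the complete set of patterns reproduces the ordinary least squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n, px, py = 200, 5, 4
X = rng.standard_normal((n, px))                       # predictor dataset
Y = X @ rng.standard_normal((px, py)) + 0.5 * rng.standard_normal((n, py))
X = X - X.mean(axis=0)
Y = Y - Y.mean(axis=0)

def whiten(A):
    """Return A transformed to identity sample covariance, plus the transform."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    T = (Vt.T / s) * np.sqrt(A.shape[0] - 1)
    return A @ T, T

Xw, Tx = whiten(X)
Yw, Ty = whiten(Y)

# MCA diagonalizes the raw cross-covariance X.T @ Y / (n - 1); CCA
# diagonalizes the cross-covariance of the whitened data, whose singular
# values are the canonical correlations (hence bounded by 1).
rho = np.linalg.svd(Xw.T @ Yw / (n - 1), compute_uv=False)

# With the complete set of patterns, the CCA-based regression reproduces
# the ordinary least squares fit of Y on X.
B_w = Xw.T @ Yw / (n - 1)              # regression in whitened coordinates
Y_hat_cca = Xw @ B_w @ np.linalg.inv(Ty)
Y_hat_ols = X @ np.linalg.lstsq(X, Y, rcond=None)[0]
```

Dropping the whitening step (using the data as-is) corresponds to MCA; truncating the SVD to the leading patterns yields the reduced prediction models whose differing properties the paper compares.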

Full access
Philip G. Sansom, Christopher A. T. Ferro, David B. Stephenson, Lisa Goddard, and Simon J. Mason

Abstract

This study describes a systematic approach to selecting optimal statistical recalibration methods and hindcast designs for producing reliable probability forecasts on seasonal-to-decadal time scales. A new recalibration method is introduced that includes adjustments for both unconditional and conditional biases in the mean and variance of the forecast distribution and linear time-dependent bias in the mean. The complexity of the recalibration can be systematically varied by restricting the parameters. Simple recalibration methods may outperform more complex ones given limited training data. A new cross-validation methodology is proposed that allows the comparison of multiple recalibration methods and varying training periods using limited data.

Part I considers the effect on forecast skill of varying the recalibration complexity and training period length. The interaction between these factors is analyzed for gridbox forecasts of annual mean near-surface temperature from the CanCM4 model. Recalibration methods that include conditional adjustment of the ensemble mean outperform simple bias correction by issuing climatological forecasts where the model has limited skill. Trend-adjusted forecasts outperform forecasts without trend adjustment at almost 75% of grid boxes. The optimal training period is around 30 yr for trend-adjusted forecasts and around 15 yr otherwise. The optimal training period is strongly related to the length of the optimal climatology. Longer training periods may increase overall performance but at the expense of very poor forecasts where skill is limited.
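A minimal sketch of the kind of nested recalibration family described here, on synthetic hindcasts (not the authors' code; the linear model y = a + b·m + c·t and its restrictions are an assumption based on the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(30.0)                    # hindcast years (trend index)
m = rng.standard_normal(30)            # ensemble-mean hindcasts
y = 0.5 + 0.8 * m + 0.03 * t + 0.2 * rng.standard_normal(30)  # observations

def recalibrate(m, t, y, conditional=True, trend=True):
    """Least squares fit of y = a + b*m + c*t, plus residual variance.
    conditional=False fixes b = 1 (plain bias correction); trend=False
    drops the time-dependent term, giving the simpler family members."""
    target = y if conditional else y - m
    cols = [np.ones_like(m)]
    if conditional:
        cols.append(m)
    if trend:
        cols.append(t)
    D = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(D, target, rcond=None)
    resid = target - D @ coef
    sigma2 = resid @ resid / max(len(y) - D.shape[1], 1)
    return coef, sigma2

coef_full, s2_full = recalibrate(m, t, y)                        # a, b, c
coef_bias, s2_bias = recalibrate(m, t, y, conditional=False, trend=False)
# the fuller model recovers the generating coefficients and leaves a
# smaller residual (forecast) variance than plain bias correction
```

With short training records the extra parameters are estimated noisily, which is why the abstract notes that simpler restrictions can outperform the full model.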

Full access
Anthony G. Barnston, Shuhua Li, Simon J. Mason, David G. DeWitt, Lisa Goddard, and Xiaofeng Gong

Abstract

This paper examines the quality of seasonal probabilistic forecasts of near-global temperature and precipitation issued by the International Research Institute for Climate and Society (IRI) from late 1997 through 2008, using mainly a two-tiered multimodel dynamical prediction system. Skill levels, while modest when globally averaged, depend markedly on season and location and average higher in the tropics than extratropics. To first order, seasons and regions of useful skill correspond to known direct effects as well as remote teleconnections from anomalies of tropical sea surface temperature in the Pacific Ocean (e.g., ENSO related) and in other tropical basins. This result is consistent with previous skill assessments by IRI and others and suggests skill levels beneficial to informed clients making climate risk management decisions for specific applications. Skill levels for temperature are generally higher, and less seasonally and regionally dependent, than those for precipitation, partly because of correct forecasts of enhanced probabilities for above-normal temperatures associated with warming trends. However, underforecasting of above-normal temperatures suggests that the dynamical forecast system could be improved through inclusion of time-varying greenhouse gas concentrations. Skills of the objective multimodel probability forecasts, used as the primary basis for the final forecaster-modified issued forecasts, are comparable to those of the final forecasts, but their probabilistic reliability is somewhat weaker. Automated recalibration of the multimodel output should permit improvements to their reliability, allowing them to be issued as is. IRI is currently developing single-tier prediction components.

Full access
Simon J. Mason, Jacqueline S. Galpin, Lisa Goddard, Nicholas E. Graham, and Balakanapathy Rajaratnam

Abstract

Probabilistic forecasts of variables measured on a categorical or ordinal scale, such as precipitation occurrence or temperatures exceeding a threshold, are typically verified by comparing the relative frequency with which the target event occurs given different levels of forecast confidence. The degree to which this conditional (on the forecast probability) relative frequency of an event corresponds with the actual forecast probabilities is known as reliability, or calibration. Forecast reliability for binary variables can be measured using the Murphy decomposition of the (half) Brier score, and can be presented graphically using reliability and attributes diagrams. For forecasts of variables on continuous scales, however, an alternative measure of reliability is required. The binned probability histogram and the reliability component of the continuous ranked probability score have been proposed as appropriate verification procedures in this context, but are subject to some limitations. A procedure is proposed that is applicable in the context of forecast ensembles and is an extension of the binned probability histogram. Individual ensemble members are treated as estimates of quantiles of the forecast distribution, and the conditional probability that the observed precipitation, for example, exceeds the amount forecast [the conditional exceedance probability (CEP)] is calculated. Generalized linear regression is used to estimate these conditional probabilities. A diagram showing the CEPs for ranked ensemble members is suggested as a useful method for indicating reliability when forecasts are on a continuous scale, and various statistical tests are suggested for quantifying the reliability.
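The CEP idea can be sketched as follows on a synthetic, exchangeable ensemble (a simple Newton-solved logistic regression stands in for the paper's generalized linear regression): for a reliable M-member ensemble, the probability that the observation exceeds the kth-ranked member should be close to (M - k)/(M + 1), independent of the forecast value.

```python
import numpy as np

rng = np.random.default_rng(2)
T, M = 2000, 9                     # forecast cases, ensemble members
mu = rng.standard_normal(T)        # predictable signal
ens = mu[:, None] + rng.standard_normal((T, M))  # reliable ensemble
obs = mu + rng.standard_normal(T)                # obs drawn from the same law
ranked = np.sort(ens, axis=1)      # members treated as quantile estimates

def logistic_fit(x, z, iters=25):
    """Fit P(z = 1) = logistic(b0 + b1*x) by Newton's method."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        w = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (z - p))
    return b

ceps = {}
for k in (0, M // 2, M - 1):       # lowest, middle, highest ranked member
    x = ranked[:, k]               # forecast value of the kth-ranked member
    z = (obs > x).astype(float)    # did the observation exceed it?
    b0, b1 = logistic_fit(x, z)
    ceps[k] = 1.0 / (1.0 + np.exp(-(b0 + b1 * x.mean())))
    # a reliable ensemble gives a CEP near (M - k) / (M + 1), flat in x
```

Departures of the fitted CEP curves from these nominal levels, or a strong dependence on the forecast value (b1 far from zero), would indicate the kinds of unreliability the proposed diagram is designed to reveal.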

Full access
James Doss-Gollin, Ángel G. Muñoz, Simon J. Mason, and Max Pastén

Abstract

During the austral summer 2015/16, severe flooding displaced over 170 000 people on the Paraguay River system in Paraguay, Argentina, and southern Brazil. These floods were driven by repeated heavy rainfall events in the lower Paraguay River basin. Alternating sequences of enhanced moisture inflow from the South American low-level jet and local convergence associated with baroclinic systems were conducive to mesoscale convective activity and enhanced precipitation. These circulation patterns were favored by cross-time-scale interactions of a very strong El Niño event, an unusually persistent Madden–Julian oscillation in phases 4 and 5, and the presence of a dipole SST anomaly in the central southern Atlantic Ocean. The simultaneous use of seasonal and subseasonal heavy rainfall predictions could have provided decision-makers with useful information about the start of these flooding events from two to four weeks in advance. Probabilistic seasonal forecasts available at the beginning of November successfully indicated a heightened probability of heavy rainfall (90th percentile) over southern Paraguay and Brazil for December–February. Raw subseasonal forecasts of heavy rainfall exhibited limited skill at lead times beyond the first two predicted weeks, but a model output statistics approach involving principal component regression substantially improved the spatial distribution of skill for week 3 relative to other methods tested, including extended logistic regressions. Continuous monitoring of the climate drivers affecting rainfall in the region, combined with the use of statistically corrected seasonal and subseasonal forecasts of heavy precipitation, may help improve flood preparedness in this and other regions.
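As an illustration of the general principal-component-regression MOS technique named here (synthetic data; not the authors' configuration, domain, or predictors), one projects the model's rainfall field onto its leading EOFs and regresses the local observed series on the resulting PC time series:

```python
import numpy as np

rng = np.random.default_rng(3)
T, G = 60, 40                          # forecast times, grid points
mode = rng.standard_normal(G)          # a dominant large-scale rainfall mode
amp = rng.standard_normal(T)           # its amplitude through time
model_field = np.outer(amp, mode) + 0.5 * rng.standard_normal((T, G))
obs = 1.2 * amp + 0.3 * rng.standard_normal(T)  # local series tied to the mode

# principal component regression: EOFs of the model field via SVD, then an
# ordinary regression of the observed series on the leading PC time series
A = model_field - model_field.mean(axis=0)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3                                   # retain the k leading components
pcs = U[:, :k] * s[:k]                  # PC time series of the model field
D = np.column_stack([np.ones(T), pcs])
coef, *_ = np.linalg.lstsq(D, obs, rcond=None)
pred = D @ coef
corr = np.corrcoef(pred, obs)[0, 1]     # in-sample skill of the PCR model
```

Truncating to a few components filters small-scale noise in the raw model output, which is one reason such a correction can outperform gridpoint-by-gridpoint methods like the extended logistic regressions mentioned above.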

Full access
Andrea K. Gerlak, Simon J. Mason, Meaghan Daly, Diana Liverman, Zack Guido, Marta Bruno Soares, Catherine Vaughan, Chris Knudson, Christina Greene, James Buizer, and Katharine Jacobs

Abstract

Little has been documented about the benefits and impacts of the recent growth in climate services, despite a growing call to justify their value and stimulate investment. Regional Climate Outlook Forums (RCOFs), an integral part of the public and private enterprise of climate services, have been implemented over the last 20 years with the objectives of producing and disseminating seasonal climate forecasts to inform improved climate risk management and adaptation. In proposing guidance on how to measure the success of RCOFs, we offer three broad evaluative categories that are based on the primary stated goals of the RCOFs: 1) quality of the climate information used and developed at RCOFs; 2) legitimacy of RCOF processes focused on consensus forecasts, broad user engagement, and capacity building; and 3) usability of the climate information produced at RCOFs. Evaluating the quality of information relies largely on quantitative measures and statistical techniques that are standardized and transferable, but assessing the RCOF processes and perceived usability of RCOF products will necessitate a combination of quantitative and qualitative social science methods that are sensitive to highly variable regional contexts. As RCOFs have adopted different formats and procedures to suit diverse institutional and political settings and varied technical and scientific capacities, the evaluation methods chosen should align with the goals and intent of the evaluation and be applied in a participatory, coproduction manner in which producers and users of climate services together design the evaluation metrics and processes. To fully capture the potential benefits of the RCOFs, it may be necessary to adjust or recalibrate the goals of these forums to better fit the evolving landscape of climate services development, needs, and provision.

Full access
Andrea K. Gerlak, Zack Guido, Catherine Vaughan, Valerie Rountree, Christina Greene, Diana Liverman, Adrian R. Trotman, Roché Mahon, Shelly-Ann Cox, Simon J. Mason, Katharine L. Jacobs, James L. Buizer, Cedric J. Van Meerbeeck, and Walter E. Baethgen

Abstract

In many regions around the world, Regional Climate Outlook Forums (RCOFs) provide seasonal climate information and forecasts to decision-makers at regional and national levels. Despite having two decades of experience, the forums have not been systematically monitored or evaluated. To address this gap, and to better inform nascent and widespread efforts in climate services, the authors propose a process-oriented evaluation framework derived from literature on decision support and climate communication around the production and use of scientific information. The authors apply this framework to a case study of the Caribbean RCOF (CariCOF), where they have been engaged in a collaborative effort to integrate climate information and decision processes to enhance regional climate resilience. The authors’ examination of the CariCOF shows an evolution toward the use of more advanced and more diverse climate products, as well as greater awareness of user feedback. It also reveals shortfalls of the CariCOF, including a lack of diverse stakeholder participation, a need for better understanding of best practices to tailor information, undeveloped market research of climate products, insufficient experimentation and vetting of communication mechanisms, and the absence of a way to steward a diverse network of regional actors. The authors’ analysis also provides insight that allowed for improvements in the climate services framework to include mechanisms to respond to changing needs and conditions. The authors’ process-oriented framework can serve as a starting point for evaluating RCOFs and other organizations charged with the provision of climate services.

Full access
Chris Hewitt, Erica Allis, Simon Mason, Meredith Muth, Roger Pulwarty, Joy Shumake-Guillemot, Ana E. Bucher, Manola Brunet, Andreas M. Fischer, Angela M. Hama, Rupa Kumar Kolli, Filipe Lucio, Ousmane Ndiaye, and Barbara Tapia
Full access