Rapid extreme event attribution and communication of results through gray literature provide useful information for stakeholders and the public but warrant further community discussion regarding how to mitigate potential risks.
The attribution of extreme weather and climate events to a particular cause is an expanding scientific field. Extreme event attribution studies focus on a particular extreme and commonly combine observational and model data to determine whether specific factors (e.g., anthropogenic greenhouse gases) contributed to a specific observed aspect of the event (e.g., its intensity, magnitude, or frequency). A key aspect of the development of the field of event attribution is an enhanced focus on operational attribution, in which analyses are conducted promptly after an extreme weather or climate event has been observed, and an attribution statement is made publicly through technical reports, websites, blogs, and/or the mainstream media. Near-real-time attribution analyses are often complemented by later peer-reviewed publication.
As a result of this rapid disciplinary evolution, scientific practice in the field of attribution—focused in the first instance on rapidity and broad communication—diverges notably from traditional approaches to science, in which analyses are peer reviewed and then published, without an emphasis on timeliness. This essay explores facets of this rapid approach to attribution and specifically investigates the implications of this recent development for scientific practice.
EVOLUTION OF ATTRIBUTION.
Early suggestions that it was theoretically possible to attribute an extreme climate event to a specific cause in a quantitative manner (Allen 2003) were promptly followed by a suite of studies implementing the proposed techniques. Stott et al. (2004) published a seminal study estimating how anthropogenic greenhouse gases increased the risk of Europe's 2003 summer heat wave, the most severe since at least AD 1500. Attribution studies have since quantitatively explored a range of event types and their impacts, as well as an increasing range of geographical regions, and have employed a growing diversity of model datasets and statistical approaches (Herring et al. 2014; Peterson et al. 2012, 2013).
The field of attribution now encompasses multiple event targets. While the more established attribution approaches analyze meteorological or climatological variables (e.g., heat waves), impacts on human and natural systems are increasingly analyzed as attribution events. For example, Mitchell et al. (2016) investigated the impact of anthropogenic forcings on human mortality in Europe. While impact attribution and assessment of actual risks have been described as the next challenge for attribution (Otto 2016), this is a substantively new direction that brings a marked increase in analytical complexity.
Overall, the expansion of extreme event attribution (or simply “attribution”) studies arose from the confluence of multiple factors. First, significant conceptual and technical advances permitted attribution statements to be made about an increasing variety of extreme events. For example, very large model datasets run with and without greenhouse gas forcings are now routinely made available to the scientific community through data archives [e.g., the Climate and Ocean: Variability, Predictability and Change (CLIVAR) Climate of the Twentieth Century (C20C) Detection and Attribution Project (Kinter and Folland 2011) and Coupled Model Intercomparison Project phase 5 (CMIP5) datasets (Taylor et al. 2012)]. This enhanced capacity to produce and share very large ensembles of model data broadly has allowed a multitude of recent extreme events to be explored scientifically.
Extremes are of increasing scientific and public interest. A mean warming of the climate system can lead to very large percentage changes in the frequency and severity of certain types of extremes, some of which have already been observed (e.g., Perkins et al. 2012; Coumou and Rahmstorf 2012; Coumou and Robinson 2013). Extreme weather and climate events also have potentially severe environmental and socioeconomic impacts (IPCC 2012), and hence identifying the causes of weather and climate extremes is of wide interest beyond the scientific community (Hulme 2014). In some countries, the cause of extremes has entered the political discourse around climate change (Hassol et al. 2016), which has fueled general interest in extremes. The dedication of an annual special issue of the Bulletin of the American Meteorological Society to exploring the causes of extremes demonstrates the convergence of the scientific, technical, and motivating factors underlying the rapid development of this discipline.
CONTEMPORARY CONSIDERATIONS.
The scientific motivations for attribution studies are manifold, including to accelerate technical and analytical developments, improve the capacity of society to plan for and adapt to climate change (Stott et al. 2012), provide a means to redress public misperceptions about the risks of climate change, and potentially provide a basis for assigning liability for climate-related damages (Allen 2003). Fulfilling attribution’s goals around public communication and societal preparedness requires that scientific analyses are undertaken with alacrity and communicated broadly beyond highly trained scientific audiences. Hence, a fundamental aspect of the utility of attribution studies for a broad audience is their timeliness.
When the first quantitative attribution studies were undertaken, the period between an extreme event occurring and the publication of results was extensive. For complex climatological events, such as the autumn 2000 flooding in the United Kingdom, publication occurred a decade after the event (Pall et al. 2011). The World Weather Attribution [1] (WWA; Haustein et al. 2015) and European Climate and Weather Events: Interpretation and Attribution (EUCLEIA; Stott 2016) projects now attribute events in near–real time. In 2016, the WWA project investigated the North Pole warming, December U.S. deep freeze, August Louisiana downpours, and May European rainstorms as these events were occurring.
Rapid attribution results are communicated first through the "gray" literature: reports and articles published online. "Gray" describes reports, blogs, articles, and other publications produced outside traditional commercial and academic publication and distribution channels. This approach, prioritizing rapidity and public engagement, prompts questions for scientific practice. Peer review in particular is considered a critical instrument for assessing the legitimacy of contributions to scientific knowledge (Bornmann 2008). Through peer review, members of the scientific community assess the merit of a scientific contribution based on its accuracy, importance, and the intrinsic interest of its subject matter (Polanyi 1962). Peer review is also used as an instrument of differentiation in legitimizing scholarly knowledge. In politicized areas of academic inquiry, such as climate science, the process of peer review confers authority and differentiates scientific knowledge from science-like results (or pseudoscience), such as those populating online skeptic blogs. Rapid attribution workflows thus differ critically from orthodox scientific research, which is centered on scholarly publication processes.
Proponents justify the rapid approach through the prior peer review of the methods applied: "Recently, we have started undertaking these event attribution analyses immediately after the extreme event has occurred or even before it has finished. As we are using a method that has been previously peer-reviewed, we can have confidence in our results." Such justification prompts the question, to what extent can published methods be applied in differing contexts without negating the authoritative assignment of peer review? Is the application of peer-reviewed methodologies (e.g., Haustein et al. 2016) sufficient to instill confidence in analytical results, or must interpretations and conclusions also be scrutinized by peer review prior to being published? What, ultimately, is the role of peer review in science?
REVIEWING PEER REVIEW.
Peer review has been described as the "heart" of science: "It is the method by which grants are allocated, papers published, academics promoted, and Nobel prizes won" (Smith 2006). Exploring the purpose and efficacy of peer review, Smith, the former editor-in-chief of BMJ [2], argues that the processes and purposes of peer review are impossible to define singularly. Peer review is variously applied with the aim of selecting the best manuscripts for publication or of improving the quality of published papers. Reviewers from within the scientific community are "gatekeepers" of science, testing and legitimizing scholarly work (Bornmann 2008). Smith demonstrates this process of legitimation with an experience from his tenure at BMJ, in which a journalist's interest in a medical story was predicated on its peer-review status.
Australian government guidance on finding reliable climate information exemplifies this view (Department of Environment and Energy 2017): "The peer-review process provides a mechanism to quality control scientific discourse, and peer-reviewed papers therefore provide a reliable and quality assured source of information on climate change science."
Posed in this manner, peer review essentially underpins both academic publishing and scientific practice itself. However, recent historical investigations of peer review dispute the prevailing view of an essential relationship between publication and peer review. Csiszar (2016) and Fyfe (2015) note that the idea that peer review has been standard established practice since the beginnings of formalized scientific publishing itself (marked over 350 years ago by the publication of the world's oldest scientific journal, Philosophical Transactions) is a persistent fallacy. At that time, the editor was the gatekeeper who made key publication decisions. For some society-based journals, publication decisions were also subject to a broader vote by a member-based committee. This nascent review process was not intended to ensure that the published material was credible or reliable. Early Philosophical Transactions issues stated that editorial decisions were not intended "to answer for the certainty of the facts, or propriety of the reasonings . . . which must still rest on the credit or judgment of their respective authors" but instead focused on the "importance and singularity" of the science (see Fyfe 2015).
The first peer-based referee systems were implemented later, and the gatekeeper conceptualization of reviewers followed thereafter. Learned societies sought independent opinions as a means of informing committee decisions with more specific expertise, while independent journals remained under the authority of editors. Csiszar (2016) and Fyfe (2015) both comment that the idea of a formal referee system as a linchpin of journal publishing did not take hold until the second half of the twentieth century, after which it acquired its associations with objective proof and consensus (Spier 2002). The ubiquity of peer review in contemporary scholarly publishing makes its temporal and cultural context hard to appreciate. However, before the mid-twentieth century, peer review was largely absent from scientific publishing outside the Anglophone world (Csiszar 2016). Furthermore, the process and role of peer-based review was contentious within scientific societies; referees were seen variously as an inefficient impediment to scientific creativity or as gatekeepers with a duty to ensure the integrity of science (Csiszar 2016).
Peer review is a hallmark of modern scientific practice, not the foundation of scientific practice itself. Only in the 1970s did the term “peer review” replace “referee systems” and become widespread in English (Fyfe 2015). While peer review remains a key component of twenty-first-century scientific practice, concerns about aspects of the process are also widely articulated by the scientific community. Smith (2006) reveals that there is little evidence that peer review is useful for detecting errors or instances of scientific fraud, while quantitative evidence of the utility and value of peer review is scant (Jennings 2006). Furthermore, peer review is widely experienced by authors as slow, inconsistent, and potentially biased toward particular subsets of the scientific community.
Recently, the processes of publishing science have evolved, most notably capitalizing on the opportunities provided by the Internet. Proposed contemporary responses to the limitations of peer review include
blind review approaches, in which authors' names are not disclosed to reviewers (Mulligan et al. 2013);
open review approaches, in which reviewers’ names are disclosed to authors, and public review approaches, in which both expert peers and the online public provide comments to authors (Mulligan et al. 2013); and
multistage review approaches, in which proposed experiments and results are reviewed in different phases of publication (Gonzales and Cunningham 2015).
Peer review is clearly not a static necessity of scientific publication, but a process in flux—"the currently dominant practice in a long and varied history of reviewing practices" (Fyfe 2015). Regardless, attitudes within and outside the scientific community do not uniformly reflect the mutability and fallibility of this academic review process. A survey of 4,000 researchers randomly selected from author records held by Thomson Reuters demonstrated that 84% believed peer review to be essential to the communication of scholarly research (Mulligan et al. 2013). How can the scientific community and public interpret scientific results that have not been peer reviewed? Should scientists and stakeholders be concerned about reliability or credibility, or are we simply seeing the beginning of the next major mutation in the evolution of peer review?
IDENTIFYING RISKS OF RAPIDITY.
Releasing the results of scientific analyses prior to review is not, in itself, unusual. Unpublished analyses are presented at conferences, papers are circulated among colleagues for feedback, and drafts are published to an individual researcher’s website. However, such prepublication practices are critically different from the broad publication of unreviewed scientific data with the express purpose of targeting a large public audience. The World Climate Research Programme’s position paper on the attribution of weather and climate extremes (Stott et al. 2012) argues that reliable assessments of event probabilities and changes in time are key for informing adaptation strategies. Hence, if the articulated motivations of attribution studies include providing information for adaptive decision-making and dispelling misinformation about the links between climate change and extreme events, then arguably the underlying scientific methodology must be robust and accepted as best practice.
Rapid attribution projects can be explored in this context through identifying potential risks associated with this approach. Risks for informing decision-making include that
prepublished attribution results are rejected during peer review;
prepublished attribution results are contentious and subject to debate in the scientific community;
prepublished attribution results are interpreted publicly as fact; and
prepublished attribution results are interpreted primarily as a form of advocacy, which may be incompatible with both public and expert views on the role of scientists.
The manifestation of any of these risks may be counterproductive to the aims of rapid attribution. This can be demonstrated by applying hypothetical concerns to specific examples. WWA examined the August 2016 Louisiana downpours that damaged over 60,000 homes and claimed 13 lives. Observational and model analyses indicated that central Gulf Coast extreme precipitation events, like that of August 2016, were made more likely and more intense by climate change. Results were of broad public interest and were widely covered in the mainstream media (Worland 2016). If the complementary manuscript submitted to the interactive, open-access journal Hydrology and Earth System Sciences were rejected or substantially modified during review, the implications for rapid attribution and reporting, or for any stakeholder decisions, could be significant.
Using a further WWA example speculatively, there is widespread international interest in coral bleaching, spanning political, social, economic, and environmental interests. It is possible that the WWA finding that the Great Barrier Reef (GBR) bleaching was virtually impossible without climate change may be countered by peer-reviewed publications demonstrating, for example, that bleaching was equally dependent on poor water quality. While such interpretations and complexities are permitted and encouraged within scientific discourse, results construed as "conflicting" could potentially harm public understanding of scientific processes and of climate change, the improvement of which is an expressed purpose of rapid attribution. Furthermore, reef bleaching is an issue of significant political division in Australia (Slezak 2016), and hence any future clarifications or adjustments to rapid attribution statements occurring during subsequent peer review could conceivably be viewed as a form of advocacy at the time of the bleaching. Such understandings, even where baseless, present a risk to attribution science.
Delving into such suppositional eventualities might seem unnecessary, particularly where operational systems can provide useful analogs for rapid event attribution. Operational systems are standard practice in many aspects of meteorology and climatology, providing, for example, summaries of seasonal outlooks for Pacific Ocean conditions, seasonal forecasts, river flow forecasts, and numerical weather prediction. The methodologies and analytical tools of operational systems are peer reviewed, but the specific results intended for stakeholders and policymakers are not. A simplified El Niño–Southern Oscillation (ENSO) 6-month outlook system can be summarized as follows: data are presented from multiple models that have previously been peer reviewed and documented, with an ensemble mean of all models surveyed used to convey overall trends as a quick reference. The communication of climate data and monitoring information is aided by various guidelines on disseminating the outcomes of operational analyses, such as the "watch" system (Baethgen et al. 2005). This systematic approach keeps stakeholders accurately informed of the most likely future climatic state.
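The simplified outlook system described above can be sketched in a few lines. This is a minimal illustration only: the model names, anomaly values, and the ±0.5°C threshold convention are illustrative assumptions, not output from any real operational system.

```python
# Sketch of a simplified ENSO 6-month outlook: forecasts of the
# Nino-3.4 SST anomaly from several (hypothetical) models are combined
# into an ensemble mean, used as a quick-reference indication of the
# most likely state. All values and thresholds are illustrative.

from statistics import mean

# Hypothetical 6-month Nino-3.4 anomaly forecasts (degrees C)
model_forecasts = {
    "model_a": 0.9,
    "model_b": 1.2,
    "model_c": 0.7,
    "model_d": 1.0,
}

ensemble_mean = mean(model_forecasts.values())

# A common convention flags sustained anomalies beyond +/-0.5 degrees C
if ensemble_mean >= 0.5:
    outlook = "El Nino conditions indicated"
elif ensemble_mean <= -0.5:
    outlook = "La Nina conditions indicated"
else:
    outlook = "neutral conditions indicated"

print(f"ensemble mean anomaly: {ensemble_mean:+.2f} C -> {outlook}")
```

The point of the sketch is the workflow, not the numbers: individual peer-reviewed model outputs feed a simple, systematic summary that is released without per-result peer review.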
Parallels with rapid attribution may indicate that identified risks from rapid approaches can be dismissed as immaterial. There are, however, key differences between rapid attribution and the more established systems of operational monitoring and forecasting:
Operational systems are developed for a specific purpose (e.g., tracking ENSO conditions), while rapid attribution approaches are applied to a diverse variety of events of differing characteristics and definitions that are conducive to plural interpretations (e.g., defining a heat wave in terms of magnitude, duration, or impact) or potentially limiting but unexplored simplifications (e.g., using sea surface temperatures as a singular metric of reef bleaching).
Operational approaches, like attribution, are based on peer-reviewed methods, but unlike attribution they are rarely complemented by peer-reviewed follow-up publications that may or may not be contradictory. Rather, a peer-reviewed publication can be used to verify operational approaches by evaluating forecast skill based on case studies.
Operational approaches are generally subject to established guidelines and/or internal peer-review procedures. Additionally, operational information is typically communicated from a government-sanctioned institution [e.g., the Met Office (United Kingdom), the National Oceanic and Atmospheric Administration (NOAA; United States), or the Bureau of Meteorology (Australia)], while attribution results are provided by individual scientists or groups that may be unfamiliar to the public (e.g., WWA or EUCLEIA). This may impact public perceptions of authority and value in results that have not been peer reviewed.
Operational approaches typically communicate results probabilistically, rather than definitively. For example, ENSO phase alerts are dependent on climatological likelihoods, with results communicated using language such as “probably,” “likely,” and “hints at.” Although probabilistic methodologies are often applied in attribution, such as the fraction of attributable risk approach (Stone and Allen 2005), conclusions are typically communicated precisely (e.g., 5 times more likely) or in less nuanced language (e.g., “virtually certain” or “impossible without climate change”).
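The contrast drawn in the last point can be made concrete with the fraction of attributable risk (FAR) framing (Stone and Allen 2005). The probabilities below are invented for illustration and are not results from any cited study:

```python
# Minimal sketch of the fraction of attributable risk (FAR) framing
# (Stone and Allen 2005). All probability values are illustrative only.

def fraction_of_attributable_risk(p0: float, p1: float) -> float:
    """FAR = 1 - p0/p1, where p0 is the event probability in a
    counterfactual (natural-forcings-only) climate and p1 the
    probability in the observed (all-forcings) climate."""
    return 1.0 - p0 / p1

def risk_ratio(p0: float, p1: float) -> float:
    """How many times more likely the event is with anthropogenic forcing."""
    return p1 / p0

# Illustrative values: a heat event with a 1-in-100 chance per year in the
# counterfactual climate and a 1-in-20 chance in the current climate.
p0, p1 = 0.01, 0.05
print(f"FAR = {fraction_of_attributable_risk(p0, p1):.2f}")   # 0.80
print(f"risk ratio = {risk_ratio(p0, p1):.1f}x more likely")  # 5.0x
```

The underlying method is probabilistic, yet the headline result naturally lends itself to a precise-sounding statement ("5 times more likely"), which is exactly the communication contrast with hedged operational language noted above.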
These critical differences between operational weather and climate monitoring, forecast, and/or prediction systems on the one hand and near-real-time attribution systems on the other hand demonstrate that they are not always directly analogous. While the communication of scientific information outside the peer-reviewed literature as a basis for impelling adaptive action has a firm precedent, tailored approaches are required and are next explored.
OPTIMIZING THE GRAY ZONE.
Adjustments in rapid attribution approaches will mitigate the potential risks of rapidity in scientific analyses and optimize the use of gray literature for the greatest disciplinary and societal gains. Recommendations are focused on operationalizing attribution, as the capability to carry out operational responses is central to attribution fulfilling its multiple aims (Stott and Walton 2013). It is worth noting that the state of the science of attributing events depends on the complexity of events and is more established for some classes of extremes than others. Hence, for example, the Met Office already provides heat event attribution as an operational activity (see www.metoffice.gov.uk/news/releases/archive/2014/2014-global-temperature), while other events or impacts are not well suited to an operational framework with the current state of knowledge. The following recommendations would generally strengthen rapid attribution analyses and communication of results:
Clarity about what precisely constitutes the event of interest in rapid attribution statements and public dissemination. The reef bleaching analysis discussed previously (King et al. 2016) provided a comprehensive accompanying methods document that detailed analysis of the anomalously warm Coral Sea surface temperatures. However, the report was entitled "Great Barrier Reef bleaching," and the accompanying public communication was published under the title "Great Barrier Reef bleaching would be almost impossible without climate change" (King et al. 2016). In this example, it is ambiguous whether the bleaching itself or the climatic conditions were being examined from an attribution perspective.
Judicious approaches to investigating new or complex types of events and impacts in near–real time. While the value of rapid attribution projects should not be overlooked, it must be balanced carefully against the potential loss of necessary information through simplification. For example, applying the same attribution toolkit developed to explore weather and climate events (e.g., heat waves) to ecosystem responses (e.g., coral bleaching) may overlook important considerations. Advances in methodologies, target events, or impacts are better located within the peer-review system than in the gray literature.
Development of best-practice guidelines for rapid attribution. Both WWA and EUCLEIA aim to deliver reliable and user-relevant attribution on a rapid basis and have been discussed in peer-reviewed literature (Haustein et al. 2016; Stott 2016), and the outcomes of WWA rapid analyses have been subsequently submitted to peer-reviewed journals. However, the conceptual and epistemological aspects of this rapidly evolving field—particularly around impact attribution and gray literature—have not been explored on a community basis. Guidelines for best practices around implementing and describing rapid attribution approaches would be helpful for formally assessing and mitigating the possible risks of rapid publication and communication of results and operationalizing rapid attribution.
Development of an evidence-based framework for public communication of attribution results. ENSO phase monitoring and prediction systems, for example, do not primarily aim to provide precise quantitative assessments of predicted El Niño or La Niña conditions, but rather to keep stakeholders informed about the risk of such phases emerging in the coming months. NOAA provides a two-stage alert system based on both subjective and objective criteria that can inform a watch or advisory classification of developing conditions. If a primary motivation for rapid attribution is to provide an immediate scientific response and raise risk awareness around extremes, an alert-type approach may be more valuable. In this way, a quantitative attribution analysis would be communicated through a set of qualitative descriptors that encapsulate the determined influence of global warming on an event, and its potential implications for future occurrences, as a form of alert. Stott (2016) notes that the utility of attribution to societal resilience depends on credible and relevant attribution results, which are not necessarily the most scientifically definitive.
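The alert-type communication proposed in the last recommendation could take the form of a simple mapping from a quantitative risk ratio onto qualitative descriptors. The thresholds and category wording below are hypothetical assumptions, intended only to illustrate the idea, not a proposed standard:

```python
# Hypothetical sketch of an alert-style communication layer for
# attribution results: quantitative risk ratios are mapped onto
# qualitative descriptors rather than reported as precise figures.
# Thresholds and wording are illustrative assumptions only.

def attribution_alert(risk_ratio: float) -> str:
    """Translate a risk ratio (event probability with anthropogenic
    forcing divided by the counterfactual probability) into a
    qualitative alert category."""
    if risk_ratio >= 10.0:
        return "ALERT: human influence very likely a dominant factor"
    if risk_ratio >= 2.0:
        return "ADVISORY: human influence likely increased the risk"
    if risk_ratio > 1.0:
        return "WATCH: hints at a human contribution to the risk"
    return "NO SIGNAL: no detectable human influence on the risk"

for rr in (0.9, 1.5, 5.0, 20.0):
    print(f"risk ratio {rr:4.1f} -> {attribution_alert(rr)}")
```

Such a layer would preserve the underlying probabilistic analysis while communicating it in the hedged, watch/advisory style already familiar from operational ENSO products.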
These recommendations are not provided as a criticism of rapid attribution, of the use of the gray literature, or of the necessity (or otherwise) for prepublication peer review, but rather to enhance the overall value of rapid attribution.
Expeditious advancements in event attribution also provide scientists with a broader opportunity to reflect on scientific practice and the changing role of peer review in science. For example, an examination of the historical context of peer review demonstrates the mutable nature of scientific practice and processes. While peer review is now considered the cornerstone of scientific knowledge, such understandings are very recent and overlook the actualities of peer review as necessarily limited and flawed and as just one means to evaluate scientific knowledge. Extreme event attribution is one scientific practice where openness to different publication models for different outcomes and different types of science is required. This openness includes recognizing the value of the gray literature, which should not be considered inherently unscientific. When used prudently, the gray zone can and does provide a venue to communicate scholarly scientific information rapidly within and beyond the scientific academy (see GlobalChange.gov 2017).
ACKNOWLEDGMENTS
ARC DECRA 160100092 supported this work.
REFERENCES
Allen, M., 2003: Liability for climate change. Nature, 421, 891–892, doi:10.1038/421891a.
Baethgen, W. E., and Coauthors, 2005: Guidelines on climate watches. WMO/TD 118x, 44 pp. [Available online at www.wmo.int/pages/prog/wcp/wcdmp/documents/GuidelinesonClimateWatches.pdf.]
Bornmann, L., 2008: Scientific peer review: An analysis of the peer review process from the perspective of sociology of science theories. Hum. Archit., 6(2), 3. [Available online at http://scholarworks.umb.edu/humanarchitecture/vol6/iss2/3.]
Cook, J., and Coauthors, 2013: Quantifying the consensus on anthropogenic global warming in the scientific literature. Environ. Res. Lett., 8, 024024, doi:10.1088/1748-9326/8/2/024024.
Coumou, D., and S. Rahmstorf, 2012: A decade of weather extremes. Nat. Climate Change, 2, 491–496, doi:10.1038/nclimate1452.
Coumou, D., and A. Robinson, 2013: Historic and future increase in the global land area affected by monthly heat extremes. Environ. Res. Lett., 8, 034018, doi:10.1088/1748-9326/8/3/034018.
Csiszar, A., 2016: Peer review: Troubled from the start. Nature, 532, 306–308, doi:10.1038/532306a.
Department of Environment and Energy, 2017: Finding reliable information about climate science. Australian Government, accessed 10 January 2017. [Available online at www.environment.gov.au/climate-change/climate-science/understanding-climate-change/finding-reliable-information.]
Fyfe, A., 2015: Peer review: Not as old as you might think. Times Higher Education, accessed 9 January 2017. [Available online at www.timeshighereducation.com/features/peer-review-not-old-you-might-think.]
GlobalChange.gov, 2017: How to contribute. U.S. Global Change Research Program, accessed 3 April 2017. [Available online at www.globalchange.gov/content/how-contribute-nca4.]
Gonzales, J. E., and C. A. Cunningham, 2015: The promise of pre-registration in psychological research. American Psychological Association, accessed 25 January 2017. [Available online at www.apa.org/science/about/psa/2015/08/pre-registration.aspx.]
Hassol, S. J., S. Torok, S. C. Lewis, and P. Luganda, 2016: (Un)natural disasters: Communicating the linkages between extreme events and climate change. WMO Bull., 65, 2–9.
Haustein, K., F. Otto, P. Uhe, and M. Allen, 2015: Climate Central World Weather Attribution (WWA) project: Real-time extreme weather event attribution analysis. Geophysical Research Abstracts, Vol. 17, Abstract EGU2015–12788. [Available online at http://meetingorganizer.copernicus.org/EGU2015/EGU2015-12788.pdf.]
Haustein, K., and Coauthors, 2016: Real-time extreme weather event attribution with forecast seasonal SSTs. Environ. Res. Lett., 11, 064006, doi:10.1088/1748-9326/11/6/064006.
Herring, S. C., M. Hoerling, T. C. Peterson, and P. A. Stott, 2014: Explaining Extreme Events of 2013 from a Climate Perspective. Bull. Amer. Meteor. Soc., 95(9), S1–S104, doi:10.1175/1520-0477-95.9.S1.1.
Hulme, M., 2014: Attributing weather extremes to “climate change”: A review. Prog. Phys. Geogr., 38, 499–511, doi:10.1177/0309133314538644.
IPCC, 2012: Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation. Cambridge University Press, 582 pp.
Jennings, C., 2006: What you can’t measure, you can’t manage: The need for quantitative indicators in peer review. Nature, doi:10.1038/nature05032.
King, A. D., D. J. Karoly, M. T. Black, O. Hoegh-Guldberg, and S. E. Perkins-Kirkpatrick, 2016: Great Barrier Reef bleaching would be almost impossible without climate change. The Conversation, accessed 10 June 2016. [Available online at https://theconversation.com/great-barrier-reef-bleaching-would-be-almost-impossible-without-climate-change-58408.]
Kinter, J., and C. Folland, 2011: The International CLIVAR Climate of the 20th Century Project: Report of the Fifth Workshop. CLIVAR Exchanges, No. 16, International CLIVAR Project Office, Southampton, United Kingdom, 39–42.
Mitchell, D., and Coauthors, 2016: Attributing human mortality during extreme heat waves to anthropogenic climate change. Environ. Res. Lett., 11, 074006, doi:10.1088/1748-9326/11/7/074006.
Mulligan, A., L. Hall, and E. Raphael, 2013: Peer review in a changing world: An international study measuring the attitudes of researchers. J. Amer. Soc. Inf. Sci. Technol., 64, 132–161, doi:10.1002/asi.22798.
Oreskes, N., 2004: The scientific consensus on climate change. Science, 306, 1686, doi:10.1126/science.1103618.
Otto, F. E. L., 2016: Extreme events: The art of attribution. Nat. Climate Change, 6, 342–343, doi:10.1038/nclimate2971.
Pall, P., T. Aina, D. A. Stone, P. A. Stott, T. Nozawa, A. G. J. Hilberts, D. Lohmann, and M. R. Allen, 2011: Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000. Nature, 470, 382–385, doi:10.1038/nature09762.
Perkins, S. E., L. V. Alexander, and J. R. Nairn, 2012: Increasing frequency, intensity and duration of observed global heatwaves and warm spells. Geophys. Res. Lett., 39, L20714, doi:10.1029/2012GL053361.
Peterson, T. C., P. A. Stott, and S. C. Herring, 2012: Explaining Extreme Events of 2011 from a Climate Perspective. Bull. Amer. Meteor. Soc., 93, 1041–1067, doi:10.1175/BAMS-D-12-00021.1.
Peterson, T. C., M. P. Hoerling, and P. A. Stott, 2013: Explaining Extreme Events of 2012 from a Climate Perspective. Bull. Amer. Meteor. Soc., 94, S1–S74, doi:10.1175/BAMS-D-13-00085.1.
Polanyi, M., 1962: The republic of science, its political and economic theory. Minerva, 1, 54–74, doi:10.1007/BF01101453.
Slezak, M., 2016: Australia scrubbed from UN climate change report after government intervention. Guardian, 26 May, accessed 20 June 2016. [Available online at www.theguardian.com/environment/2016/may/27/australia-scrubbed-from-un-climate-change-report-after-government-intervention.]
Smith, R., 2006: Peer review: A flawed process at the heart of science and journals. J. Roy. Soc. Med., 99, 178–182, doi:10.1258/jrsm.99.4.178.
Spier, R., 2002: The history of the peer-review process. Trends Biotechnol., 20, 357–358, doi:10.1016/S0167-7799(02)01985-6.
Stone, D. A., and M. R. Allen, 2005: The end-to-end attribution problem: From emissions to impacts. Climatic Change, 71, 303–318, doi:10.1007/s10584-005-6778-2.
Stott, P. A., 2016: How climate change affects extreme weather events. Science, 352, 1517–1518, doi:10.1126/science.aaf7271.
Stott, P. A., and P. Walton, 2013: Attribution of weather and climate-related events: Understanding stakeholder needs. Weather, 68, 274–279, doi:10.1002/wea.2141.
Stott, P. A., D. A. Stone, and M. R. Allen, 2004: Human contribution to the European heatwave of 2003. Nature, 432, 610–614, doi:10.1038/nature03089.
Stott, P. A., and Coauthors, 2012: Attribution of weather and climate-related extreme events. WCRP Position Paper on ACE, 44 pp., accessed 16 March 2016. [Available online at http://library.wmo.int/pmb_ged/wcrp_2011-stott.pdf.]
Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485–498, doi:10.1175/BAMS-D-11-00094.1.
Worland, J., 2016: How climate change helped cause massive floods in Louisiana. Time, 7 September, accessed 11 January 2017. [Available online at http://time.com/4482109/climate-change-louisiana-flooding/.]
[1] A partnership of the University of Oxford Environmental Change Institute, the Royal Netherlands Meteorological Institute, the University of Melbourne, the Red Cross Red Crescent Climate Centre, and Climate Central. I note that I have been associated with WWA through the Scientific Oversight Committee.
[2] Originally called the British Medical Journal.