Formalizing Trust in Historical Weather Data

Renée Sieber, McGill University, Montreal, Quebec, Canada;
Victoria Slonosky, Centre for Interdisciplinary Research on Montréal, Montreal, Quebec, Canada;
Linden Ashcroft, University of Melbourne, Melbourne, Australia; and
Christa Pudmenzky, University of Southern Queensland, Toowoomba, Australia
Open access

Abstract

Historical instrumental weather observations are vital to understanding past, present, and future climate variability and change. However, the quantity of historical weather observations to be rescued globally far exceeds the resources available to do the rescuing. Which observations should be prioritized? Here we formalize guidelines to help make decisions on rescuing historical data. Rather than wait until resource-intensive digitization is done to assess the data’s value, insights can be gleaned from the context in which the observations were made and the history of the observers. Further insights can be gained from the transcription platforms used and the transcribers involved in the data rescue process, without which even the best historical observations can be mishandled. We use the concept of trust to help integrate and formalize the guidelines across the life cycle of data rescue, from the original observation source to the transcribed data element. Five cases of citizen science-based historical data rescue, two from Canada and three from Australia, guide us in constructing a trust checklist. The checklist assembles information from the original observers and their observations to the current transcribers and transcription approaches they use. Nineteen elements are generated to help future data rescue projects answer the question of whether resources should be devoted to rescuing historical meteorological material under consideration.

Significance Statement

Historical weather observations, such as ships’ logs and weather diaries, help us to understand our past, present, and future climate. More observations are waiting to be rescued than there are resources. Only after they have been rescued—transcribed—can the records be indexed, searched, and analyzed. Given the vast task, citizen scientists are often recruited to transcribe past weather records. Various tools, including software platforms, help volunteers transcribe these handwritten records. We provide guidance on choosing observations to rescue. This guidance is novel because it emphasizes trust throughout the data rescue process: trust in who the observers were and how the observations were made, trust in who the current transcribers are, and trust in the software tools that are used for transcription.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Renée Sieber, renee.sieber@mcgill.ca


1. Introduction

Historical climatologists estimate that less than one-half of the historical instrumental weather observations in paper and ink form have been “rescued”—transformed into digital data to be useful for scientific analysis—although we do not know how many historical observations exist (Allan et al. 2011; Bosilovich et al. 2013; Thorne et al. 2017; Brönnimann et al. 2019). Historical observations, for example, ships’ logs, weather registers, or farmers’ diaries, provide real-world observations of past weather and climatic events. According to researchers in the historical weather data rescue community (e.g., Thorne et al. 2017), these observations compose the best source of information on our climate’s past variability and extreme events. They are valued benchmarks with which to compare current events. However, researchers face numerous restrictions in funding, time, and accessibility to transform hard copy observations into searchable data that can be integrated into weather and climate models.

Given the challenge of recovering potentially valuable records, how does the data rescue community prioritize which content to transcribe? Weather data rescue guidelines describe baseline requirements as finding precise and accurate observations, achieving complete spatial and temporal coverage, and identifying observations that faithfully record climate variability and change (e.g., Brönnimann et al. 2019; World Meteorological Organization 2016). Yet researchers often find scientific and historic value in short-term and fragmentary records (e.g., Gergis et al. 2009; Pappert et al. 2021). While data rescuers can fill in spatial and temporal gaps, we cannot determine which observations are accurate or demonstrate sufficient variability, because we cannot adequately analyze weather observations until the individual observations become digital data.

For practical reasons, data rescue becomes a matter of seasoned expertise rather than a set of documented best practices or quantitative data analysis. To G. P. Compo, who assembles data records for global reanalyses, “data is like chocolate: When it’s good it’s great, and when it’s bad it’s better than nothing” (in Hunter et al. 2018, p. 56). Historical meteorology professor C. J. Mock cautions that, “it’s easy to tell bad data, hard to tell how GOOD it is” [C. J. Mock, personal communication with Hunter et al. (2018)]. Beyond completeness in location and time, there is scant documentation describing how to prioritize data rescue prior to transcription.

We define the question of prioritization as a matter of trust: trust in the historical context and trust in the transcription process. In the field of historical weather data rescue, the question of trust has either not been considered explicitly (i.e., “all observations are valuable”) or has been relegated to the posttranscription validation process.

Because of the volume of untranscribed observations and a lack of resources, some observations will be digitized via crowdsourcing. Crowdsourcing is considered a finite resource (Shirky 2010), especially in citizen science as increasing numbers of projects emerge. Therefore, we can also examine the quality of transcriptions and the context of transcribers. It is possible that the transcription of historical records will shortly be automated (Brohan 2020). However, accuracy rates remain below 30% and the availability of prepackaged software tools is limited, particularly for the cursive handwriting found in many diaries and logbooks (Zhang 2021). Overall, how do we know the original observations are sufficiently trustworthy to be worth rescuing, whom can we trust to transcribe them, and how can we trust them once they are transcribed?

In this paper we develop a checklist to formalize trust in historical data. Our focus is instrumental data, rather than documentary records of historical weather, although the latter records also are important to historical climate research (Veale et al. 2017). We base the checklist on lessons learned from five cases of historical data rescue in which we were closely involved. Our case studies represent large- and small-scale projects and provide broader viewpoints than other data rescue methodology papers, which are often based on the experience of one case (e.g., Sieber and Slonosky 2019).

We begin by briefly characterizing trust, which enables a framing of cases by observers and observations in the past and transcribers and transcriptions in the present. We then introduce our cases in terms of historical content and modern context. A description of cases allows us to derive a 19-element checklist that we hope can be applied to millions of historical weather observations remaining to be rescued.

2. Trust throughout the life cycle of historical data rescue

The concept of trust is not new to weather and climate science. Trust has been raised numerous times, for example, with regard to climate models (e.g., Räisänen 2007; McGovern 2020) and data sharing (e.g., Chandler et al. 2012). The need for trust in individuals (e.g., the targeted mistrust of climate scientists) also has been considered (cf. Tollefson 2010). Chandler et al. (2012) refer to trust in historical data by calling for documenting data provenance, metadata tracking, data sharing, and data transparency. Like others, Chandler et al. (2012) do not define “trust” but conflate the concept with data transparency; they relegate trustworthiness measures to posttranscription quality control procedures and statistical calculations. This focus can miss larger aspects of trust, although it does provide an opportunity to explore the concept and its characteristics more closely.

Trust is defined as “a: Assured reliance on the character, ability, strength, or truth of someone or something [and] b: One in which confidence is placed” (Merriam-Webster 2021). Trust remains a challenging concept to operationalize, in part because its definition and characterization differ by discipline, goal, and unit of analysis (Rousseau et al. 1998; Gambetta 1988; Yoon 2014). Mayer et al. (1995) derive three attributes, now widely used (cf. Svare et al. 2020), with which to consider trustworthiness: ability, benevolence, and integrity. Applying Mayer et al. (1995) to historical weather records, particularly those transcribed by citizen science, ability could refer to the skill of the historical weather observer and/or the citizen science transcriber. Much work in this domain, from both historical observers and citizen science transcribers, is voluntarily undertaken. The motivation to volunteer fits under the category of benevolence defined by Mayer et al. (1995, p. 718), “the extent to which a trustee is believed to want to do good to the trustor, aside from an egoistic profit motive” (cf. Rotman et al. 2014; Gilfedder et al. 2019). Mayer et al. (1995, pp. 719–720) characterize integrity as “the trustor’s perception that the trustee adheres to a set of principles that the trustor finds acceptable [accompanied by] the consistency of the party’s past actions, credible communications about the trustee from other parties, belief that the trustee has a strong sense of justice, and the extent to which the party’s actions are congruent with his or her words.” For two reasons, we focus on the consistency component of integrity, in terms of consistently adhering to standards. First, integrity has a different connotation when applied to people (e.g., moral certitude) than when applied to data (e.g., data completeness). Second, we can only infer some components of integrity of past actors (e.g., sense of justice, congruence of actions with words).

Mayer et al. (1995) argue that the need for trust arises when we face a risk, which makes trust pertinent to data rescue. For current transcription efforts, we have the potential to directly assess transcribers’ ability, benevolence, and integrity, but we typically cannot talk to the original observers. We must rely on what is written, so we possess incomplete knowledge of observers’ skills, motivations, communication paths, or adherence to standards. Even though our knowledge of the past is partial, we face a risk of losing fragile weather observations, but we also risk wasting time laboriously transcribing records that ultimately prove to be of poor quality (Brönnimann et al. 2006).

a. Trust in original historical weather and climate content

Most researchers working in the field of historical data rescue rely on posttranscription statistical methods to assess the quality of data, sometimes guided by metadata such as station history for potential inhomogeneities (e.g., weather station relocations, observer details, or instrument changes) (e.g., Auer et al. 2007; Chandler et al. 2012). Where there is pretranscription guidance, Brönnimann et al. (2006) suggest filling temporal, spatial, or other knowledge gaps, inferring fitness for intended modern usage, and augmenting datasets with metadata. In Canada, long-term continuity is important for building a historical precipitation dataset (Mekis and Vincent 2011); for temperature, factors include “data quality, spatial and temporal coverage, and longevity” (Vincent et al. 2012, p. 2). For the U.S. Climate Data Modernization Program, “the proper usage of the newly digitized data depends upon an assessment of the history and the quality of the observations” (Dupigny-Giroux et al. 2007, p. 1017). In these guidelines, trustworthiness is implicit whenever quality is invoked alongside temporal and spatial coverage.

Assessments of historical context, whether recorded in the past or offered in contemporaneous accounts, can offer insight into observations. Some studies emphasize familiarity, on the part of both original observers and rescuers of those observations, with the weather of a place as a way to impute observation quality (e.g., Nash and Endfield 2002). Researchers have built confidence in historical observations by detailing the historical training and social setting of the observer, such as Hestmark and Nordli (2016), who discuss the background of observer J. Esmark in early nineteenth-century Oslo. Camuffo (2002) describes a historical tradition of the meteorological observations in Padova, Italy, as far back as Galileo. Demarée and Ogilvie (2008, p. 424) report on the value of observations from Moravian missionaries “who combined their evangelizing activities with a keen interest in the natural sciences.” Qualitative details about observers’ backgrounds and affiliations imply individuals’ skill, motivation to do good, and consistency in ensuring data quality.

b. Trust in citizen science and crowdsourced efforts

Trust is a crucial research domain in citizen science because credentialed scientists increasingly rely on amateurs. Certain research like large ornithological studies depend on data from eBird (https://ebird.org) or the Audubon Society’s long-running Christmas bird count (Cooper et al. 2014). Weather transcription efforts often rely on volunteers because transcription conducted by professionals or by paid data entry employees can be cost prohibitive. These nonexperts can be more than stenographers; they can identify discrepancies, problems, and occasionally correct original observations (Lewandowski and Specht 2015). Using citizen science in weather, agencies like the Australian Bureau of Meteorology can “incorporate . . . local knowledge and historical accounts” (Pecl et al. 2015, p. 3).

Reliance on nonexpert transcription drives research on the accuracy of data produced by citizen scientists. After the fact, Cohn (2008) finds that seventh grade students had an accuracy rate of 95% in a citizen science project identifying biological specimens. Ryan et al. (2018) compare results of student transcription as part of a course to transcription by Met Éireann (the Meteorological Agency of Ireland) personnel and find a cumulative error rate of less than 1%. Historical weather data offer few benchmarks from professionally produced data with which to compare accuracy in citizen science-transcribed data (e.g., Dupigny-Giroux et al. 2007; Lewandowski and Specht 2015). Hunter et al. (2013) develop a set of metrics to assess citizen science that could be useful for weather data rescue projects, including a contributor’s role and qualifications (e.g., primary/secondary/Ph.D. student, volunteer, council worker, scientist) and a quantitative measure of valuation (e.g., frequency and amount of contributions, level of training) (cf. Mateus et al. 2021). Whether as indicators of skill or of capacity, these inferential methods can assess the quality of contributions.
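As a purely illustrative sketch, such inferential indicators could be combined into a single contribution score. The role categories, weights, and formula below are our own hypothetical choices for exposition, not the metrics proposed by Hunter et al. (2013); any real project would calibrate them against posttranscription accuracy checks.

```python
import math

# Hypothetical role weights; these values are illustrative assumptions.
ROLE_WEIGHTS = {"volunteer": 1.0, "student": 1.1, "scientist": 1.3}

def contribution_score(role: str, n_contributions: int, trained: bool) -> float:
    """Crude quality-of-contribution indicator combining a contributor's
    role, volume of contributions (with diminishing returns), and training."""
    base = math.log1p(n_contributions)      # diminishing returns on volume
    training_bonus = 1.2 if trained else 1.0
    return ROLE_WEIGHTS.get(role, 1.0) * base * training_bonus

# A trained scientist with 100 transcriptions outranks an untrained
# volunteer with the same volume.
print(contribution_score("scientist", 100, True) >
      contribution_score("volunteer", 100, False))  # True
```

The logarithmic term reflects the intuition, reported by Eveleigh et al. (2014) for Old Weather, that a long tail of occasional “dabblers” still contributes meaningfully alongside superusers.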

Quality of contributions in citizen science research can be improved if we attend to citizen scientists’ motivations to contribute (e.g., Gilfedder et al. 2019). Primary reported motivations include participating in original scientific inquiry, learning about new fields, gaining recognition for their contributions, and pursuing a hobby (Raddick et al. 2009; Rotman et al. 2014; Land-Zandstra et al. 2021). Historical weather citizen scientists may be driven by additional motivations such as interests in history, genealogy, or geography (Sieber and Slonosky 2019). Research into the citizen science platform Old Weather (https://www.oldweather.org) finds that individuals contribute when they experience a sense of “connectedness and membership” (Oomen and Aroyo 2011). Levels of contribution to Old Weather also can vary by category of motivation: intrinsic (e.g., curiosity, contributions to science) and extrinsic (e.g., leaderboards) (Eveleigh et al. 2014). Superusers report intrinsic motivations; occasional transcribers are attuned to extrinsic features, like specific tasks. We align motivation to do good with benevolence in trust.

Eveleigh et al. (2014) find that the structure of the transcription interface can attract even the “dabblers,” who will be more motivated to transcribe when each transcription activity is relatively small or when the site is designed to assuage anxiety over transcription quality. Mugar et al. (2015) and Skarlatidou et al. (2019) show the role played by user interface (UI) design in attracting citizen scientists. They demonstrate that numerous design elements improve trust in contributions. Such elements include descriptions of elements on the homepage, prominent positioning of assistance features such as tutorials and frequently asked questions (FAQs), ability to participate in forums or to annotate contributions, use of images or pulldown menus, ability for the contributor to see the data soon after it has been collected, use of scientific and lay language together, and consistency in naming elements in the UI. By enhancing contributors’ user experience, attentive platform design can improve trust in the consistency of transcriptions.

c. Our organizing trust framework

We argue that historical weather rescue can be conceptualized as a life cycle that moves from trust in original observers and their observations onto trust in current transcribers and current transcriptions. We operationalize trust as skill, benevolence, and consistency. Trust can be accumulated at several steps, which is important because researchers may possess insufficient information in any one step to determine whether the trust level is sufficient to prioritize data rescue. Figure 1 shows our framework, where step-by-step we establish trustworthiness. In that way, trust is both a process and an outcome.
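The step-by-step accumulation of trust can be sketched in code. This is a minimal illustration under our own assumptions: the paper’s assessment is qualitative, and the 0–1 scores, equal weighting, and averaging rule below are hypothetical choices, not part of the framework itself.

```python
from dataclasses import dataclass

@dataclass
class StepAssessment:
    """Judgments (0.0-1.0) of the three trust attributes for one step of the
    life cycle: observer, observation, transcriber, or transcription."""
    skill: float
    benevolence: float
    consistency: float

    def score(self) -> float:
        # Equal weighting of the three attributes is an illustrative choice.
        return (self.skill + self.benevolence + self.consistency) / 3.0

def overall_trust(assessments: dict[str, StepAssessment]) -> float:
    """Average trust over whichever steps have been assessed so far.

    Because trust accumulates step by step, a partial assessment (e.g., only
    the historical observer) still yields a provisional score."""
    if not assessments:
        raise ValueError("no steps assessed")
    return sum(a.score() for a in assessments.values()) / len(assessments)

# Provisional score before transcription has even begun:
observer_only = {"observer": StepAssessment(skill=0.9, benevolence=0.8,
                                            consistency=0.7)}
print(round(overall_trust(observer_only), 2))  # 0.8
```

The ability to score a partial dictionary mirrors the framework’s point that researchers may lack sufficient information at any single step, yet can still judge whether accumulated trust warrants prioritizing a record for rescue.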

Fig. 1.

Our organizing framework, which links trust as a function of skill, benevolence, and consistency in four steps: from trust in historical weather observers and observations to trust in the present among transcribers and transcription methods of those observations. Steps can proceed in a linear manner, but one step can exert influence on a prior step, e.g., when a present-day transcriber identifies an error in a past observation or finds information in the text about a particular observer.

Citation: Weather, Climate, and Society 14, 3; 10.1175/WCAS-D-21-0077.1

It is important to acknowledge the limits of trust. We describe a framework on which to make decisions pretranscription. Once the observations are transcribed, they enter a new phase where more objective, quantitative, posttranscription validation can be applied (cf. Bonter and Cooper 2012; Brinkerhoff et al. 2017; Hunter et al. 2018). At the outset, there is no way of verifying an observation of, say, 31°C from a transcribed weather diary from a day in 1880 at a location in Australia as true. Instead, we infer that the best explanation for the database entry is that the temperature was consistently recorded and that the record was accurately kept and transcribed. We rely on trust rooted in our judgment and a qualitative assessment of data reliability.

3. Climate data rescue case studies

To develop a checklist for pretranscription trust in historical weather rescue, we draw on experiences in developing, contributing to, and overseeing five cases of historical data rescue. Using five cases, as opposed to the single project typical of data rescue methodology papers, allows us to draw insights across cases of varying sizes and stages of deployment. Table 1 compares the cases. The table shows the years covered by the historical record (column 2) and the estimated number of observations for each case (column 3). Columns 4–7 show recent statistics: the years the transcription project has been active, the number of transcribers, the transcription platform, and the data location. Although the five cases vary in scale and stage of deployment, they are similar in that each asks citizen scientists to transcribe digital images of weather records, with the eventual aim of reconstructing past weather and climate.

Table 1

Description of the case studies.

New England and Hunter Historical Weather Data (NEHHWD) exemplifies a small project, in terms of number of observations and volunteers. Beginning as the rescue of diaries from grazier A. Belfield, the project has grown to focus on transcribing detailed weather diaries kept by nineteenth- and early twentieth-century farmers in the New England and Hunter regions of New South Wales (https://hunterlivinghistories.com/category/weather-records-climatic-data/). These diaries offer a rare record for the inland region of Australia and allow us to “gain more insight into the nineteenth-century weather and climate of eastern Australia during a time of large interannual climate variability before the dominant impact of an anthropogenic warming signal” (Bridgman et al. 2019, p. 174). As an example, Fig. 2a shows a page from Belfield’s weather diaries.

Fig. 2.

(a) Example from A. Belfield’s weather diaries from NEHHWD (Source: https://www.une.edu.au/connect/news/2019/09/regional-weather-records-gain-international-recognition-for-accuracy). (b) Weather observations and drawing of sunspots collected by Captain W. C. Sinclair on S.S. Tarawera in July 1892 from Weather Detective. (c) Two example pages from McCord’s weather diary for June 1838 (Source: McCord Museum Archives). (d) A page from the McGill Observatory from the DRAW project.


Ashcroft was a key investigator in the NEHHWD, as well as the South Eastern Australian Recent Climate History project (SEARCH). In SEARCH, records of stationary (land based) instrumental observations of temperature, pressure, or rainfall across southeastern Australia were rescued by volunteers and paid students with the goal of improving understanding of Australia’s pre-twentieth-century climate variability (Ashcroft 2014).

Weather Detective was launched in conjunction with the Australian Broadcasting Corporation’s annual National Science Week citizen science program, with Pudmenzky as scientific lead. Weather Detective aimed to transcribe the weather information contained in ship logbooks collected by Queensland Government Meteorologist C. Wragge (Adamson 2003). Wragge’s logbooks are from ships that traversed the ocean surrounding Australia, as well as the wider Atlantic, Pacific, and Indian Oceans. Figure 2b shows an example page.

The Canadian Volunteer Climate Data Rescue (ACRE-Canada) was launched by Slonosky with the intent of transcribing weather diaries kept in Canada prior to the inception of the Meteorological Service of Canada. The original focus was on instrumental weather records (see Fig. 2c) kept along the Saint Lawrence River, a major trade route into North America.

Data Rescue: Archives and Weather (DRAW), led by Slonosky and Sieber, continues the transcription efforts of ACRE-Canada with its focus on observations recorded at the McGill University Observatory (Fig. 2d). Once transcribed, this material will constitute the largest continuous record of weather in Canada for the period.

We use these cases to identify key elements that cause us to trust the observations (or not). In addition to playing prominent roles in the design and deployment of these data rescue efforts, we choose these cases because Canada and Australia are former colonies of the United Kingdom and therefore may not have been directly subject to formal meteorological rules. This offers yet another reason we require additional mechanisms of trust with which to inspect the contents.

4. Results

As a result of our data rescue experiences in these five case studies, we learned numerous lessons that were borne out in post hoc data validation. These include reasons we trust the veracity of the original observers as well as the accuracy and utility of the original observations. The lessons then shift from the past to the reasons we trust the motivations and roles of the transcribers and the transcription process.

a. Why do we trust the observers?

The question of whether observers in meteorology need to be formally accredited in the field to produce valuable data has received some attention (e.g., Endfield and Morris 2012). Our case studies show that many, though not all, observers were not credentialed as meteorologists but had professional training in other fields. Observers for Weather Detective belonged either to the Royal Navy or the merchant marine. Some ACRE-Canada observers were associated with the military (Royal Engineers or Royal Artillery); others were educators, doctors, or associated with the law. DRAW’s records were produced within higher education, at McGill University. Both SEARCH and NEHHWD include diaries from graziers who observed the weather on their farms for many decades so they could deepen their knowledge of the land. In Australia, convicts were trained to keep records (Ashcroft 2014). In Canada, volunteers from a wide variety of occupations, including religious men and women and lighthouse keepers, contributed to the public Meteorological Service of Canada (Kingston 1875).

Nonprofessional does not mean nonexpert, especially in the eighteenth and early nineteenth centuries, when much scientific activity, particularly in the Anglosphere, was “amateur” (Silvertown 2009). Education in law, medicine, and engineering emphasized careful and precise observation and diligent record keeping. Ashcroft et al. (2014, p. 160) find that “[m]any of the observers [in southeastern Australia pre-1860] were scientific men,” suggesting that they were connected with the scientific community, familiar with meteorological theories of the time, and conversant with principles of instrument exposure and calibration. Some observers whose professional duties involved maintaining meteorological records continued observing the weather in a private capacity once they retired (e.g., F. Abbott in Tasmania, Australian surveyor P. Parker King, and Canadian Artillery Bombardier T. Menzies; Moyal 2003; Ashcroft et al. 2014; Slonosky 2018; Rimmer 2020). Professional background, training in weather observing, or scientific interests suggest skill in our characterization of trust.

The SEARCH project places a high value on understanding the motivation of individual observers, whether they observed out of personal interest, out of professional duty, or for pay, because this informs trust in observers’ willingness to collect consistent results (Ashcroft et al. 2014). The biographies of many ACRE-Canada observers are documented in Slonosky (2018) and show a mix of motives, from documenting climate change to understanding disease. Some observers were keen to ensure their records captured unusual weather events. In ACRE-Canada, we note the enthusiasm and dedication evident in supernumerary recordings outside the usual or prescribed observation period, such as overnight precipitation events and extra observations during storms recorded by McCord (Fig. 2c). McCord had no formal scientific background but acquired sufficient knowledge during his lifetime to be named to numerous scientific societies (Slonosky 2018). In SEARCH, the observer at Sydney’s South Head station recorded rainfall observations approximately every hour during several extreme rain events in Sydney in the 1840s, as opposed to the usual once a day (Ashcroft et al. 2014). Observers’ dedication, their benevolence, implies that reports of high rainfall amounts during these events are trustworthy (Ashcroft et al. 2019).

Dedication also can be seen in observers’ reviews of their own observations. DRAW records contain visible corrections of observations. Observatory director and principal observer C. McLeod (McLeod 1879) wrote on 25 March 1879 that “for about a month back the 4:48 observation has not been taken regularly. The observer having been in the habit of manufacturing it from the 1:48 obs’n.” The observations at 0448 local time were struck through but not erased. Observers often left notes, comments, letters, diaries, additional calculations, and printed articles concerning their observations. They include comments on instruments such as maker, calibration, exposure, and comparisons with other observers’ instruments or observations, letters to peers and instrument makers, and printed articles in contemporaneous journals describing instruments and observatories (e.g., McCord 1843; Hall 1858; Bridgman et al. 2019).

We find instances in which the social network of the observer, such as how much one observer contacted other observers or what networks and social support were available, indicated the quality of the observations. This is especially so with observers who had no scientific background and learned through their participation in networks and correspondence (e.g., Slonosky 2018). For example, the advice of influential scientist J. Herschel to weather observers was widely distributed among amateur observers in the English-speaking world. His advice spread to T. Lempriere in Australia (SEARCH) and McCord (Herschel 1840) in Canada (ACRE-Canada). McCord, Smallwood, and Hall were part of local societies such as the Natural History Society of Montreal, where they presented their observations and results, as well as North American networks such as the Smithsonian volunteer weather observers’ network.

As the networks and communications grew more sophisticated, coordinating agencies sent out representatives and instructions to standardize observations. In SEARCH, Australian Government Astronomers H.C. Russell, C. Todd, and C. Wragge developed training for observers in telegraph-connected locations (Home and Livingston 1994), sending calibrated instruments by train or by horse and carriage, writing thousands of letters, and providing structured observation books to ensure standardized observations from volunteer observers. Kingston did the same in Canada, sending instruments by ship (e.g., Australian Museum of Applied Arts and Sciences 2008; Slonosky 2018). We find in ACRE-Canada that the Smithsonian provided preprinted forms with instructions for observations across North America. Following the advice and standards provided by these networks suggests a degree of trust in consistency among observers.

b. How do we decide we can trust the original observations?

Here we look for observations that are skillfully, benevolently, and consistently recorded. To establish trust in the observations prior to transcription, we begin with the sources.

With SEARCH, we sought published observations from government collections rather than newspaper reports or casual correspondence, as we were able to find more detailed metadata for these observations than for values published in newspapers.

For ACRE-Canada (Fig. 2c) and DRAW (Fig. 2d) we look to observations originating from collections in archival sources, such as the McGill University Archives, the McCord Museum Archives, and Canadian national and provincial archives. Weather Detective (Fig. 2b) used ships’ logs as its data source; these were mandatory records of ship movements (Wheeler 2014). SEARCH and Weather Detective assume that a report or mandated compilation was subject to a certain level of vetting to ascertain data quality; observations found in personal diaries and other casual documents, as is more often the case in ACRE-Canada, will not necessarily have been checked. Additionally, a high volume of potential data in a special collection such as DRAW is easier to compile and assess than occasional weather observations found in other materials like newspaper articles or diaries. Large volumes of regularly formatted materials can be easier to transcribe. Throughout, we must be mindful of privileging official or formatted accounts in projects such as Weather Detective or DRAW over original handwritten diaries in projects such as ACRE-Canada. Formalized observations can contain their own complexities (e.g., modifications by observers), but other sources such as private weather diaries can be more difficult and time intensive to find.

We make informed trade-offs in our choices, for instance, whether to transcribe all the records of one early instrumental network versus many weather diaries over larger geographical areas. One way to make an informed judgment is to check sources, although source checking does not confer trust upon all items within a given collection (e.g., an individual ledger book with hourly records, sunshine records, and wind records). We choose items in which observations were made at regular observing times; were made daily or preferably several times a day; included daily or subdaily instrumental measurements such as thermometer or barometer readings; had continuous observations on all or almost all days without regularly missing days; and showed values consistent with the expected season, location, and climate from a visual inspection of the manuscript data. An initial perusal of register books from the period 1874–1930 for DRAW found observations to be acceptably complete, precise, and legible, although subsequent examination before and after transcription has found observing times to be more irregular than first realized. This pre- to postexamination highlights the limits of trust, in terms of consistency. The occasional addition of observations (e.g., during a storm) suggests benevolence but comes at the expense of consistency.
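Continuity criteria of this kind can be screened numerically once a sample of observation dates has been extracted from a candidate source. A minimal sketch, assuming dates are available as Python `date` objects (the function name and usage are ours, not from any of the projects described here):

```python
from datetime import date

def completeness(observed_dates, start, end):
    """Fraction of days in [start, end] with at least one observation.

    A value near 1.0 supports trust in consistency; long runs of
    missing days can be located with the same set arithmetic.
    """
    total_days = (end - start).days + 1
    days_with_obs = {d for d in observed_dates if start <= d <= end}
    return len(days_with_obs) / total_days

# Two of four days observed -> 0.5
score = completeness(
    [date(1874, 1, 1), date(1874, 1, 2)],
    date(1874, 1, 1), date(1874, 1, 4),
)
```

A screen like this complements, rather than replaces, visual inspection of the manuscript pages.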

Certain records contain multiple attributes, which allow us to preliminarily trust the observations based on the number of and consistency among variables. DRAW logbooks contain temperature, barometric pressure, precipitation, and wind direction; some logbooks contain a catchall section called “Weather” or “Remarks” (“rain,” “clear”). We can visually inspect a sample for coherence across a subdaily record. If it was snowing, was the temperature below freezing? If a thunderstorm was recorded, was it warm enough for convection? In SEARCH, we explore whether temperatures were as high as expected during summer drought events (Ashcroft et al. 2014). Visual inspection of observations prior to transcription also can help us identify potential quality issues, such as multiple days with identically repeated values or a large number of observations rounded to the nearest 0 or 5, rather than more evenly spread numbers. Sometimes variability within an attribute is key. Figure 2c displays an ACRE-Canada page with variations in temperature from cold to thaw that suggests expected seasonality and demonstrates the skillful aspect of trust. We note that familiarity with the climate and local conditions is helpful for the researcher in deciding to trust the data. Section 4c suggests that local or acquired knowledge from the transcriber community may assist the researcher as well.
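Once a sample has been transcribed, these visual checks can be backed by simple automated screens. A sketch of two of them follows; the field names, the 3°C snow margin, and the tenths-digit test are our illustrative assumptions, not procedures drawn from any of the cases:

```python
def snow_temperature_flags(records):
    """Flag records where snow is remarked but the temperature is well
    above freezing (the 3 degree C margin is an arbitrary illustration)."""
    return [
        i for i, r in enumerate(records)
        if "snow" in r.get("remarks", "") and r.get("temp_c", 0.0) > 3.0
    ]

def terminal_digit_share(values, digits=(0, 5)):
    """Share of readings whose tenths digit is 0 or 5. Roughly 0.2 is
    expected by chance; a much higher share suggests habitual rounding."""
    tenths = [int(round(v * 10)) % 10 for v in values]
    return sum(1 for d in tenths if d in digits) / len(tenths)
```

Either screen only raises candidates for inspection; the final judgment still rests on familiarity with the local climate.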

Important trust criteria for historical observations include known location and continuity over time. Not all data sources satisfy these criteria; for certain places, the only observations available are less than ideal. It is rare to find all these criteria in one source document. In SEARCH, short sets of observations for four different stations on Australia’s east coast for 1821–22 are the only observations for the country in that time period. Occasionally having multiple stations observing at the same time can increase trust because these observations were likely part of a campaign and incorporated observer training and coordinated calibration.

Knowledge of instruments affirms trust, via skill and consistency, in observations. Nineteenth-century meteorological instruments were often large, cumbersome, and expensive, and prone to breakage and calibration issues (referenced in Fig. 2c). Calibration was no trivial task, as the instrument needed to be transported to and from an official observatory (e.g., by ship, rail, canoe, or horse and carriage) and compared with the official instruments. Many early observations in Australia and Canada are marred by gaps due to uncalibrated instruments or instruments that broke en route. When exploring the diaries for NEHHWD, for example, we noticed a missive from Russell, who wrote to a regional observer in New South Wales in 1879, “it is quite possible that the barometer . . . is in error because, it assumes that [it] is correct after a long, and in olden time very rough journey, which may put it out of order” (Thornton et al. 2018).

Accurately measuring extremely low temperatures has proved difficult in Canada for centuries. Certain instrument types may produce higher uncertainties for values below a certain threshold and subsequently lower the trust in those observations. Problems with instruments often came to light through the questions observers raised about them. We found that standardized instruments were purchased for some observers by central organizing offices such as the Meteorological Service of Canada (Kingston 1880) or were available for purchase via the Smithsonian network (Smallwood 1858).

Last, we increase our trust when observations contain “show your work” calculations within the body of the record. DRAW ledgers contain several columns of raw observations and derived measurements, such as barometer adjusted for thermal expansion of mercury. Pages also contain intermediate calculations such as sums and averages. By having original plus intermediate values recorded, any errors can be identified and corrected, further increasing trust in skill and consistency.
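As an illustration of how such derived columns can be re-derived: reducing a mercury barometer reading to 0°C is a standard correction of the era. The sketch below uses a first-order form with a combined mercury-and-brass-scale coefficient of about 1.63 × 10⁻⁴ per °C; this coefficient and the simplified formula are our assumptions, since the period ledgers relied on printed tables whose exact values varied by instrument:

```python
# Combined expansion coefficient for mercury read against a brass scale,
# roughly 1.63e-4 per degree C (an assumption; period tables varied).
GAMMA = 1.63e-4

def reduce_to_0c(reading_mm, attached_temp_c):
    """First-order reduction of a mercury barometer reading to 0 degrees C."""
    return reading_mm * (1.0 - GAMMA * attached_temp_c)

# A 760.00 mm reading with the attached thermometer at 20 C
corrected = reduce_to_0c(760.0, 20.0)  # about 757.52 mm
```

Recomputing each derived column this way and comparing it with the recorded value helps localize arithmetic slips to either the original observer or the transcriber.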

c. Why do we trust the transcribers?

We rely on similar measures of trust in the transcribers to record the original observations skillfully, benevolently, and consistently. We find many transcribers to be highly engaged and dedicated in the transcription process, which we consider a contribution to benevolent trust. For instance, ACRE-Canada transcribers added image file names to the transcription spreadsheets for traceability, transcribed summary and mean calculations, and compared these historically calculated values with mean values calculated from the transcribed data in spreadsheets. One contributor found a “ruler” app to ensure correct data entry.

Site analytics can reveal transcriber interactions with the site. Transcribers accessed the tutorials page on the DRAW website over 1100 times between 1 January 2019 and 23 April 2021; the FAQ pages were accessed 494 times. These interactions suggest users were concerned about making mistakes. Bellman et al. (2016) conducted focus groups to assess the DRAW user experience. They find that citizen scientists were so anxious about making mistakes that they required reassurance that their contributions would be validated by others. If transcribers exhibit such concern about the quality of their contributions and attempt to gain skills, then we can begin to trust the accuracy of their contributions.

We also can base trust in the transcriber on rating or ranking metrics, treating our transcribers as contributors in the sense used in the crowdsourcing literature. Using contributor metrics as a replacement or augmentation for contribution accuracy has become acceptable practice for crowdsourced big data (Goodchild and Li 2012). Contributor metrics can be operationalized, for example, as the number of contributions completed by an individual or the length of time contributors have participated. Figure 3 shows the contribution patterns in DRAW, which are heavily skewed. Such skew is common in crowdsourcing and citizen science, in which a very few superusers greatly exceed the contributions of others (Gura 2013; Brovelli et al. 2020). With experience, some transcribers become as expert in individual observers or diaries as the project leads, or sometimes more so. DRAW’s superusers often notice anomalies or inconsistencies in the records and alert project organizers to data that require further inspection. Superusers can become highly skilled transcribers, which offers yet another reason why we trust transcribers, or at least certain ones.

Fig. 3.

Histogram of number of contributions from each DRAW transcriber, as of 31 Mar 2021.

Citation: Weather, Climate, and Society 14, 3; 10.1175/WCAS-D-21-0077.1
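Contribution counts like those shown in Fig. 3 are cheap to compute from platform logs. A minimal sketch, assuming the log reduces to one contributor identifier per contribution (the function and metric are ours, used here only to make the skew concrete):

```python
from collections import Counter

def top_share(contributor_ids, top_n=10):
    """Share of all contributions made by the top_n contributors.

    Values approaching 1.0 reflect the heavy skew toward a few
    superusers that is typical of citizen science platforms.
    """
    counts = Counter(contributor_ids)
    top_total = sum(n for _, n in counts.most_common(top_n))
    return top_total / len(contributor_ids)

# One superuser made 90 of 100 contributions
share = top_share(["u1"] * 90 + ["u2"] * 6 + ["u3"] * 4, top_n=1)  # 0.9
```

Tracking this share over time is one way to operationalize the contributor metrics discussed above without assessing any individual transcription.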

As with original historical observers, “nonprofessional” does not necessarily mean nonexpert. We do not know the backgrounds of all contributors—citizen science platforms often allow anonymity—but we do have information for some. Several participants in ACRE-Canada and DRAW possess credentials like those seen among historical observers: retired meteorologists and technicians, including those from the military or government, archivists, historians, computer specialists, and teachers. In DRAW, a sizable contingent consists of students or recent graduates with training in meteorology, environmental science, history, archives, and information studies (Bellman et al. 2016). Most are volunteers, although Weather Detective and SEARCH compensated some students to enter data.

Transcribers express several motivations for our data rescue projects, for example, to improve understanding of their local weather and climate and contribute to a global database of information that could “make a difference” in understanding climate change. We also see a strong historical element of motivation: Participants may wish to learn more about their own genealogy or the history of their community or are drawn to a time period covered by the weather observations. In Weather Detective, transcribers found records from ships on which their ancestors sailed. Many volunteers say that they are motivated by altruism. Some students report being motivated to add experiences to their resumés.

As with original observations, we see evidence of social networking in our projects. Volunteers who contributed to NEHHWD were found through personal connections as well as an online call for volunteers. This added to an inherent trust that came from the coordinators knowing some volunteers in person or through lengthy email conversations. Additionally, NEHHWD relied on a tiered content-moderation approach, with two retired professionals, a meteorologist and a historian, coordinating contributions. Contributors to ACRE-Canada started a photo sharing site to exchange screenshots of difficult alphabetic characters to see if others could decipher the entries. Trust therefore can be characterized as consistency when transcribers adhere to norms created by the network.

d. Why do we trust the transcriptions?

Transcription of weather-related information from one format to another is not new. Herschel (1835) provided best practices for transcribing observations from one logbook format to a format that could be easily shared. As designers of weather citizen science projects, we realize that we too have a role in ensuring trust of the transcriptions. Because our cases rely heavily on citizen science, trusting transcriptions means developing robust platforms for volunteer engagement.

Predesigned templates play a crucial, if largely hidden, role in facilitating consistent transfer of information from historical media to machine-readable formats (World Meteorological Organization 2016). Detailed spreadsheet templates aided in the transcription of data for NEHHWD (Bridgman et al. 2019), and project leads redesigned the templates into more convenient formats in response to contributors’ feedback. The ACRE-Canada project lead designed spreadsheet templates for each different diary type.

Both the Weather Detective and the DRAW leads developed web applications carefully tailored to the source material as a further way to ensure consistency. Figure 2d shows two pages from DRAW that are densely packed with numbers, meteorological symbols, and standardized abbreviations written in cursive handwriting. DRAW developers customized the UI with drop-down select menus for fields, such as weather symbols, that cannot easily be transcribed by typing. To minimize uncertainty on the part of transcribers, drop-down select menus also are used for wind directions and cloud types, limiting entry options to abbreviations that match the original (Sieber and Slonosky 2019).

All cases offer help pages or sections. Smaller projects, like NEHHWD or ACRE-Canada, allowed site administrators to send detailed instructions via individual emails. ACRE-Canada provided FAQs (readme files) in a DropBox folder containing image files and spreadsheet templates. Weather Detective and SEARCH had FAQs and help sections on their websites. DRAW has a help section on the website and explainers attached to each data entry element as well as video tutorials (https://vimeo.com/525062600). The Weather Detective site offered a tutorial showing how to read the observations and digitize the data (https://www.youtube.com/watch?v=qdYLxmEEe_g). Assistance features act to increase communication and provide training, which leads to an increase in trust in the area of skill. Improved skills in turn help to decrease errors, which increases consistency.

All projects create methods for volunteers to provide feedback on their transcriptions. Email exchanges in NEHHWD and SEARCH enabled transcribers to report if methodological errors were identified. ACRE-Canada had a communications forum and email so transcribers could confer with project investigators. DRAW has a feedback form as well as a dedicated email feedback account. Weather Detective had email and phone support for individual questions, for example, if volunteers could not read the handwriting or the text was in French. Two-way communication increases transcribers’ trust in us and our trust in them because we have evidence they were motivated and engaged.

Developers provide semiregular updates to contributors about project goals for SEARCH, Weather Detective, and DRAW. Volunteers have shared interesting case studies or publications they had found, via forums or directly with the project team. DRAW maintains a blog, a news page on the website, and a stand-alone newsletter. If project leads inform volunteers of the purpose of the data, especially if accuracy is important to that purpose, then research suggests we can better trust the data (Skarlatidou et al. 2019). It also suggests a “currentness” to the project; activities are ongoing and project leads care about informing participants (benevolence). Increasing communication and training increases involvement or benevolence, which increases skill and consistency, mutually reinforcing all aspects of trust.

This is not a paper on validation, which is a posttranscription activity; however, there are possibilities for some data validation during transcription. Preliminary validation of DRAW data by Brinkerhoff et al. (2017) finds that 96.14% of the data were transcribed accurately for fields where accuracy was testable; 99.97% of the data transcribed fell within meteorological norms. Brinkerhoff et al. (2017) include a taxonomy of errors and find that misplaced decimal points, transposed digits, and mistaken punctuation marks were common sources of inaccuracies. Error rates from the ACRE-Canada project range from 0.02% to 0.20%. Validation of NEHHWD identified that less than 1% (0.80%) of the transcribed data required correction (Bridgman et al. 2019). There also are possibilities for data verification before or during transcription. An enhancement to the DRAW site enables self-verification: Users can review their own entries and add comments as deemed necessary (Sieber and Slonosky 2019). Weather Detective used a crowdsourcing approach, designing the system so that each record was transcribed at least three times and often as many as 10 times. If the values were not identical, the value appearing most often was accepted.
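The multiple-entry approach described for Weather Detective amounts to a majority vote over independent transcriptions. A minimal sketch of that idea; the tie and no-agreement handling here are our assumptions, not the project's documented rule:

```python
from collections import Counter

def consensus(entries):
    """Return the value transcribed most often among independent entries,
    or None when no value appears more than once (manual review needed)."""
    value, count = Counter(entries).most_common(1)[0]
    if count == 1 and len(entries) > 1:
        return None
    return value
```

For example, `consensus(["12.3", "12.3", "12.8"])` accepts "12.3", while three mutually disagreeing entries return None and would be routed to a human reviewer.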

5. Discussion and conclusions: Formalized checklist for trusting historical weather

We derive a trust checklist, shown in Table 2, that builds on the lessons learned in our results as well as the cases and checklists mentioned in section 2. The checklist is composed of 19 elements, divided into four steps: observer, observation, transcriber, and transcription. In this checklist, we choose not to offer a quantitative, hierarchical, scaled, or otherwise “objective” set of elements because the nature of historical records is such that few records will fit all 19 criteria. Our checklist is invariably subjective. The checklist is not meant to cover every possible case, although it builds on the authors’ extensive experience in weather data rescue and citizen science. We look to formalize existing historical weather rescue practices and provide a rule of thumb when there is little other guiding knowledge. Users of such records will always have to make a judgment call based on incomplete information when needing to invest scarce resources of time, effort, and funds. Different projects and users of the data will have different needs, and not all elements will weigh equally for all research goals. This checklist provides a guide for those judgment calls; other criteria are possible.

Table 2

Checklist to assess how much to trust a potential source of rescued historical data. The steps listed in the left-hand column correspond to the steps shown in Fig. 1.


a. Checking the checklist

The value of this checklist was affirmed after transcribing and validating case study data (cf. Slonosky 2014; Ashcroft 2014; Brinkerhoff et al. 2017; Sieber and Slonosky 2019). To further assess the utility of the checklist we apply it to a new case, a recently published dataset for Perth, Australia (Gergis et al. 2021). As we move through the steps, we can build a logical picture of reliability of observations before transcription is complete.

Step 1 exposes potential trust issues in the observations, as little is known about the observer of these weather records (Gergis et al. 2021). Notes within the weather diaries suggest that the observers were scientifically trained, as they were able to name cloud types and discussed the calibration of various weather instruments.

The value and detail revealed when working through step 2 of the checklist can compensate for the lack of observer information. We can deduce that the source is relatively official as the 16 handwritten weather diaries transcribed as a part of the dataset came from a survey office. They are recorded on preprinted forms; they are regular and span a long period (1830–75). The observations also can be checked for internal consistency with qualitative and quantitative data available for cross referencing. Some information about instrumentation is provided. The observations are valuable because they fill a spatial and temporal gap, providing some of the earliest nineteenth-century weather observations we have for Western Australia.

Although the transcriber elements of this project (step 3) did not fit the standard citizen science mold, the checklist can still be applied to assess the trust we can put in their efforts. Two undergraduate students completing majors in atmospheric science worked together on a paid and then voluntary basis to transcribe the dataset. They were motivated by finances and academic opportunity as well as their intrinsic interest, as illustrated by the willingness of one student to continue in a volunteer role. The students also worked together and communicated regularly with other members of the team, giving them a strong social network. Both transcribers are authors on the resulting study, a further recognition and high rating of their work.

The transcription elements used by Gergis et al. (step 4) reveal a small-scale setup and implementation, with templates designed to match the original observations. This included checking dates meticulously in the original diaries, as many Sundays were missing, and using conditional formatting to automatically highlight values that were outside expected thresholds. Instructions were provided to transcribers, with a direct line to the researchers involved in the project for feedback and questions. Multiple key entry was conducted at the start of the project (Gergis et al. 2021), with lessons learned from that experience fed back to the transcribers and used to improve the transcription process.
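The conditional-formatting check described by Gergis et al. amounts to a per-variable range test. A minimal sketch, with entirely hypothetical thresholds (not taken from Gergis et al. 2021; real projects tune them to variable, season, and site):

```python
# Hypothetical acceptance ranges, one (low, high) pair per variable.
THRESHOLDS = {
    "temp_c": (-10.0, 48.0),   # illustrative plausible air temperatures
    "rain_mm": (0.0, 400.0),   # illustrative plausible daily rainfall totals
}

def needs_highlight(variable, value):
    """True when a transcribed value falls outside its expected range,
    mimicking a spreadsheet conditional-formatting rule."""
    low, high = THRESHOLDS[variable]
    return not (low <= value <= high)
```

Flagged cells are candidates for a second look at the manuscript, not automatic rejections; genuine extremes are often the most valuable observations in a record.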

Overall, the checklist enables us to put moderate to high trust in the record described by Gergis et al. (2021) before conducting any statistical analysis of the dataset. Applying the checklist was quick and pragmatic and gave us an opportunity to consider the project from many angles before proceeding.

b. Cross connections and conclusions

Beyond applying individual elements, trust in data rescue represents an iterative, interlocking process of feedback loops. Many of our decisions about trust in observations lead back to the observer. Consistency in the observations suggests the observer exhibited benevolence and gained skills by observing all aspects of weather rather than just noticing extreme events or spectacular weather. This increases our trust in their observations. As suggested above, the source of the records can lead back to the observer. If the source is a government compilation (e.g., Australia) or a specialized society source (e.g., the Natural History Society of Montreal), then it suggests not only that the observations were reviewed by peers but that the observer was sufficiently dedicated (i.e., benevolently motivated) to seek out credentialing and training. If present-day transcribers improve their knowledge of the observers and the norms and practices surrounding the observations being transcribed, then transcribers can better interpret the observations and be consistent in the observations they transcribe. Knowledge that an observer was originally from Scotland, for example, allowed one transcriber to discover that the word “blinks” is a Scots word for drizzle and to better interpret the handwriting of the diary. Examples also show how interactions between the transcribers and the platforms lead to an increase in trust in both the transcribers and the transcriptions: increased training leads to an increase in skill and consistency, while increased communication leads to an increase in benevolence.

Elements of the checklist are not mutually exclusive. For example, an original diary with a known observer and personal biographical information can use elements from steps 1 and 2. Information from sources such as printed national or international collections such as annual reports can fall under step 2. Should the content already be transcribed, then we can assess the effectiveness of steps 3 and 4. We also may wish to revisit steps 1 and/or 2 to check whether the original content is trustworthy.

Not all elements in Table 2 are necessary to confer trust with any given project. Insufficient information may exist to satisfy elements in the observation step. Conversely, sufficient information may exist to satisfy elements in the observer step, which then increases confidence in the resulting dataset. Not even our cases satisfy all the elements of the checklist, nor are the elements meant to replace other expert knowledge, which, for example, is needed to gauge expected seasonal values for the region. For nearly every case, we can think of a counterexample: a credentialed observer with links to networks whose observations proved less reliable when validated, or obscure, unknown diaries with no known social or professional links that produced highly accurate keystone records. Information is likely incomplete for any historical record. At minimum, this calls for careful documentation of the transcription process.

Elements of the list can be geographically, temporally, and also culturally contingent. Geographically and temporally, the field of meteorology evolved in different places at different rates from largely amateur in certain societies to more professionally organized in others. Historical weather data cover four centuries, six continents, all the oceans, and the upper reaches of the atmosphere, measured by dozens if not hundreds of instruments with continually evolving precision levels. Something as simple as the expected number of decimal places will thus vary by historical period and instrument. Expected seasonal variability changes with location, variable, and time step (hourly, daily, or monthly). Judgment needs to be exercised in conjunction with what information is available when evaluating historical observations. Traditions and organizations change between political and cultural milieus (e.g., military vs civilian, government sponsored vs amateur, independent or university organized). Historical data rescue is more than empirical validation. Trust is built on people and their cultures (e.g., Mayer et al. 1995; Rousseau et al. 1998).

In conclusion, we contend that trust in historical weather data rescue encompasses the entire life cycle of acquiring and transcribing observations. This includes trust in observers that considers observer skills and their benevolent motivations and consistency as well as trust in the observations, with issues such as consistency of source and instrumentation. In several cases, transcription has been undertaken by volunteer crowdsourcing or citizen science and therefore is connected to the quality of the transcription environment and the knowledge and passion of the transcribers. We hope that this checklist derived from our experiences assists in prioritizing the millions of historical weather observations waiting to be transcribed.

Acknowledgments.

We thank the many citizen scientists and volunteers who have donated many hours to transcribing weather records. The authors acknowledge the work of the lead coordinators in the SEARCH and NEHHWD projects, particularly Joëlle Gergis, David Karoly, Howard Bridgman, and Ken Thornton. We thank Mac Benoy and the anonymous reviewers for their helpful suggestions. Funding for the paper comes from the Social Sciences and Humanities Research Council Partnership Grant 895-2012-1023, the Fonds Nature et Technologies Equip grant, and the McGill University Library Innovation fund. We also thank McGill students, DRAW volunteers, and volunteers recruited through the Canadian Meteorological and Oceanographic Society website and the RealClimate blog site.

Data availability statement.

This is not a data analysis paper but rather a paper about data selection. Our data for finished projects are publicly available as listed in Table 1, but, because of privacy and ethical concerns, data on individual contributors cannot be made available.

REFERENCES

  • Adamson, P., 2003: Clement Lindley Wragge and the naming of weather disturbances. Weather, 58, 359–363, https://doi.org/10.1256/wea.13.03.

  • Allan, R., P. Brohan, G. P. Compo, R. Stone, J. Luterbacher, and S. Brönnimann, 2011: The international Atmospheric Circulation Reconstructions over the Earth (ACRE) initiative. Bull. Amer. Meteor. Soc., 92, 1421–1425, https://doi.org/10.1175/2011BAMS3218.1.

  • Ashcroft, L., 2014: Extending the instrumental climate record of southeastern Australia. Ph.D. thesis, School of Earth Sciences, University of Melbourne, 516 pp.

  • Ashcroft, L., J. Gergis, and D. J. Karoly, 2014: A historical climate dataset for southeastern Australia, 1788–1859. Geosci. Data J., 1, 158–178, https://doi.org/10.1002/gdj3.19.

  • Ashcroft, L., D. J. Karoly, and A. J. Dowdy, 2019: Historical extreme rainfall events in southeastern Australia. Wea. Climate Extremes, 25, 100210, https://doi.org/10.1016/j.wace.2019.100210.

  • Auer, I., and Coauthors, 2007: HISTALP—Historical instrumental climatological surface time series of the Greater Alpine Region. Int. J. Climatol., 27, 17–46, https://doi.org/10.1002/joc.1377.

  • Australian Museum of Applied Arts and Sciences, 2008: Historical letters from the Sydney Observatory. Accessed 15 March 2022, https://www.maas.museum/observations/category/letters/.

  • Bellman, B., C. Cohen, M. Midani Moore, S. Raspa, and A. Stein, 2016: Testing citizen science with the McGill DRAW Project. McGill University School of the Environment Final Project Rep. for class ENVR 401, 58 pp.

  • Bonter, D. N., and C. B. Cooper, 2012: Data validation in citizen science: A case study from Project FeederWatch. Front. Ecol. Environ., 10, 305–307, https://doi.org/10.1890/110273.

  • Bosilovich, M. G., J. Kennedy, D. Dee, R. Allan, and A. O’Neill, 2013: On the reprocessing and reanalysis of observations for climate. Climate Science for Serving Society, G. Asrar and J. Hurrell, Eds., Springer, 51–71, https://doi.org/10.1007/978-94-007-6692-1_3.

  • Bridgman, H., L. Ashcroft, K. Thornton, G. Di Gravio, and W. Oates, 2019: Meteorological observations for Eversleigh Station, near Armidale, New South Wales, Australia, 1877–1922. Geosci. Data J., 6, 174–188, https://doi.org/10.1002/gdj3.80.

  • Brinkerhoff, C., A. Albano, S. Becker, B. Feddersen, E. Hernandez, K. Kruglova, K. Nicoll-Griffith, and M. Tsynkevych, 2017: Data quality of citizen science: Learning from the past to inform the present. McGill University School of the Environment Final Project Rep. for class ENVR 401, 41 pp.

  • Brohan, P., 2020: Testing AWS Textract for weather data rescue. Accessed 16 April 2021, https://brohan.org/AWS-Textract/.

  • Brönnimann, S., and Coauthors, 2006: A guide for digitising manuscript climate data. Climate Past, 2, 137–144, https://doi.org/10.5194/cp-2-137-2006.

  • Brönnimann, S., and Coauthors, 2019: Unlocking pre-1850 instrumental meteorological records: A global inventory. Bull. Amer. Meteor. Soc., 100, ES389–ES413, https://doi.org/10.1175/BAMS-D-19-0040.1.

  • Brovelli, M. A., M. Ponti, S. Schade, and P. Solís, 2020: Citizen science in support of digital Earth. Manual of Digital Earth, H. Guo, M. F. Goodchild, and A. Annoni, Eds., Springer, 593–622.

  • Camuffo, D., 2002: History of the long series of daily air temperature in Padova (1725–1998). Climatic Change, 53, 7–75, https://doi.org/10.1023/A:1014958506923.

  • Chandler, R. E., P. Thorne, J. Lawrimore, and K. Willett, 2012: Building trust in climate science: Data products for the 21st century. Environmetrics, 23, 373–381, https://doi.org/10.1002/env.2141.

  • Cohn, J. P., 2008: Citizen science: Can volunteers do real research? BioScience, 58, 192–197, https://doi.org/10.1641/B580303.

  • Cooper, C. B., J. Shirk, and B. Zuckerberg, 2014: The invisible prevalence of citizen science in global research: Migratory birds and climate change. PLOS ONE, 9, e106508, https://doi.org/10.1371/journal.pone.0106508.

  • Demarée, G. R., and A. E. J. Ogilvie, 2008: The Moravian missionaries at the Labrador coast and their centuries-long contribution to instrumental meteorological observations. Climatic Change, 91, 423–450, https://doi.org/10.1007/s10584-008-9420-2.

  • Dupigny-Giroux, L. A., T. F. Ross, J. D. Elms, R. Truesdell, and S. R. Doty, 2007: NOAA’s Climate Database Modernization Program: Rescuing, archiving, and digitizing history. Bull. Amer. Meteor. Soc., 88, 1015–1017, https://doi.org/10.1175/BAMS-88-7-1015.

  • Endfield, G., and C. Morris, 2012: Exploring the role of the amateur in the production and circulation of meteorological knowledge. Climatic Change, 113, 69–89, https://doi.org/10.1007/s10584-012-0415-7.

  • Eveleigh, A. M. M., C. Jennett, A. Blandford, P. Brohan, and A. L. Cox, 2014: Designing for dabblers and deterring drop-outs in citizen science. Proc. SIGCHI Conf. on Human Factors in Computing Systems, Toronto, ON, Canada, ACM Special Interest Group on Computer-Human Interaction, 2985–2994, https://doi.org/10.1145/2556288.2557262.

  • Gambetta, D., 1988: Trust: Making and Breaking Cooperative Relations. Blackwell, 246 pp.

  • Gergis, J., D. J. Karoly, and R. J. Allan, 2009: A climate reconstruction of Sydney Cove, New South Wales, using weather journal and documentary data, 1788–1791. Aust. Meteor. Oceanogr. J., 58, 83–98, https://doi.org/10.22499/2.5802.001.

  • Gergis, J., Z. Baillie, S. Ingallina, L. Ashcroft, and T. Ellwood, 2021: A historical climate dataset for southwestern Australia, 1830–1875. Int. J. Climatol., 41, 4898–4919, https://doi.org/10.1002/joc.7105.

  • Gilfedder, M., C. J. Robinson, J. E. M. Watson, T. G. Campbell, B. L. Sullivan, and H. P. Possingham, 2019: Brokering trust in citizen science. Soc. Nat. Resour., 32, 292–302, https://doi.org/10.1080/08941920.2018.1518507.

  • Goodchild, M. F., and L. Li, 2012: Assuring the quality of volunteered geographic information. Spat. Stat., 1, 110–120, https://doi.org/10.1016/j.spasta.2012.03.002.

  • Gura, T., 2013: Citizen science: Amateur experts. Nature, 496, 259–261, https://doi.org/10.1038/nj7444-259a.

  • Hall, A., 1858: The observatory at St. Martin, Isle Jesus, Canada East. Can. Nat. Geol., 1858, 352–363.

  • Herschel, J. F., 1835: Instructions for making and registering meteorological observations at various stations in southern Africa, and other countries in the South Seas. J. Roy. Geogr. Soc. London, 5, 367–380, https://doi.org/10.2307/1797890.

  • Herschel, J. F., 1840: Letter to John Samuel McCord. McCord Museum Archives.

  • Hestmark, G., and Ø. Nordli, 2016: Jens Esmark’s Christiania (Oslo) meteorological observations 1816–1838: The first long-term continuous temperature record from the Norwegian capital homogenized and analysed. Climate Past, 12, 2087–2106, https://doi.org/10.5194/cp-12-2087-2016.

  • Home, R. W., and K. T. Livingston, 1994: Science and technology in the story of Australian federation: The case of meteorology, 1876–1908. Hist. Rec. Aust. Sci., 10, 109–127, https://doi.org/10.1071/HR9941020109.

  • Hunter, J., A. Abdulmonem, and C. van Ingen, 2013: Assessing the quality and trustworthiness of citizen science data. Concurrency Comput., 25, 454–466, https://doi.org/10.1002/cpe.2923.

  • Hunter, M., J. Cagianos, Z. Garyfalakis, G. Stankovic, and S. Wan, 2018: Historical weather data rescue with the DRAW project. McGill University School of the Environment Final Project Rep. for class ENVR 401, 62 pp.

  • Kingston, G. T., 1875: Appendix No. 1: Fourth Report of the Meteorological Office of the Dominion of Canada. Reports on the Meteorological, Magnetic and Other Observatories of the Dominion of Canada for the Calendar Year Ended 31st December 1874, Supplement (no. 4) To the Seventh Annual Report of the Department of Marine and Fisheries, Being for the Fiscal Year ended 30th June, 1874. Meteorological Office of the Dominion of Canada, 317 pp.

  • Kingston, G. T., 1880: Letter Book of G. T. Kingston to Beyond Canada 1873–1880. Meteorological Service of Canada.

  • Land-Zandstra, A., G. Agnello, and Y. S. Gültekin, 2021: Participants in citizen science. The Science of Citizen Science, K. Vohland et al., Eds., Springer, 243–259, https://doi.org/10.1007/978-3-030-58278-4_13.

  • Lewandowski, E., and H. Specht, 2015: Influence of volunteer and project characteristics on data quality of biological surveys. Conserv. Biol., 29, 713–723, https://doi.org/10.1111/cobi.12481.

  • Mateus, C., A. Potito, and M. Curley, 2021: Engaging secondary school students in climate data rescue through service-learning partnerships. Weather, 76, 113–118, https://doi.org/10.1002/wea.3841.

  • Mayer, R. C., J. H. Davis, and F. D. Schoorman, 1995: An integrative model of organizational trust. Acad. Manage. Rev., 20, 709–734, https://doi.org/10.2307/258792.

  • McCord, J. S., 1843: Notebook on climate and meteorology of North America. McCord Museum Archives, Fonds McCord Family Papers, P001-828.

  • McGovern, A., 2020: NSF AI institute for research on trustworthy AI in weather, climate, and coastal oceanography. AI Matters, 6, 14–16, https://doi.org/10.1145/3446243.3446249.

  • McLeod, C., 1879: Meteorological Register 1879. McGill University Archives, RG 32, collection 1491 item 120.

  • Mekis, É., and L. A. Vincent, 2011: An overview of the second generation adjusted daily precipitation dataset for trend analysis in Canada. Atmos.–Ocean, 49, 163–177, https://doi.org/10.1080/07055900.2011.583910.

  • Merriam-Webster, 2021: Trust. Accessed 8 April 2021, https://www.merriam-webster.com/dictionary/trust.

  • Moyal, A., 2003: The Web of Science: The Scientific Correspondence of W.B. Clarke. Vols. 1 and 2, Australian Scholarly Publishing, 1340 pp.

  • Mugar, G., C. B. Jackson, C. Østerlund, and K. Crowston, 2015: Being present in online communities: Learning in citizen science. Proc. Seventh Int. Conf. on Communities and Technologies, Limerick, Ireland, Association for Computing Machinery, 129–138, https://doi.org/10.1145/2768545.2768555.

  • Nash, D. J., and G. H. Endfield, 2002: A 19th century climate chronology for the Kalahari region of central southern Africa derived from missionary correspondence. Int. J. Climatol., 22, 821–841, https://doi.org/10.1002/joc.753.

  • Oomen, J., and L. Aroyo, 2011: Crowdsourcing in the cultural heritage domain: Opportunities and challenges. Proc. Fifth Int. Conf. on Communities and Technologies, Brisbane, Queensland, Australia, Queensland University of Technology, 138–149, https://doi.org/10.1145/2103354.2103373.

  • Pappert, D., Y. Brugnara, S. Jourdain, A. Pospieszyńska, R. Przybylak, C. Rohr, and S. Brönnimann, 2021: Unlocking weather observations from the Societas Meteorologica Palatina (1781–1792). Climate Past, 17, 2361–2379, https://doi.org/10.5194/cp-17-2361-2021.

  • Pecl, G., C. Gillies, C. Sbrocchi, and P. Roetman, 2015: Building Australia through citizen science. Office of the Chief Scientist Occasional Paper Series Issue 11, 4 pp., https://www.chiefscientist.gov.au/sites/default/files/Citizen-science-OP_web.pdf.

  • Räisänen, J., 2007: How reliable are climate models? Tellus, 59A, 2–29, https://doi.org/10.1111/j.1600-0870.2006.00211.x.

  • Raddick, M. J., G. Bracey, P. L. Gay, C. J. Lintott, P. Murray, K. Schawinski, A. S. Szalay, and J. Vandenberg, 2009: Galaxy Zoo: Exploring the motivations of citizen science volunteers. arXiv, 0909.2925v1, https://arxiv.org/abs/0909.2925.

  • Rimmer, G., 2020: Abbott, Francis (1799–1883). Australian Dictionary of Biography, National Centre of Biography, Australian National University, http://adb.anu.edu.au/biography/abbott-francis-4/text4063.

  • Rotman, D., J. Hammock, J. Preece, C. Boston, D. Hansen, A. Bowser, and Y. He, 2014: Does motivation in citizen science change with time and culture? Proc. 17th ACM Conf. on Computer Supported Cooperative Work and Social Computing, Baltimore, MD, ACM Special Interest Group on Computer-Human Interaction, 229–232, https://doi.org/10.1145/2556420.2556492.

  • Rousseau, D., S. Sitkin, R. Burt, and C. Camerer, 1998: Not so different after all: A cross-discipline view of trust. Acad. Manage. Rev., 23, 393–404, https://doi.org/10.5465/amr.1998.926617.

  • Ryan, C., and Coauthors, 2018: Integrating data rescue into the classroom. Bull. Amer. Meteor. Soc., 99, 1757–1764, https://doi.org/10.1175/BAMS-D-17-0147.1.

  • Shirky, C., 2010: Cognitive Surplus: Creativity and Generosity in a Connected Age. Penguin Press, 256 pp.

  • Sieber, R., and V. C. Slonosky, 2019: Developing a flexible platform for crowdsourcing historical weather records. Hist. Methods, 52, 164–177, https://doi.org/10.1080/01615440.2018.1558138.

  • Silvertown, J., 2009: A new dawn for citizen science. Trends Ecol. Evol., 24, 467–471, https://doi.org/10.1016/j.tree.2009.03.017.

  • Skarlatidou, A., A. Hamilton, M. Vitos, and M. Haklay, 2019: What do volunteers want from citizen science technologies? A systematic literature review and best practice guidelines. J. Sci. Commun., 18, A02, https://doi.org/10.22323/2.18010202.

  • Slonosky, V., 2014: Historical climate observations in Canada: 18th and 19th century daily temperature from the St. Lawrence Valley, Quebec. Geosci. Data J., 1, 103–120, https://doi.org/10.1002/gdj3.11.

  • Slonosky, V. C., 2018: Climate in the Age of Empire: Weather Observers in Colonial Canada. Amer. Meteor. Soc., 288 pp.

  • Smallwood, C., 1858: Letter to Prof. Henry, June 10th 1858. Smithsonian Institution Archives, Record Unit 60, Meteorological Project, 1849–1875 (data from 1820).

  • Svare, H., A. H. Gausdal, and G. Möllering, 2020: The function of ability, benevolence, and integrity-based trust in innovation networks. Ind. Innovation, 27, 585–604, https://doi.org/10.1080/13662716.2019.1632695.

  • Thorne, P. W., and Coauthors, 2017: Toward an integrated set of surface meteorological observations for climate science and applications. Bull. Amer. Meteor. Soc., 98, 2689–2702, https://doi.org/10.1175/BAMS-D-16-0165.1.

  • Thornton, K., L. Ashcroft, H. Bridgman, W. Oates, and G. di Gravio, 2018: Algernon Henry Belfield and the Eversleigh Weather Diaries, 1877–1922. J. Aust. Colon. Hist., 20, 139–154, https://doi.org/10.3316/ielapa.935908270897929.

  • Tollefson, J., 2010: Climate science: An erosion of trust? Nature, 466, 24–26, https://doi.org/10.1038/466024a.

  • Veale, L., and Coauthors, 2017: Dealing with the deluge of historical weather data: The example of the TEMPEST database. Geo, 4, e00039, https://doi.org/10.1002/geo2.39.

  • Vincent, L. A., X. L. Wang, E. J. Milewska, H. Wan, F. Yang, and V. Swail, 2012: A second generation of homogenized Canadian monthly surface air temperature for climate trend analysis. J. Geophys. Res., 117, D18110, https://doi.org/10.1029/2012JD017859.

  • Wheeler, D., 2014: Hubert Lamb’s ‘treasure trove’: Ships’ logbooks in climate research. Weather, 69, 133–139, https://doi.org/10.1002/wea.2284.

  • World Meteorological Organization, 2016: Guidelines on best practices for climate data rescue. WMO Doc. WMO 1182, 38 pp., https://library.wmo.int/doc_num.php?explnum_id=3318.

  • Yoon, A., 2014: End users’ trust in data repositories: Definition and influences on trust development. Arch. Sci., 14, 17–34, https://doi.org/10.1007/s10502-013-9207-8.

  • Zhang, Y., 2021: An alternative to manual data rescue: AI & machine learning come to help. ACRE Virtual Workshop, Online, Singapore Management University.
Save
  • Adamson, P., 2003: Clement Lindley Wragge and the naming of weather disturbances. Weather, 58, 359363, https://doi.org/10.1256/wea.13.03.

    • Search Google Scholar
    • Export Citation
  • Allan, R., P. Brohan, G. P. Compo, R. Stone, J. Luterbacher, and S. Brönnimann, 2011: The international Atmospheric Circulation Reconstructions over the Earth (ACRE) initiative. Bull. Amer. Meteor. Soc., 92, 14211425, https://doi.org/10.1175/2011BAMS3218.1.

    • Search Google Scholar
    • Export Citation
  • Ashcroft, L., 2014: Extending the instrumental climate record of southeastern Australia. Ph.D. thesis, School of Earth Sciences, University of Melbourne, 516 pp.

    • Search Google Scholar
    • Export Citation
  • Ashcroft, L., J. Gergis, and D. J. Karoly, 2014: A historical climate dataset for southeastern Australia, 1788–1859. Geosci. Data J., 1, 158178, https://doi.org/10.1002/gdj3.19.

    • Search Google Scholar
    • Export Citation
  • Ashcroft, L., D. J. Karoly, and A. J. Dowdy, 2019: Historical extreme rainfall events in southeastern Australia. Wea. Climate Extreme, 25, 100210, https://doi.org/10.1016/j.wace.2019.100210.

    • Search Google Scholar
    • Export Citation
  • Auer, I., and Coauthors, 2007: HISTALP—Historical instrumental climatological surface time series of the Greater Alpine Region. Int. J. Climatol, 27, 1746, https://doi.org/10.1002/joc.1377.

    • Search Google Scholar
    • Export Citation
  • Australian Museum of Applied Arts and Sciences, 2008: Historical letters from the Sydney Observatory. Accessed 15 March 2022, https://www.maas.museum/observations/category/letters/.

  • Bellman, B., C. Cohen, M. Midani Moore, S. Raspa, and A. Stein, 2016: Testing citizen science with the McGill DRAW Project. McGill University School of the Environment Final Project Rep. for class ENVR 401, 58 pp.

    • Search Google Scholar
    • Export Citation
  • Bonter, D. N., and C. B. Cooper, 2012: Data validation in citizen science: A case study from Project FeederWatch. Front. Ecol. Environ., 10, 305307, https://doi.org/10.1890/110273.

    • Search Google Scholar
    • Export Citation
  • Bosilovich, M. G., J. Kennedy, D. Dee, R. Allan, and A. O’Neill, 2013: On the reprocessing and reanalysis of observations for climate. Climate Science for Serving Society, G. Asrar and J. Hurrell, Eds., Springer, 5171, https://doi.org/10.1007/978-94-007-6692-1_3.

    • Search Google Scholar
    • Export Citation
  • Bridgman, H., L. Ashcroft, K. Thornton, G. Di Gravio, and W. Oates, 2019: Meteorological observations for Eversleigh Station, near Armidale, New South Wales, Australia, 1877–1922. Geosci. Data J., 6, 174188, https://doi.org/10.1002/gdj3.80.

    • Search Google Scholar
    • Export Citation
  • Brinkerhoff, C., A. Albano, S. Becker, B. Feddersen, E. Hernandez, K. Kruglova, K. Nicoll-Griffith, and M. Tsynkevych, 2017: Data quality of citizen science: Learning from the past to inform the present. McGill University School of the Environment Final Project Rep. for class ENVR 401, 41 pp.

    • Search Google Scholar
    • Export Citation
  • Brohan, P., 2020: Testing AWS Textract for weather data rescue. Accessed 16 April 2021, https://brohan.org/AWS-Textract/.

  • Brönnimann, S., and Coauthors, 2006: A guide for digitising manuscript climate data. Climate Past, 2, 137144, https://doi.org/10.5194/cp-2-137-2006.

    • Search Google Scholar
    • Export Citation
  • Brönnimann, S., and Coauthors, 2019: Unlocking pre-1850 instrumental meteorological records: A global inventory. Bull. Amer. Meteor. Soc., 100, ES389ES413, https://doi.org/10.1175/BAMS-D-19-0040.1.

    • Search Google Scholar
    • Export Citation
  • Brovelli, M. A., M. Ponti, S. Schade, and P. Solís, 2020: Citizen science in support of digital Earth. Manual of Digital Earth, H. Guo, M. F. Goodchild, and A. Annoni, Eds., Springer, 593622.

    • Search Google Scholar
    • Export Citation
  • Camuffo, D., 2002: History of the long series of daily air temperature in Padova (1725–1998). Climatic Change, 53, 775, https://doi.org/10.1023/A:1014958506923.

    • Search Google Scholar
    • Export Citation
  • Chandler, R. E., P. Thorne, J. Lawrimore, and K. Willett, 2012: Building trust in climate science: Data products for the 21st century. Environmetrics, 23, 373381, https://doi.org/10.1002/env.2141.

    • Search Google Scholar
    • Export Citation
  • Cohn, J. P., 2008: Citizen science: Can volunteers do real research? BioScience, 58, 192197, https://doi.org/10.1641/B580303.

  • Cooper, C. B., J. Shirk, and B. Zuckerberg, 2014: The invisible prevalence of citizen science in global research: Migratory birds and climate change. PLOS ONE, 9, e106508, https://doi.org/10.1371/journal.pone.0106508.

    • Search Google Scholar
    • Export Citation
  • Demarée, G. R., and A. E. J. Ogilvie, 2008: The Moravian missionaries at the Labrador coast and their centuries-long contribution to instrumental meteorological observations. Climatic Change, 91, 423450, https://doi.org/10.1007/s10584-008-9420-2.

    • Search Google Scholar
    • Export Citation
  • Dupigny-Giroux, L. A., T. F. Ross, J. D. Elms, R. Truesdell, and S. R. Doty, 2007: NOAA’s Climate Database Modernization Program: Rescuing, archiving, and digitizing history. Bull. Amer. Meteor. Soc., 88, 10151017, https://doi.org/10.1175/BAMS-88-7-1015.

    • Search Google Scholar
    • Export Citation
  • Endfield, G., and C. Morris, 2012: Exploring the role of the amateur in the production and circulation of meteorological knowledge. Climatic Change, 113, 6989, https://doi.org/10.1007/s10584-012-0415-7.

    • Search Google Scholar
    • Export Citation
  • Eveleigh, A. M. M., C. Jennett, A. Blandford, P. Brohan, and A. L. Cox, 2014: Designing for dabblers and deterring drop-outs in citizen science. Proc. SIGCHI Conf. on Human Factors in Computing Systems, Toronto, ON, Canada, ACM Special Interest Group on Computer-Human Interaction, 29852994, https://doi.org/10.1145/2556288.2557262.

    • Search Google Scholar
    • Export Citation
  • Gambetta, D., 1988: Trust: Making and Breaking Cooperative Relations. Blackwell, 246 pp.

  • Gergis, J., D. J. Karoly, and R. J. Allan, 2009: A climate reconstruction of Sydney Cove, New South Wales, using weather journal and documentary data, 1788–1791. Aust. Meteor. Oceanogr. J., 58, 8398, https://doi.org/10.22499/2.5802.001.

    • Search Google Scholar
    • Export Citation
  • Gergis, J., Z. Baillie, S. Ingallina, L. Ashcroft, and T. Ellwood, 2021: A historical climate dataset for southwestern Australia, 1830–1875. Int. J. Climatol., 41, 48984919, https://doi.org/10.1002/joc.7105.

    • Search Google Scholar
    • Export Citation
  • Gilfedder, M., C. J. Robinson, J. E. M. Watson, T. G. Campbell, B. L. Sullivan, and H. P. Possingham, 2019: Brokering trust in citizen science. Soc. Nat. Resour., 32, 292302, https://doi.org/10.1080/08941920.2018.1518507.

    • Search Google Scholar
    • Export Citation
  • Goodchild, M. F., and L. Li, 2012: Assuring the quality of volunteered geographic information. Spat. Stat., 1, 110120, https://doi.org/10.1016/j.spasta.2012.03.002.

    • Search Google Scholar
    • Export Citation
  • Gura, T., 2013: Citizen science: Amateur experts. Nature, 496, 259261, https://doi.org/10.1038/nj7444-259a.

  • Hall, A., 1858: The observatory at St. Martin, Isle Jesus, Canada East. Can. Nat. Geol., 1858, 352363.

  • Herschel, J. F., 1835: Instructions for making and registering meteorological observations at various stations in southern Africa, and other countries in the South Seas. J. Roy. Geogr. Soc. London, 5, 367380, https://doi.org/10.2307/1797890.

    • Search Google Scholar
    • Export Citation
  • Herschel, J. F., 1840: Letter to John Samuel McCord. McCord Museum Archives.

  • Hestmark, G., and Ø. Nordli, 2016: Jens Esmark’s Christiania (Oslo) meteorological observations 1816–1838: The first long-term continuous temperature record from the Norwegian capital homogenized and analysed. Climate Past, 12, 20872106, https://doi.org/10.5194/cp-12-2087-2016.

    • Search Google Scholar
    • Export Citation
  • Home, R. W., and K. T. Livingston, 1994: Science and technology in the story of Australian federation: The case of meteorology, 1876–1908. Hist. Rec. Aust. Sci., 10, 109127, https://doi.org/10.1071/HR9941020109.

    • Search Google Scholar
    • Export Citation
  • Hunter, J., A. Abdulmonem, and C. van Ingen, 2013: Assessing the quality and trustworthiness of citizen science data. Concurrency Comput., 25, 454466, https://doi.org/10.1002/cpe.2923.

    • Search Google Scholar
    • Export Citation
  • Hunter, M., J. Cagianos, Z. Garyfalakis, G. Stankovic, and S. Wan, 2018: Historical weather data rescue with the DRAW project. McGill University School of the Environment Final Project Rep. for class ENVR 401, 62 pp.

    • Search Google Scholar
    • Export Citation
  • Kingston, G. T., 1875: Appendix No. 1: Fourth Report of the Meteorological Office of the Dominion of Canada. Reports on the Meteorological, Magnetic and Other Observatories of the Dominion of Canada for the Calendar Year Ended 31st December 1874, Supplement (no. 4) To the Seventh Annual Report of the Department of Marine and Fisheries, Being for the Fiscal Year ended 30th June, 1874. Meteorological Office of the Dominion of Canada, 317 pp.

    • Search Google Scholar
    • Export Citation
  • Kingston, G. T., 1880: Letter Book of G. T. Kingston to Beyond Canada 1873–1880. Meteorological Service of Canada.

  • Land-Zandstra, A., G. Agnello, and Y. S. Gültekinm, 2021: Participants in citizen science. The Science of Citizen Science, K. Vohland et al., Eds., Springer, 243259, https://doi.org/10.1007/978-3-030-58278-4_13.

    • Search Google Scholar
    • Export Citation
  • Lewandowski, E., and H. Specht, 2015: Influence of volunteer and project characteristics on data quality of biological surveys. Conserv. Biol., 29, 713723, https://doi.org/10.1111/cobi.12481.

    • Search Google Scholar
    • Export Citation
  • Mateus, C., A. Potito, and M. Curley, 2021: Engaging secondary school students in climate data rescue through service‐learning partnerships. Weather, 76, 113118, https://doi.org/10.1002/wea.3841.

    • Search Google Scholar
    • Export Citation
  • Mayer, R. C., J. H. Davis, and F. D. Schoorman, 1995: An integrative model of organizational trust. Acad. Manage. Rev., 20, 709734, https://doi.org/10.2307/258792.

    • Search Google Scholar
    • Export Citation
  • McCord, J. S., 1843: Notebook on climate and meteorology of North America. McCord Museum Archives, Fonds McCord Family Papers, P001-828.

    • Search Google Scholar
    • Export Citation
  • McGovern, A., 2020: NSF AI institute for research on trustworthy AI in weather, climate, and coastal oceanography. AI Matters, 6, 1416, https://doi.org/10.1145/3446243.3446249.

    • Search Google Scholar
    • Export Citation
  • McLeod, C., 1879: Meteorological Register 1879. McGill University Archives, RG 32, collection 1491 item 120.

  • Mekis, É., and L. A. Vincent, 2011: An overview of the second generation adjusted daily precipitation dataset for trend analysis in Canada. Atmos.–Ocean, 49, 163177, https://doi.org/10.1080/07055900.2011.583910.

    • Search Google Scholar
    • Export Citation
  • Merriam-Webster, 2021: Trust. Accessed 8 April 2021, https://www.merriam-webster.com/dictionary/trust.

  • Moyal, A., 2003: The Web of Science: The Scientific Correspondence of W.B. Clarke. Vols. 1 and 2, Australian Scholarly Publishing, 1340 pp.

    • Search Google Scholar
    • Export Citation
  • Mugar, G., C. B. Jackson, C. Østerlund, and K. Crowston, 2015: Being present in online communities: Learning in citizen science. Proc. Seventh Int. Conf. on Communities and Technologies, Limerick, Ireland, Association for Computing Machinery, 129138, https://doi.org/10.1145/2768545.2768555.

    • Search Google Scholar
    • Export Citation
  • Nash, D. J., and G. H. Endfield, 2002: A 19th century climate chronology for the Kalahari region of central southern Africa derived from missionary correspondence. Int. J. Climatol., 22, 821841, https://doi.org/10.1002/joc.753.

    • Search Google Scholar
    • Export Citation
  • Oomen, J., and L. Aroyo, 2011: Crowdsourcing in the cultural heritage domain: Opportunities and challenges. Proc. Fifth Int. Conf. on Communities and Technologies, Brisbane, Queensland, Australia, Queensland University of Technology, 138149, https://doi.org/10.1145/2103354.2103373.

    • Search Google Scholar
    • Export Citation
  • Pappert, D., Y. Brugnara, S. Jourdain, A. Pospieszyńska, R. Przybylak, C. Rohr, and S. Brönnimann, 2021: Unlocking weather observations from the Societas Meteorologica Palatina (1781–1792). Climate Past, 17, 23612379, https://doi.org/10.5194/cp-17-2361-2021.

    • Search Google Scholar
    • Export Citation
  • Pecl, G., C. Gillies, C. Sbrocchi, and P. Roetman, 2015: Building Australia through citizen science. Office of the Chief Scientist Occasional Paper Series Issue 11, 4 pp., https://www.chiefscientist.gov.au/sites/default/files/Citizen-science-OP_web.pdf.

  • Raäisaänen, J., 2007: How reliable are climate models? Tellus, 59A, 229, https://doi.org/10.1111/j.1600-0870.2006.00211.x.

  • Raddick, M. J., G. Bracey, P. L. Gay, C. J. Lintott, P. Murray, K. Schawinski, A. S. Szalay, and J. Vandenberg, 2009: Galaxy Zoo: Exploring the motivations of citizen science volunteers. arXiv, 0909.2925v1, https://arxiv.org/abs/0909.2925.

  • Rimmer, G., 2020: Abbott, Francis (1799–1883). Australian Dictionary of Biography, National Centre of Biography, Australian National University, http://adb.anu.edu.au/biography/abbott-francis-4/text4063.

  • Rotman, D., J. Hammock, J. Preece, C. Boston, D. Hansen, A. Bowser, and Y. He, 2014: Does motivation in citizen science change with time and culture? Proc. 17th ACM Conf. on Computer Supported Cooperative Work and Social Computing, Baltimore, MD, ACM Special Interest Group on Computer-Human Interaction, 229232, https://doi.org/10.1145/2556420.2556492.

    • Search Google Scholar
    • Export Citation
  • Rousseau, D., S. Sitkin, R. Burt, and C. Camerer, 1998: Not so different after all: A cross-discipline view of trust. Acad. Manage. Rev., 23, 393404, https://doi.org/10.5465/amr.1998.926617.

    • Search Google Scholar
    • Export Citation
  • Ryan, C., and Coauthors, 2018: Integrating data rescue into the classroom. Bull. Amer. Meteor. Soc., 99, 17571764, https://doi.org/10.1175/BAMS-D-17-0147.1.

    • Search Google Scholar
    • Export Citation
  • Shirky, C., 2010: Cognitive Surplus: Creativity and Generosity in a Connected Age. Penguin Press, 256 pp.

  • Sieber, R., and V. C. Slonosky, 2019: Developing a flexible platform for crowdsourcing historical weather records. Hist. Methods, 52, 164177, https://doi.org/10.1080/01615440.2018.1558138.

    • Search Google Scholar
    • Export Citation
  • Silvertown, J., 2009: A new dawn for citizen science. Trends Ecol. Evol., 24, 467471, https://doi.org/10.1016/j.tree.2009.03.017.

  • Skarlatidou, A., A. Hamilton, M. Vitos, and M. Haklay, 2019: What do volunteers want from citizen science technologies? A systematic literature review and best practice guidelines. J. Sci. Commun., 18, A02, https://doi.org/10.22323/2.18010202.

    • Search Google Scholar
    • Export Citation
  • Slonosky, V., 2014: Historical climate observations in Canada: 18th and 19th century daily temperature from the St. Lawrence Valley, Quebec. Geosci. Data J., 1, 103120, https://doi.org/10.1002/gdj3.11.

  • Slonosky, V. C., 2018: Climate in the Age of Empire: Weather Observers in Colonial Canada. Amer. Meteor. Soc., 288 pp.

  • Smallwood, C., 1858: Letter to Prof. Henry, June 10th 1858. Smithsonian Institution Archives, Record Unit 60, Meteorological Project, 1849–1875 (data from 1820).

  • Svare, H., A. H. Gausdal, and G. Möllering, 2020: The function of ability, benevolence, and integrity-based trust in innovation networks. Ind. Innovation, 27, 585–604, https://doi.org/10.1080/13662716.2019.1632695.

  • Thorne, P. W., and Coauthors, 2017: Toward an integrated set of surface meteorological observations for climate science and applications. Bull. Amer. Meteor. Soc., 98, 2689–2702, https://doi.org/10.1175/BAMS-D-16-0165.1.

  • Thornton, K., L. Ashcroft, H. Bridgman, W. Oates, and G. di Gravio, 2018: Algernon Henry Belfield and the Eversleigh Weather Diaries, 1877–1922. J. Aust. Colon. Hist., 20, 139–154, https://doi.org/10.3316/ielapa.935908270897929.

  • Tollefson, J., 2010: Climate science: An erosion of trust? Nature, 466, 24–26, https://doi.org/10.1038/466024a.

  • Veale, L., and Coauthors, 2017: Dealing with the deluge of historical weather data: The example of the TEMPEST database. Geo, 4, e00039, https://doi.org/10.1002/geo2.39.

  • Vincent, L. A., X. L. Wang, E. J. Milewska, H. Wan, F. Yang, and V. Swail, 2012: A second generation of homogenized Canadian monthly surface air temperature for climate trend analysis. J. Geophys. Res., 117, D18110, https://doi.org/10.1029/2012JD017859.

  • Wheeler, D., 2014: Hubert Lamb’s ‘treasure trove’: Ships’ logbooks in climate research. Weather, 69, 133–139, https://doi.org/10.1002/wea.2284.

  • World Meteorological Organization, 2016: Guidelines on best practices for climate data rescue. WMO Doc. WMO 1182, 38 pp., https://library.wmo.int/doc_num.php?explnum_id=3318.

  • Yoon, A., 2014: End users’ trust in data repositories: Definition and influences on trust development. Arch. Sci., 14, 17–34, https://doi.org/10.1007/s10502-013-9207-8.

  • Zhang, Y., 2021: An alternative to manual data rescue: AI & machine learning come to help. ACRE Virtual Workshop, Online, Singapore Management University.

  • Fig. 1.

    Our organizing framework, which links trust as a function of skill, benevolence, and consistency across four steps: from trust in historical weather observers and their observations to present-day trust among transcribers and in the transcription methods applied to those observations. The steps can proceed linearly, but a later step can also influence an earlier one, e.g., when a present-day transcriber identifies an error in a past observation or finds information in the text about a particular observer.

  • Fig. 2.

    (a) Example from A. Belfield’s weather diaries from NEHHWD (Source: https://www.une.edu.au/connect/news/2019/09/regional-weather-records-gain-international-recognition-for-accuracy). (b) Weather observations and a drawing of sunspots recorded by Captain W. C. Sinclair on S.S. Tarawera in July 1892, from Weather Detective. (c) Two example pages from McCord’s weather diary for June 1838 (Source: McCord Museum Archives). (d) A page of McGill Observatory records, from the DRAW project.

  • Fig. 3.

    Histogram of the number of contributions from each DRAW transcriber, as of 31 Mar 2021.
