1. Introduction
Climate services involve the production, translation, transfer, and use of climate knowledge and information in climate-informed decision-making and climate-smart policy and planning. Such services are intended to facilitate climate adaptation, mitigation, and disaster risk reduction, widely recognized as important challenges to sustainable development in rich and poor countries alike (Asrar et al. 2012; Wahlström 2009). Interest in climate services has grown in recent years, particularly since the 2011 initiation of the Global Framework for Climate Services (GFCS), an international initiative focused on improving the production, delivery, and application of climate information around the world (Hewitt et al. 2012).
This growing interest reflects an assumption that advances in this area will produce gains in social and economic well-being. Despite this assumption, there is active debate about what climate services are, where they are most effective, and how they should be designed to best deliver societal benefits. Questions regarding the kinds of information on which climate services should be based, the sorts of problems they can most effectively address, and the institutional arrangements needed to support them continue to occupy planning efforts, as the users and providers of climate services engage in a simultaneous and loosely coordinated process of learning by doing.
Some aspects have been more studied than others. Indeed, relatively more attention has been paid to assessing particular attributes of the climate information itself—including, for instance, the quality of the data that underlie specific services (Bhowmik and Costa 2014; Brunet and Jones 2011; Girvetz et al. 2013; Overpeck et al. 2011) and the verification of climate predictions (Goddard et al. 2013; Hyvärinen et al. 2015; Mason and Chidzambwa 2009), among other things. In the social science realm, efforts have focused on defining the parameters of “usable” science (see, e.g., Dilling and Lemos 2011; Tang and Dessai 2012), identifying factors that improve the communication of climate information (e.g., Lorenz et al. 2015; Marx et al. 2007; Taylor et al. 2015), and assessing the impact of specific services (see, e.g., Clements et al. 2013; Rahman et al. 2016; Thornton 2007).
To date, however, a broad-based review of the existing practice of operational climate services has not yet been attempted [for an overview of commercial investment, see Georgeson et al. (2017)]. The current article fills this gap by analyzing a unique dataset of more than 100 self-reported descriptions of climate service activities, which were submitted to the Global Framework for Climate Services and the Climate Services Partnership in 2012 (detailed descriptions of the data and methods are found in section 2). In doing so, this article creates a snapshot of the state of the field shortly after the implementation of the GFCS (results appear in section 3), allowing for a point of comparison in this evolving field. The article also offers observations on what can—and cannot—be learned from this kind of broad sampling activity (this discussion occurs in section 4), ending with some conclusions regarding how best to design future sampling efforts in order to more effectively advance learning (section 5).
2. Methods
a. Data
This article draws on the written descriptions of 101 climate services collected independently, though in a coordinated fashion, by the Climate Services Partnership (CSP) and the World Meteorological Organization (WMO) in 2012. Both entities used the same template (see appendix A) to solicit self-reported descriptions of climate service activities from within their networks. Both organizations called these “case studies,” though the methodology used was an open-ended survey rather than a social science case study per se. Both described the goal of this activity as identifying good practice. More detail on the process by which case studies were collected is found in appendix B.
The results of this joint activity were published in conjunction with the second International Conference on Climate Services (September 2012) and an extraordinary session of the World Meteorological Congress focused on the implementation of the Global Framework for Climate Services (October 2012), respectively. Though the WMO represents the official coordinating body of the world’s governmentally mandated meteorological and hydrological services, both CSP and WMO collections include submissions from public-, private-, and third-sector sources—perhaps reflecting the extent of collaboration between these different kinds of organizations.
While authors of both CSP and WMO studies used the same template to structure their responses, differences in the way the studies were collected, edited for publication, and categorized by the two organizations complicated the combining of the datasets. For instance, the responses ranged in length and quality across both collections, with the longest piece nearly 9000 words long and the shortest closer to 1000.
In addition, four climate services are included in both collections. As the goal of our analysis is not to contrast CSP and WMO documents but to use both collections to learn about the practice of climate service design and implementation, we analyzed these duplicates together, using information from both texts to create a more comprehensive view of the service in question. As a result, eight CSP/WMO documents were consolidated into four combined studies in our analysis.
Another complication stemmed from the fact that three responses challenged our understanding of “climate services” as defined earlier in this article. These were removed entirely from the study, though a more thorough treatment of these cases appears in the discussion section.
Finally, four studies collected by the WMO provide a general overview of the activities of a project or climate service provider without delving into the details of a particular service. These documents describe broad concepts and goals but do not provide enough detail to answer many of the questions we used in our analysis; as such, these too-broad responses were included in overarching analyses but omitted from analyses that addressed more specific questions. A full listing of the 101 climate services included in the analysis is found in appendix C.
b. Theoretical framing
Though the case study template followed a “what, how, what next” format (see appendix A), our method of analysis follows the climate service evaluation framework proposed by Vaughan and Dessai (2014). Designed to help guide future work on climate service evaluation, this framework identifies four factors drawn from the literature on the use of seasonal and long-term climate information that influence the benefits and relative success of climate services. These factors are described in the original article and summarized in brief below.
1) Problem identification and the decision-making context
The contexts in which climate services are provided naturally condition their success. Indeed, in some cases the strongest impediments to the adoption of climate information are contextual or institutional, rather than technical. Conversely, certain situations create opportunities for climate services to be more impactful than others (for more on this, see, e.g., Broad and Agrawala 2000; Millner and Washington 2011). Our analysis of the responses explored questions including where and in what sectors climate services are provided and whether or not such services are designed with specific users in mind.
2) Characteristics, tailoring, and dissemination of the climate information
The success of a climate service depends on the quality of the climate information that underpins it; it also depends on the extent to which that information is appropriately tailored to meet users’ needs and on the ability of users to access the information in a timely fashion (see, e.g., Furman et al. 2011; Harrison and Williams 2007). We analyzed studies to identify the time scale of the climate information provided, whether the services report information describing the “quality” of the information (i.e., data quality control, forecast verification), and any contextual information included in the service.
3) Governance, process, and structure of the service
Climate services require the development of structures that can facilitate interactions between dispersed institutional and administrative mechanisms, projects, and financial resources. In this context, the structure and governance of a climate service are important determinants of the effectiveness of the service itself (for more on this, see Broad et al. 2002; Lemos et al. 2012). Our analysis explored the scale on which services are provided, the kinds of actors involved in service provision, the mechanisms by which the service connects to users, and how the services are funded.
4) Socioeconomic value of the service
Assessing the effectiveness of a climate service should involve some assessment of its economic value and the value it has to individuals or to society writ large. Indeed, benefits from climate services may take many forms and may accrue to the individual, the collective, or the natural environment (for more on this, please see Clements et al. 2013). Though none of the documents in the current study identify the economic impact of their services, our analysis reports on those that discuss efforts to evaluate the services in question.
Our analysis used this framework to develop a series of questions (see Table 1) to guide our research regarding the topics addressed by the template (see appendix A).
Factors and key questions addressed by the study.
Studies were coded to facilitate the identification and aggregation of information specific to each question. While all documents responded to the same template, the fact that they were self-reported means there was some variation in both the topics covered and the level of detail. In some cases, information relevant to our research questions appeared in different places in the document. In other cases, requested information was not made explicit in the material; where this occurs, we report how many studies provided relevant information before describing the responses themselves.
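The coding-and-aggregation step described above amounts to a frequency count over coded responses. The following sketch illustrates the logic; the study identifiers, codes, and question (time scale of information) are hypothetical, included only to show how counts of reporting versus non-reporting studies and of the responses themselves could be tallied.

```python
from collections import Counter

# Hypothetical coded responses: each study mapped to the answers extracted
# for one research question (here, the time scale of information provided).
coded_studies = {
    "study_01": ["seasonal"],
    "study_02": ["seasonal", "weather"],
    "study_03": ["long-term"],
    "study_04": [],  # requested information not explicit in the material
}

# First report how many studies provided relevant information at all...
reporting = [s for s, codes in coded_studies.items() if codes]
print(f"{len(reporting)}/{len(coded_studies)} studies reported this information")

# ...then aggregate the responses themselves.
counts = Counter(code for codes in coded_studies.values() for code in codes)
print(counts.most_common())
```

A study assigned more than one code (as with roughly one-third of the sector classifications) simply contributes to more than one tally, which is why aggregated counts can exceed the number of studies.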
c. Caveats
While the CSP/WMO case study collection represents the most comprehensive detailing of climate service activities to date, it is important to remember that it is a “sample of opportunity” rather than one specifically designed for the purposes of this analysis. This brings with it several caveats, including the following:
We cannot assume that the breadth of the case study collection reflects a representative sample; since we have no way of knowing how many climate services currently exist, we are not capable of stating whether this sample is representative of that larger group.
We are not able to control for the role that selection bias may play on the case study collection. CSP case studies were collected primarily from CSP members, while the WMO solicited studies from its own network—including, but not limited to, its 191 member states—which is likely to have affected the number of case studies received from national meteorological or hydrological services (see, for instance, the discussion on African climate services in the results section).
We cannot independently verify information included in the studies. Since nearly all documents were reported by people involved in providing the service in question, some may (or may not) exaggerate accomplishments or selectively omit challenges. All documents are likely to highlight the topics the authors found most important, perhaps sacrificing topics of interest to our analysis.
While these caveats are important to consider, they do not impede our ability to draw meaningful insights from the collection as a whole—which, while imperfect, represents a sample of 101 climate service activities in 106 countries and involving 133 different organizations and is the most comprehensive source of information on climate services in the world to date.
3. Results
Our analysis of 101 studies engages specific questions around the four factors that influence the relative success of climate services.
a. Problem identification and decision-making context
1) Where are climate services provided?
The regional foci of responses are included in Table 2. While some regions are more represented than others, it is important to note the role that sampling methods may play in these numbers. For instance, the WMO solicited responses from each of its member states, so while there are 26 responses focused on Africa, this must be considered in light of the fact that 53 member states in Africa were asked to submit an example of their work. Conversely, 28 case studies were submitted from the area that constitutes WMO Region II (Asia), which comprises 35 member states. In some cases, international organizations submitted studies that cover more than one country or region; as a result, the sum of the number of regions studied exceeds the total number of studies themselves. Nine climate services are considered to be global in scope.
Regional focus of studies (n = 101).
2) What sectors do climate services engage?
As illustrated in Fig. 1, the most commonly engaged sectors include agriculture (24), water (15), disasters (13), and health (9). A description of the 24 studies that are classified as pertaining to “capacity development” is included in the discussion section. Roughly one-third of the case studies were assigned to more than one category—engaging, for instance, water and capacity building, or agriculture and ecosystems.
3) What kinds of services are implemented where?
To get a sense of whether some sectors are more actively engaged in certain locations, we compared regions and sectors, revealing that services engaging agriculture were more common in Africa and Asia than in Australia, Europe, or North America. Water-related case studies were most commonly drawn from Europe, including, for instance, analyses of the impact of climate change on the Neman and Danube Rivers (International Commission for the Protection of the Danube River 2012; Korneev 2012). More details are found in Fig. 2.
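The region-by-sector comparison is, in effect, a cross-tabulation of the coded studies. A minimal sketch of that tabulation follows; the (region, sector) pairs here are hypothetical, chosen only to illustrate the procedure, and a study spanning several regions or sectors would contribute one pair per combination.

```python
from collections import Counter

# Hypothetical (region, sector) tags for a handful of studies.
study_tags = [
    ("Africa", "agriculture"),
    ("Asia", "agriculture"),
    ("Europe", "water"),
    ("Europe", "water"),
    ("North America", "disasters"),
]

# Cross-tabulate regions against sectors by counting each pair.
crosstab = Counter(study_tags)
for (region, sector), n in sorted(crosstab.items()):
    print(f"{region:15s} {sector:12s} {n}")
```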
We also looked to see if services were more likely to be provided at certain scales in certain regions. The region including North America shows more subnational services than national services—perhaps reflecting services that cater to regions within the relatively large countries of the United States and Canada—while Europe has more national and regional services and only one subnational service (Table 3).
Scale of service by region (n = 72).
4) Do climate services target specific users?
To help explicate the extent to which existing climate services were targeted to specific problems and/or how these problems were understood, we analyzed the number of responses that mentioned specific users. We considered studies as targeted to users whether these groups included specific organizations or broad groups (e.g., “farmers,” “disaster risk managers”). We found that 50 of the 101 cases mentioned users in this way. Of this group, 48 discussed involving users in the development of the service in any capacity. Users include both individuals (e.g., specific farmers, humanitarian workers, disaster managers, extension agents) and organizations (planning ministries, railway companies); seven case studies also appeal to the general public (e.g., the Health Heat Warning System).
When possible, we also considered the decisions that the service was intended to inform. These vary considerably but include those related to farm management (e.g., planting, seed selection, harvest), disaster risk reduction (including preparedness and prevention), and transport (planning and infrastructure investment). Cases that directly mention users are roughly 5 times as likely to operate at subnational scales as at global scales. Twelve cases report operating at more than one scale.
5) What kinds of user organizations do services target?
The data allow us to describe the specific user organizations mentioned in the studies, the majority of which include government offices (36), humanitarian organizations (17), private companies (14), and researchers (10), among others. More information on user types is found in Fig. 3.
b. Characteristics, tailoring, and communication of climate information
1) What is the time scale of information provided?
For those studies that included this type of information (83/101), seasonal information was by far the most prevalent, though weather and long-term information were each used by nearly 30% of studies. More details are found in Table 4.
Time scale of climate information (n = 83).
2) Do climate services measure/report the quality of their information?
While the quality of information was not explicitly addressed by the case study template, we have attempted to characterize the extent to which case studies discussed the quality of information in two ways, finding that 10 case studies in the collection mention the verification of their forecasts, while 22 mention the quality control of data that goes into their analysis.
3) Do climate services solicit user input to design the services?
It was not possible to develop quantitative measures of information tailoring; we did, however, count 48 case studies that specifically discussed user engagement in the development of the service, soliciting input through workshops, consultation, or surveys.
4) How is information communicated to potential users?
For those that provided this information (66/101), websites were far and away the most prominent mode of information provision. More information is found in Fig. 4.
c. Governance, process, and structure of the service
1) On what scale is the service provided?
As illustrated in Fig. 5, more services operate on national scales (39) than on regional (23) or subnational (18) scales. Seven of the documents mention services that provide information on a global scale.
2) Who is involved in the service provision?
Based on an assumption that those motivated to contribute documents to this endeavor were involved in developing the service, we used the organizational affiliation of the authors of the submitted documents as a proxy for those organizations involved in the service provision. For the most part, this includes research institutes (52 out of 132 named organizations) and meteorological agencies (34 out of 132). Universities (20/132) and humanitarian organizations (11/132) also have a sizeable presence in the list of organizations that contributed to the collection.
3) How do climate services connect to users?
The connection between climate service users and providers is described in an earlier section on problem identification. Of course, this is also a governance issue, as climate services must create a context for sustained interaction between users and providers; as mentioned above, only 50 of the 101 studies mention specific connection with users. We are also able to characterize the extent to which the studies describe the processes by which providers stay in contact with users even after the service has launched. For instance, 14 case studies suggest they solicit ad hoc feedback from users, while another 10 mention consultation workshops that help the providers to understand how information is used.
4) How are climate services funded?
The case study collection provides a general sense of the funding models that currently support climate services. For instance, of the 42 case studies that describe the funding schemes that support the services in question, 25 are funded by the national government receiving the service; another 23 are donor funded on a project basis. Only 11 of the services in question describe their funding as “sustainable”; eight are able to operate on little or no funding, primarily by piecing together budgets associated with existing activities that benefit from climate services.
d. Socioeconomic value of the service
What evaluation methods are used?
The case study template specifically asked authors to describe mechanisms for evaluation. Of the 37 that do so, 10 describe forecast verification, a method of evaluating the quality of the forecast itself; another 10 describe consultation workshops by which climate service providers receive user feedback. Fourteen case studies say the climate service providers receive this feedback in an informal ad hoc fashion; another nine use surveys, generally without much supporting detail. Two case studies describe independent evaluators contracted to assess the extent to which the service contributed to project goals; several studies mention website statistics as a valuable source of information regarding how many people are using the service.
No studies mention efforts to economically value the climate service, though it seems likely that authors would have reported information on this type of evaluation were it available.
4. Discussion
Analysis of this unique dataset has allowed us to make several observations about the state of climate service implementation in 2012, including the extent to which certain practices were common to services around the world.
The dataset confirms, for instance, that climate services were provided in all regions and in a range of different sectors—though relatively more services engaged agriculture, water, disasters, and health than other sectors (e.g., energy, transportation). Services based on seasonal climate information were more common than those based on other types of information. Nearly half the climate services in question are targeted to government offices, though services were also targeted to the private (18%) and third sectors (22%) in roughly equal numbers. The majority of climate services are provided on websites.
The dataset also allows us to make several overarching observations about the state of the field—identifying the faint outline of what could be called a typical climate service (section 4a), while also revealing the relatively inchoate nature of the field (section 4b). Ways to improve this overview, and our analysis of it, are also considered (section 4c); these include topics that were not covered in the original studies but merit attention in future surveys of this kind.
a. A “typical” service
Analysis of the 101 climate services revealed the wide diversity of services currently being provided. Through an analysis of the frequency with which certain characteristics appear in the dataset, however, we can develop an outline of what might be considered “typical.” In this scheme, a “typical” climate service was provided by a research institute—frequently in conjunction with a national meteorological institute—and operated on a national scale to provide seasonal climate information (paired, perhaps, with weather forecasts and/or long-term climate information) to agricultural decision-makers online.
It is possible that our sample—and thus our characterization of a typical climate service—was influenced by the entities that requested the studies. For instance, given the direct communication with the World Meteorological Organization, national-level climate service providers may be somewhat overrepresented in our study. On the other hand, the fact that much of the world’s climate data are in the hands of national meteorological agencies ensures these actors will be heavily involved in the production, dissemination, and distribution of climate services for years to come (Overpeck et al. 2011).
Other aspects of this characterization of a “typical service” are consistent with the literature—including the relative focus on seasonal forecasting. The field of seasonal climate prediction is more advanced than that of decadal or long-term forecasting (though not more advanced than monitoring or observations) and there is also a relatively extensive literature on the use of seasonal forecasts for decision-making. In some cases, this literature has been used as an analog to understand information uptake, indicating the extent to which scholars and service providers have focused on the use of information at this scale, particularly following the 1997/98 El Niño (Adger et al. 2003; Lemos 2003).
The focus on agriculture also seems borne out by other types of information. Indeed, 63% of respondents to a recent survey on research priorities for climate services identified climate services for agriculture as the most developed when compared to other sectors, including water, health, financial services, and disaster risk management (Vaughan et al. 2016). This is likely due in part to the directness of the connection between climate variability and impacts on human welfare: whereas health-related climate impacts are frequently mediated by disease vectors (for instance, mosquitoes), the impacts of climate on agriculture track basic climatological factors, including rainfall and temperature. This direct connection has made it easier for people to observe, understand, and respond to climate fluctuations over centuries, leading to a more developed understanding of how climate information can link to decision-making.
In this context, the relatively well-developed field of agrometeorology also means that there is a trained cadre of professionals and extension officers able to interpret and employ climate information in agricultural decision-making (Sivakumar et al. 2000). While hydrometeorologists perform the same function in the water sector, there is no corollary for health or disaster managers. These experts bolster the capacity of the sector to absorb and act on climate information.
Our perspective regarding a “typical” climate service is based on a tabulation of the most common characteristics across a number of different categories, which serves as a convenient way to synthesize the very wide range of combinations of different characteristics found in the collection. Indeed, our analysis allows us to merge this varied data in a way that establishes a signpost regarding the overall shape of the field of climate services, as it existed in 2012, which can serve as a point of comparison as the field evolves. Several examples of this archetype are described below.
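The tabulation behind this archetype is simply the modal value within each coded category. A sketch of that computation follows; the attribute names and values are hypothetical stand-ins for the study's coding categories, included only to illustrate the approach.

```python
from collections import Counter

# Hypothetical coded attributes, one value per category per study.
studies = [
    {"provider": "research institute", "scale": "national", "timescale": "seasonal"},
    {"provider": "met agency", "scale": "national", "timescale": "seasonal"},
    {"provider": "research institute", "scale": "subnational", "timescale": "weather"},
]

# The "typical" service takes the most frequent value in each category.
categories = ["provider", "scale", "timescale"]
typical = {
    cat: Counter(s[cat] for s in studies).most_common(1)[0][0]
    for cat in categories
}
print(typical)
```

Note that the modal profile assembled this way need not correspond to any single study in the collection, which is why it serves as a signpost for the field rather than a description of one service.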
In Ethiopia, for instance, the National Meteorological Agency uses the Enhancing National Climate Services (ENACTS) initiative to integrate local observations and global monitoring data, and provides information to agricultural and other users through online map rooms (Fig. 6) (Dinku and Sharoff 2012). Another example is found in Chile’s Agroclimate Outlook (Fig. 7), a monthly bulletin produced by the Dirección Meteorológica de Chile (DMC) and made freely available on the organization’s website. It contains information about the seasonal climate conditions that are most likely to prevail during the next three months (Quintana et al. 2012). Both of these cases represent the model of a “typical” service as identified by this study.
b. An emerging field
While the studies in question more frequently target agricultural users than users in other sectors, our analysis makes it clear that as of 2012, the field was still emerging—marked by contested definitions, an emphasis on capacity development, uneven progress toward coproduction, uncertain funding streams, and a lack of evaluation activities.
1) Contested definitions
One indication of the emergent nature of the field in 2012 is the fact that the World Meteorological Organization used a rather broad scope for incorporating studies in their own collection, even to the point of including several studies that do not meet most traditional definitions of climate services. Indeed, two of these studies describe new methods to collect information about the climate system, rather than efforts to tailor that information to specific decisions. A third describes a low-carbon-growth service that helps businesses understand how they may reduce their greenhouse gas emissions.
The services in these studies are not just very different from each other; they are also clearly at odds with the WMO definition of climate services, expressed on the website in this way: “Climate services provide climate information in a way that assists decision-making by individuals and organizations” (www.gfcs-climate.org). That these services were included in the WMO case study collection seems to reflect the contested nature of a term whose meaning was still being debated; as the field has developed, it seems unlikely such services would be included if this kind of activity were conducted today.
It is curious as well to note the inclusion of 25 services that are based, at least in part, on weather information. As information at this time scale is not traditionally considered to be part of a “climate” service, it may reflect the extent of the studies collected from operational weather service providers who were engaged, more or less, in business-as-usual activities. Conversely, this prevalence of services based on weather information may reflect the beginning of an evolution toward seamless services, providing information at time scales from days to decades.
Though a number of organizations now offer official definitions of the term “climate services” (e.g., Street et al. 2015), it is likely that our general sense of what counts as a climate service, and what does not, will remain fluid for some time (Hulme 2009).
2) Emphasis on capacity development
Another indication of the emerging nature of climate services in 2012 is the relative emphasis on capacity development within the dataset.
This focus squares well with the priorities of the Global Framework for Climate Services, which explicitly includes capacity development as one of the “five pillars” of the framework. As articulated in the Capacity Development Annex to the GFCS Implementation Plan, the GFCS specifically seeks to develop the human resources needed to advance the other four pillars of the framework, which include observations and monitoring; research, modeling, and prediction; climate services information system; and the user interface platform (World Meteorological Organization 2014). The GFCS also strives to bolster the basic requirements (including national policies/legislation, institutions, infrastructure, and personnel) needed to enable GFCS-related activities to occur.
In this context, it is interesting to note that the 24 documents in this dataset that deal with capacity development fall roughly into three categories: those that seek to build capacity by training individuals, mostly with respect to the analysis or use of climate information; those that make climate data and/or information available to researchers and decision-makers; and those that seek to build and/or strengthen the institutions that produce or use climate services. These do not necessarily map well to the five pillars of the GFCS, meaning that some GFCS-priority topics (e.g., observations and monitoring, and some aspects of the user interface platform) were not being addressed. Reviewing how capacity-building activities have evolved since 2012 will help to gauge the extent to which these efforts have fallen in line with GFCS priorities.
3) Uneven progress toward coproduction
As noted above, a growing literature has sprung up around climate services, particularly involving the use of seasonal forecasting. The literature seems to converge around the need to engage users in the coproduction of climate services in order to ensure that products are useful, useable, and used (Lemos et al. 2012; McNie 2007; Roncoli et al. 2009; Ziervogel and Downing 2004). While the importance of coproduction is certainly reflected in the collected documents, the interpretation of this term is relatively irregular.
There are, for instance, several case studies that detail extensive efforts to communicate with users regarding climate information needs. One such case study describes the efforts of the Australian Bureau of Meteorology (BoM) to solicit and incorporate user feedback into the presentation and dissemination of its seasonal climate outlook. This process—which included targeted interviews, a survey, focus groups, and user testing—provided the BoM with a better understanding of how its users understand and employ seasonal climate information; it also afforded users the opportunity to advance their understanding of and confidence in the seasonal climate outlook itself (Boulton et al. 2012).
While this example seems to reflect good practice as described in the literature on user engagement (e.g., Lemos and Morehouse 2005; Steynor et al. 2016), more than half the case studies in the collection did not mention specific users, or the process by which those users were incorporated into the development of the service. This seems to reflect rather uneven progress toward the coproduction of climate services, with some services exemplifying demand-driven principles and many others retaining the “loading dock” approach (Cash et al. 2006).
4) Uncertain funding streams
Another observation can be made regarding the funding streams on which climate services depend. While the documents describe funding to support climate services as coming primarily from national governments (25) and donor organizations (23), only 11 of the case studies describe the funding that supports the service as sustainable. Other services relied on project funding and sometimes had to scramble for resources to support continued operations.
This was true of even relatively long-running services, including the West African Regional Climate Outlook Forum (PRESAO), which began in 1998 but had not yet been institutionalized with funding from regional budgets. The PRESAO case study in particular makes clear that financial sustainability will rely heavily on the development of documents that illustrate the economic value of this sort of climate service to policymakers and donors (Kadi 2012). This was echoed by those who saw sustainable funding as one of the main challenges to the Regional Climate Outlook Forum process (Ogallo et al. 2008).
5) Dearth of evaluation activities
No case studies explored the economic value of their service or mentioned attempts to do so, reflecting logistical and theoretical challenges to economic valuation that have been discussed elsewhere (Anderson et al. 2015; Clements et al. 2013; Lazo et al. 2009). Those studies that engaged in evaluation relied mostly on ad hoc feedback from user groups with whom they were in regular contact and/or on slightly more formal processes, including surveys and user workshops. These processes provide the climate service provider with a better understanding of the users’ needs and capabilities, in the interest of coproduction, but do not advance the work of assessing priorities or informing investment decisions. This lack of evaluation represents a major gap in practice at the time the case studies were collected, one that is in some cases exacerbated by limited engagement with users.
c. Improving upon our bird’s-eye view
We used the collected documents to provide a bird’s-eye view of the state of the field of climate services in 2012. But while the analysis offers a reasonable snapshot of the field at that time, it is important to note how difficult it is to use these cases to identify “good practice” in the way that those who solicited the studies may have liked. Indeed, because these studies are self-reported, primarily from the point of view of the climate service provider, it is relatively hard to get a sense of which services were more or less successful, or why; authors were not incentivized to be forthcoming regarding challenges or failures, and there is little objective evaluation to refer to. Furthermore, it is difficult to use the studies to understand the users’ experience of the services, or the extent to which individual climate services and/or climate services in general are able to improve social and economic well-being.
This is unfortunate given that the documents were dubbed “case studies” by the coordinating organizations—and case study research is uniquely suited to addressing these kinds of detailed questions. Indeed, the case study approach can be particularly useful in documenting specific practice and experiences; in identifying causal links between interventions and outcomes; and in enlightening situations in which an intervention has no clear, or clearly defined, set of outcomes (Yin 2014). Case studies are also valuable in developing and elaborating theory, which creates opportunities for the sort of analytic generalization that could shed empirical light on current hunches regarding what constitutes good practice in climate services development and delivery (Ford et al. 2010).
That the 2012 collection does not lend itself to this kind of analytic generalization calls attention to the need to shift focus regarding the development of such case studies moving forward. In setting priorities for further efforts, two items deserve particular attention: 1) a focus on analysis in addition to sampling and 2) a focus on efforts to evaluate the relative contribution of specific climate services. Each of these items is described in more detail below.
1) Sampling versus analysis
A primary goal of the 2012 data collection activity was to capture the breadth of climate services offered at the time—that is, to provide a bird’s-eye view. Since the effort coincided roughly with the launch of the Climate Services Partnership and the implementation of the Global Framework for Climate Services, this kind of sampling activity was of interest to the sponsoring organizations, both of which were motivated to document and learn about contemporary practice to support larger efforts to advocate for climate service development around the world.
Capturing the breadth of activity in this field is still a worthy goal, of course, though it does not necessarily have to be carried out through case studies. Indeed, the GFCS Compendium of Projects, which lists GFCS projects that meet certain basic criteria, makes a good start in sampling current efforts. To the extent that it is able to facilitate easy monitoring of key indicators (e.g., target sector, time scale of information, provision method, user groups), this kind of sample could allow researchers, practitioners, and the donor community to maintain a general overview of the climate services community as it evolves over time.1 Similar efforts are organized by the European Joint Programming Initiative “Connecting Climate Knowledge for Europe” (Monfray and Bley 2016) where the mapping of climate service providers has been undertaken for a few European countries [e.g., Manez et al. (2014) for Germany and Goransson and Rummukainen (2014) for the Netherlands and Sweden].
This sort of overview can also fuel the development of hypotheses that can be investigated through the production of case studies that are exploratory and/or explanatory in nature—using such studies to develop and hone hypotheses for further inquiry, and to explain the causal links between specific interventions and ultimate outcomes. Building on existing work (Hellmuth et al. 2007, 2009, 2011), this sort of effort would employ multiple-case research methods that could advance the identification and refinement of principles, improving our understanding of the forces and factors that limit the applicability of such principles in certain situations.
To this end, case study researchers will need to greatly expand the range of topics they explore—moving beyond efforts to document climate services in specific regions or sectors to engage with thornier issues (e.g., ethics, institutional arrangements, sustainability). Case study authors will also need to pay careful attention to concerns of validity and reliability in order to avoid common criticisms of case studies as anecdotes from which it is impossible to generalize (Bennett and Elman 2006; Flyvbjerg 2006). Case study authors may also align their analyses with the questions explored and the methodologies used by other authors. In this sense, the field will begin to develop a host of case studies amenable to meta-analysis, allowing us to learn more about the implementation of climate services in different contexts.
The development of a priority list of these hypotheses and methodologies is something that climate services coordinating bodies may wish to take up. At the very least, the current analysis suggests that topics regarding capacity development, coproduction, funding, and evaluation should be included.
2) Case studies and climate service evaluation
The case study collection highlights several challenges related to evaluation. First, the fact that the case studies were all self-reported makes it very difficult to use them to impartially assess the services in question. At the same time, the content of the case studies underscores just how few climate services are engaged in any kind of formal evaluation—relying, at best, on informal communication with users to gather feedback on information needs as well as on current and planned activities.
Of course, this reflects a challenge of resources, as evaluative activities require dedicated efforts. It is clear, however, that the climate services community will need to prioritize the development of formal monitoring and evaluation protocols, and the involvement of independent evaluators. Without a strong push to improve evaluation, the community will struggle to justify its own efforts to improve service development and delivery; it will also be challenged in attracting and sustaining funding from public- and private-sector actors interested in getting the most out of their investment.
This is especially true with regard to economic valuation, which can describe the return on investment from climate services in different contexts, and regarding the extent of uptake and use of climate services. At the same time, answering questions regarding good practice will involve assessing the extent to which services are operating effectively along all aspects of the value chain. Tying the evaluation of information use and/or economic outcomes to long-term monitoring and evaluation activities can help to illustrate the relative contribution of certain practices. Indeed, while climate service evaluators should avail themselves of the full suite of evaluation methodologies, the role of case studies in evaluation bears special mention in this article. In contrast to survey or quasi-experimental methods, case studies are able to capture the complexity of services, and of the contexts in which they operate, making them particularly well suited to identify strengths and weaknesses, or to explain previously identified causal links, in this emerging field (Rogers 2000). Case studies are also useful in providing initial feedback in cases in which climate services take years to develop or in which the impacts of information use are expected to develop over long periods of time.
5. Summary and conclusions
This article analyzes a unique dataset comprising the self-reported descriptions of 101 climate service activities, collected separately but in a coordinated fashion by the Climate Services Partnership and the World Meteorological Organization, in 2012.
The dataset provides a bird’s-eye view of the emerging field of climate services as it was in 2012, confirming that climate services were provided in all regions and in a range of different sectors—and that services that engaged agriculture, water, disasters, and health were relatively more common than those that engaged other sectors (e.g., energy, transportation). Services based on seasonal climate information were found to be significantly more common than those based on other types of information, although a range of other time scales (historical, monitoring, weather, decadal, long-term) were also included in the study. While nearly half the climate services in question were targeted to government offices, services were also targeted to the private (18%) and third sectors (22%) in relatively equal numbers.
The dataset reflects a diversity of climate services, but it also allows for the identification of certain attributes that were more common than others. For instance, the most common type of service reported involved seasonal climate information provided by national meteorological services, in conjunction with research institutes, to agricultural actors over the Internet. A large number of case studies also dealt with capacity building, whether through individual education, the development of information portals, or the bolstering of institutions involved in the production and/or use of climate services.
The prevalence of case studies focused on capacity building illustrates the extent to which climate services were still an emerging field in 2012; other factors that seem to confirm this characterization include the fact that several case studies did not match the definitions of climate services provided by the World Meteorological Organization, and the fact that many case studies did not discuss specific users (Lourenço et al. 2016) but rather focused on the supply-driven provision of climate information. In addition, very few climate services maintained sustainable funding streams; even fewer evaluated their progress.
While a number of caveats limit the utility of the 2012 dataset, it remains the most comprehensive source of information on climate services in the world to date and is thus useful in providing a snapshot of the state of the field at the time the GFCS was implemented. It will be important to continue to survey the field of climate services with respect to these factors in order to develop a picture of how the field is changing—particularly as new methods, new information, and new investments change the way that climate services are designed, developed, and delivered. Other topics, including methods to diagnose climate information needs and prioritize service development, should also be monitored as the field develops.
Importantly, while the caveats mentioned above do not impede our ability to draw meaningful conclusions from the case study collection as a whole, they highlight the challenge inherent to efforts to keep an account of progress in this rapidly changing field. Efforts to sample climate services, such as the GFCS Compendium of Projects, will need to be expanded, and kept up to date, if researchers are to be able to track changes to the climate service community as a whole and keep tabs on the extent to which such services contribute to society’s efforts to adapt to climate variability and change. Other analyses—including, for instance, Harjanne’s (2017) study of the institutional logics of climate services as derived from articles published in the World Meteorological Organization Bulletin—can offer additional perspective on the changing field.
It is important to note as well that while the current dataset is useful in providing a historical overview of the field in 2012, it is less useful in providing a sense of good practice. To advance this discussion, case studies will need to move past a simple accounting of practice to explore and explain current strengths and weaknesses of climate services from a more theoretical perspective. To this end, case studies should develop hypotheses for future inquiry and explain causal links between particular interventions and ultimate outcomes. Case studies also have a key role to play in climate service evaluation, complementing experimental and quasi-experimental methods, and supplementing them in cases in which such methods may be inappropriate or premature.
Acknowledgments
CV served as the program manager of the Climate Services Partnership in 2012 when the documents reviewed for this study were produced. In this role, CV was involved in collecting and editing case studies for the CSP, though she was not involved in designing the survey; she was supported in this work by the Climate Change Resilient Development project of USAID. Suraje Dessai was supported by the European Research Council (ERC) under the European Union’s Seventh Framework Programme for Research (FP7/2007–2013), ERC Grant Agreement 284369, the EUPORIAS project, Grant Agreement 308291, and the U.K. ESRC Centre for Climate Change Economics and Policy (ES/K006576/1). The authors acknowledge Meaghan Daly, who provided useful comments on an earlier draft.
APPENDIX A
Case Study Template: Global Framework for Climate Services and Climate Services Partnership—Case Study Solicitation, January 2012
a. Introduction
The Climate Services Partnership (CSP) was formed at the first International Conference on Climate Services (ICCS) to advance climate services around the world. In doing so, the CSP supports the Global Framework for Climate Services (GFCS), a formal international system that facilitates the coordinated support of climate services worldwide.
In an effort to advance common goals, the GFCS and the CSP are soliciting case studies that document experiences in the provision, development, and application of climate services. Case studies should detail the perspective of users of climate information as well as that of providers of such information. They should highlight successful strategies, detail challenges, and share lessons learned.
Case studies will form an integral part of the GFCS implementation plan. The plan, currently being drafted by over 100 experts worldwide, will be presented before an Extraordinary Congress of the World Meteorological Organization (WMO) in October 2012; it will guide the activities of the GFCS in the years ahead. Case studies provided by WMO Members will be collected into a single document and distributed at the October 2012 Extraordinary Congress as well.
The Climate Services Partnership will distribute case studies through an online knowledge capture portal. In making case studies available to the broader community, the CSP hopes to offer perspective on approaches that can be adopted or adapted by other interested parties.
Though each case study will of course be unique, authors should attempt to answer as many of the questions posed by the case study guidelines as possible. Questions, comments, or suggestions should be directed to
Filipe Lúcio
Global Framework for Climate Services
WMO
Catherine Vaughan
Climate Services Partnership
b. GFCS/CSP case study guidelines
Please describe your climate service activity in the following terms.
1) What?
Briefly describe the service being provided. What socioeconomic issue/problem does your project/service address? What audience does it target?
Briefly describe the climate and contextual information that is incorporated into the service.
What kinds of climate information are used? What are the sources of this information (National Meteorological Service/other)? How is information accessed (including, for instance, format, cost)?
Is information regarding socioeconomic factors a part of the service? If so, what is the source of this information and how is it accessed?
Is the information tailored to specific users? Who is responsible for tailoring information (user/provider/joint team)?
How is climate information used in decision-making?
2) How?
Processes and mechanisms
1) Stakeholder identification: Who are the stakeholders involved in the process and how were they identified? How did the group decide to focus on this issue? Who was involved in making this decision?
2) Stakeholder involvement: Please describe the full chain or network associated with your activity and any mechanisms to facilitate the dissemination of information. Who do you give information or advice to? Who gives information or advice to you? Describe the channels used to access climate information products and services.
3) Funding mechanisms: Briefly describe the program’s business model. Is the program supported by donor, government, or private-sector funding, or by some combination thereof? Are there challenges to financial sustainability? Is it possible to scale up this project? What investments have been made in infrastructure?
4) Implementation: Does the service involve one or more institutions? If more than one institution is involved, what are their roles in the management of the project? How are decisions made?
5) Evaluation: Is there a process by which the project/service is evaluated? Are there mechanisms to understand the value of the decisions informed by the service? Are there processes for soliciting user feedback and adjusting the service in response? Are there concrete examples of this activity facilitating adaptation to climate change?
Capacities
1) Present: What human, infrastructural, institutional, and procedural capacities were necessary to build your service? Please describe the level of climate expertise in user organizations and the extent to which these organizations rely on external support for interpretation of information.
2) Lacking: What capacities were lacking and how were they overcome (e.g., joint projects, interchange of personnel)?
a. Describe a challenge you faced in matching information products or services available to needs.
b. Describe any innovations that were put in place to meet needs.
3) What next?
What are goals for the future of the project/service?
Could your program be scaled up? Could lessons learned be transferred to other sectors and/or locations? What did and did not work?
What are the main challenges moving forward?
4) Principles of the GFCS
Authors are also encouraged to indicate which, if any, of the principles of the Global Framework for Climate Services (listed below) are reflected in their service and how they have been included. More on the background, history, and ongoing activities of the GFCS can be found at www.wmo.int/gfcs.
Principle 1: All countries will benefit, but priority shall go to building the capacity of climate-vulnerable developing countries.
Principle 2: The primary goal of the framework will be to ensure greater availability of, access to, and use of climate services for all countries.
Principle 3: Framework activities will address three geographic domains: global, regional, and national.
Principle 4: Operational climate services will be the core element of the framework.
Principle 5: Climate information is primarily an international public good provided by governments, which will have a central role in its management through the framework.
Principle 6: The framework will promote the free and open exchange of climate-relevant observational data while respecting national and international data policies.
Principle 7: The role of the framework will be to facilitate and strengthen, not to duplicate.
Principle 8: The framework will be built through user-provider partnerships that include all stakeholders.
APPENDIX B
Further Detail Regarding Case Study Collection Process
The leadership of the GFCS and the CSP agreed to engage in this case study activity in the last quarter of 2011, developing a shared template and sending the template to their respective networks, in order to collect the case studies separately.
The WMO reached out to all the national meteorological and hydrological services that make up its membership, but also collected case studies from universities, private companies, and other public-sector entities. The CSP—which had itself just launched at the first International Conference on Climate Services (October 2011)—reached out to its own smaller and more informal network.
When made aware that the same person and/or organization had been contacted by both organizations, the leadership of the GFCS and CSP coordinated regarding the overlap; in some cases, the leadership was not aware of the overlap, resulting in several duplicates between both collections.
There were several differences in the way that the studies were edited for publication. For instance, the CSP case studies are in general longer than the GFCS ones, which reflects the fact that the GFCS documents were collected into a hardcover publication, while the CSP documents were published online. The structure of the documents is also slightly different, as the CSP editors pressed authors to complete the entire template, while GFCS editors accepted documents that followed the template more loosely.
The WMO categorized its case studies as follows: agriculture; water; health; disaster risk reduction; energy; ecosystems; transport and infrastructure; urban issues; communities; and capacity development. The CSP categorized its case studies as follows: agriculture; decision support; disaster risk reduction; ecosystems; education; energy; financial services; food security; health; tourism; urban issues; and water.
APPENDIX C
Case Studies
A complete list of case studies included in the analysis is shown in Table C1.
Table C1. Full list of studies included in the review.
REFERENCES
Adger, N. W., S. Huq, K. Brown, D. Conway, and M. Hulme, 2003: Adaptation to climate change in the developing world. Progr. Dev. Stud., 3 (3), 179–195.
Anderson, G., and Coauthors, 2015: Valuing weather and climate: Economic assessment of meteorological and hydrological services. WMO Doc. 1153, 308 pp., http://www.wmo.int/pages/prog/amp/pwsp/documents/wmo_1153_en.pdf.
Asrar, G. R., V. Ryabinin, and V. Detemmerman, 2012: Climate science and services: Providing climate information for adaptation, sustainable development and risk management. Curr. Opin. Environ. Sustainability, 4, 88–100, https://doi.org/10.1016/j.cosust.2012.01.003.
Bennett, A., and C. Elman, 2006: Qualitative research: Recent developments in case study methods. Annu. Rev. Polit. Sci., 9, 455–476, https://doi.org/10.1146/annurev.polisci.8.082103.104918.
Bhowmik, A. K., and A. C. Costa, 2014: Data scarcity or low representativeness?: What hinders accuracy and precision of spatial interpolation of climate data? Proc. AGILE 2014 Int. Conf. on Geographic Information Science, Castellón de la Plana, Spain, Association of Geographic Information Laboratories in Europe, 3–6, http://repositori.uji.es/xmlui/handle/10234/99547.
Boulton, E., A. Watkins, and D. Perry, 2012: A user-centered design approach to the Seasonal Climate Outlook. Climate Exchange, Tudor Rose Publications and WMO, 230–233.
Broad, K., and S. Agrawala, 2000: The Ethiopia food crisis—Uses and limits of climate forecasts. Science, 289, 1693–1694, https://doi.org/10.1126/science.289.5485.1693.
Broad, K., A. S. P. Pfaff, and M. H. Glantz, 2002: Effective and equitable dissemination of seasonal-to-interannual climate forecasts: Policy implications from the Peruvian fishery during El Niño. Climatic Change, 54, 415–438, https://doi.org/10.1023/A:1016164706290.
Brunet, M., and P. Jones, 2011: Data rescue initiatives: Bringing historical climate data into the 21st century. Climate Res., 47, 29–40, https://doi.org/10.3354/cr00960.
Cash, D. W., J. C. Borck, and A. C. Patt, 2006: Countering the loading-dock approach to linking science and decision making: Comparative analysis of El Niño–Southern Oscillation (ENSO) forecasting systems. Sci. Technol. Hum. Values, 31, 465–494, https://doi.org/10.1177/0162243906287547.
Clements, J., A. Ray, and G. Anderson, 2013: The value of climate services across economic and public sectors: A review of relevant literature. USAID, 54 pp., http://www.climate-services.org/wp-content/uploads/2015/09/CCRD-Climate-Services-Value-Report_FINAL.pdf.
Dilling, L., and M. C. Lemos, 2011: Creating usable science: Opportunities and constraints for climate knowledge use and their implications for science policy. Global Environ. Change, 21, 680–689, https://doi.org/10.1016/j.gloenvcha.2010.11.006.
Dinku, T., and J. Sharoff, 2012: ENACTS Ethiopia: Partnerships for improving climate data availability, accessibility and utility. Climate Services Partnership, 7 pp., http://www.climate-services.org/wp-content/uploads/2015/09/ENACTS_Case_Study.pdf.
Flyvbjerg, B., 2006: Five misunderstandings about case-study research. Qual. Inq., 12, 219–245, https://doi.org/10.1177/1077800405284363.
Ford, J. D., E. C. H. Keskitalo, T. Smith, T. Pearce, L. Berrang-Ford, F. Duerden, and B. Smit, 2010: Case study and analogue methodologies in climate change vulnerability research. Wiley Interdiscip. Rev.: Climate Change, 1, 374–392, https://doi.org/10.1002/wcc.48.
Furman, C., C. Roncoli, T. Crane, and G. Hoogenboom, 2011: Beyond the “fit”: Introducing climate forecasts among organic farmers in Georgia (United States). Climatic Change, 109, 791–799, https://doi.org/10.1007/s10584-011-0238-y.
Georgeson, L., M. Maslin, and M. Poessinouw, 2017: Global disparity in the supply of commercial weather and climate information services. Sci. Adv., 3, e1602632, https://doi.org/10.1126/sciadv.1602632.
Girvetz, E. H., E. Maurer, P. Duffy, A. Ruesch, B. Thrasher, and C. Zganjar, 2013: Making climate data relevant to decision making: The important details of spatial and temporal downscaling. World Bank, 38 pp., http://sdwebx.worldbank.org/climateportal/doc/Global_Daily_Downscaled_Climate_Data_Guidance_Note.pdf.
Goddard, L., and Coauthors, 2013: A verification framework for interannual-to-decadal predictions experiments. Climate Dyn., 40, 245–272, https://doi.org/10.1007/s00382-012-1481-2.
Goransson, T., and M. Rummukainen, 2014: Climate services: Mapping of providers and purveyors in the Netherlands and Sweden. Lund University CEC Rep. 1, 101 pp., https://www.cec.lu.se/sites/cec.lu.se/files/20140623_report_climate_services_final_small.pdf.
Harjanne, A., 2017: Servitizing climate science—Institutional analysis of climate services discourse and its implications. Global Environ. Change, 46, 1–16, https://doi.org/10.1016/j.gloenvcha.2017.06.008.
Harrison, M., and J. I. M. B. Williams, 2007: Communicating seasonal forecasts. Seasonal Climate: Forecasting and Managing Risk, A. Troccoli et al., Eds., NATO Science Series, Springer Academic, 299–322.
Hellmuth, M. E., A. Moorhead, and J. Williams, 2007: Climate risk management in Africa: Learning from practice. Climate and Society Series 1, International Research Institute for Climate and Society, 116 pp., https://iri.columbia.edu/wp-content/uploads/2013/07/Climate-and-Society-No1_en.pdf.
Hellmuth, M. E., D. E. Osgood, U. Hess, A. Moorhead, and H. Bhojwani, 2009: Index insurance and climate risk: Prospects for development and disaster management. Climate and Society Series 2, International Research Institute for Climate and Society, 122 pp., https://iri.columbia.edu/wp-content/uploads/2013/07/Climate-and-Society-Issue-Number-2.pdf.
Hellmuth, M. E., S. J. Mason, C. Vaughan, M. K. van Aalst, and R. Choularton, 2011: A better climate for disaster risk management. Climate and Society Series 3, International Research Institute for Climate and Society, 133 pp., https://iri.columbia.edu/wp-content/uploads/2013/07/CSP3_Final.pdf.
Hewitt, C., S. Mason, and D. Walland, 2012: The Global Framework for Climate Services. Nat. Climate Change, 2, 831–832, https://doi.org/10.1038/nclimate1745.
Hulme, M., 2009: Why We Disagree about Climate Change: Understanding Controversy, Inaction and Opportunity. Cambridge University Press, 428 pp.
Hyvärinen, O., L. Mtilatila, A. Venäläinen, and H. Gregow, 2015: The verification of seasonal precipitation forecasts for early warning in Zambia and Malawi. Adv. Sci. Res., 12, 31–36, https://doi.org/10.5194/asr-12-31-2015.
International Commission for the Protection of the Danube River, 2012: The Danube River Basin climate adaptation strategy. Climate Exchange, Tudor Rose Publications and WMO, 95–98.
Kadi, M., 2012: Climate information and development: Regional Climate Outlook Forums in Africa. Climate Services Partnership, 5 pp., http://www.climate-services.org/wp-content/uploads/2015/09/RCOF_Africa_Case_Study.pdf.
Korneev, V., 2012: Adapting to climate change in the Nieman River basin. Climate Exchange, Tudor Rose Publications and WMO, 92–95.
Lazo, J. K., R. S. Raucher, T. J. Teisberg, C. J. Wagner, and R. F. Weiher, 2009: Primer on economics for national meteorological and hydrological services. WMO, 47 pp., https://www.wmo.int/pages/prog/amp/pwsp/documents/Primer_on_Economics_for_NMHS_2008_01.pdf.
Lemos, M. C., 2003: A tale of two policies: The politics of climate forecasting and drought relief in Ceará, Brazil. Policy Sci., 36, 101–123, https://doi.org/10.1023/A:1024893532329.
Lemos, M. C., and B. J. Morehouse, 2005: The co-production of science and policy in integrated climate assessments. Global Environ. Change, 15, 57–68, https://doi.org/10.1016/j.gloenvcha.2004.09.004.
Lemos, M. C., C. J. Kirchhoff, and V. Ramprasad, 2012: Narrowing the climate information usability gap. Nat. Climate Change, 2, 789–794, https://doi.org/10.1038/nclimate1614.
Lorenz, S., S. Dessai, J. Paavola, and P. M. Forster, 2015: The communication of physical science uncertainty in European National Adaptation Strategies. Climatic Change, 132, 143–155, https://doi.org/10.1007/s10584-013-0809-1.
Lourenço, T. C., R. Swart, H. Goosen, and R. Street, 2016: The rise of demand-driven climate services. Nat. Climate Change, 6, 13–14, https://doi.org/10.1038/nclimate2836.
Manez, M., T. Zolch, and J. Cortekar, 2014: Mapping of climate service providers—Theoretical foundation and empirical results: A German case study. Climate Service Center Rep. 15, 54 pp., http://www.climate-service-center.de/imperia/md/content/csc/csc_report15.pdf.
Marx, S. M., E. U. Weber, B. S. Orlove, A. Leiserowitz, D. H. Krantz, C. Roncoli, and J. Phillips, 2007: Communication and mental processes: Experiential and analytic processing of uncertain climate information. Global Environ. Change, 17, 47–58, https://doi.org/10.1016/j.gloenvcha.2006.10.004.
Mason, S. J., and S. Chidzambwa, 2009: Position paper: Verification of RCOF. IRI Tech. Rep. 09-02, 26 pp., https://doi.org/10.7916/D85T3SB0.
McNie, E. C., 2007: Reconciling the supply of scientific information with user demands: An analysis of the problem and review of the literature. Environ. Sci. Policy, 10, 17–38, https://doi.org/10.1016/j.envsci.2006.10.004.
Millner, A., and R. Washington, 2011: What determines the perceived value of seasonal climate forecasts? A theoretical analysis. Global Environ. Change, 21, 209–218, https://doi.org/10.1016/j.gloenvcha.2010.08.001.
Monfray, P., and D. Bley, 2016: JPI Climate: A key player in advancing Climate Services in Europe. Climate Serv., 4, 61–64, https://doi.org/10.1016/j.cliser.2016.11.003.
Ogallo, L., P. Bessemoulin, J.-P. Ceron, S. Mason, and S. J. Connor, 2008: Adapting to climate variability and change: The Climate Outlook Forum process. WMO Bull., 57, 93–102.
Overpeck, J. T., G. A. Meehl, S. Bony, and D. R. Easterling, 2011: Climate data challenges in the 21st century. Science, 331, 700–702, https://doi.org/10.1126/science.1197869.
Quintana, J., B. Piuzzi, and J. F. Carrasco, 2012: Seasonal climate prediction in Chile: The agroclimate outlook. Climate Exchange, Tudor Rose Publications and WMO, 50–52.
Rahman, T., J. Buizer, and Z. Guido, 2016: The economic impact of seasonal drought forecast information service in Jamaica 2014–15. IRAP Working Paper, 62 pp., https://irapclimate.org/wp-content/uploads/project_documents/Economic-Impact-of-Drought-Information_Cover.pdf.
Rogers, P., 2000: Program theory: Not whether programs work but how they work. Evaluation Models: Viewpoints on Educational and Human Services Evaluation, D. L. Stufflebeam, G. F. Madaus, and T. Kellaghan, Eds., Kluwer Academic, 208–232.
Roncoli, C., and Coauthors, 2009: From accessing to assessing forecasts: An end-to-end study of participatory climate forecast dissemination in Burkina Faso (West Africa). Climatic Change, 92, 433–460, https://doi.org/10.1007/s10584-008-9445-6.
Sivakumar, M. V., R. Gommes, and W. Baier, 2000: Agrometeorology and sustainable agriculture. Agric. For. Meteor., 103, 11–26, https://doi.org/10.1016/S0168-1923(00)00115-5.
Steynor, A., J. Padgham, C. Jack, B. Hewitson, and C. Lennard, 2016: Co-exploratory climate risk workshops: Experiences from urban Africa. Climate Risk Manage., 13, 95–102, https://doi.org/10.1016/j.crm.2016.03.001.
Street, R., and Coauthors, 2015: A European research and innovation roadmap for climate services. European Commission Directorate-General for Research and Innovation, 56 pp., https://publications.europa.eu/en/publication-detail/-/publication/73d73b26-4a3c-4c55-bd50-54fd22752a39.
Tang, S., and S. Dessai, 2012: Usable science? The UK Climate Projections 2009 and decision support for adaptation planning. Wea. Climate Soc., 4, 300–313, https://doi.org/10.1175/WCAS-D-12-00028.1.
Taylor, A. L., S. Dessai, and W. Bruine de Bruin, 2015: Communicating uncertainty in seasonal and interannual climate forecasts in Europe. Philos. Trans. Royal Soc., 373A, 20140454, https://doi.org/10.1098/rsta.2014.0454.
Thornton, P. K., 2007: Ex ante impact assessment and seasonal climate forecasts: Status and issues. Climate Res., 33, 55–65, https://doi.org/10.3354/cr033055.
Vaughan, C., and S. Dessai, 2014: Climate services for society: Origins, institutional arrangements, and design elements for an evaluation framework. Wiley Interdiscip. Rev.: Climate Change, 5, 587–603, https://doi.org/10.1002/wcc.290.
Vaughan, C., L. Buja, A. Kruczkiewicz, and L. Goddard, 2016: Identifying research priorities to advance climate services. Climate Serv., 4, 65–74, https://doi.org/10.1016/j.cliser.2016.11.004.
Wahlström, M., 2009: Disaster risk reduction, climate risk management and sustainable development. WMO Bull., 58, 165–174.
World Meteorological Organization, 2014: Annex to the implementation plan of the Global Framework for Climate Services—Capacity development. WMO, 68 pp., http://www.gfcs-climate.org/sites/default/files/Components/Capacity%20Development//GFCS-ANNEXES-CD-FINAL-14143_en.pdf.
Yin, R. K., 2014: Case Study Research: Design and Methods. 5th ed. SAGE Publications, 312 pp.
Ziervogel, G., and T. E. Downing, 2004: Stakeholder networks: Improving seasonal climate forecasts. Climatic Change, 65, 73–101, https://doi.org/10.1023/B:CLIM.0000037492.18679.9e.
While the compendium is an important contribution, we must also note that it currently falls short in describing both the breadth and depth of climate services. Indeed, the compendium describes only the scope, objectives, activities, benefits, and deliverables of 40 GFCS projects, with another 10 “contributing projects” not funded through the GFCS included on the website. The result is a partial picture of a small subset of activities. Bolstering this effort (by including, e.g., information on quality control measures, modes of communication, the scale of services provided, and the sustainability of services) should be an important priority moving forward.