In Aesop’s fable, “The Gnat and the Bull,” a tiny gnat overestimates his importance by presuming that he disturbed a large bull by landing on his horn. However, when the gnat prepares to fly away, apologizing to the bull for the disturbance, the bull replies, “It’s all the same to me. I did not even know you were there.” This fable may hold a useful analogy for the current state of efforts to link scientific climate information with societal decision-making (Vaughan and Dessai 2014) and frames the question asked in this paper: to what extent are claims about the value of climate services justified?
This question is increasingly urgent as climate services, defined as a “means of providing climate information to assist decision-making in ways that involve appropriate engagement, as an effective access mechanism, and as a response to user needs” (WMO 2014a, p. iii), are gaining in importance. Climate services support climate adaptation, disaster risk reduction, and socioeconomic development programs, frameworks, and policies around the globe, including the Global Framework for Climate Services (GFCS) and the U.N. Sendai Framework for Disaster Risk Reduction (WMO 2011; Brasseur and Gallardo 2016). Regional Climate Outlook Forums (RCOFs) are part of the public and private enterprise of climate services, furthering a long-standing effort to link climate information and services to decision-making (Guido et al. 2016). RCOFs have been held over the last 20 years with the principal objectives of producing and disseminating seasonal climate forecasts to improve climate risk management and adaptation (Ogallo et al. 2008; Daly and Dessai 2018).
While climate scientists may see seasonal climate forecasts and other scientific climate information as powerful tools for informing decisions and action, decision-makers and the public may not share this perspective. In practice, climate forecasts may be considerably less salient to decision-makers than other sources of information or concerns (Steynor et al. 2016). Further, decision-makers may have different perceptions of the relevance and usability of the scientific information than climate scientists do (Cash et al. 2003; Lemos et al. 2012). Although the more recent emphasis on climate services marks a growing determination to better link climate information with decision-making, few assessments have been undertaken to evaluate their success (Lourenço et al. 2016; Tall et al. 2018). As climate services have evolved into a multibillion-dollar public and private enterprise (Georgeson et al. 2017), it is important to be able to assess the impacts they are having, as well as to set appropriate objectives, so that climate services can be meaningfully designed, implemented, and evaluated.
Having been initiated in the late 1990s by the WMO and other international agencies to produce user-relevant climate outlook products and help manage climate risks for large regions of the globe, RCOFs are particularly good subjects for assessment. They have been at the forefront of the evolution of climate service initiatives and have the aim of creating consensus seasonal climate forecasts and of improving understanding of needs for climate information (Buizer et al. 2000). The WMO and other international agencies have consistently promoted the sustainability and adoption of RCOFs, which have been established with varying degrees of national and regional support in most regions across the globe (Ogallo et al. 2008) (Fig. 1).
Strong claims have been made about the positive impacts of RCOFs, ranging from significant reductions in drought and disaster losses, to advances in developing the capacity of scientists and improving awareness and networking among users, to guiding community adaptation to climate change (Aldrian et al. 2010; Martinez et al. 2010; Njau 2010; Tall 2010). While many benefits of the RCOFs can be substantiated, many other purported benefits rest on anecdotal information, lack rigorous evidence and measurement, and are claimed predominantly from within the climate service provider community rather than by independent evaluators. In the absence of objective criteria for evaluating RCOFs, mismatches between their perceived benefits among different RCOF stakeholders could develop, which could, in turn, erode long-term support for this activity. Like Aesop’s gnat, which thought itself of greater significance than it was, RCOFs and other climate services risk being unable to produce the intended benefits in the absence of robust evidence to inform learning and improvement.
In this paper, we offer guidance on how to measure the success of RCOFs in meeting their objectives from a range of perspectives, assessing 1) the quality of the climate information used and developed at RCOFs; 2) the legitimacy of RCOF processes that are focused on consensus forecasts, broad user engagement, and capacity building; and 3) the usability of the climate information produced in the RCOFs. By integrating multiple perspectives, we propose broad evaluative categories that might be useful to climate service providers, users, and funders beyond the RCOF example. Collectively, we draw from our experience as researchers who have observed, studied, and participated in RCOFs around the world and as climate service providers facilitating consensus forecasts and creating new science products for communities through RCOF processes. Our guidance on how to evaluate RCOFs can also be adapted to design and evaluate many other national or local efforts that have emerged to connect climate science to decision-making, such as National Climate Outlook Forums, other multidisciplinary participatory workshops (e.g., National Seasonal Assessment Workshops; Garfin et al. 2016), and Climate Field Schools and Farmers’ Forums (Feder et al. 2004; Siregar and Crane 2011).
Goals of the RCOFs
The history and perceived utility of the RCOFs are reflected in their integration into the WMO’s climate service infrastructure under the GFCS as a model and means of enabling interaction with users (Hewitt et al. 2012). The RCOFs now constitute a primary element of the GFCS’s User Interface Platform (UIP; “a structured means for users, climate researchers and climate service providers to interact”) at the regional scale (WMO 2014b). The RCOFs have adjusted their formats and goals to respond to regional preferences. Therefore, no two RCOFs are exactly alike; there is considerable diversity in the way in which information is generated and stakeholders participate.
Despite these differences, most RCOFs are formed around several common, interrelated goals (Daly and Dessai 2018). The core goal is typically the production of an operational seasonal forecast for precipitation, temperature, and other climate risks at the regional scale (for the coming 3–7 months, depending on the Forum). This product is often referred to as a “consensus forecast” (Buizer et al. 2000; Ogallo et al. 2008; Aldrian et al. 2010). It involves the development of consensus among “producers”—representatives of National Meteorological and Hydrological Services (NMHSs), regional and international climate centers, and other technical partners—in order to integrate multiple model outputs, local experience, and other relevant information into a single regional product. Capacity building among these regional and national scientists and networking among the broader climate science community are important contributing or ancillary goals, which directly support the production of high-quality, operational forecast products. All these capacity-building efforts are undertaken with the ultimate end goal of improving climate risk management and adaptation decision-making (WMO 2009).
RCOFs were also intended from the outset to provide a platform to enable interaction between producers and users of climate forecasts to enhance their uptake and practical application within decision-making (NOAA 1998). Potential users of the forecasts are invited from various climate-sensitive sectors in the region, sometimes sampling multiple sectors, and sometimes focusing on a specific sector (Ogallo et al. 2008). Opportunities are provided at the forums to discuss with sectoral participants the possible impacts implied by the forecasts so as to promote context-specific understanding, and to exchange ideas about possible actions to take in advance.
Reviews of RCOFs have raised pertinent issues, especially regarding challenges with such user engagement (WMO 2000, 2008, 2017). However, these reviews have been limited and have not proposed evaluation criteria or metrics. In the absence of more comprehensive reviews, the evaluation of the accuracy of the consensus seasonal forecasts, as a subset of climate services, has come to represent the majority of evaluation efforts on RCOFs (e.g., Hansen et al. 2011; Perrels et al. 2013) [for more examples of such evaluations, see Bruno Soares et al. (2018)]. Without evaluations that use well-justified and broadly accepted frameworks and criteria, any purported benefits may address only limited perspectives or even be invalid. Therefore, there is a potential risk that such claims are not leading to adaptive learning within the RCOF system. Given the growth and expansion of RCOFs around the world over the past two decades and the considerable investment made in climate service products and processes associated with them, we argue that the time is ripe to develop a more intentional and multifaceted approach to their evaluation.
An approach to evaluating the RCOFs
How might we better evaluate the RCOFs to improve the value of climate information to society?
We offer some insights into this question based on our collective experiences and observations in designing climate services, participating in RCOFs, and undertaking evaluations and research at different RCOFs using a variety of methods. We propose three broad evaluative categories that are based on the primary stated goals of the RCOFs.
The first category—quality of climate information produced (Jolliffe and Stephenson 2012)—focuses on the core goal of developing scientifically credible and skillful climate information. Forecast “skill” can be interpreted loosely as a measure of whether the forecasts contain potentially useful information but, more strictly, it is a relative measure comparing the quality of the forecasts with that of another set, usually a naïve set of forecasts such as ones that never change (Murphy 1993). For RCOFs, regional seasonal forecasts are the primary climate service products. Forecast quality is necessary, but it represents only a technical criterion (Kumar 2010), independent of how the information is communicated or understood and of why the forecast was shared in the first place. Given that climate service products that go beyond the consensus seasonal forecasts are increasingly created and shared as part of RCOF processes (Gerlak et al. 2018), it is also appropriate to examine the quality of the climate information within them.
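Expressed generically (cf. Murphy 1993; the notation here is illustrative), a skill score scales the improvement of the forecasts over the reference by the maximum improvement possible:

$$\mathrm{SS} = \frac{A - A_{\mathrm{ref}}}{A_{\mathrm{perf}} - A_{\mathrm{ref}}},$$

where $A$ is the accuracy of the forecasts under evaluation, $A_{\mathrm{ref}}$ that of the naïve reference forecasts, and $A_{\mathrm{perf}}$ that of perfect forecasts, so that $\mathrm{SS} = 1$ indicates perfect forecasts, $\mathrm{SS} = 0$ indicates no improvement over the reference, and negative values indicate forecasts worse than the reference.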
Thus, the second category—legitimacy of RCOF processes—corresponds to the contributing goals of facilitating consensus in a forecast, engaging a broad set of users, and building capacity. Legitimacy in climate services refers to the openness and fairness of knowledge production, meaning that the processes incorporate diverse perspectives and that all users find the processes beneficial (West et al. 2018; Daly and Dessai 2018; Cash et al. 2003; Tang and Dessai 2012).
The third category—usability of climate information—includes aspects that can facilitate the uptake and use of RCOF seasonal forecasts. In climate services, usability is widely seen as how the climate information informs policy and decision-making (Lemos et al. 2012; Brasseur and Gallardo 2016; Daly and Dessai 2018). In our approach, this category relates to the end goal of improving climate risk management and adaptation.
Figure 2 illustrates the goals of each evaluative category. The RCOFs aim to achieve multiple interrelated goals, from the “core goal” of creating the forecast, to “contributing goals” and “end goals.” This approach aligns the goals, as currently interpreted and implemented by organizers of RCOFs (Daly and Dessai 2018), with the three evaluative categories. Arrows indicate how the various goals of the RCOFs relate to each other. Scientific consensus, user engagement, and capacity building and networking are part of the process of producing operational climate information to support usable information for climate risk management and adaptation. Table 1 reports evaluative metrics for each of the three categories, indicating relevant references and providing some strategies for how these evaluative metrics can be used in practice.
Evaluative metrics of RCOFs.
Whereas evaluating the quality of forecasts relies largely on quantitative measures and statistical techniques that are standardized and transferable, assessing RCOF processes and the perceived usability of RCOF products will require a combination of quantitative and qualitative social science methods that are sensitive to highly variable regional contexts. Because the RCOFs have adopted different formats and procedures to suit different institutional and political settings, as well as varied technical and scientific capacities, it is neither possible nor desirable to prescribe a rigid set of metrics to be applied across all of them. Ultimately, the evaluation methods adopted should align with the goals and intent of the evaluation and be developed in a participatory, coproduction manner, in which producers and users of climate services together design the evaluation metrics and processes (Hinkel and Bisaro 2015; Meadow et al. 2015; Vaughan et al. 2018). In the following sections, we elaborate on how the three categories might be evaluated and applied in practice.
Quality of climate information produced.
Consensus seasonal rainfall forecasts represent the primary climate information produced in most of the RCOFs. These forecasts are presented as probabilities assigned to three categories: below normal, normal, and above normal (WMO 2009). Most RCOFs attempt to evaluate the previous season’s forecast by noting at each location whether the observed rainfall category was the category that had the highest probability. This procedure has some intuitive appeal—it measures how often the most likely category occurred—but it ignores the probabilistic nature of seasonal forecasts and involves pitfalls when there are three or more categories (Mason 2012). An alternative is to count at how many locations the category with the highest probability was observed, and to repeat the count for the categories with the second-highest and the lowest probabilities (WMO 2018), as sketched below.
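As a minimal sketch of this rank-based count (using synthetic data in place of real forecasts and observations; the array shapes and variable names are illustrative assumptions, not an operational WMO procedure):

```python
import numpy as np

# Synthetic stand-ins for one season's tercile forecasts at many locations.
# probs[i] holds the forecast probabilities for below-normal, normal, and
# above-normal rainfall at location i (each row sums to 1); observed[i] is
# the category (0, 1, or 2) that actually occurred there.
rng = np.random.default_rng(42)
n_locations = 200
probs = rng.dirichlet(alpha=[4, 3, 3], size=n_locations)
observed = rng.integers(0, 3, size=n_locations)

# Rank the categories by forecast probability at each location:
# rank 0 = highest probability, rank 2 = lowest (double argsort yields ranks).
ranks = np.argsort(np.argsort(-probs, axis=1), axis=1)

# Count at how many locations the observed category carried the highest,
# second-highest, and lowest probability (cf. WMO 2018).
observed_ranks = ranks[np.arange(n_locations), observed]
counts = np.bincount(observed_ranks, minlength=3)
for rank, label in enumerate(["highest", "second-highest", "lowest"]):
    print(f"Observed category had the {label} probability at {counts[rank]} locations")
```

Counts concentrated in the highest-probability rank would suggest that the forecasts contain useful information, although, as discussed next, a series of forecasts is needed for a meaningful probabilistic evaluation.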
A series of forecasts can be evaluated more meaningfully than a single forecast. There have been attempts to evaluate series of probabilistic seasonal climate forecasts produced at RCOFs and elsewhere (Berri et al. 2005; Vizard et al. 2005; Livezey and Timofeyeva 2008; Mason and Chidzambwa 2008; Barnston et al. 2010; Korecha and Sorteberg 2013; Hyvärinen et al. 2015; Min et al. 2017). The ranked probability and Brier skill scores (Broecker 2012) are often used for probabilistic forecasts, but give overly negative and difficult-to-interpret results (Wilks 2000; Mason 2004, 2012). Applying measures of discrimination—how much the forecast differs given different outcomes (Murphy 1993)—may be more suitable (Mason and Weigel 2009). Measures of bias and other systematic errors can be obtained from procedures such as reliability analyses, which assess whether the probabilities provide an accurate indication of the uncertainty that the forecasts are communicating (Wilks 2000; Wilks and Godfrey 2002; Mason and Chidzambwa 2008; Barnston and Mason 2011; Peng et al. 2012). These analyses can be used to identify possible ways of improving the forecasts. However, even the longest-running RCOFs have a history of no more than about 20 years, rendering sample sizes too small for in-depth analyses of probabilistic forecasts. It will likely be necessary to pool forecasts from different seasons and multiple locations to address the sample size issue, and then to smooth out the resulting sampling problems. For example, the slope of a linearized reliability curve can provide a useful measure of how much the probability that a specific category will occur increases as the forecast probability increases (Wilks and Murphy 1998; see the sketch following this paragraph). Regardless of the sample size, it is important to define precisely the objectives of verifying forecasts, including identifying the specific attributes of forecast quality that are of interest, rather than relying on a single summary measure of quality (Murphy 1991) that is often difficult to interpret (Mason 2008).
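A sketch of that pooled reliability-slope calculation, under the assumption that forecast probabilities for a single category (say, above normal) have been pooled across seasons and locations (the data here are synthetic and the variable names illustrative):

```python
import numpy as np

# p[i] is the issued probability of the category for pooled forecast i;
# o[i] is 1 if that category was observed and 0 otherwise.
rng = np.random.default_rng(0)
n = 1000
p = rng.choice([0.15, 0.25, 0.35, 0.45, 0.55], size=n)       # typical issued values
o = (rng.random(n) < 0.2 + 0.6 * (p - 0.33)).astype(float)   # synthetic outcomes

# Group the forecasts by issued probability and compute the observed relative
# frequency and the sample size of each group.
bins = np.unique(p)
freq = np.array([o[p == b].mean() for b in bins])
size = np.array([(p == b).sum() for b in bins])

# Weighted least-squares fit of the reliability curve; np.polyfit applies the
# weights to the residuals, so sqrt(size) weights each point by its sample size.
slope, intercept = np.polyfit(bins, freq, deg=1, w=np.sqrt(size))
print(f"Linearized reliability-curve slope: {slope:.2f}")
```

A slope near 1 would indicate that the observed relative frequency increases one-for-one with the issued probability, whereas slopes between 0 and 1 indicate overconfident forecasts (Wilks and Murphy 1998).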
One way the WMO could immediately further the evaluation of the quality of RCOF-type forecasts is by implementing standards for verifying them, thereby steering practice away from suboptimal procedures (WMO 2018). Standards are already largely implemented for the verification of dynamical model forecasts (Graham et al. 2011), but were not originally formulated with RCOF-type forecasts in mind. The Standardized Verification System for Long-Range Forecasts (SVSLRF) has been useful for the verification of many of the inputs to the consensus-building process for the RCOFs, but the RCOFs have not yet implemented an agreed-upon set of verification procedures for the forecasts they produce. The WMO (2018) recently published guidance on how to adapt and apply the SVSLRF to the subjective probabilistic forecasts that are a typical output of the RCOFs (Buizer et al. 2000; Ogallo 2010). This guidance includes suggestions on how to verify individual forecasts. An early draft of this guidance was introduced to representatives from many of the RCOFs in Nanjing, China, in 2013, at the International Training Workshop on Verification of Operational Seasonal Forecasts. A few additional training workshops with a similar objective have been held at individual RCOFs, including in the Caribbean, Africa, and Europe. Now that the guidance has been published, it would be beneficial to implement a more concerted training program to promote, and where necessary correct, forecast verification procedures, and to make archives of past forecasts and their verification information available. This implementation is best achieved through the WMO Regional Climate Centres (RCC) network.
In addition to evaluating forecasts as they are strictly intended to be read, it may be useful to evaluate the forecasts as they are interpreted. Reinterpretations of the forecasts may be as simple as taking an area-average forecast as a location-specific forecast or interpreting “above normal” rainfall forecasts as indications of flood risk (e.g., Coughlan de Perez et al. 2017). It is important to evaluate how the forecasts are translated into information that supports decisions, rather than to evaluate the forecasts per se, because it is this translated information that is acted upon. The evaluation of the usability of climate products more generally is discussed in more detail under the third evaluative category.
Legitimacy of RCOF processes.
RCOF processes have engaged a broad set of users and supported capacity building. These processes help climate information become more usable by addressing issues related to salience, credibility, and legitimacy (Cash et al. 2003; Meinke et al. 2006). It is therefore important for any evaluation to assess the strengths and weaknesses of, as well as opportunities for and barriers to, these RCOF processes. Yet, development of a set of standardized metrics to assess the legitimacy of processes across all RCOFs is neither possible nor desirable. In comparison with standards for measuring the quality of climate information outputs, indicators of the legitimacy of the processes of coproducing usable climate information are less tangible and more conceptual in nature (Wall et al. 2017). Further, it would be inappropriate to prescribe rigid evaluation measures because the processes in each RCOF differ greatly due to regional context, including different institutional arrangements, the number and type of participants, and the basic format of the forum (Daly and Dessai 2018). Any attempt to evaluate RCOF processes should be tailored to the particular region. Nonetheless, following the stated goals of the RCOFs, we suggest that context-sensitive measures should be developed to evaluate the legitimacy of the processes of scientific consensus, user engagement, and capacity building. Consistency in these measures over time would also be valuable for observing long-term changes.
Given that the RCOFs produce regional products and often bring together decision-makers from different countries, a key goal of the RCOFs is to build scientific consensus on a climate forecast among representatives from NMHSs. The development of the consensus forecast itself represents a process of coproduction among the forecasters, merging diverse knowledge and expertise across national, regional, and global scales (Daly and Dessai 2018). Further, NMHSs play a variety of key roles in the implementation of the RCOFs, and they are also the designated authorities for delivering weather and climate information in their respective countries. Thus, the consensus process is vital to enhancing the social and political legitimacy of the regional consensus forecasts at both regional and national scales.
Interviews and surveys among climate information producers who participate in the RCOFs can be used to evaluate how consensus is reached. These should include assessing the extent to which forecasters feel that their perspectives and information were considered, debated, and even included in the consensus forecast. Surveys can also assess whether there are mutually agreed upon and useful standards in place to determine what constitutes “credible” and “legitimate” contributions to the consensus process, as well as how these inputs are considered, negotiated, and integrated within the consensus forecast. Participants can also be asked what inputs and procedures should not be included. Standards can be reviewed and may be adjusted periodically to reflect new understandings.
Engagement of both producers and users of seasonal climate forecasts is another important feature of the RCOFs. Successful coproduction of knowledge is more than just putting people together in the same room (Lemos 2015). Important aspects to be identified and measured include the type and quality of interactions, whether the collaboration allows users to build trust and confidence in the information provided, and whether participants perceived their voices were heard (Bruno Soares 2017; Wall et al. 2017). For example, Wall et al. (2017) have proposed evaluating participants’ perceptions of whether there were equitable opportunities for participation, as well as overall satisfaction with the level of engagement among stakeholders. These dimensions of user engagement could be measured qualitatively, through interviews, or more quantitatively, through surveys. These data could be complemented by simple metrics, such as the number of stakeholders who participate, where they come from, and the diversity of sectors or types of institutions they represent (e.g., governmental, nongovernmental, private sector).
Finally, RCOFs can provide vital capacity-building and networking opportunities (Gerlak et al. 2018). In most regions, RCOFs have provided a platform for building and enhancing the scientific capacities of national and regional forecasters. There is growing recognition, however, that NMHSs in many regions around the world face key capacity gaps in the technical production, translation, transfer, and facilitation of the use of climate information (Mahon et al. 2019), and that the RCOF process may help mitigate these gaps. Capacity building and networking, through dedicated trainings to develop specific skills as well as through processes of knowledge sharing, debate, and dialogue, can help stimulate social learning around key topics. There is, thus, a need to evaluate what capacities are being built and whether (and how) these capacities are contributing to the central goals of the RCOFs. This evaluation can be done through self-assessments of capacity gaps by RCOF participants, along with an evaluation of RCOF policies and practices for capacity building by social scientists. To enhance authentic dialogue and collaboration, it is also critical to understand and assess the broader range of skills and capabilities required regionally to span boundaries between scientists and stakeholders, as well as what is needed to link scientific information and decision-making. Building these broad capacities will improve the legitimacy of RCOF processes and, ultimately, the usability of the resulting products.
As one example of evaluating the RCOF process, an interdisciplinary team of university researchers collaborated with conveners of the Caribbean RCOF (CariCOF) to assess the quality of the climate products, the usability of these products, and the importance of building more process-oriented evaluations for RCOFs (Guido et al. 2016; Gerlak et al. 2018). Before attending CariCOF, participants reported that they had difficulty interpreting and explaining the forecasts to others, but participation in CariCOF created a space for mutual learning among the scientists and decision-makers from diverse sectors. The CariCOF interactions consequently promoted specific information-brokering activities that helped individuals communicate and translate climate information beyond the CariCOF. Based on these findings, RCOFs should deliberately evaluate user-engagement practices to better determine how scientific and technical information is translated and taken up. The diverse kinds of activities held at RCOFs can then be studied through interviews and surveys to determine how meaningful the participants found the activities and to what extent they felt engaged.
Usability of climate information.
The third category—usability of climate information—aligns with the core goal of producing a forecast that can contribute directly to the end goal of RCOFs: improved adaptive decision-making and risk management (WMO 2009). The quality of the climate information provided and the legitimacy of the RCOF processes followed help to determine the usability of seasonal forecasts or related products. But to increase and facilitate use, RCOF products must be responsive to users’ contexts and needs at the appropriate scales to support their decision-making. They must also meet essential criteria related to the credibility and legitimacy of the knowledge that is produced.
To enable practical uptake and use of RCOF processes and products, it is therefore critical to evaluate whether they are responsive to users’ needs. This evaluation can include, for example, assessment of whether RCOF processes help improve user understanding of climate science and forecasts; the users’ engagement and general satisfaction in relation to the process itself; and the usability of the products provided. As an example, approaches evaluating user satisfaction have been applied under the GFCS Adaptation Program in Africa, where qualitative assessments of the usability (i.e., the perceived salience, credibility, and legitimacy among potential users of the services provided) were carried out in Tanzania and Malawi (e.g., Daly et al. 2016; West et al. 2018).
Nonetheless, the identification and engagement of potential users within RCOFs has been a persistent challenge since their inception (NOAA 1998; Ogallo et al. 2008; Daly and Dessai 2018). In some regions, such as the Greater Horn of Africa and the Caribbean, numerous regional users engage directly in RCOF processes on a regular basis. In such cases, surveys and interviews can be developed relatively easily to evaluate users’ perspectives on the legitimacy of RCOF processes and perceived usability of information produced. In other regions, however, user participation in RCOFs may be inconsistent, or even lacking completely. While NMHS representatives are critical to RCOFs and consensus forecasts, without representative user groups participating in RCOF events and accessing their products, the ultimate reach and utility of that information are reduced (Gerlak et al. 2018). Thus, RCOFs must undertake stakeholder mapping and needs assessments to determine who potential users are, what kinds of climate information they require, and how best to facilitate their engagement in RCOF processes.
When the needs of all users, especially climate-vulnerable populations, are represented in RCOF processes and outcomes, the usability of the climate information improves. However, processes such as the RCOF inherently prioritize the knowledge of certain institutions and actors over others, potentially leaving out the needs and priorities of those most vulnerable to climate risk (Haines 2019). Engaging in interdisciplinary vulnerability assessments that frame climate vulnerability as a complex social process, and not only as a product of exposure, is an important step to ensure that RCOFs promote climate adaptation (Carr and Owusu-Daaku 2016; Gerlak and Greene 2019).
Assessing the usability of information improves knowledge about how climate information and climate products, such as those produced through the RCOFs, may influence adaptation and risk management (McNie 2013; Daly et al. 2016). However, the relationship between the usability of the climate information and the outcomes and benefits of such use is neither linear nor straightforward (Bruno Soares et al. 2018), even when “use” is interpreted as an instrumental concept that represents solving a specific problem or informing a particular decision pathway (Bruno Soares 2017). It is thus critical to have evaluation processes in place that enable understanding of how RCOF forecasts and related products are used (or not), allowing user feedback to be systematically collected and the provision of climate information to be adjusted (Gerlak et al. 2018). Collecting these data may entail detailed interviews or surveys with stakeholders over time to determine whether end users’ understanding of climate science has improved and whether decisions can be linked to forecasts and other climate service products shared through RCOFs (McNie 2013). This process will require the adoption of standard operating procedures for both receiving and responding to users’ requests in a transparent, unbiased, and culturally sensitive manner. Having such an evaluation process in place can also allow RCOFs to be used as testbeds where, for example, the introduction of a new product can lead to greater mutual understanding for both providers and users. It can also be an important step in an iterative process that refines the product or generates new products altogether—thereby shifting toward demand-driven, rather than supply-driven, climate services (Lourenço et al. 2016).
Next steps for evaluation.
Identifying evaluative categories for RCOF processes based on their goals (see Fig. 2) can enable an adaptive, yet broadly comparable, approach for assessing the value and contributions of the different RCOFs. Nonetheless, it will be important to develop coordinated evaluation studies to assess RCOF processes across multiple regions and over time. Given the diverse activities that take place at RCOFs, data should be gathered from several sources using a variety of assessment methods, including, for example, observation, interviews, focus groups, and pre- and postevent surveys (Desai and Potter 2016; Clifford et al. 2016; Tall et al. 2018). The evaluation can be further aided by establishing baseline conditions (Tall et al. 2018) prior to assessing the impact of RCOF products and processes. These approaches require expertise in, among other techniques, facilitation, survey design, and qualitative data analysis. Ideally, indicators should be both process-based and outcome-based to understand the legitimacy of RCOF processes and their associated outcomes, and the relationship between the two (Bours et al. 2014; Wall et al. 2017). Importantly, identifying problems with the contributing goals could help to explain why there may be problems with the quality or usability of the climate information (i.e., the core and end goals).
While evaluations are resource intensive, the regional significance of RCOFs makes them a prototype for climate services and a useful test case. The WMO, as well as other RCOF funders and organizers, should take immediate steps to implement sustained evaluation procedures, beginning with a small number of pilot projects at selected RCOFs. The evaluation should be codesigned by both producers and users of climate services to effectively measure the success of the RCOFs in ways that are likely to foster coproduction of knowledge, illustrate the value of investing in RCOFs, and contribute toward the sustainability of the RCOFs. Such evaluation can also avoid overstating the societal benefits, while taking account of benefits that cannot be captured or estimated solely through economic valuations. As stated earlier, given that no two RCOFs are alike in their processes, we recommend a mix of indicators and approaches that can be adjusted to suit the context and expected use of the evaluation (e.g., IDRC 2012; Bours et al. 2014; Bruno Soares et al. 2018).
Ultimately, however, what may be most important is how processes of evaluation are undertaken. While we have proposed three broad evaluative categories to frame the evaluation of RCOFs, stakeholders should play an active role in defining the indicators to be measured. Indeed, the evaluation process itself can be underpinned by principles of coproduction to enable coevaluation of RCOF processes, outputs, and outcomes among producers and users of climate information (Bruno Soares et al. 2018). A participatory approach, in which producers and users of climate services together design the evaluation metrics and processes, can ensure that evaluative metrics are sufficiently tailored to accurately reflect the regional context, while allowing for broad comparison and the identification of generalized trends in the quality of climate information, the legitimacy of processes, and the usability of climate information produced across regions. Such knowledge can inform processes of adaptive learning and adjustment to improve RCOFs in the future. Just as importantly, coevaluation can place the onus on participants themselves to play an active role in defining and realizing the goals of the RCOFs, thereby increasing shared ownership among all stakeholders.
Recalibrating the goals of the RCOFs?
Much has changed since the RCOFs began two decades ago. The climate services landscape is increasingly populated with diverse actors and activities and many new climate service products in addition to seasonal forecasts (Lourenço et al. 2016). Beyond measuring how successful RCOFs are at meeting their objectives, there is also a fundamental question of whether those objectives remain well suited to the current climate services context and to a changing climate. The process of evaluating the RCOFs may contribute to a realignment, if necessary, by providing a space and process for organizers, participants, and funders to reflect upon the constituent goals and components of the RCOFs. Ultimately, evaluating the RCOFs can test our assumptions about what the RCOFs should be doing and may help resolve some long-standing tensions around the appropriate scale for user engagement, the role of scientific consensus, and the scale and scope of climate information needed to effectively support climate change adaptation (Guido et al. 2016; Gerlak et al. 2018; Bruno Soares 2017; Daly and Dessai 2018).
To capture the full potential benefits of the RCOFs, we suggest that their place in the climate services system must be better articulated, their goals must be coproduced by stakeholders, and decisions around who participates, the design of the RCOFs, and the ongoing activities must be aligned with the stated goals. Although the quality of climate information and its use to support users’ decision-making is at the heart of the RCOFs, there is significant additional value in how the RCOFs bring people together, build trust and capacity, and foster scientific networking. Indeed, the participatory processes around the seasonal forecast achieve much more than the scientific goal and can bring credibility, legitimacy, and salience to the climate service product produced and to how it is used (Cash et al. 2006; Meinke et al. 2006). They also promote active engagement and build capacity at regional and national levels.
Furthermore, the benefits and value of the RCOFs are often realized indirectly (and are thus harder to evaluate), through increasing the capacities of national meteorological services to produce higher-quality information and to make forecasts more usable for decision-making. Similarly, RCOFs are increasingly being linked to National Climate Outlook Forums in some countries, where additional value may be added to RCOF products as part of a multitiered climate service delivery system under the WMO.
To be both credible and relevant to stakeholders engaged in RCOF processes, and to help ensure that participant learning can take place, mechanisms are needed to ensure continuity between RCOF events, especially to ensure that information is updated and that collaboration and interactions continue (Gerlak et al. 2018). This includes examining whether the RCOFs are supported by or embedded within appropriate institutions, as well as assessing whether clear roles and responsibilities have been determined and adopted. Updating the climate information (including the forecast) between RCOF sessions is an intended task of the WMO RCCs, although in practice these updates are not available everywhere because the RCC network has not yet been fully implemented.
In this paper, we have identified a particular set of goals based on how RCOFs are currently conceptualized and implemented by national and regional climate information producers, with support from the WMO. We acknowledge that the understanding of what users require from the consensus forecast and other climate service products and, as importantly, from RCOF processes, remains vague, especially as the larger field of climate services is shifting toward user demand-driven services (Lourenço et al. 2016). As such, we suggest that it will be crucial to develop more inclusive processes for defining the objectives of the RCOFs on an ongoing basis, in order to better reflect the perspectives of both producers and users of seasonal climate forecasts, and to embrace a more service-oriented culture that empowers users and recognizes their experiences and perspectives (Alexander and Dessai 2019).
As the WMO is increasingly integrating RCOFs within its multilevel climate services infrastructure under the GFCS, evaluating the RCOFs may help to address barriers to realizing the benefits of climate services. Ultimately, improving and evaluating the RCOFs can help the broader enterprise of climate services. The RCOFs must also strategically align with, and effectively build upon, other complementary efforts and initiatives to develop climate services across institutional scales if they are to ever achieve the end goal of improved climate risk management and adaptation. If the RCOFs fail to align, they risk becoming like Aesop’s gnat—liable to have far less impact than is thought within the RCOF community.
Acknowledgments
This work was funded by Grant/Cooperative Agreement NA13OAR4310184 from the U.S. National Oceanic and Atmospheric Administration (NOAA). The views expressed herein are those of the authors and do not necessarily reflect the views of NOAA or any of its sub-agencies. Contributions by Meaghan Daly were supported by the U.K. Economic and Social Research Council Centre for Climate Change Economics and Policy (CCCEP) Phase II (ES/K006576/1).
References
Aldrian, E. C., and Coauthors, 2010: Regional climate information for risk management. Procedia Environ. Sci., 1, 369–383, https://doi.org/10.1016/j.proenv.2010.09.024.
Alexander, M., and S. Dessai, 2019: What can climate services learn from the broader services literature? Climatic Change, 157, 133–149, https://doi.org/10.1007/s10584-019-02388-8.
Barnston, A. G., and S. J. Mason, 2011: Evaluation of IRI’s seasonal climate forecasts for the extreme 15% tails. Wea. Forecasting, 26, 545–554, https://doi.org/10.1175/WAF-D-10-05009.1.
Barnston, A. G., S. Li, S. J. Mason, D. G. DeWitt, L. Goddard, and X. Gong, 2010: Verification of the first 11 years of IRI’s seasonal climate forecasts. J. Appl. Meteor. Climatol., 49, 493–520, https://doi.org/10.1175/2009JAMC2325.1.
Berri, G. J., P. L. Antico, and L. Goddard, 2005: Evaluation of the Climate Outlook Forums’ seasonal precipitation forecasts of southeast South America during 1998–2002. Int. J. Climatol., 25, 365–377, https://doi.org/10.1002/joc.1129.
Bours, D., C. McGinn, and P. Pringle, 2014: Guidance note 2: Selecting indicators for climate change adaptation programming. SEA Change Community of Practice and UKCIP, 10 pp., https://ukcip.ouce.ox.ac.uk/wp-content/PDFs/MandE-Guidance-Note2.pdf.
Brasseur, G. P., and L. Gallardo, 2016: Climate services: Lessons learned and future prospects. Earth’s Future, 4, 79–89, https://doi.org/10.1002/2015EF000338.
Broecker, J., 2012: Probability forecasts. Forecast Verification: A Practitioner’s Guide in Atmospheric Science, 2nd ed. I. T. Jolliffe and D. B. Stephenson, Eds., Wiley-Blackwell, 119–139.
Bruno Soares, M., 2017: Assessing the usability and potential value of seasonal climate forecasts in land management decisions in the southwest UK: Challenges and reflections. Adv. Sci. Res., 14, 175–180, https://doi.org/10.5194/asr-14-175-2017.
Bruno Soares, M., M. Daly, and S. Dessai, 2018: Assessing the value of seasonal climate forecasts for decision-making. Wiley Interdiscip. Rev.: Climate Change, 9, e523, https://doi.org/10.1002/wcc.523.
Buizer, J., J. Foster, and D. Lund, 2000: Global impacts and regional actions: Preparing for the 1997/98 El Niño. Bull. Amer. Meteor. Soc., 81, 2121–2139, https://doi.org/10.1175/1520-0477(2000)081<2121:GIARAP>2.3.CO;2.
Carr, E. R., and K. N. Owusu-Daaku, 2016: The shifting epistemologies of vulnerability in climate services for development: The case of Mali’s agrometeorological advisory programme. Area, 48, 7–17, https://doi.org/10.1111/area.12179.
Cash, D. W., W. C. Clark, F. Alcock, N. M. Dickson, N. Eckley, D. H. Guston, J. Jäger, and R. B. Mitchell, 2003: Knowledge systems for sustainable development. Proc. Natl. Acad. Sci. USA, 100, 8086–8091, https://doi.org/10.1073/pnas.1231332100.
Cash, D. W., J. C. Borck, and A. G. Patt, 2006: Countering the loading-dock approach to linking science and decision making: Comparative analysis of El Niño/Southern Oscillation (ENSO) forecasting systems. Sci. Technol. Hum. Values, 31, 465–494, https://doi.org/10.1177/0162243906287547.
Clifford, N., M. Cope, T. Gillespie, and S. French, 2016: Key Methods in Geography. Sage, 752 pp.
Coughlan de Perez, E., E. Stephens, K. Bischiniotis, M. van Aalst, B. van den Hurk, S. Mason, H. Nissan, and F. Pappenberger, 2017: Should seasonal rainfall forecasts be used for flood preparedness? Hydrol. Earth Syst. Sci., 21, 4517–4524, https://doi.org/10.5194/hess-21-4517-2017.
Daly, M., and S. Dessai, 2018: Examining the role of user engagement in the Regional Climate Outlook Forums: Implications for co-production of climate services. Sustainability Research Institute Paper No. 113, Centre for Climate Change Economics and Policy Working Paper No. 329, University of Leeds, 30 pp., www.cccep.ac.uk/wp-content/uploads/2018/03/Working-Paper-329-Daly-Dessai.pdf.
Daly, M., J. West, and P. Yanda, 2016: Establishing a baseline for monitoring and evaluating user satisfaction with climate services in Tanzania. CICERO Rep. 2016:02, 54 pp., https://pub.cicero.oslo.no/cicero-xmlui/handle/11250/2382516.
Desai, V., and R. Potter, 2016: Doing Development Research. Sage, 324 pp.
Feder, G., R. Murgai, and J. B. Quizon, 2004: Sending farmers back to school: The impact of farmer field schools in Indonesia. Rev. Agric. Econ., 26, 45–62, https://doi.org/10.1111/j.1467-9353.2003.00161.x.
Garfin, G., T. J. Brown, T. Wordell, and E. Delgado, 2016: The making of national seasonal wildfire outlooks. Climate in Context: Science and Society Partnering for Adaptation, A. S. Parris et al., Eds., Amer. Geophys. Union, 143–172.
Georgeson, L., M. Maslin, and M. Poessinouw, 2017: Global disparity in the supply of commercial weather and climate information services. Sci. Adv., 3, e1602632, https://doi.org/10.1126/SCIADV.1602632.
Gerlak, A. K., and C. Greene, 2019: Interrogating vulnerability in the global framework for climate services. Climatic Change, 157, 99–114, https://doi.org/10.1007/s10584-019-02384-y.
Gerlak, A. K., and Coauthors, 2018: Building a framework for process-oriented evaluation of Regional Climate Outlook Forums. Wea. Climate Soc., 10, 225–239, https://doi.org/10.1175/WCAS-D-17-0029.1.
Graham, R. J., and Coauthors, 2011: Long-range forecasting and the Global Framework for Climate Services. Climate Res., 47, 47–55, https://doi.org/10.3354/cr00963.
Guido, Z., V. Rountree, C. Greene, A. Gerlak, and A. Trotman, 2016: Connecting climate information producers and users: Boundary organization, knowledge networks, and information brokers at Caribbean Climate Outlook Forums. Wea. Climate Soc., 8, 285–298, https://doi.org/10.1175/WCAS-D-15-0076.1.
Haines, S., 2019: Managing expectations: Articulating expertise in climate services for agriculture in Belize. Climatic Change, 157, 43–59, https://doi.org/10.1007/s10584-018-2357-1.
Hansen, J., S. J. Mason, L. Sun, and A. Tall, 2011: Review of seasonal climate forecasting for agriculture in sub-Saharan Africa. Exp. Agric., 47, 205–240, https://doi.org/10.1017/S0014479710000876.
Hegger, D., M. Lamers, A. van Zeijl-Rozema, and C. Dieperink, 2012: Conceptualising joint knowledge production in regional climate change adaptation projects: Success conditions and levers for action. Environ. Sci. Policy, 18, 52–65, https://doi.org/10.1016/j.envsci.2012.01.002.
Hewitt, C., S. J. Mason, and D. Walland, 2012: The global framework for climate services. Nat. Climate Change, 2, 831–832, https://doi.org/10.1038/nclimate1745.
Hinkel, J., and A. Bisaro, 2015: A review and classification of analytical methods for climate change adaptation. Wiley Interdiscip. Rev.: Climate Change, 6, 171–188, https://doi.org/10.1002/wcc.322.
Hyvärinen, O., L. Mtilatila, K. Pilli-Sihvola, A. Venäläinen, and H. Gregow, 2015: The verification of seasonal precipitation forecasts for early warning in Zambia and Malawi. Adv. Sci. Res., 12, 31–36, https://doi.org/10.5194/asr-12-31-2015.
IDRC, 2012: Identifying the intended user(s) and use(s) of an evaluation. International Development Research Centre, 4 pp., www.betterevaluation.org/sites/default/files/idrc.pdf.
Jolliffe, I. T., 2007: Uncertainty and inference for verification measures. Wea. Forecasting, 22, 637–650, https://doi.org/10.1175/WAF989.1.
Jolliffe, I. T., and D. B. Stephenson, 2012: Introduction. Forecast Verification: A Practitioner’s Guide in Atmospheric Science, 2nd ed. I. T. Jolliffe and D. B. Stephenson, Eds., Wiley-Blackwell, 1–12.
Korecha, D., and A. Sorteberg, 2013: Validation of operational seasonal rainfall forecast in Ethiopia. Water Resour. Res., 49, 7681–7697, https://doi.org/10.1002/2013WR013760.
Kumar, A., 2010: On the assessment of the value of the seasonal forecast information. Meteor. Appl., 17, 385–392, https://doi.org/10.1002/met.167.
Lemos, M. C., 2015: Usable climate knowledge for adaptive and co-managed water governance. Curr. Opin. Environ. Sustain., 12, 48–52, https://doi.org/10.1016/j.cosust.2014.09.005.
Lemos, M. C., C. J. Kirchhoff, and V. Ramprasad, 2012: Narrowing the climate information usability gap. Nat. Climate Change, 2, 789–794, https://doi.org/10.1038/nclimate1614.
Livezey, R. E., and M. M. Timofeyeva, 2008: The first decade of long-lead U.S. seasonal forecasts: Insights from a skill analysis. Bull. Amer. Meteor. Soc., 89, 843–854, https://doi.org/10.1175/2008BAMS2488.1.
Lourenço, T. C., R. Swart, H. Goosen, and R. Street, 2016: The rise of demand-driven climate services. Nat. Climate Change, 6, 13–14, https://doi.org/10.1038/nclimate2836.
Mahon, R., and Coauthors, 2019: Fit for purpose? Transforming National Meteorological and Hydrological Services into National Climate Service Centers. Climate Serv., 13, 14–23, https://doi.org/10.1016/j.cliser.2019.01.002.
Martinez, R., B. J. Garanganga, A. Kamga, Y. Luo, S. Mason, J. Pahalad, and M. Rummukainen, 2010: Regional climate information for risk management: Capabilities. Procedia Environ. Sci., 1, 354–368, https://doi.org/10.1016/j.proenv.2010.09.023.
Mason, S. J., 2004: On using “climatology” as a reference strategy in the Brier and ranked probability skill scores. Mon. Wea. Rev., 132, 1891–1895, https://doi.org/10.1175/1520-0493(2004)132<1891:OUCAAR>2.0.CO;2.
Mason, S. J., 2008: Understanding forecast verification statistics. Meteor. Appl., 15, 31–40, https://doi.org/10.1002/met.51.
Mason, S. J., 2012: Seasonal and longer-range forecasts. Forecast Verification: A Practitioner’s Guide in Atmospheric Science, 2nd ed. I. T. Jolliffe and D. B. Stephenson, Eds., Wiley-Blackwell, 203–220.
Mason, S. J., and S. Chidzambwa, 2008: Position paper: Verification of African RCOF forecasts. RCOF Review 2008, IRI Tech. Rep. 09-02, 26 pp., https://core.ac.uk/download/pdf/161435162.pdf.
Mason, S. J., and A. P. Weigel, 2009: A generic forecast verification framework for administrative purposes. Mon. Wea. Rev., 137, 331–349, https://doi.org/10.1175/2008MWR2553.1.
McNie, E., 2013: Delivering climate services: Organizational strategies and approaches for producing useful climate-science information. Wea. Climate Soc., 5, 14–26, https://doi.org/10.1175/WCAS-D-11-00034.1.
Meadow, A. M., D. B. Ferguson, Z. Guido, A. Horangic, G. Owen, and T. Wall, 2015: Moving toward the deliberate coproduction of climate science knowledge. Wea. Climate Soc., 7, 179–191, https://doi.org/10.1175/WCAS-D-14-00050.1.
Meinke, H., R. Nelson, P. Kokic, R. Stone, R. Selvaraju, and W. Baethgen, 2006: Actionable climate knowledge: From analysis to synthesis. Climate Res., 33, 101–110, https://doi.org/10.3354/cr033101.
Min, Y.-M., V. N. Kryjov, S. M. Oh, and H.-J. Lee, 2017: Skill of real-time operational forecasts with the APCC multi-model ensemble prediction system during the period 2008–2015. Climate Dyn., 49, 4141–4156, https://doi.org/10.1007/s00382-017-3576-2.
Murphy, A. H., 1991: Forecast verification: Its complexity and dimensionality. Mon. Wea. Rev., 119, 1590–1601, https://doi.org/10.1175/1520-0493(1991)119<1590:FVICAD>2.0.CO;2.
Murphy, A. H., 1993: What is a good forecast? An essay on the nature of goodness in weather forecasting. Wea. Forecasting, 8, 281–293, https://doi.org/10.1175/1520-0434(1993)008<0281:WIAGFA>2.0.CO;2.
NOAA, 1998: An Experiment in the Application of Climate Forecasts: NOAA-OGP Activities Related to the 1997-98 El Niño Event. NOAA Office of Global Programs, 142 pp.
Njau, L. N., 2010: Seasonal-to-interannual climate variability in the context of development and delivery of science-based climate prediction and information services worldwide for the benefit of society. Procedia Environ. Sci., 1, 411–420, https://doi.org/10.1016/j.proenv.2010.09.029.
O’Brien, K., S. Eriksen, L. P. O. Nygaard, and A. Schjolden, 2007: Why different interpretations of vulnerability matter in climate change discourses. Climate Policy, 7, 73–88, https://doi.org/10.1080/14693062.2007.9685639.
Ogallo, L., 2010: The mainstreaming of climate change and variability information into planning and policy development for Africa. Procedia Environ. Sci., 1, 405–410, https://doi.org/10.1016/j.proenv.2010.09.028.
Ogallo, L., P. Bessemoulin, J.-P. Ceron, S. J. Mason, and S. J. Connor, 2008: Adapting to climate variability and change: The Climate Outlook Forum process. WMO Bull., 57, 93–102, https://public.wmo.int/en/bulletin/adapting-climate-variability-and-change-climate-outlook-forum-process.
Peng, P., A. Kumar, M. S. Halpert, and A. G. Barnston, 2012: An analysis of CPC’s operational 0.5-month lead seasonal outlooks. Wea. Forecasting, 27, 898–917, https://doi.org/10.1175/WAF-D-11-00143.1.
Perrels, A., T. H. Frei, F. Espejo, L. Jamin, and A. Thomalla, 2013: Socio-economic benefits of weather and climate services in Europe. Adv. Sci. Res., 10, 65–70, https://doi.org/10.5194/asr-10-65-2013.
Siregar, P. R., and T. A. Crane, 2011: Climate information and agricultural practice in adaptation to climate variability: The case of climate field schools in Indramayu, Indonesia. Cult. Agric. Food Environ., 33, 55–69, https://doi.org/10.1111/j.2153-9561.2011.01050.x.
Steynor, A., J. Padgham, C. Jack, B. Hewitson, and C. Lennard, 2016: Co-exploratory climate risk workshops: Experiences from urban Africa. Climate Risk Manage., 13, 95–102, https://doi.org/10.1016/j.crm.2016.03.001.
Tall, A., 2010: Climate forecasting to serve communities in West Africa. Procedia Environ. Sci., 1, 421–431, https://doi.org/10.1016/j.proenv.2010.09.030.
Tall, A., J. Y. Couilbaly, and M. Diop, 2018: Do climate services make a difference? A review of evaluation methodologies and practices to assess the value of climate information services for farmers: Implications for Africa. Climate Serv., 11, 1–12, https://doi.org/10.1016/j.cliser.2018.06.001.
Tang, S., and S. Dessai, 2012: Usable science? The U.K. Climate Projections 2009 and decision support for adaptation planning. Wea. Climate Soc., 4, 300–313, https://doi.org/10.1175/WCAS-D-12-00028.1.
Vaughan, C., and S. Dessai, 2014: Climate services for society: Origins, institutional arrangements, and design elements for an evaluation framework. Wiley Interdiscip. Rev.: Climate Change, 5, 587–603, https://doi.org/10.1002/wcc.290.
Vaughan, C., S. Dessai, and C. Hewitt, 2018: Surveying climate services: What can we learn from a bird’s eye view? Wea. Climate Soc., 10, 373–395, https://doi.org/10.1175/WCAS-D-17-0030.1.
Vizard, A. L., G. A. Anderson, and D. J. Buckley, 2005: Verification and value of the Australian Bureau of Meteorology township seasonal rainfall forecasts in Australia, 1997–2005. Meteor. Appl., 12, 343–355, https://doi.org/10.1017/S135048270500191X.
Wall, T. U., A. M. Meadow, and A. Horganic, 2017: Developing evaluation indicators to improve the process of coproducing usable climate science. Wea. Climate Soc., 9, 95–107, https://doi.org/10.1175/WCAS-D-16-0008.1.
West, J., M. Daly, and P. Yanda, 2018: Evaluating user satisfaction with climate services in Tanzania 2014-2016: A summary report to the Global Framework for Climate Services Adaptation Programme in Africa. CICERO Rep. 2018:07, 65 pp., https://pub.cicero.oslo.no/cicero-xmlui/handle/11250/2500793.
Wilks, D. S., 2000: Diagnostic verification of the Climate Prediction Center long-lead outlooks, 1995–98. J. Climate, 13, 2389–2403, https://doi.org/10.1175/1520-0442(2000)013<2389:DVOTCP>2.0.CO;2.
Wilks, D. S., and A. H. Murphy, 1998: A case study of the use of statistical models in forecast verification: Precipitation probability forecasts. Wea. Forecasting, 13, 795–810, https://doi.org/10.1175/1520-0434(1998)013<0795:ACSOTU>2.0.CO;2.
Wilks, D. S., and C. M. Godfrey, 2002: Diagnostic verification of the IRI net assessment forecasts, 1997–2000. J. Climate, 15, 1369–1377, https://doi.org/10.1175/1520-0442(2002)015<1369:DVOTIN>2.0.CO;2.
WMO, 2000: Coping with the climate: A way forward—Summary and proposals for action. IRI Pub. IRI-CW/01/2, 31 pp., www.wmo.int/pages/prog/wcp/wcasp/documents/PretoriaSumRpt2.pdf.
WMO, 2008: Part II: User liaison in RCOFs. WMO Doc., 7 pp., www.wmo.int/pages/prog/wcp/ccl/opace/documents/RCOF-PP-partII.pdf.
WMO, 2009: Regional climate outlook forums. WMO Pamphlet, 2 pp., www.wmo.int/pages/prog/wcp/wcasp/documents/RCOF_Flyer1.4_July2009_EN.pdf.
WMO, 2011: Climate knowledge for action: A global framework for climate services—Empowering the most vulnerable. WMO Rep., 20 pp., www.wmo.int/gfcs/sites/default/files/FAQ/HLT/HLT_FAQ_en.pdf.
WMO, 2014a: Implementation plan of the global framework for climate services. WMO Rep., 81 pp., https://library.wmo.int/doc_num.php?explnum_id=4028.
WMO, 2014b: Annex to the implementation plan of the global framework for climate services–User interface platform component. WMO Rep., 49 pp., https://gfcs.wmo.int/sites/default/files/Components/User%20Interface%20Platform//GFCS-ANNEXES-UIP-FINAL-14210_en.pdf.
WMO, 2017: Global RCOF review meeting report. WMO Workshop Rep., 56 pp., www.wmo.int/pages/prog/wcp/wcasp/meetings/documents/rcofs2017/Report_RCOF_Review_2017_final.pdf.
WMO, 2018: Guidance on verification of operational seasonal climate forecasts. WMO-1220, 66 pp., https://library.wmo.int/doc_num.php?explnum_id=4886.