1. Introduction
Some claim that forecasts have “zero value” unless they improve societal outcomes (Lazo et al. 2020, p. 4; Williamson et al. 2002). Since the tragic tornado outbreaks in central Alabama and Joplin, Missouri, in 2011, the National Weather Service (NWS) has increasingly emphasized the importance of supporting community partners who help to protect public safety (National Academies of Sciences, Engineering, and Medicine 2018; Uccellini and Ten Hoeve 2019). The NWS 2019–22 strategic plan calls for the agency to “evolve” toward “a partner and customer-centric service delivery model,” including greater emphasis on decision support (National Weather Service 2019, p. 16). This call reflects the 2017 Weather Research and Forecasting Innovation Act, which expanded the provision of consistent, high-quality impact-based decision support services (IDSS) by NWS forecasters across the United States (National Weather Service 2019).
According to the NWS director, IDSS “develops trusted relationships, captures external needs, and provides actionable information and interpretation to enable partners’ decision-making” (Uccellini and Ten Hoeve 2019, p. 10). In practice, IDSS includes a variety of activities such as 1) training and exercises; 2) pre-event scenario planning; 3) specialized briefings, emails, and graphics; 4) onsite support; and 5) after-action event reviews (Hosterman et al. 2019). At the core of IDSS is forecasters developing and sustaining relationships with their partners instead of simply providing partners with forecasts (Uccellini and Ten Hoeve 2019). A critical challenge to the effective implementation of IDSS, including relationship maintenance, is a lack of social science research evaluating the success of IDSS (Lazo et al. 2020; Uccellini and Ten Hoeve 2019). This paper addresses that call for research through a cross-sectional survey with 119 NWS forecasters and managers in the central and southern regions of the United States. Findings uncover how NWS forecasters and management team members evaluate the importance of IDSS. Findings also provide a new instrument for NWS field offices to assess and improve their relationships with core partners.
2. Literature review
In this section, we first review the limited literature on how forecasters and their partners perceive IDSS, which leads to our first research question. We then review the literature on the relationship management paradigm, which leads to our second research question and four hypotheses.
a. Perceptions of IDSS
A National Academies of Sciences, Engineering, and Medicine (2018) report called for more research on how forecasters, emergency managers, broadcast media, and others “interact and communicate among themselves” (p. 26). Researchers have begun to answer this call through examining forecasters’ interactions with three of their core partners: emergency managers, broadcast meteorologists, and trained storm spotters.
b. Emergency managers
Emergency managers are “an essential link” between NWS forecasters and community members (Hosterman et al. 2019, p. 8). Effective IDSS builds trust between NWS forecasters and emergency managers (Hosterman et al. 2019; Lazo et al. 2020; Uccellini and Ten Hoeve 2019), which reduces negative societal impacts of severe weather (Lazo et al. 2020). In a case study, researchers found that IDSS helped build trust between NWS forecasters and emergency managers and improved decision-making for two winter storms (Hosterman et al. 2019). New York City (New York) emergency managers reported that IDSS enabled them to quickly take appropriate mitigation measures and to craft improved messages (Hosterman et al. 2019). In another study examining the same two winter storms, researchers further concluded that the IDSS provided reduced costs by more than $107 million and reduced recovery time by five days (Lazo et al. 2020). Clearly, IDSS shows great promise for achieving the NWS mission of protecting life and property and enhancing the national economy.
On the other hand, when IDSS is ineffective, trust can be broken and the weather enterprise, as a whole, functions poorly (Baumgart et al. 2008; Demuth et al. 2012; Mehta et al. 2013). Ineffective IDSS occurs when NWS forecasters provide confusing, highly technical information in their forecasts and other products (Demuth et al. 2012; Mehta et al. 2013). Sometimes confusion arises when there is significant uncertainty in forecasts (Demuth et al. 2020). Other times, NWS forecasters do not have a clear sense of their target audiences, and they consequently fail to tailor products for their partners’ unique decision-making needs (Baumgart et al. 2008; Demuth et al. 2012).
The NWS can provide training and personal assistance to emergency managers to build trust and help improve understanding of weather products (Bostrom et al. 2016; Mehta et al. 2013). Additionally, NWS forecasters can present forecasts in multiple ways to diminish biased processing among core partners (Wernstedt et al. 2019), including in ways that emergency managers can directly disseminate forecasts to the general public (Hoss and Fischbeck 2016).
Overall, a growing body of research points to how forecasters can improve IDSS for emergency managers. More research is needed to understand how forecasters assess the value of IDSS for their core partners, including emergency managers and broadcast meteorologists, as discussed in the next section.
c. Broadcast meteorologists
NWS forecasters and emergency managers jointly communicate with broadcast meteorologists. Together they inform members of the public on actions to take in response to forecasts (Demuth et al. 2012; Hosterman et al. 2019). Broadcast media also provide useful information to forecasters and emergency managers, especially during rapidly moving events (Baumgart et al. 2008). In terms of long-term relationship building, some forecasters have recognized their media partners as being especially instrumental in helping them master new technology, including social media (Liu et al. 2020a).
Like the research on emergency managers, studies point to some challenges NWS forecasters face in delivering effective IDSS to broadcast meteorologists (Demuth et al. 2012; Mehta et al. 2013). In a study on the hurricane warning system, researchers found that many emergency managers and NWS forecasters focused on operational decisions rather than public communication, which frustrated some broadcast meteorologists who needed more frequent and useful updates for their viewers (Demuth et al. 2012). Another tension identified is that broadcast meteorologists often desire to release information immediately, whereas forecasters and emergency managers prefer to fully vet information (Demuth et al. 2012; Liu et al. 2020a). Research further finds that sometimes forecasters perceive broadcast meteorologists as sensationalizing severe weather, in part because the media business model is driven by viewership (Liu et al. 2020a). To combat these tensions, researchers have called for involving broadcast media in more scenario planning, which in turn would better prepare communities to mitigate their risks (Mehta et al. 2013).
Overall, more research is needed to assess how forecasters perceive the IDSS that they provide broadcast meteorologists, and their relationships with their media partners. The same research gap occurs for trained storm spotters, which we discuss in the next section.
d. Trained storm spotters
Since the 1970s, the NWS has trained citizens to serve as weather spotters to collect, confirm, verify, or supplement radar and other data (National Weather Service 2020). Trained storm spotters play a vital role in achieving the NWS mission of protecting life and property when these spotters provide “ground truth” information during severe weather (Cavanaugh et al. 2016; Klenow and Reibestein 2014; McCarthy 2002). Accordingly, trained spotters are a core partner for NWS forecasters, especially when it comes to decisions to warn (Liu et al. 2020b; McCarthy 2002).
NWS forecasters can build strong partnerships with trained spotters by having respect as a foundation, working together to meet diverse publics’ information needs, and building strong relationships through face-to-face interactions in and outside of the office (Liu et al. 2020b). Notably, if strong relationships exist, trained spotters can help NWS forecasters translate complex weather data for the general public (Liu et al. 2020b), which can be critical for public understanding of severe weather warnings (Krennert et al. 2018). Trained spotters also can help forecasters identify problems in detection algorithms (Tuovinen et al. 2020) and decide when to issue warnings (Klenow and Reibestein 2014; Liu et al. 2020b).
Not all research positively assesses the role of trained spotters in weather forecasting. Some forecasters have expressed concern that trained spotters may “dilute the perceived expertise of the NWS” (Liu et al. 2020b, p. 274). Other research finds that some trained spotters provide poor quality data to the NWS, which is why training is imperative (Klenow and Reibestein 2014; Krennert et al. 2018). Overall, more research is needed to unpack how forecasters perceive the role of trained spotters in fulfilling the NWS vision of a weather-ready nation.
e. Assessment of IDSS
NWS leadership has called for research to evaluate the success of IDSS (Uccellini and Ten Hoeve 2019), especially given the uncertainty that forecasters express about IDSS metrics (National Weather Service 2017). Qualitative research revealed that IDSS has shifted forecasters’ roles including how they spend their time and how they develop forecasts (Demuth et al. 2020). As reviewed in the prior sections, a growing body of scholars examine forecasters’ relationships with their core partners. What is missing from the prior literature is a quantitative examination of how NWS forecasters and managers assess IDSS overall. Such a global assessment enables the NWS to reliably evaluate forecasters’ commitment to the weather-ready nation vision. Such a global assessment also enables understanding of how forecasters and managers may differ in their approaches to IDSS, given the different roles of the NWS workforce (National Weather Service 2017). Therefore, we ask the following research question (RQ):
RQ1: How do NWS forecasters and management team members characterize the importance of IDSS in their offices?
f. Relationship management paradigm
In addition to a global evaluation of IDSS, scholars have called for research that connects “different social and organizational worlds to foster innovation, provide two-way communication among multiple sectors, and integrate production of science with user needs” (Feldman and Ingram 2009, p. 15). For nearly four decades, communication scholars have investigated how organizations can build strong relationships with their publics, as reviewed in the following section. This research provides the foundation to assess the quality of forecasters’ relationships with their core partners. Without strong partner relationships, IDSS will not succeed (National Weather Service 2019).
An organization–public relationship is “the state which exists between an organization and its key publics that provides economic, social, political, and/or cultural well-being to all parties involved, and is characterized by mutual positive regard” (Ledingham and Bruning 1998, p. 62). As Shen (2017) noted, not all relationships are perceived as positive. Quality, long-term relationships occur when organizations and their partners have mutual understanding and perceive mutual benefits in continuing the relationship (Ledingham et al. 1999; Ledingham 2003). Furthermore, partners are more likely to employ an organization’s products or services when they have a strong relationship with the organization (Ledingham and Bruning 1998). Other positive outcomes of quality relationships include improving employee morale and job satisfaction (Kazoleas and Wright 2001) as well as contributing to successful conflict resolution (Huang 2001) and a strong sense of community (Kim and Cho 2019).
Most of the prior research has examined dyadic relationships (i.e., one organization and one partner), with scholars increasingly calling for research examining multiparty relationships (Cheng 2018; Heath 2013; Walters and Bortree 2012), as we examine in this study. A variety of dimensions have been proposed to assess relationships, whether they be dyadic or multiparty, with the most widely examined dimensions being trust, distrust, control mutuality, satisfaction, and commitment (Hon and Grunig 1999; Shen 2017; Yang and Taylor 2015). We review these dimensions below.
Trust is at the heart of effective IDSS (Uccellini and Ten Hoeve 2019) and is essential for quality organizational partnerships (Grunig et al. 1992). In the relationship management paradigm, trust is defined as “one party’s level of confidence in and willingness to open oneself to the other party” (Hon and Grunig 1999, p. 3). Dimensions of trust include integrity, dependability, and competence (Hon and Grunig 1999; Shen 2017). Establishing trust includes dialogue, openness, and a willingness to admit mistakes (Hung-Baesecke and Chen 2020). Government risk communicators are trusted when they are competent, exhibit care for others, are credible and reliable, and are committed to their organizations’ missions (Liu and Mehta 2021).
The second relationship dimension is distrust. Distrust is not the same as a low level of trust; instead, distrust is a separate relational dimension because relational parties can have both trust and distrust in each other (Cho 2006; Shen 2017). Distrust is defined as perceptions that partners have “sinister intentions” in an organizational relational context, which can include not prioritizing partners’ interests when making decisions, exploiting partners’ vulnerabilities, and engaging in unreliable behaviors (Shen 2017, p. 998). In a hazards context, distrust may emerge from negative stereotypes, which can be combated through counter messaging (Peters et al. 1997). Distrust also emerges in organizations with closed climates (Yang et al. 2015).
The third relationship dimension is control mutuality, which is “the degree to which parties agree on who has the rightful power to influence one another” (Hon and Grunig 1999, p. 3). Stable relationships require that all parties have some influence over each other, even though in many organizational relationships power is not evenly distributed (Grunig et al. 2002; Hon and Grunig 1999). Control mutuality further recognizes the importance of all parties perceiving that their partners respect and listen to their opinions (Shen 2017) and that, together, relational partners contribute to a more fully functioning society (Heath 2006; Liu et al. 2020a).
The fourth relationship dimension is satisfaction, which is “the extent to which each party feels favorably toward the other because positive expectations about the relationship are reinforced” (Hon and Grunig 1999, p. 3). Satisfaction includes perception of mutual benefits among relational parties (Shen 2017) and favorable assessments of partners’ communication (Grunig et al. 2002). Huang (2001) noted that satisfaction measures emotions (i.e., feelings) whereas the other relationship dimensions measure cognition (i.e., thoughts).
The last relationship dimension is commitment, which is “the extent to which each party believes and feels that the relationship is worth spending energy to maintain and promote” (Hon and Grunig 1999, p. 3). Commitment includes dedication to continuity from all parties (Hon and Grunig 1999). When commitment exists, relational partners feel like they are “part of the family” and there is a long-lasting bond among relational partners (Morgan and Hunt 1994; Shen 2017).
g. Connecting relationship management to IDSS
As previously noted, forecasters must commit to building strong relationships with their core partners for IDSS to succeed in creating a weather-ready nation (National Weather Service 2019). However, prior research has not assessed how forecasters evaluate their core relationships. Therefore, given the prior literature review of IDSS and relationship dimensions, we propose the following hypotheses (H):
H1: Trust (i), control mutuality (ii), commitment (iii), and relationship satisfaction (iv) are positively associated with effective IDSS communication.
H2: Distrust is negatively associated with effective IDSS communication.
Given the prior literature review, we also ask the following question:
RQ2: What, if any, are the differences between respondents’ perceptions of their relationships with three of their core partners (i.e., emergency managers, broadcast meteorologists, and trained spotters) on the five relationship dimensions?
Last, we make the following predictions:
H3: Trust (i), control mutuality (ii), commitment (iii), and relationship satisfaction (iv) are positively associated with using partner reports in the decision to warn.
H4: Distrust is negatively associated with using partner reports in the decision to warn.
3. Method
a. Participants and procedures
We surveyed NWS forecasters and management team members in the southern and central regions via the online survey platform Qualtrics. We began data collection in the southern region in March 2019, given the scope of our funding vehicle. In consultation with our program managers and NWS partners, we expanded data collection to the central region in June 2019. Data collection continued through July 2019. We used surveys because they are ideal for generating robust conclusions from a naturalistic setting (Allen et al. 2009) and can provide important descriptive and explanatory information for applied research problems (Singleton and Straits 2005). The team adopted a participatory action approach to designing the survey instrument and recruitment (Ivankova 2015; Whyte 1991).
Using the participatory action approach (Ivankova 2015; Whyte 1991), we designed the survey instrument and recruited participants in three stages. First, the team developed survey items from the literature review and a pilot test involving 32 in-depth interviews with forecasters and members of management. Second, we discussed the survey instrument with management team members at three Weather Forecast Offices in the southern region and with the science and training branch chief at the NWS southern region headquarters. Working in concert with NWS management team members at this stage made it more likely that the results would be relevant to our target audience and applicable to the day-to-day realities of the weather enterprise. The final version of the survey was approved by the University of Maryland Institutional Review Board and the NWS southern region union steward. Because we expanded into the central region after the survey instrument was developed, the instrument was also reviewed and approved by central region headquarters; no changes were made. Third, we recruited participants in collaboration with our NWS partners, who distributed the survey so that the link was not perceived as spam. Multiple survey reminders were distributed, and we closed data collection after five months (i.e., March–July 2019). In total, 144 forecasters and members of management accessed the survey across the two regions, and we received 119 usable responses. The data come from the authors’ larger VORTEX-Southeast (VORTEX-SE)-funded project on forecasters’ decisions to warn on tornadoes, their relationships with core partners, and tornado risk communication.
Our survey measured a variety of demographic and background variables. For categorical responses, we provide the percentage of the most common response and information about missing data in parentheses. Respondents were 40 years of age on average (M = 40.54, SD = 9.17, with 14 cases missing; SD indicates standard deviation), were mostly White (n = 97; 81.5%; 20 respondents selected “prefer not to answer” or did not report their race), and mostly reported their gender as man (n = 90; 75.6%; 15 selected “prefer not to answer” or did not report their gender). On average, respondents had been with the NWS for 15 years (M = 15.37, SD = 9.30, with nine cases missing) and had issued 10 tornado warnings over the previous three years (M = 10.58; SD = 12.04). Fifty-nine respondents were from the central region (49.60%), 42 were from the southern region (35.30%), and 18 did not report their region (15.10%). Respondents reported their current position as meteorologist in charge (n = 3; 2.5%), science and operations officer (n = 11; 9.2%), warning coordination meteorologist (n = 8; 6.7%), lead forecaster (n = 41; 34.5%), journey forecaster (n = 28; 23.5%), and meteorological intern (n = 14; 11.8%). Four respondents wrote in the corresponding textbox that they were meteorologists (3.4%), and nine did not report their current positions (7.6%). After consulting one of our NWS partners prior to data analysis, we retained in the sample one respondent (0.80%) who reported being a senior hydrologist.
b. Measures
1) IDSS assessment
Respondents answered 12 questions about IDSS, developed from our pilot interview study with 32 forecasters and conversations with our NWS research partners. Specifically, we asked respondents the extent to which they agreed or disagreed with a series of statements about IDSS. Our pilot study indicated that 3 of the 12 items (described below) represent the concept of effective IDSS communication. We examined the remaining 9 items separately. See Table 1 for descriptive statistics and 95% confidence intervals (CIs) of the IDSS items.
Table 1. IDSS descriptive statistics [mean, with standard error (SE) in parentheses] and 95% CIs. Results were bootstrapped with 1000 samples. Responses were on a 7-point scale where higher numbers indicate greater statement agreement. For composites, higher numbers indicate a greater amount of the concept. Here, N = 83 for the forecaster means, N = 22 for the management means, and N = 112 for the total-sample means.
2) Effective IDSS communication
We measured effective IDSS communication with three items, developed from our pilot study with 32 forecasters and conversations with NWS research partners: 1) “In order to make IDSS successful, we should focus on making personal connections with our core partners.” 2) “In order to make IDSS successful, we should employ proactive communication with our core partners.” 3) “In order to make IDSS successful, we need to have face time with our core partners.” A principal components analysis indicated that the three items loaded onto a single component and were retained, and reliability analysis indicated that they formed a reliable composite. We averaged the three items.
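Because the composite is formed the same way in each subsequent analysis, a brief illustration may help. The sketch below (written in Python as an illustration, not the authors’ SPSS code) standardizes the three items, confirms that they load on a single principal component, and averages them; the column names are hypothetical stand-ins for the three items quoted above.

```python
# A minimal sketch of forming and checking a three-item composite.
# Column names (idss_personal, idss_proactive, idss_facetime) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

ITEMS = ["idss_personal", "idss_proactive", "idss_facetime"]

def effective_idss_composite(survey: pd.DataFrame) -> pd.Series:
    X = survey[ITEMS].dropna().to_numpy(dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)      # standardize the items
    pca = PCA().fit(Z)
    # Loadings of each item on the first principal component
    loadings = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
    print("Variance explained by first component:",
          round(pca.explained_variance_ratio_[0], 2))
    print("First-component loadings:", np.round(loadings, 2))
    # If all items load on the first component, average them into the composite
    return survey[ITEMS].mean(axis=1)
```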
3) Relationship management paradigm
We adapted existing scale items (Shen 2017) to measure the five relationship dimensions outlined in the relationship management paradigm (i.e., trust, distrust, control mutuality, commitment, and satisfaction). Typically, researchers in the relationship management paradigm administer these items to an organization’s publics to understand the relationship from the publics’ perspectives. However, because the goal of the current study is to understand relationship building from organizational members’ perspectives, we modified the item wording to capture forecasters’ perceptions of their offices’ relationships in the context of IDSS. The study’s complete instrument is available online (https://drum.lib.umd.edu/handle/1903/26675), along with an instrument for measuring publics’ perceptions of relationships, which can be used in future research.
We prepared the survey items for analysis using the following procedure. First, we computed a series of principal components analyses to ensure that the scale items loaded appropriately to the composite for each partner grouping. Next, we computed reliability analyses for each dimension for each partner grouping to assess internal consistency. For dimensions with two items, we computed Pearson’s bivariate correlation as the reliability indicator; for dimensions with three or more items, we computed omega using Hayes and Coutts’s (2020) OMEGA macro for SPSS statistical analysis software. (Tables 3 and 4 provide reliability estimates, means, standard deviations, and correlations for the variables segmented by partner grouping.)
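For readers who wish to reproduce this kind of reliability check outside SPSS, the sketch below illustrates the two estimators described above: a Pearson correlation for two-item dimensions and McDonald’s omega from a one-factor model for dimensions with three or more items, in the spirit of Hayes and Coutts (2020). It assumes the third-party Python package factor_analyzer, and all column names are hypothetical.

```python
# A minimal reliability sketch: Pearson r for two-item dimensions, omega for
# three-item dimensions. factor_analyzer is an assumed third-party dependency.
import pandas as pd
from scipy.stats import pearsonr
from factor_analyzer import FactorAnalyzer

def two_item_reliability(survey: pd.DataFrame, item_a: str, item_b: str) -> float:
    """Pearson correlation between the two items of a dimension."""
    sub = survey[[item_a, item_b]].dropna()
    r, _ = pearsonr(sub[item_a], sub[item_b])
    return r

def omega_total(survey: pd.DataFrame, items: list) -> float:
    """McDonald's omega from a single-factor (maximum likelihood) model."""
    X = survey[items].dropna()
    fa = FactorAnalyzer(n_factors=1, rotation=None, method="ml").fit(X)
    loadings = fa.loadings_.ravel()          # standardized factor loadings
    uniquenesses = fa.get_uniquenesses()     # item error variances
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())

# A dimension is treated as reliable when the estimate is at least about 0.70,
# e.g., omega_total(survey, ["distrust1", "distrust2", "distrust3"])
```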
These preliminary analyses revealed two issues with using these items to measure the five relationship dimensions in the study’s context. First, for the trained spotters partner grouping, there were consistent issues with the component structure for two of the three-item dimensions (i.e., distrust and control mutuality). For both dimensions the scale items loaded to the component, but the follow-up reliability analyses indicated internal consistency problems: the scales were not reliable, as indicated by omegas below 0.70 (trained spotter distrust omega = 0.69; trained spotter control mutuality omega = 0.66). Second, there was an issue with one of the control mutuality items for both emergency managers and broadcast meteorologists. One item (“In dealing with our trained spotters/emergency managers/broadcast meteorologists, my office has a tendency to throw its weight around”) did not load to the component for either of these partner groupings. To address this issue, we removed this item from analysis for both partner groupings.
Interestingly, for the trained spotters grouping, the analysis did not indicate any issues with this specific item. This difference in component structure, alongside the previously reported analytic issues for this partner grouping, led us to remove trained spotters from our analysis, as these preliminary analyses suggest that forecasters understand their relationships with trained spotters differently than their relationships with emergency managers or broadcast meteorologists. Hence, the following analyses examine the research questions and hypotheses for emergency managers and broadcast meteorologists only. We return to this issue in the discussion section and provide directions for future research.
(i) Trust
Trust was measured with two items. Reliability analyses indicated that these two items should be combined for both emergency managers and broadcast meteorologists. We averaged the two items for each partner group separately.
(ii) Distrust
Distrust was measured with three items. Reliability analyses indicated that these three items should be combined for both emergency managers and broadcast meteorologists. We averaged the three items for each partner group separately.
(iii) Control mutuality
Control mutuality was measured with three items. As discussed above, our preliminary analyses indicated that one of the items did not load to the component and was removed from analysis. The component loadings and the subsequent reliability estimates indicated that the remaining two items could be combined for each partner group. We computed those averages.
(iv) Commitment
Commitment was measured with three items. Reliability analyses indicated that these three items should be combined for both emergency managers and broadcast meteorologists. We averaged the three items for each partner group separately.
(v) Satisfaction
Satisfaction was measured with two items. Reliability analyses indicated that these two items should be combined for both emergency managers and broadcast meteorologists. We took an average of the two items for each partner group separately.
4) Partner reports and warnings
The ultimate goal of IDSS is to improve the warning process (National Weather Service 2019). Therefore, for each partner group (emergency managers and broadcast meteorologists), we used a single-item measure, developed from conversations with our NWS research partners, to assess the potential correlation between the use of partner reports and the relationship dimensions. The item was “Reports from emergency managers/broadcast meteorologists positively influence my decision to warn.”
4. Results
a. Characterization of IDSS
RQ1 asked how NWS forecasters and management team members characterize the importance of IDSS in their home offices. To answer this research question, we computed means using a bootstrapping procedure with 1000 samples for our effective IDSS communication composite, as well as for the other nine items used to measure forecasters’ and management team members’ perceptions of IDSS. Specifically, we computed means for the composite and the nine items for the entire sample (i.e., forecasters and members of management together), for forecasters separately, and for members of management separately (see Table 1). The bootstrapping procedure is advantageous because it estimates the 95% CIs, which allows us to compare participants’ responses with established benchmarks, including scale points, and to make comparisons between groups of independent scores. We compared respondents’ scores with the scale midpoint (i.e., 4) to assess agreement with the IDSS statements and compared scores between forecasters and members of management.
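To make the procedure concrete, the following sketch (a Python illustration rather than the authors’ SPSS routine) shows a percentile bootstrap of a mean with 1000 resamples; the `scores` argument stands in for one IDSS item or the effective IDSS communication composite.

```python
# A minimal percentile-bootstrap sketch for one item or composite on the
# 7-point scale. Variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)

def bootstrap_mean_ci(scores, n_boot=1000, alpha=0.05):
    scores = np.asarray(scores, dtype=float)
    scores = scores[~np.isnan(scores)]                     # drop missing cases
    boot_means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lower, upper = np.percentile(boot_means,
                                 [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return scores.mean(), (lower, upper)

# A 95% CI whose lower bound exceeds the scale midpoint of 4 indicates
# agreement with the statement, e.g.:
# mean, (lo, hi) = bootstrap_mean_ci(survey["idss_item1"]); print(lo > 4)
```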
Results indicate that, taken together, forecasters and members of management at least somewhat agree with each of the IDSS concepts/statements shown in Table 1, as indicated by each of the 95% CIs lying above the scale midpoint (i.e., above 4). Specifically, results indicate that effective IDSS communication (i.e., communication that is proactive, includes face time with partners, and is focused on fostering personal connections) is seen as the core of IDSS success (see row 1 of Table 1). Results also show that respondents agree that IDSS training is needed for best practices in risk communication, using social media, and relationship building with core partners (see rows 3–5 of Table 1). Respondents agree that offices need to prioritize communication channels that allow them to reach multiple partners simultaneously and that IDSS is not the same as decision-making (see rows 6 and 7 of Table 1).
While forecasters and members of management at least somewhat agree that IDSS is the future of forecasting (see row 8 of Table 1), there appears to be divergence between members of management and forecasters on three of the IDSS statements. Row 2 of Table 1 suggests that forecasters agree more strongly that managers should be flexible about the IDSS they assign forecasters. Managers appear to agree more strongly that their office prioritizes communication in the hiring process and that their office has tried new staffing strategies to provide IDSS (rows 9 and 10 of Table 1). However, given the overlap in the 95% CIs, we calculated the proportion of overlap, following recommendations from Cumming and Finch (2005), to determine whether the means differed significantly between forecasters and members of management. None of the proportions of overlap fell below the 0.50 benchmark. Hence, we err on the side of caution and conclude that the groups’ perceptions of these items are not statistically different.
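The overlap heuristic can be expressed compactly. The sketch below is our illustration of the Cumming and Finch (2005) rule, not the authors’ code: it computes the overlap of two independent 95% CIs as a proportion of the average margin of error, with proportions at or above 0.50 treated as not statistically different, consistent with the decision rule described above. The example values are hypothetical.

```python
# A minimal sketch of the proportion-of-overlap rule from Cumming and Finch
# (2005) for two independent 95% CIs. Example values are hypothetical.
def ci_overlap_proportion(ci_a, ci_b):
    (lo_a, hi_a), (lo_b, hi_b) = ci_a, ci_b
    overlap = min(hi_a, hi_b) - max(lo_a, lo_b)        # < 0 means the CIs are disjoint
    avg_margin = ((hi_a - lo_a) + (hi_b - lo_b)) / 4   # mean half-width of the two CIs
    return overlap / avg_margin

# e.g., forecaster CI (5.1, 5.9) vs management CI (5.4, 6.6):
# ci_overlap_proportion((5.1, 5.9), (5.4, 6.6)) -> 1.0, which is above the 0.50
# benchmark, so the means are treated as not statistically different.
```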
b. Core partners relationship evaluation
RQ2 asked what differences, if any, exist between respondents’ perceptions of their relationships with core partners on the five relationship dimensions. To answer this research question, we conducted five dependent t tests (i.e., one for each relationship dimension), using a bootstrapping procedure with 1000 samples. Table 2 provides the statistical results from these analyses, including model information [e.g., t and degrees of freedom (df)] and estimates of effect size.
Table 2. Dependent t-test results for the five relationship dimensions segmented by partner grouping. Responses were on a 7-point scale where higher numbers indicate higher levels of the concept. The effect size, Cohen’s d, was computed using an effect-size calculator (https://memory.psych.mun.ca/models/stats/effect_size.shtml).
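As a concrete illustration of one such comparison, the sketch below (not the authors’ SPSS procedure) runs a dependent t test on one relationship dimension rated for the two partner groupings, bootstraps the mean difference with 1000 resamples, and computes a paired-samples Cohen’s d as the mean difference divided by the standard deviation of the differences, which is one common formula and may differ from the online calculator cited in Table 2. Column names are hypothetical.

```python
# A minimal sketch of one paired comparison across partner groupings,
# assuming hypothetical columns such as trust_em and trust_bm.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(seed=1)

def paired_comparison(em_scores, bm_scores, n_boot=1000):
    em = np.asarray(em_scores, dtype=float)
    bm = np.asarray(bm_scores, dtype=float)
    keep = ~np.isnan(em) & ~np.isnan(bm)              # pairwise-complete cases
    em, bm = em[keep], bm[keep]
    t, p = ttest_rel(em, bm)                          # dependent (paired) t test
    diffs = em - bm
    d = diffs.mean() / diffs.std(ddof=1)              # one paired-samples Cohen's d
    boot = np.array([
        rng.choice(diffs, size=diffs.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    ci = np.percentile(boot, [2.5, 97.5])             # bootstrapped CI of the mean difference
    return {"t": t, "df": diffs.size - 1, "p": p, "d": d, "boot_ci": ci}

# e.g., paired_comparison(survey["trust_em"], survey["trust_bm"])
```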
As shown in Table 2, all five dependent t tests were statistically significant. First, results indicate that forecasters and members of management perceive their offices as creating relationships with their core partners in line with best practices from the relationship management paradigm. Specifically, all positively valenced dimensions (i.e., trust, control mutuality, commitment, and satisfaction) were above the scale midpoint for each partner grouping, and the negatively valenced dimension (i.e., distrust) was significantly below the scale midpoint for each partner grouping. Put differently, forecasters and members of management see their offices fostering relationships with emergency managers and broadcast meteorologists that are high in trust, control mutuality, commitment, and satisfaction and low in distrust.
Second, the findings indicate a consistent pattern of differences between the partner groupings for each of the five relationship dimensions that varies based on the dimensions’ valence. Specifically, for positively valenced dimensions, results show that participants perceived their offices as trying to foster higher levels of these dimensions for emergency managers relative to broadcast meteorologists. A different pattern emerged for the negatively valenced dimension of distrust. Specifically, results show that scores for distrust were consistently low across both partner groups, but broadcast meteorologists were more distrusted than emergency managers (see Table 2).
c. Relationships and forecasters’ decisions to warn
To test H1(i)–(iv), H2, H3(i)–(iv), and H4, we used Pearson’s bivariate correlations. Specifically, we calculated these correlations for both partner groupings (i.e., emergency managers and broadcast meteorologists). Tables 3 and 4 present these analyses and the descriptive statistics for all variables included in them. Several correlations met Cohen’s (1988) criteria for meaningful effects in correlational research but were not statistically significant, given the small sample size. Specifically, Cohen asserts that correlations of 0.10 represent small but meaningful effects, whereas correlations of 0.30 indicate moderate effects. We interpret the correlations with small effect sizes with caution and note that they should be replicated in future research.
Table 3. Descriptive statistics, reliability estimates, and correlations for relationship dimensions and use of partner reports in the decision to warn for emergency managers. When bivariate correlations are used for reliability estimates, the statistical significance is provided (**: p < 0.01, *: p < 0.05, and +: p < 0.10).
Table 4. Descriptive statistics, reliability estimates, and correlations for relationship dimensions and use of partner reports in the decision to warn for broadcast meteorologists. The correlations in row 6, column 3 and row 6, column 4 show the same value with differing significance levels because of rounding. When bivariate correlations are used for reliability estimates, the statistical significance is provided (**: p < 0.01, *: p < 0.05, and +: p < 0.10).
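For readers who wish to reproduce this style of analysis, the sketch below (an illustration, not the authors’ code) computes Pearson correlations between a set of relationship dimensions and an outcome such as the effective IDSS communication composite or the partner-report item, and labels each coefficient against Cohen’s (1988) 0.10 and 0.30 benchmarks. All variable names are hypothetical.

```python
# A minimal correlation sketch for H1-H4, with hypothetical column names.
import pandas as pd
from scipy.stats import pearsonr

def correlate(survey: pd.DataFrame, outcome: str, predictors: list) -> pd.DataFrame:
    rows = []
    for var in predictors:
        sub = survey[[var, outcome]].dropna()
        r, p = pearsonr(sub[var], sub[outcome])
        size = ("moderate" if abs(r) >= 0.30
                else "small" if abs(r) >= 0.10
                else "negligible")                    # Cohen's (1988) benchmarks
        rows.append({"predictor": var, "r": round(r, 2),
                     "p": round(p, 3), "effect": size})
    return pd.DataFrame(rows)

# e.g., correlate(survey, "effective_idss",
#                 ["trust_em", "distrust_em", "mutuality_em",
#                  "commitment_em", "satisfaction_em"])
```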
H1 predicted that trust (i), control mutuality (ii), commitment (iii), and relationship satisfaction (iv) are positively associated with effective IDSS communication. Results show that trust is positively associated with effective IDSS communication for both emergency managers and broadcast meteorologists; H1(i) was supported. Control mutuality is positively associated with effective IDSS communication for broadcast meteorologists. The same pattern emerges for emergency managers, but that correlation only approaches statistical significance; H1(ii) was partially supported. Commitment is positively associated with effective IDSS communication for emergency managers and broadcast meteorologists, although these correlations only approach statistical significance and should be interpreted with caution; H1(iii) was cautiously supported. Satisfaction is positively associated with effective IDSS communication for emergency managers and broadcast meteorologists; H1(iv) was supported.
H2 predicted that distrust would be negatively associated with effective IDSS communication. Distrust is negatively associated with effective IDSS communication for emergency managers. The same pattern emerges for broadcast meteorologists, but this finding is trending toward statistical significance. These results mostly support H2.
H3 predicted that trust (i), control mutuality (ii), commitment (iii), and relationship satisfaction (iv) are positively associated with using partner reports in the decision to warn. Trust is positively associated with using partner reports in the decision to warn for both emergency managers and broadcast meteorologists; H3(i) was supported. Control mutuality is positively associated with using partner reports for broadcast meteorologists but not for emergency managers; H3(ii) was partially supported. Commitment is not associated with using partner reports in the decision to warn for either emergency managers or broadcast meteorologists; H3(iii) was not supported. Satisfaction is not associated with using partner reports in the decision to warn for either partner group; H3(iv) was not supported. Interestingly, and not hypothesized, effective IDSS communication is positively associated with using partner reports in the decision to warn for broadcast meteorologists but not for emergency managers.
H4 predicted that distrust is negatively associated with using partner reports in the decision to warn. Our data do not support this hypothesis for either partner group, meaning that distrust with emergency managers or broadcast meteorologists was not associated with forecasters using their reports in their decisions to warn.
5. Discussion
Past research found that forecasters’ motivation to serve the mission of the NWS is “off the charts,” but full implementation of the weather-ready nation vision requires an evolution from a purely science driven approach to a customer-service approach to forecasting (National Weather Service 2017; Uccellini and Ten Hoeve 2019, p. 9). IDSS is the primary vehicle to support this evolution, and our study serves as one of the first quantitative evaluations of how forecasters evaluate IDSS through a relational lens. A major contribution of our study is to provide a practical tool for Weather Forecast Offices (WFOs) to assess the status of their IDSS programs, including the status of their relationships with core partners [see our online supplement (https://drum.lib.umd.edu/handle/1903/26675)].
Relationships are situational and change over time (Hon and Grunig 1999; Shen 2017). Therefore, ongoing assessment is imperative for identifying areas for improvement. This is the first study to adapt and validate the relational dimensions from the communication literature to the forecast context. This also is the first study to develop and validate a new IDSS instrument. These two contributions provide a user-friendly tool that can be employed during integrated warning team (IWT) meetings, training, and in other contexts to provide immediate relational quality feedback. A workforce assessment concluded that there is uncertainty on how to evaluate IDSS effectiveness and partner types across the NWS (National Weather Service 2017), and our study begins to address this need. Below we discuss the additional implications of our research.
a. How forecasters and management characterize IDSS
Our first research question investigated how forecasters and managers characterize the importance of IDSS in their home offices. Respondents reported a need for high levels of effective communication in order for IDSS to be successful. Effective communication includes forecasters making personal connections, having face time with core partners, and proactively communicating with core partners. These findings support the customer-service vision of the evolved NWS (Uccellini and Ten Hoeve 2019) while recognizing the importance of science in the warning program. The findings further point to specific communicative actions WFOs can encourage to strengthen their IDSS, which may inform staffing strategies. For example, personal connections and face time can occur at out-of-the-office events like cohosting booths with emergency managers at community events and other social events. Proactive communication should include embedding forecasters with their partners in the lead-up to potential severe weather events. After these activities, forecasters can use our instrument to assess whether their relationships have improved.
These findings also point to how core partners can strengthen their relationships with the NWS. For example, broadcast meteorologists can invite forecasters to tour their offices to facilitate face time and personal connections. Emergency managers can host joint training exercises with forecasters and broadcast meteorologists as part of IWT activities.
Findings revealed three priority areas for training to help support IDSS. Forecasters expressed a need for more training in risk communication, how to build relationships with core partners, and social media best practices. Prior research has identified the need for more training in risk communication and social media (e.g., Liu et al. 2020a; Sherman-Morris et al. 2018), and this study adds the need for relationship building training. Our findings also add that training needs to help offices prioritize communication channels that allow them to reach multiple offices and partners simultaneously.
b. The role of relationships in IDSS
Our second research question and four hypotheses investigated how participants perceive their relationships with core partners along the five relational dimensions identified in the communication literature. These dimensions are trust, distrust, control mutuality, commitment, and relationship satisfaction (Hon and Grunig 1999; Shen 2017; Yang and Taylor 2015).
Scholars have called for more IDSS research from a communication perspective (Lazo et al. 2020), reflecting an appreciation for the importance of effective risk communication in the weather enterprise (National Weather Service 2019). A growing body of literature identifies effective message strategies for communicating severe weather to various publics (e.g., Ash et al. 2014; Liu et al. 2020a; Olson et al. 2019). However, as Grunig (1993) noted, relationships provide the foundation for organizational communication. Without strong positive relationships, messages are unlikely to succeed (Grunig 1993).
Our findings indicate that effective IDSS communication is positively associated with trust, control mutuality, commitment, and satisfaction for both emergency managers and broadcast meteorologists. Several of these correlations only approached statistical significance but exceeded Cohen’s (1988) benchmark for small but meaningful effects, so these analyses should be replicated in future research. Notwithstanding, these findings point to the relational drivers of effective IDSS communication that WFOs can focus on to improve and/or maintain partner relationships. For example, the findings revealed that forecasters perceive that control mutuality (i.e., the extent to which relational partners influence each other) is important in their relationships with broadcast meteorologists. This complements previous qualitative research (Demuth et al. 2012) suggesting that broadcast meteorologists need to feel fully included in warning programs rather than excluded (i.e., control mutuality). Broadcast meteorologists and emergency managers both need to be committed to the warning program for effective IDSS to occur, pointing to the importance of shared risk communication goals.
Our results further indicate that trust with emergency managers and broadcast meteorologists is positively associated with using reports from those partners in forecasters’ decisions to warn about tornadoes. Control mutuality is positively associated with using reports from broadcast meteorologists in forecasters’ decisions to warn. These findings point to the importance of establishing strong partner relationships because relational dimensions are positively associated with using partners’ reports in decisions to warn. As prior research found, forecasters need external data to complement their models when deciding whether to issue a severe weather warning (Daipha 2015; Klockow-McClain et al. 2020). Without strong partnerships, our findings suggest, forecasters are less likely to receive and use quality external data.
c. The unique relationship with trained spotters
Our analysis finds that forecasters have different relationships with trained spotters than with emergency managers and broadcast meteorologists. Indeed, the relationships with trained spotters are so different that they cannot be reliably measured using the same relationship dimensions as for emergency managers and broadcast meteorologists.
Past research indicates that some forecasters distrust the quality of data provided by trained spotters (Klenow and Reibestein 2014; Krennert et al. 2018). Some forecasters even question the value of trained spotters in the weather enterprise (Liu et al. 2020b). This may be because forecasters view emergency managers and broadcast meteorologists as having more expertise in weather science than lay audiences. Conversely, forecasters may view trained spotters as lay audiences with lower expertise in weather science than broadcast meteorologists and emergency managers. Prior research supports taking different approaches to communicating risk with expert and lay audiences (Slovic et al. 1995). Future research is needed to confirm whether emergency managers and broadcast meteorologists have different levels of expertise and whether both groups’ meteorology knowledge is higher compared to lay audiences’ knowledge. If so, research is needed to uncover how forecasters should communicate risks differently to experts compared to lay audiences. Different risk communication strategies may also be needed for various expert groups. For example, some emergency managers may have lower weather and climate expertise than some broadcast meteorologists.
Our findings further suggest a need to better understand how to build relationships with trained spotters prior to partnering with them to communicate risk. Without strong relationships, communication is unlikely to be effective (Grunig 1993). Therefore, future research is needed to understand how forecasters can optimally build and maintain relationships with trained spotters, including the unique dimensions of these relationships.
6. Limitations
Like all research, this study has limitations. First, our study design is cross-sectional, which precludes us from examining how these perceptions change over time. Next, we have a relatively small sample size, despite extensive efforts to partner with NWS leadership in the southern and central regions to distribute our survey. As our partners indicated, there may be social science research fatigue among some forecasters. Additionally, NOAA regulations prohibited the team from offering an incentive to participate in the survey (e.g., lunch); incentives are a common method for increasing survey sample sizes (Nardi 2018). Ultimately, our relatively small sample size reduced the power of our statistical analyses. Indeed, several correlations were in the small to moderate range based on Cohen’s (1988) established benchmarks (i.e., 0.10–0.30) but did not reach the traditional statistical benchmark (i.e., p < 0.05). Additional efforts are needed to incentivize and energize forecasters to participate in survey research, including forecasters from all regions of the country. Last, our partner-use measures were single items, which did not allow us to assess their reliability. However, they are high in face validity, which was confirmed by our research partners.
7. Conclusions
Relationships are at the heart of effective organizational communication, including for the NWS. This study is the first to develop and validate measures of effective IDSS communication, including core relational dimensions for successful partnerships. Future research can apply these measures to research with emergency managers and broadcast meteorologists to identify how these partners view their relationships with NWS forecasters. For the NWS to truly evolve into an agency focused on customer service, it must couple strong relationships with strong threat messaging.
Acknowledgments
We thank the National Oceanic and Atmospheric Administration (NOAA) for funding this research through the VORTEX-SE Program (Award NA17OAR4590194). The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NOAA. We also acknowledge the following National Weather Service (NWS) partners for their generous support in developing the study’s survey instrument and in helping us to recruit participants: John J. Brost, Science and Training Branch Chief at NWS Southern Regional Headquarters; John DeBlock, Warning Coordination Meteorologist at NWS Birmingham; Daniel Hawblitzel, Science and Operations Officer at NWS Nashville; Krissy Hurley, Warning Coordination Meteorologist at NWS Nashville; Randall Graham, Deputy Chief, Science and Technology Integration, NWS Central Regional Headquarters; Kevin Laws, Science and Operations Officer at NWS Birmingham; David Nadler, Meteorologist in Charge, NWS Peachtree; Steve Nelson, Science and Operations Officer at NWS Peachtree; Gregory Patrick, Chief of the Science and Technology Service Division, NWS Southern Regional Headquarters; Keith Stellman, Meteorologist in Charge, NWS Peachtree; and Larry Vannozzi, Meteorologist in Charge at NWS Nashville.
Data availability statement
This study received approval from the University of Maryland Institutional Review Board (IRB) to collect human subjects’ data. In line with that approval, only researchers identified in the IRB package have access to the survey data collected. However, we have made the data collection instrument available as an online supplement at the Digital Repository of the University of Maryland (DRUM; https://drum.lib.umd.edu/handle/1903/26675) so that forecasters and managers can use our instrument to assess the status of their impact-based decision support services (IDSS), including the strength of their relationships with core partners. This instrument can also be used by scientists and others for future research.
REFERENCES
Allen, M., S. Titsworth, and S. K. Hunt, 2009: Quantitative Research in Communication. Sage, 256 pp.
Anthony, K. E., K. R. Cowden-Hodgson, H. D. O’Hair, R. L. Heath, and G. M. Eosco, 2014: Complexities in communication and collaboration in the hurricane warning system. Commun. Stud., 65, 468–483, https://doi.org/10.1080/10510974.2014.957785.
Ash, K. D., R. L. Schumann, and G. C. Bowser, 2014: Tornado warning trade-offs: Evaluating choices for visually communicating risk. Wea. Climate Soc., 6, 104–118, https://doi.org/10.1175/WCAS-D-13-00021.1.
Baumgart, L. A., E. J. Bass, B. Philips, and K. Kloesel, 2008: Emergency management decision making during severe weather. Wea. Forecasting, 23, 1268–1279, https://doi.org/10.1175/2008WAF2007092.1.
Bostrom, A., R. E. Morss, J. K. Lazo, J. K. Demuth, H. Lazrus, and R. Hudson, 2016: A mental models study of hurricane forecast and warning production, communication, and decision-making. Wea. Climate Soc., 8, 111–129, https://doi.org/10.1175/WCAS-D-15-0033.1.
Cavanaugh, D., M. Huffman, J. Dunn, and M. Fox, 2016: Connecting the dots: A communication model of the north Texas integrated warning team during the 15 May 2013 tornado outbreak. Wea. Climate Soc., 8, 233–245, https://doi.org/10.1175/WCAS-D-15-0047.1.
Cheng, Y., 2018: Looking back, moving forward: A review and reflection of the organization-public relationship (OPR) research. Public Relat. Rev., 44, 120–130, https://doi.org/10.1016/j.pubrev.2017.10.003.
Cho, J., 2006: The mechanism of trust and distrust formation and their relational outcomes. J. Retailing, 82, 25–35, https://doi.org/10.1016/j.jretai.2005.11.002.
Cohen, J., 1988: Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Lawrence Erlbaum Associates, 599 pp.
Cumming, G., and S. Finch, 2005: Inference by eye: Confidence intervals and how to read pictures of data. Amer. Psychol., 60, 170–180, https://doi.org/10.1037/0003-066X.60.2.170.
Daipha, P., 2015: Masters of Uncertainty: Weather Forecasters and the Quest for Ground Truth. University of Chicago Press, 271 pp.
Demuth, J. L., and Coauthors, 2020: Recommendations for developing useful and usable convection-allowing model ensemble information for NWS forecasters. Wea. Forecasting, 35, 1381–1406, https://doi.org/10.1175/WAF-D-19-0108.1.
Demuth, J. L., R. E. Morss, B. H. Morrow, and J. K. Lazo, 2012: Creation and communication of hurricane risk information. Bull. Amer. Meteor. Soc., 93, 1133–1145, https://doi.org/10.1175/BAMS-D-11-00150.1.
Feldman, D. L., and H. M. Ingram, 2009: Making science useful for decision makers: Climate forecasts, water management, and knowledge networks. Wea. Climate Soc., 1, 9–21, https://doi.org/10.1175/2009WCAS1007.1.
Grunig, J. E., 1993: Image and substance: From symbolic to behavioral relationships. Public Relat. Rev., 19, 121–139, https://doi.org/10.1016/0363-8111(93)90003-U.
Grunig, L. A., J. E. Grunig, and W. P. Ehling, 1992: What is an effective organization? Excellence in Public Relations and Communication Management: Contributions to Effective Organizations, J. E. Grunig, Ed., Lawrence Erlbaum Associates, Inc., 65–89.
Grunig, L. A., J. E. Grunig, and D. M. Dozier, 2002: Excellent Public Relations and Effective Organizations: A Study of Communication Management in Three Countries. Lawrence Erlbaum, 668 pp.
Hayes, A. F., and J. J. Coutts, 2020: Use omega rather than Cronbach’s alpha for estimating reliability. Commun. Methods Meas., 14 (1), 1–24, https://doi.org/10.1080/19312458.2020.1718629.
Heath, R. L., 2006: Onward into more fog: Thoughts on public relations’ research directions. J. Public Relat. Res., 18, 93–114, https://doi.org/10.1207/s1532754xjprr1802_2.
Heath, R. L., 2013: The journey to understand and champion OPR takes many roads, some not yet well traveled. Public Relat. Rev., 39, 426–431, https://doi.org/10.1016/j.pubrev.2013.05.002.
Hon, L. C., and J. E. Grunig, 1999: Guidelines for measuring relationships in public relations. Institute for Public Relations Doc., 40 pp., https://www.instituteforpr.org/wp-content/uploads/Guidelines_Measuring_Relationships.pdf.
Hoss, F., and P. Fischbeck, 2016: Increasing the value of uncertain weather and river forecasts for emergency managers. Bull. Amer. Meteor. Soc., 97, 85–97, https://doi.org/10.1175/BAMS-D-13-00275.1.
Hosterman, H. R., J. K. Lazo, J. M. Sprague-Hilderbrand, and J. E. Adkins, 2019: Using the National Weather Service impact-based decision support services to prepare for extreme winter storms. J. Emerg. Manage., 17, 455–467, https://doi.org/10.5055/jem.2019.0439.
Huang, Y.-H., 2001: OPRA: Cross-cultural, multiple-item scale for measuring organization-public relationships. J. Public Relat. Res., 13, 61–90, https://doi.org/10.1207/S1532754XJPRR1301_4.
Hung-Baesecke, C.-J. F., and Y.-R. R. Chen, 2020: Explicating trust and its relation to dialogue at a time of divided societies. Public Relat. Rev., 46, 101890, https://doi.org/10.1016/j.pubrev.2020.101890.
Ivankova, N. V., 2015: Mixed Methods Applications in Action Research. Sage, 446 pp.
Kazoleas, D., and A. Wright, 2001: Improving corporate and organizational communication: A new look at developing and implementing the communication audit. Handbook of Public Relations, R. L. Heath, Ed., Sage, 471–478.
Kim, M., and M. Cho, 2019: Examining the role of sense of community: Linking local government public relationships and community-building. Public Relat. Rev., 45, 297–306, https://doi.org/10.1016/j.pubrev.2019.02.002.
Klenow, D. J., and J. L. Reibestein, 2014: Eyes of the sky: Situating the role of storm spotters in the warning and response network. Homeland Secur. Emerg. Manage., 11, 437–458, https://doi.org/10.1515/jhsem-2014-0011.
Klockow-McClain, K. E., R. A. McPherson, and R. P. Thomas, 2020: Cartographic design for improved decision making: Trade-offs in uncertainty visualization for tornado threats. Ann. Assoc. Amer. Geogr., 110, 314–333, https://doi.org/10.1080/24694452.2019.1602467.
Krennert, T., K. Rainer, G. Pistotnik, A. M. Holzer, F. Zeiler, and M. Stampfl, 2018: Trusted spotter network in Austria: A new standard to utilize crowdsourced weather and impact observations. Adv. Sci. Res., 15, 77–80, https://doi.org/10.5194/asr-15-77-2018.
Lazo, J. K., H. R. Hosterman, J. M. Sprague-Hilderbrand, and J. E. Adkins, 2020: Impact-based decision support services and the socioeconomic impacts of winter storms. Bull. Amer. Meteor. Soc., 101, E626–E639, https://doi.org/10.1175/BAMS-D-18-0153.1.
Ledingham, J. A., 2003: Explicating relationship management as a general theory of public relations. J. Public Relat. Res., 15, 181–198, https://doi.org/10.1207/S1532754XJPRR1502_4.
Ledingham, J. A., and S. D. Bruning, 1998: Relationship management and public relations: Dimensions of an organization–public relationship. Public Relat. Rev., 24, 55–65, https://doi.org/10.1016/S0363-8111(98)80020-9.
Ledingham, J. A., S. D. Bruning, and L. J. Wilson, 1999: Time as an indicator of the perceptions and behavior of members of a key public: Monitoring and predicting organization-public relationships. J. Public Relat. Res., 11, 167–183, https://doi.org/10.1207/s1532754xjprr1102_04.
Liu, B. F., and A. M. Mehta, 2021: From the periphery and toward a centralized model for trust in government risk and disaster communication. J. Risk Res., https://doi.org/10.1080/13669877.2020.1773516, in press.
Liu, B. F., A. Atwell Seate, I. Iles, and E. Herovic, 2020a: Tornado warning: Understanding the National Weather Service’s communication strategies. Public Relat. Rev., 46, 101879, https://doi.org/10.1016/j.pubrev.2019.101879.
Liu, B. F., A. Atwell Seate, I. Iles, and E. Herovic, 2020b: Eyes of the storm: How citizen scientists contribute to government forecasting and risk communication. Wea. Climate Soc., 12, 263–277, https://doi.org/10.1175/WCAS-D-19-0131.1.
McCarthy, D. H., 2002: The role of ground-truth reports in the warning decision-making process during the 3 May 1999 Oklahoma tornado outbreak. Wea. Forecasting, 17, 647–649, https://doi.org/10.1175/1520-0434(2002)017<0647:TROGTR>2.0.CO;2.
Mehta, V. M., C. L. Knutson, N. J. Rosenberg, J. R. Olsen, N. A. Wall, T. K. Bernadt, and M. J. Hayes, 2013: Decadal climate information needs of stakeholders for decision support in water and agriculture production sectors: A case study in the Missouri River basin. Wea. Climate Soc., 5, 27–42, https://doi.org/10.1175/WCAS-D-11-00063.1.
Morgan, R. M., and S. D. Hunt, 1994: The commitment–trust theory of relationship marketing. J. Mark., 58, 20–38, https://doi.org/10.1177/002224299405800302.
Nardi, P. M., 2018: Doing Survey Research: A Guide to Quantitative Methods. 4th ed. Routledge, 272 pp.
National Academies of Sciences, Engineering, and Medicine, 2018: Integrating Social and Behavioral Sciences within the Weather Enterprise. National Academies Press, 198 pp., https://www.nap.edu/catalog/24865/integrating-social-and-behavioral-sciences-within-the-weather-enterprise.
National Weather Service, 2017: Operations and workforce analysis catalog. NOAA Doc., 123 pp., https://www.weather.gov/media/nws/OWA_Catalog_09072017.pdf.
National Weather Service, 2019: Building a Weather-Ready Nation: 2019–2022 strategic plan. NOAA Doc., 23 pp., https://www.weather.gov/media/wrn/NWS_Weather-Ready-Nation_Strategic_Plan_2019-2022.pdf.
National Weather Service, 2020: NWS SKYWARN storm spotter program. NOAA, https://www.weather.gov/SKYWARN.
Olson, M. K., J. Sutton, S. C. Vos, R. Prestley, S. L. Renshaw, and C. T. Butts, 2019: Build community before the storm: The National Weather Service’s social media engagement. J. Contingencies Crisis Manage., 27, 359–373, https://doi.org/10.1111/1468-5973.12267.
Peters, R. G., V. T. Covello, and D. B. McCallum, 1997: The determinants of trust and credibility in environmental risk communication: An empirical study. Risk Anal., 17, 43–54, https://doi.org/10.1111/j.1539-6924.1997.tb00842.x.
Shen, H., 2017: Refining organization-public relationship quality measurement in student and employee samples. J. Mass Commun. Quart., 94, 994–1010, https://doi.org/10.1177/1077699016674186.
Sherman-Morris, K., H. Lussenden, A. Kent, and C. MacDonald, 2018: Perceptions of social science among NWS warning coordination meteorologists. Wea. Climate Soc., 10, 597–612, https://doi.org/10.1175/WCAS-D-17-0079.1.
Singleton, R. A., Jr., and B. C. Straits, 2005: Approaches to Social Research. 4th ed. Oxford University Press, 640 pp.
Slovic, P., T. Malmfors, D. Krewski, C. K. Mertz, N. Neil, and S. Bartlett, 1995: Intuitive toxicology. II. Expert and lay judgments of chemical risks in Canada. Risk Anal., 15, 661–675, https://doi.org/10.1111/j.1539-6924.1995.tb01338.x.
Tuovinen, J.-P., H. Hohti, and D. M. Schultz, 2020: Enlarging the severe hail database in Finland by using a radar-based hail detection algorithm and email surveys to limit underreporting and population biases. Wea. Forecasting, 35, 711–721, https://doi.org/10.1175/WAF-D-19-0142.1.
Uccellini, L. W., and J. E. Ten Hoeve, 2019: Evolving the National Weather Service to build a Weather-Ready Nation: Connecting observations, forecasts, and warnings to decision-makers through impact-based decision support services. Bull. Amer. Meteor. Soc., 100, 1923–1942, https://doi.org/10.1175/BAMS-D-18-0159.1.
Walters, R. D., and D. S. Bortree, 2012: Advancing relationship management theory: Mapping the continuum of relationship types. Public Relat. Rev., 38, 123–127, https://doi.org/10.1016/j.pubrev.2011.08.018.
Wernstedt, K., P. S. Roberts, J. Arvai, and K. Redmond, 2019: How emergency managers (mis?)interpret forecasts. Disasters, 43, 88–109, https://doi.org/10.1111/disa.12293.
Williamson, R. A., H. Hertzfeld, J. Cordes, and J. Logsdon, 2002: The socioeconomic benefits of earth science and applications research: Reducing the risks and costs of natural disasters in the USA. Space Policy, 18, 57–65, https://doi.org/10.1016/S0265-9646(01)00057-1.
Whyte, W. F., 1991: Participatory Action Research. Sage, 247 pp.
Yang, A., and M. Taylor, 2015: Looking over, looking out, and moving forward: Positioning public relations in theorizing organizational network ecologies. Commun. Theory, 25, 91–115, https://doi.org/10.1111/comt.12049.
Yang, S.-U., M. Kang, and H. Cha, 2015: A study on dialogic communication, trust, and distrust: Testing a scale for measuring organization–public dialogic communication (ODC). J. Public Relat. Res., 27, 175–192, https://doi.org/10.1080/1062726X.2015.1007998.