1. Introduction
a. Importance of climate change and the public’s sentiments/emotions about the topic
Climate change has consistently been an important issue of concern to the public. Meanwhile, public opinion determines, to some extent, how well electoral-based countries make their policies. However, traditional polling methods do not take advantage of the growing popularity of social media and the climate change discussions taking place there (Kirilenko and Stepchenkova 2014). The new generation of Americans grew up in the age of social media and has developed the habit of expressing opinions there. Social media is therefore a good research data source for understanding people’s honest opinions on climate change. Research on tweets about social issues also has a long history. Mandel et al. (2012) discussed the public response to Hurricane Irene and started a trend of disaster-response research on tweets. Later, public opinion at different stages of disaster was analyzed through tweets, including preparedness, emergency response, impact, and recovery (Huang and Xiao 2015). Twitter promotes the potential for social and political action to address climate change. Its diverse user community allows a variety of participants to initiate these conversations, including nongovernmental organizations, politicians, celebrities, and grassroots movements (Fownes et al. 2018). Organizing social movements has become easier with Twitter and other social media, from raising awareness to coordinating participation and interaction (Carlson and Strandberg 2005).
Twitter data can be analyzed in terms of “media ecology.” Retweeting a tweet and commenting on it are common in the Twitter ecology. The main sources shared in tweets are well studied, such as the four categories of individuals and organizations mentioned in the previous paragraph. Most of the sources cited in climate change tweets come from professional news organizations, and the content is mainly news about climate change and relevant scientific findings (Veltri and Atanasova 2017). In addition, researchers studying the hashtags of protests found that tweets form a complex network of protests, deeply embedded in the surrounding protest ecology and changing over time (Segerberg and Bennett 2011). This suggests that Twitter has enough influence on contentious politics to be uniquely valuable in the study of present-day society.
b. Sentiment and emotion analysis for understanding of public opinion
Sentiment analysis is a systematic research method for analyzing the sentiment of a given text. In this process, algorithms and computer technologies are used to systematically identify, extract, and classify subjective information from a text, including opinions, attitudes, and emotions. Emotion analysis is a more specialized subcategory, usually included within sentiment analysis (Lei and Liu 2021). Sentiment analysis proper is based on detecting and extracting subjective polarity between two opposing poles (Taboada et al. 2011), whereas emotion analysis examines a variety of specific emotions, such as anger, anxiety, disgust, fear, joy, and sadness (Plutchik 1962, 2001).
Sentiment analysis identifies the emotions and opinions expressed in a text (Medhat et al. 2014). Different feature sets have been applied in investigations of sentiment analysis on Twitter (Koto and Adriani 2015). The international academic community has applied the method to various topics, such as medical crowdfunding (Durand et al. 2018). As a high-level political indicator of public opinion, sentiment analysis also plays an essential role in election prediction (Singh et al. 2020). It can likewise provide insight into the types of discussion in the climate change Twitter corpus.
c. Summary of existing analysis on climate change
On climate change, Twitter seems to be a better source of public opinion for analysis than scientific literature is (Cody et al. 2015). Twitter is able to capture immediate reactions to new or unexpected events (scientific information, politics, and extreme weather), often with geographic specificity (Fownes et al. 2018). Van der Linden (2017) discusses the various effects of climate change risk perceptions and the impact of cognitive, emotional, social, and cultural factors. It is a precursor to studying policies and social media, reflecting voters’ opinions on climate change topics. Furthermore, some research studies examine Twitter’s medium-specific characteristics, such as “@mentions,” hashtags, retweets, or the geographic location of tweets (Pearce et al. 2019). Dahal et al. (2019) analyzed the geographical distribution of different public opinions using geotags of tweets, including time stamps and geographical coordinates.
Researchers can measure the intensity of collective emotions on various climate change topics by analyzing the content of tweets (Fownes et al. 2018). Specifically, emotions expressed in tweets may play a significant role in spreading information and building solidarity about climate change (Fownes et al. 2018; Medhat et al. 2014; Koto and Adriani 2015). Different study objectives have allowed researchers to analyze public opinion on climate change at different levels (keywords, opinion groups, countries, etc.). One study found that tweets containing the word climate are typically more negative than the average tweet (Cody et al. 2015).
Through corpus analysis of diverse opinion groups, it was noted that although climate change deniers posted fewer tweets, their words were filled with negative emotions, with anger and sarcasm taking overwhelming prominence. In addition, sentiment analysis of a dataset of climate change tweets shows how sentiment varies across the subtopics of climate change. In response to energy policies, climate change opinion groups in Spain and the United Kingdom expressed different emotions (Loureiro and Alló 2020). The public in the United Kingdom is satisfied with its energy policies and has a more positive discourse, evoking a feeling of anticipation rather than the fear seen in Spain. Selecting an appropriate range of subjects is therefore necessary for “big data” emotion analysis: too large a scale obscures the finer features of the corpus that should be interpreted and discussed.
Studying tweets has become a common practice in the field, but the method has drawn criticism. One main criticism is that active users carry greater weight in such studies because their opinions are more vocal (Leetaru et al. 2013). In addition, tweets may be posted by “bots” or official accounts that do not accurately reflect voters’ views (Kollanyi et al. 2016). Twitter may also connect users with similar views through hashtags, thereby amplifying group views rather than individual ones (Pearce et al. 2014; Ruths and Pfeffer 2014). For a more convincing study, manual filtering is therefore essential: tweets suspected of coming from bots or official accounts should be removed. Overweighting active users’ opinions is harder to avoid; nevertheless, since those users also tend to be more willing to participate in offline discussions, such studies remain reasonably representative. In addition, this paper selects a particular period in which politicians’ discourse stimulated discussion among otherwise inactive users.
Even though tweets on climate change have been discussed, these discussions have not yet been well correlated with attitudes about climate change as captured in those discussions (Fownes et al. 2018). Thus, this paper uses sentiment and emotion analysis to present a comprehensive view of attitudes on climate change from both sides of the issue. It uses R to analyze the attitudes of opinion holders in a manually preprocessed tweet corpus from a period when previously concealed opinions emerged. In this paper, we address two main questions:
- How are sentiments and emotions of climate change discourse distributed?
- How are emotions of different opinions associated with each other?
2. Method
a. Study period
The dataset comprises tweets about climate change collected between 27 April 2015 and 21 February 2018. During this period, the right-wing discourse returned to the social media spotlight, as compared with the previous trend of perceived “political correctness” formed by left-wing positions on climate change issues (Jang and Hart 2015). There have been debates on Twitter about whether climate change is real and whether it is the result of human activities (Fownes et al. 2018). Studying the social media discourse of this period is a crucial component of the research process so as to avoid reliance only on opinions expressed in polls commissioned by mainstream sources. As a result of this political event, the actual public opinion of right-wing voters has been revealed, consideration of which can lead to more effective suggestions with regard to environmental concerns.
b. Dataset of tweets used
The data were collected by C. Bauch of the University of Waterloo with the assistance of the Canadian Foundation for Innovation (in 2019). In total, there were 43 943 tweets. Three reviewers independently tagged each tweet, and only tweets on which all three reviewers agreed are included in the dataset (the rest were excluded). Each tweet is labeled with one of the following categories:
- 2 (news): the tweet links to factual news about climate change,
- 1 (support): the tweet supports the “belief” in climate change,
- 0 (neutral): the tweet neither supports nor refutes the belief in climate change, or
- −1 (oppose): the tweet does not believe in climate change.
The data were downloaded from Kaggle as a comma-separated values (csv) file titled twitter_sentiment_analysis.csv. This file contains the manually annotated tweets: column 1 labels the opinion group to which the tweet belongs, column 2 holds the content of the tweet, and column 3 holds the identifier (ID) of each tweet.
The news media are an important source of information for tweets. We observe that a large number of tweets quote news in order to develop a commentary, or directly retweet news content or headlines to express their views. It is therefore difficult to display the full ecology of tweets without including news. Moreover, as carefully processed discourse, news phrases its wording deliberately, which increases its analytical value relative to the tweets of ordinary (non–news media) people. On the other hand, news has a different status and position from “the people,” so it would be unfair to analyze them together. As a result, we distinguish news in the sentiment and emotion analysis for comparison and reference.
c. Data annotations
After importing the corpus with the readr package (Wickham et al. 2022), the Twitter dataset underwent a filtering process to clean the collected data. We used the tm_map function of the tm package (Feinerer et al. 2008) to convert all text to lowercase and to remove punctuation, numbers, English stop words, and redundant whitespace.
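As a sketch, the import-and-clean pipeline described above could look as follows in R; the file name matches the dataset described later, but the column name message is an illustrative assumption, not taken from the dataset documentation.

```r
# Illustrative sketch of the cleaning pipeline (column name assumed)
library(readr)
library(tm)

tweets <- read_csv("twitter_sentiment_analysis.csv")
corpus <- VCorpus(VectorSource(tweets$message))

corpus <- tm_map(corpus, content_transformer(tolower))  # lowercase all text
corpus <- tm_map(corpus, removePunctuation)             # remove punctuation
corpus <- tm_map(corpus, removeNumbers)                 # remove numbers
corpus <- tm_map(corpus, removeWords, stopwords("en"))  # remove English stop words
corpus <- tm_map(corpus, stripWhitespace)               # collapse redundant whitespace
```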
We performed word frequency analysis and obtained relevant topic words based on the results for further analysis. These theme words could help us to analyze the objects and reasons for the subsequent sentiment analysis. The following sentiment and emotion analysis used the sentimentr package (Rinker 2017), and the results were merged and saved as a csv file.
1) Sentiment annotations
The sentiment analysis made use of the sentiment_by function in the sentimentr package, which computes the average sentiment score of a given text. The function returns a data structure with four columns: element_id, the ID of the given text; sentence_id, the ID of the sentence, here equal to element_id; word_count, the number of words in the sentence; and ave_sentiment, the sentiment score of the sentence, as shown in Fig. 1. The Jockers lexicon from the syuzhet package is the default sentiment lexicon in the current version of this function.
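A minimal sketch of this step, assuming the cleaned tweet texts are stored in a character vector clean_text (an illustrative name):

```r
library(sentimentr)

# Average sentiment per tweet; ave_sentiment is the column used in this study
sent_scores <- sentiment_by(get_sentences(clean_text))
head(sent_scores)
```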
2) Emotion annotations
We applied the emotion_by function to the tweets. Emotion_by analyzes the emotions of a text by grouping. Its return value, in addition to element_id, sentence_id, and word_count (consistent with sentiment_by), has three additional columns: emotion_type, the type of emotion from the lexicon; emotion_count, the number of emotion words of that emotion type; and ave_emotion, the score of emotion words of that emotion type, as shown in Fig. 2. The default lexicon for this function is hash_nrc_emotions.
Given the characteristics of sentiment-computation tools, sentimentr aims to balance accuracy and efficiency. A vital issue in sentiment computation is the handling of negation, at which sentimentr performs better than comparable packages (Naldi 2019). In the emotion series of functions, the scope of negation detection can be set. In our pilot testing, the default negation scope was so broad that it was oversensitive and made it challenging to distinguish an emotion from its negation. In Table 1, no should operate only within the first clause. However, it appears four words ahead of the disgust signal word sham, causing the function to identify this tweet as negated. Therefore, in this research, we set the negation-detection scope to the two preceding and the two following words.
Example of misjudgment.
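Under these choices, the emotion annotation might be sketched as follows; n.before and n.after set the negation window around each emotion word, and clean_text is again an assumed name for the preprocessed tweet texts.

```r
library(sentimentr)

# Emotion scores per tweet and emotion type, with a narrowed negation window
emo_scores <- emotion_by(get_sentences(clean_text),
                         n.before = 2,  # negators at most two words before
                         n.after  = 2)  # ...or two words after the emotion word
head(emo_scores)  # includes emotion_type, emotion_count, ave_emotion
```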
3. Results
a. Keywords and important topics
The high-frequency topic words are divided into nouns, verbs, and adjectives (Fig. 3). The nouns describe not only the subject but also the sources of information, such as Trump, scientist, and science, whereas the verbs believe, say (says, said), and think and the adjective real show that, at least in the English-language media, the credibility of global warming and climate change is still contested. Global warming is a representative term that can be read as climate change in some contexts, whereas climate change is used in most contexts. Linking climate change to global warming is considered by some to be a misconception that fuels opponents’ skepticism about the issue (Fownes et al. 2018). In everyday usage, climate change is associated with climate anomalies, as other keywords such as weather, environment, and new reflect.
Other words such as future, effect, help, and threat reflect the public’s concern about the future of the global environment and daily life. Interestingly, this also seems to be a political issue, as president, realDonaldTrump (Trump’s Twitter account), Obama, and China appear in the high-frequency word list. Such focus on political topics indicates a high public concern.
b. Sentiments and their distributions (or variations) across opinion groups
Table 2 reflects the sentiment of the discourse of both supporters and opponents, with supporters acknowledging the scientific consensus that human activities contribute to climate change while opponents do not. In the table, most tweets show polarity of sentiment, with only a few discourses presented as neutral in the sentiment analysis. The chi-squared test yields a p value of 1.255 365 × 10⁻¹⁴⁴, far below 0.05, indicating a significant difference.
Sentiment tokens across different opinions groups.
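The chi-squared test can be sketched as below; the counts are placeholders for illustration, not the study’s actual figures.

```r
# Placeholder contingency table of sentiment counts by opinion group
counts <- matrix(c(2000, 1500, 300,   # supporters: positive, negative, neutral
                    400,  900, 100),  # opponents:  positive, negative, neutral
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("support", "oppose"),
                                 c("positive", "negative", "neutral")))
chisq.test(counts)  # a very small p value indicates the distributions differ
```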
Although fewer in number than supporters, opponents had a higher proportion of negative expression (Fig. 4), whereas slightly less than one-half of supporters’ discourse fell toward negative sentiment. Supporters show a positive total sentiment, but the value of 1685.616 828 is relatively low compared with the neutral group, meaning that, within this group, positive and negative sentiments are roughly comparable. The original corpus also contained news; similarly, more than one-half of the news group’s expressions were negative.
On the other hand, neutrals did not show much sentiment preference, and there was no significant difference in the percentages of their positive and negative expressions (Fig. 4). Interestingly, the neutral texts present higher positive sentiment than supporters do, whereas the news exhibits high negative sentiment, in some ways comparable to that of the opponents.
Logistic regression models were developed to investigate the association between sentiment and opinion groups. The number of tokens with each sentiment served as the dependent variable and the group as the independent variable. If a tweet’s sentiment is positive, the positive column is coded as 1 and the other columns as 0, as shown in Fig. 5.
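A hedged sketch of this coding and model fit, with illustrative variable names (sent_scores and the label column are assumptions standing in for the study’s actual objects):

```r
# Binary indicator: 1 if the tweet's average sentiment is positive, else 0
df <- data.frame(positive = as.integer(sent_scores$ave_sentiment > 0),
                 group    = factor(tweets$label))  # the -1/0/1/2 opinion codes
model1 <- glm(positive ~ group, data = df, family = binomial)
summary(model1)
```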
Figure 6 shows that both positive and negative sentiments correlate with the opponents’ discourse, with positive sentiment presenting a weaker correlation. The supporters’ discourse did not present a significant correlation with a particular sentiment, probably because positive and negative sentiments were approximately equal in number among supporters.
The amount of tweeting differs across opinion groups, so we also used the mean sentiment value of each tweet as the dependent variable in a logistic regression, as in the example in Fig. 7. The negative column is filled with the absolute value of the sentence’s sentiment. Neutral discourses, marked as zero, are discarded because they can be neither counted nor meaningfully included.
We found in Fig. 8 that the words of supporters were associated with positive sentiment, whereas the discourses of opponents were associated with negative sentiment. This result roughly matches our expectations.
c. Relationship between emotions across opinion groups
In Fig. 9, the dots’ color and size indicate the correlation. The matrix is symmetric, and the diagonal shows perfect positive correlation because it gives the correlation of each variable with itself.
The emotion_negated categories represent negated emotions. In the first example of Table 3, threat is identified as a signal of fear, but not reverses the whole meaning, so it is judged as fear_negated. However, negation is not always a reversal; it may also be a common vehicle of sarcasm, as in the second example of Table 3. This explains why emotions and their negations show similar correlations with other emotions in Fig. 9, such as anger with disgust and anger_negated with disgust_negated. Among all the correlations, anger, disgust, and sadness are more strongly related to one another, while a similar correlation exists among joy, trust, and anticipation. These emotions tend to group together in the tweets, shaping the overall attitude of the population toward climate change. This is explored in greater depth in the emotion model analysis in section 3d.
Example of negation.
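The correlation matrix in Fig. 9 can be reproduced along these lines, assuming emo_wide is a tweets-by-emotions data frame of ave_emotion values (one column per emotion and per negated emotion; the name is illustrative):

```r
library(corrplot)

# Pairwise correlations among the emotion (and negated-emotion) scores
corr <- cor(emo_wide, use = "pairwise.complete.obs")
corrplot(corr)  # dot color and size encode the correlation strength
```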
d. Emotions and their distributions (or variations) across opinion groups
Various emotions show different distributions across opinion groups. The sum adds up all ave_emotion values of each emotion; for example, all fear tweets among supporters were selected and their ave_emotion values summed. Figure 10 shows that fear has a significantly higher sum than the other emotions in all groups, especially among supporters. The other emotions, excluding negated forms, show the same pattern. The opposition and neutral groups show similar sums, both lower than the supporters’. The news group surprisingly expresses slightly higher emotion than the opposition and neutral groups.
However, since supporters posted more tweets than the other groups, it is necessary to consider the average of each emotion across groups. The average is the quotient of each group’s emotion sum divided by that group’s total number of tweets. In Fig. 11, fear shows a significantly higher average in all groups, especially in the news. Trust, anticipation, joy, and surprise are nearly identical across the groups. However, anger, sadness, and disgust behave differently; they are higher in the opposition and neutral groups but lower in the support and news groups.
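The sums and averages behind Figs. 10 and 11 might be computed as follows, assuming a long-format data frame emo_df with group, emotion_type, and ave_emotion columns, and a named vector n_tweets of tweet counts per group (all names are illustrative assumptions):

```r
# Per-group, per-emotion sum of ave_emotion (Fig. 10)
sums <- aggregate(ave_emotion ~ group + emotion_type, data = emo_df, FUN = sum)

# Average = group sum divided by that group's total tweet count (Fig. 11)
sums$average <- sums$ave_emotion / n_tweets[as.character(sums$group)]
```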
To better explore the internal relationships of emotions in the climate change discourse, we conducted a correlation analysis in R to probe how emotions influence one another. In addition, we built a logistic regression model between opinion group codes and emotions to statistically analyze the relationship between different opinion groups and emotions. We selected the appropriate parameters, and the summary function was then used to inspect the model. In this model, emotions are continuous variables and opinion group codes are numerical, with supporters coded positive and opponents negative. In this logistic regression model, the emotions are the independent variables and the opinion group code is the dependent variable.
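A sketch of this model, recoding the opinion labels to 0/1 for the binomial family; emo_wide is again an assumed tweets-by-emotions data frame, labels an assumed vector of opinion codes, and only supporter/opponent tweets are kept beforehand:

```r
# 1 = support, 0 = oppose; neutral and news tweets excluded beforehand
dat <- data.frame(code = as.integer(labels == 1), emo_wide)
model_emo <- glm(code ~ ., data = dat, family = binomial)
summary(model_emo)   # significance stars mark the significant predictors
confint(model_emo)   # confidence intervals for the coefficients
```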
The results of the first model showed that 7 of the 16 predictor variables are significant (marked with asterisks after the p value). Since some predictor variables are insignificant, the model was fitted a second time with those variables excluded.
The predictors retained in the second model (anger, disgust, fear, sadness, surprise, and surprise_negated) are all significant in Fig. 12. Surprise_negated has a p value of 0.001 58 and a relatively high standard error. Confidence intervals for the coefficients can be checked in Fig. 13.
Negative emotions such as anger, disgust, and sadness correlate negatively with the opinion group codes. Because negative codes represent opposition or disbelief in climate change, this indicates that skeptics feel angry, disgusted, and sad about climate change. At the same time, fear and surprise and their negated forms show positive correlations with the opinion group codes, suggesting that supporters are fearful of and shocked by climate topics, while the negated forms point to facts they do not find surprising. The relatively high correlation of fear and surprise with stance indicates that expressing these emotions is more common among this group. Negative emotions are dominant and are reflected in both supporters and opponents.
4. Discussion
Analysis of different opinion groups’ attitudes on climate change has been missing in previous studies (Fownes et al. 2018). In this study, we used a corpus with opinion group annotations to delineate opinion groups, which enables a balanced analysis of attitudes across groups. Moreover, there is no direct correlation between sentiments and stances; therefore, typical sentiment analysis cannot reflect the distinct sentiment preferences of different opinion groups. According to the polarity analysis of sentiments, supporters and opponents both showed high negative sentiment on this topic, merely directed at different targets.
This paper explores the distribution of keywords in the climate change discourse, showing that it is a highly politicized issue. Supporters and opponents address the same objects but may express distinct emotions toward them. Our analysis confirms that political figures and their discourse, such as Trump and Hillary Clinton, have been widely discussed. In addition, combined with the emotion analysis, we find that the presence of global warming as an alternative term has the potential to create skepticism among opponents.
Sentiment analysis tells us that negative sentiment is prevalent on this issue and that, despite their smaller volume, opponents have a higher percentage of negative expressions than supporters do. The correlations between opponents’ words and both positive and negative sentiments reflect the polarity of opponents’ expression. On the one hand, they firmly and positively affirm their own side’s discourse; on the other hand, they resolutely deny the other side’s point of view, extremely and fervently insisting on their own stance. They therefore actively participate in the discussion and expose their views even when public opinion is generally unfavorable.
The second sentiment analysis model matches our general impression that supporters are more positive while opponents are more negative, although this is not evident in the first model, where the number of tweets with each sentiment is the dependent variable. Furthermore, supporters present an even lower positive total value than neutral discourse does. Again, this suggests that the link between opinion stance and discourse sentiment is not remarkably strong. The expression of sentiment is shaped more by personal preference and choice than entirely by the collective influence of opinion groups.
The higher positive sentiment presented by neutral texts suggests that neutrals are more inclined to state facts than to deny them. The negation of a fact can be seen as an increase in sentiment: negating a proposition carries a higher emotional value than a declarative positive expression of the same meaning. The exceedingly high negative sentiment of the news suggests that, whether it encourages a pro– or anti–climate change position, the news plays a relatively significant role in driving the polarization of opinions.
The findings of our emotion distribution and correlation analysis suggest that negative emotions (anger, disgust, and sadness) and positive emotions (joy, trust, and anticipation) are interrelated with the same type of emotions. Among them, the three negative emotions and surprise showed correlations with different opinion groups, with negative and opponents producing stronger associations, as did surprise and supporters.
Through specific examples of tweets, we explored the possibilities behind this distribution. As the examples in Table 4 show, skeptics are very angry at what they believe to be Democrats’ or the elite’s fabrication of climate change (e.g., element 864), which differs from what skeptics intuitively feel (e.g., element 338), and at their perceived levying of more taxes for their own profit and their sponsors’ (e.g., element 4884). They are disgusted by what they believe to be the elite’s profiting from the issue (e.g., element 593) and the elite’s perceived hypocritical behavior (e.g., element 2255), and they express sadness at the country’s degradation. These three feelings are linked and often appear simultaneously in a single sentence.
Examples of emotions in the opposition group.
As the examples in Table 5 show, the positive correlation with fear indicates that fear and antipathy are expressed both in the news media and among the supporters of climate change science (e.g., element 98). As for surprise, believers are surprised at climate anomalies (e.g., element 131), the anti-intellectual behavior of skeptics (e.g., element 26533), and Trump’s political discourse (e.g., element 61). In addition, the analysis detected negated surprise, probably because of the expression not believe. However, it could also be interpreted as the belief that Trump’s counterintuitive rhetoric is just political showmanship to deceive his voters, when in fact he recognizes the existence of climate change and is selfishly preparing for it (e.g., element 622).
Examples of emotions in supporters.
In general, the topic continues to provoke discussion on Twitter. There is a relatively weak correlation between positive emotions and any group, implying that supporters and denialists are dissatisfied with the status quo.
The emotion_by function is based mainly on the NRC emotion lexicon. The results show that the tweet texts contain all eight emotions of the NRC emotion lexicon. The emotion analysis used in this study adopts a lexical approach, determining emotions from words alone and ignoring grammatical and syntactic dimensions. Such packages compare the words appearing in the text with one or more lexicons, matching positive and negative words to determine a level of intensity. Researchers should interpret these data with care (Jungherr 2015). The lexicons themselves may be biased in their interpretation of emotions; a word’s emotional value varies across contexts, and misunderstanding is possible without syntactic support. Sentimentr does offer negation identification, but negation remains difficult to identify correctly, as does sarcasm. Manual correction of the current machine judgment is still needed.
A future direction for improvement would be to develop field-specific lexicons for climate change, though this would require the continued efforts of subsequent researchers. At the syntactic level, better matrix selection might capture the role of grammar. Likewise, if the highlight function supported the emotion functions as it does sentiment_by, researchers could make more fine-grained adjustments to the judgment of emotion. Furthermore, a study of climate change in general is rather broad and could be divided into more specific topics for a more detailed investigation. Short-term, event-driven studies of this topic are also needed, for example, to test whether Americans’ opinions differed in 2022, when rare and severe extreme climate and weather events were experienced worldwide. Such studies may provide a comprehensive view of small and immediate changes in opinion or attitude, such as when users temporarily abandon their political ideology (Lin et al. 2013).
5. Conclusions
This paper uses R tools to conduct sentiment and emotion analysis on the tweet corpus from 2015 to 2018 to present the overall tendency of citizens’ attitudes toward climate change topics. The keyword research finds that people focus on the message’s source. “Climate change” has often been conflated with “global warming” in the popular consciousness. Most tweets show sentiment preference, but there is no direct correspondence between sentiment and opinion groups. Negative sentiments are found in both opinion groups, with only a higher percentage among opponents than supporters.
Negative emotions are dominant and are reflected in both supporters and opponents. Among all the emotions, anger, disgust, and sadness are more strongly related to one another, while a similar correlation exists among joy, trust, and anticipation. Supporters express fear and surprise about extreme weather and opponents’ behavior, while opponents show anger, disgust, and sadness about what they perceive to be politicians’ manufactured climate change stories that do not align with those politicians’ real feelings.
Data availability statement.
The initial corpus during this study is openly available from Kaggle at https://www.kaggle.com/datasets/edqian/twitter-climate-change-sentiment-dataset.
REFERENCES
Carlson, T., and K. Strandberg, 2005: The 2004 European parliament election on the web: Finnish actor strategies and voter responses. Inf. Polity, 10, 189–204, https://doi.org/10.3233/IP-2005-0075.
Cody, E. M., A. J. Reagan, L. Mitchell, P. S. Dodds, and C. M. Danforth, 2015: Climate change sentiment on Twitter: An unsolicited public opinion poll. PLOS ONE, 10, e0136092, https://doi.org/10.1371/journal.pone.0136092.
Dahal, B., S. A. P. Kumar, and Z. Li, 2019: Topic modeling and sentiment analysis of global climate change tweets. Soc. Network Anal. Min., 9, 24, https://doi.org/10.1007/s13278-019-0568-8.
Durand, W. M., J. L. Peters, A. E. M. Eltorai, S. Kalagara, A. J. Osband, and A. H. Daniels, 2018: Medical crowdfunding for organ transplantation. Clin. Transplant., 32, e13267, https://doi.org/10.1111/ctr.13267.
Feinerer, I., K. Hornik, and D. Meyer, 2008: Text mining infrastructure in R. J. Stat. Software, 25 (5), 1–54, https://doi.org/10.18637/jss.v025.i05.
Fownes, J. R., C. Yu, and D. B. Margolin, 2018: Twitter and climate change. Sociol. Compass, 12, e12587, https://doi.org/10.1111/soc4.12587.
Huang, Q., and Y. Xiao, 2015: Geographic situational awareness: Mining tweets for disaster preparedness, emergency response, impact, and recovery. ISPRS Int. J. Geo-Inf., 4, 1549–1568, https://doi.org/10.3390/ijgi4031549.
Jang, S. M., and P. S. Hart, 2015: Polarized frames on “climate change” and “global warming” across countries and states: Evidence from Twitter big data. Global Environ. Change, 32, 11–17, https://doi.org/10.1016/j.gloenvcha.2015.02.010.
Jungherr, A., 2015: Analyzing Political Communication with Digital Trace Data. Springer, 220 pp.
Kirilenko, A. P., and S. O. Stepchenkova, 2014: Public microblogging on climate change: One year of Twitter worldwide. Global Environ. Change, 26, 171–182, https://doi.org/10.1016/j.gloenvcha.2014.02.008.
Kollanyi, B., P. N. Howard, and S. C. Woolley, 2016: Bots and automation over Twitter during the first U.S. presidential debate. Oxford Internet Institute, University of Oxford, https://demtech.oii.ox.ac.uk/research/posts/bots-and-automation-over-twitter-during-the-first-u-s-presidential-debate/.
Koto, F., and M. Adriani, 2015: A comparative study on Twitter sentiment analysis: Which features are good? Natural Language Processing and Information Systems, C. Biemann et al., Eds., Lecture Notes in Computer Science, Vol. 9103, Springer, 453–457.
Leetaru, K., S. Wang, G. Cao, A. Padmanabhan, and E. Shook, 2013: Mapping the global Twitter heartbeat: The geography of Twitter. First Monday, 18, 5–6, https://doi.org/10.5210/fm.v18i5.4366.
Lei, L., and D. Liu, 2021: Conducting Sentiment Analysis. Cambridge University Press, 75 pp.
Lin, Y.-R., D. Margolin, B. Keegan, and D. Lazer, 2013: Voices of victory: A computational focus group framework for tracking opinion shift in real time. Proc. 22nd Int. Conf. on World Wide Web, Rio de Janeiro, Brazil, Association for Computing Machinery, 737–748, https://doi.org/10.1145/2488388.2488453.
Loureiro, M. L., and M. Alló, 2020: Sensing climate change and energy issues: Sentiment and emotion analysis with social media in the U.K. and Spain. Energy Policy, 143, 111490, https://doi.org/10.1016/j.enpol.2020.111490.
Mandel, B., A. Culotta, J. Boulahanis, D. Stark, B. Lewis, and J. Rodrigue, 2012: A demographic analysis of online sentiment during Hurricane Irene. Proc. Second Workshop on Language in Social Media, Montreal, QC, Canada, Association for Computational Linguistics, 27–36, https://aclanthology.org/W12-2104.pdf.
Medhat, W., A. Hassan, and H. Korashy, 2014: Sentiment analysis algorithms and applications: A survey. Ain Shams Eng. J., 5, 1093–1113, https://doi.org/10.1016/j.asej.2014.04.011.
Naldi, M., 2019: A review of sentiment computation methods with R packages. arXiv, 1901.08319v1, https://doi.org/10.48550/arXiv.1901.08319.
Pearce, W., K. Holmberg, I. Hellsten, and B. Nerlich, 2014: Climate change on Twitter: Topics, communities and conversations about the 2013 IPCC working group 1 report. PLOS ONE, 9, e94785, https://doi.org/10.1371/journal.pone.0094785.
Pearce, W., S. Niederer, S. M. Özkula, and N. Sánchez Querubín, 2019: The social media life of climate change: Platforms, publics, and future imaginaries. Wiley Interdiscip. Rev.: Climate Change, 10, e569, https://doi.org/10.1002/wcc.569.
Plutchik, R., 1962: The Emotions: Facts, Theories and a New Model. Random House, 204 pp.
Plutchik, R., 2001: The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. Amer. Sci., 89, 344–350.
Rinker, T., 2017: Package ‘sentimentr’. GitHub, https://github.com/trinker/sentimentr.
Ruths, D., and J. Pfeffer, 2014: Social media for large studies of behavior. Science, 346, 1063–1064, https://doi.org/10.1126/science.346.6213.1063.
Segerberg, A., and W. L. Bennett, 2011: Social media and the organization of collective action: Using Twitter to explore the ecologies of two climate change protests. Commun. Rev., 14, 197–215, https://doi.org/10.1080/10714421.2011.597250.
Singh, P., Y. K. Dwivedi, K. S. Kahlon, A. Pathania, and R. S. Sawhney, 2020: Can twitter analytics predict election outcome? An insight from 2017 Punjab assembly elections. Gov. Inf. Quart., 37, 101444, https://doi.org/10.1016/j.giq.2019.101444.
Taboada, M., J. Brooke, M. Tofiloski, K. Voll, and M. Stede, 2011: Lexicon-based methods for sentiment analysis. Comput. Linguist., 37, 267–307, https://doi.org/10.1162/COLI_a_00049.
van der Linden, S., 2017: Determinants and Measurement of Climate Change Risk Perception, Worry, and Concern. Social Science Research Network, 53 pp.
Veltri, G. A., and D. Atanasova, 2017: Climate change on Twitter: Content, media ecology and information sharing behaviour. Public Understanding Sci., 26, 721–737, https://doi.org/10.1177/0963662515613702.
Wickham, H., J. Hester, and J. Bryan, 2022: Readr: Read rectangular text data. Accessed 20 September 2022, https://readr.tidyverse.org/.