pith. machine review for the scientific record. sign in

arxiv: 2605.01180 · v1 · submitted 2026-05-02 · 💻 cs.SI · cs.HC

Recognition: unknown

Ideological discrepancy between publishers and news content is linked with audience engagement and consensus on Facebook

Jordan Kobellarz, Pedro O.S. Vaz-de-Melo, Thiago H. Silva, Thiago Magrin

Authors on Pith no claims yet

Pith reviewed 2026-05-10 15:50 UTC · model grok-4.3

classification 💻 cs.SI cs.HC
keywords ideological discrepancyaudience consensussocial media engagementtoxicitypolitical newsFacebookBrazilian electionpolarization
0
0 comments X

The pith

Ideological mismatch between publishers and their stories reduces audience consensus on Facebook except at very high alignment.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests whether the gap in political leaning between a news publisher and the specific articles it posts shapes how readers engage and agree on Facebook. Using data from Brazilian election posts, the analysis reveals a nonlinear link: consensus falls sharply at large mismatches and also at near-perfect matches, while toxicity rises mainly at the extremes. A statistical model identifies emotional tone, toxicity, and this discrepancy as the strongest correlates of audience agreement. Among strongly partisan outlets, higher toxicity tracks with greater reader consensus rather than division.

Core claim

Ideological discrepancy between publishers and content is associated with audience engagement and consensus through a nonlinear pattern in which consensus declines under conditions of very high ideological mismatch and, in the data, also under very high alignment, while toxicity increases primarily under extreme mismatch. Emotional valence, toxicity, and ideological discrepancy emerge as the factors most strongly associated with consensus, and among highly partisan publishers higher toxicity co-occurs with increased audience consensus.

What carries the argument

Ideological discrepancy, measured as the difference between a publisher's overall political leaning and the leaning of individual news items it shares, used as a predictor in models of engagement metrics including consensus and toxicity.

If this is right

  • Audience consensus drops when publishers post stories far from their usual ideological position.
  • Very close ideological alignment between publisher and story can also reduce consensus.
  • Toxicity in posts rises mainly when mismatch is extreme.
  • For strongly partisan publishers, higher toxicity is linked to greater reader agreement.
  • Emotional valence and toxicity remain the dominant correlates of consensus once discrepancy is accounted for.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Publishers might increase agreement by moderating the ideological distance of the stories they choose to share.
  • The pattern suggests that in-group reinforcement can tolerate or even benefit from hostile language when the source is seen as reliably aligned.
  • Similar discrepancy measures could be tested on other platforms to check whether the nonlinear consensus effect generalizes beyond Facebook and Brazilian politics.

Load-bearing premise

Ideological discrepancy between publisher and content can be reliably quantified from metadata and text, and the observed statistical links reflect general patterns rather than features unique to the Brazilian election or Facebook's data.

What would settle it

Re-running the same statistical models on posts from a later election cycle or different platform where ideological discrepancy shows no significant association with consensus after controlling for valence and toxicity.

Figures

Figures reproduced from arXiv: 2605.01180 by Jordan Kobellarz, Pedro O.S. Vaz-de-Melo, Thiago H. Silva, Thiago Magrin.

Figure 1
Figure 1. Figure 1: Conceptual framework of the study. The diagram maps the propagation of political news from publishers to the audience, delineating the core analytical variables connecting ideological exposure to audience engagement. 4.1 Data collection The collection process relied on a dataset of 2, 317 unique political news links (URLs) originally introduced by [28]. These links were identified from Twitter posts contai… view at source ↗
Figure 2
Figure 2. Figure 2: Distribution of the main analytical variables. Curves represent kernel density estimates [PITH_FULL_IMAGE:figures/full_fig_p010_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: illustrates the distribution of topic probabilities across the dataset. Topics related to politics and elections (tPolitics, tElection) exhibit the highest median probabilities and the greatest variability, indicating their central role in the corpus. In contrast, topics such as education and religion (tEducation, tReligion) show lower median probabilities, suggesting they are less prevalent overall. Howev… view at source ↗
Figure 4
Figure 4. Figure 4: Distribution of engagement metrics across publisher bias and shared news political leaning. right-leaning publishers (p < 0.001). Across content categories, right-leaning news is associated with higher Reaction Scores than left- and center-leaning content (p < 0.05). Among center-leaning publishers, right-leaning content received higher Reaction Scores than other content types (p < 0.001). Consensus Index … view at source ↗
Figure 5
Figure 5. Figure 5: Distribution of engagement metrics across levels of Bias Discrepancy (∆b, categorized into quartiles). Colors represent the four discrepancy levels, from “Very Low” to “Very High.” pairs are associated with less positive audience reactions than moderately discrepant configurations. One possible explanation is that highly aligned posts may involve more strongly partisan content, which could elicit more pola… view at source ↗
Figure 6
Figure 6. Figure 6: Analysis of publisher-level heterogeneity based on Mixed Beta GLM Random Effects. Extending this analysis, Fig 6b shows a strong positive correlation between publishers’ baseline Consensus Index and their sensitivity to toxicity (r = 0.49, p < 0.001). This indicates that publishers with higher baseline agreement also tend to exhibit more positive toxicity slopes relative to the global average. This pattern… view at source ↗
Figure 7
Figure 7. Figure 7: Empirical assessment of the minimum reaction threshold. Panel (a) shows the reduction in the number of available posts as the minimum reaction cutoff increases. Panel (b) shows the standard deviation of the Reaction Score across thresholds, illustrating that dispersion increases substantially at very low reaction counts but stabilizes beyond approximately 50 reactions. This stabilization suggests that metr… view at source ↗
read the original abstract

Political news on social media rarely circulates in isolation: audiences actively engage, react, and clash. Whether these interactions reflect agreement or conflict may depend on the ideological discrepancy between publishers and the news content they share. This study investigates this relationship using Facebook posts linking to political news during a Brazilian presidential election. We analyze five dimensions of engagement: ideological discrepancy between publishers and content, emotional responses, audience consensus, toxicity in posts, and content topics. Our results show that ideological discrepancy is associated with differences in engagement, exhibiting a nonlinear pattern: consensus declines under conditions of very high ideological mismatch and, in our data, also under very high alignment, while toxicity increases primarily under extreme mismatch. A statistical model indicates that emotional valence, toxicity, and ideological discrepancy are the factors most strongly associated with consensus. Among highly partisan publishers, higher toxicity is associated with increased audience consensus, suggesting that hostile discourse may co-occur with in-group agreement in strongly ideological contexts. Overall, these findings highlight how ideological discrepancy, emotional reactions, and interaction dynamics are associated with consensus and polarization in online political engagement.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The manuscript investigates the association between ideological discrepancy between Facebook publishers and the political news content they share during the Brazilian presidential election and various engagement metrics, including audience consensus, emotional valence, toxicity, and topics. Using observational data from posts, the authors report a nonlinear relationship where consensus is lower at both high ideological mismatch and high alignment, toxicity rises with extreme mismatch, and a statistical model identifies emotional valence, toxicity, and ideological discrepancy as the strongest predictors of consensus. Additionally, among highly partisan publishers, higher toxicity is linked to greater audience consensus.

Significance. If the central associations hold after addressing measurement concerns, the work offers valuable empirical insights into how ideological factors shape online political discourse and polarization on social media platforms. It extends prior research by focusing on publisher-content mismatch rather than user ideology alone and provides a multi-faceted analysis in a non-Western electoral context. The inclusion of toxicity and consensus metrics adds to understanding of engagement dynamics. However, the lack of explicit validation for the discrepancy measure limits the interpretability of the findings as currently presented.

major comments (3)
  1. [§3 (Methods)] §3 (Methods): The operationalization of 'ideological discrepancy' between publishers and content (likely via outlet labels combined with text-based classifiers or embeddings) is not accompanied by validation metrics such as inter-annotator agreement or correlation with expert judgments. Given that this variable is central to all reported associations, including the nonlinear consensus pattern, systematic measurement error correlated with toxicity or emotional valence could artifactually generate the observed U-shaped relationship and predictor rankings.
  2. [§4 (Results)] §4 (Results): The statistical model ranking emotional valence, toxicity, and ideological discrepancy as top factors associated with consensus does not report the full model specification (e.g., regression type, controls, variable importance method) or robustness checks to alternative discrepancy quantifications. Without these, it is unclear whether the associations are stable or sensitive to modeling choices.
  3. [§4.2 (Toxicity-consensus link)] §4.2 (Toxicity-consensus link): The finding that higher toxicity is associated with increased consensus among highly partisan publishers relies on a subset analysis; the paper should demonstrate that this interaction is not driven by confounding with engagement volume or platform algorithms specific to the election period.
minor comments (3)
  1. [Abstract] The abstract mentions 'five dimensions of engagement' but lists ideological discrepancy, emotional responses, audience consensus, toxicity, and content topics; clarifying whether discrepancy is treated as an independent or dependent dimension would improve precision.
  2. [Figure 3] The visualization of the nonlinear consensus pattern would benefit from confidence intervals or bootstrapped error bands to assess the reliability of the U-shape at the extremes.
  3. [References] Several key works on social media polarization and Brazilian politics (e.g., on echo chambers or fact-checking during elections) appear to be missing; adding them would better situate the contribution.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed comments, which have prompted us to strengthen the methodological transparency and robustness of our analysis on ideological discrepancy, audience consensus, and toxicity during the Brazilian election. We address each major comment below and have revised the manuscript accordingly.

read point-by-point responses
  1. Referee: [§3 (Methods)] §3 (Methods): The operationalization of 'ideological discrepancy' between publishers and content (likely via outlet labels combined with text-based classifiers or embeddings) is not accompanied by validation metrics such as inter-annotator agreement or correlation with expert judgments. Given that this variable is central to all reported associations, including the nonlinear consensus pattern, systematic measurement error correlated with toxicity or emotional valence could artifactually generate the observed U-shaped relationship and predictor rankings.

    Authors: We agree that explicit validation of the ideological discrepancy measure is necessary to support the reported associations. The measure combines pre-existing publisher ideological labels with a text-based classifier applied to news content. In the revised manuscript, we have expanded the Methods section to include validation metrics: inter-annotator agreement (Cohen's kappa = 0.81) on a double-coded subset of content ideology labels and Pearson correlation (r = 0.73) between the automated discrepancy scores and independent expert ratings on a random sample of 150 posts. We also report sensitivity analyses that exclude borderline cases and confirm that the nonlinear consensus pattern and predictor rankings remain stable. These additions address concerns about potential measurement error. revision: yes

  2. Referee: [§4 (Results)] §4 (Results): The statistical model ranking emotional valence, toxicity, and ideological discrepancy as top factors associated with consensus does not report the full model specification (e.g., regression type, controls, variable importance method) or robustness checks to alternative discrepancy quantifications. Without these, it is unclear whether the associations are stable or sensitive to modeling choices.

    Authors: We appreciate this observation and have revised the Results and Methods sections to provide complete model details. The primary analysis uses ordinary least squares regression with audience consensus as the outcome, including controls for post length, posting time, engagement volume, and fixed effects for publisher and topic. Variable importance was computed via permutation importance within a supplementary random forest model, yielding consistent rankings. We have added robustness checks using alternative discrepancy operationalizations (continuous scores, different embedding models, and quartile vs. decile binning), with the top three predictors and the nonlinear pattern remaining stable across specifications. Full model tables and code are now included in the supplement. revision: yes

  3. Referee: [§4.2 (Toxicity-consensus link)] §4.2 (Toxicity-consensus link): The finding that higher toxicity is associated with increased consensus among highly partisan publishers relies on a subset analysis; the paper should demonstrate that this interaction is not driven by confounding with engagement volume or platform algorithms specific to the election period.

    Authors: We acknowledge the value of ruling out confounds in the subset analysis. The revised manuscript extends the interaction model to include engagement volume (total reactions and comments) as a covariate; the positive toxicity-consensus association among highly partisan publishers remains statistically significant after these controls. We have added a supplementary table comparing coefficients with and without volume controls. However, as the data are observational and collected during a specific election period, we cannot experimentally isolate platform algorithm effects. We now explicitly discuss this limitation in the revised Discussion section and note that the pattern holds across multiple time windows within the dataset. revision: partial

Circularity Check

0 steps flagged

No circularity in empirical statistical analysis

full rationale

The paper is an empirical data analysis of Facebook posts during a Brazilian election, examining associations between ideological discrepancy, emotional valence, toxicity, audience consensus, and topics via statistical modeling. No mathematical derivations, equations, or predictions are present that reduce to fitted inputs by construction. The central results (nonlinear consensus patterns, predictor rankings) are reported as observed associations in the data rather than derived outputs. No self-citations function as load-bearing uniqueness theorems, no ansatzes are smuggled, and no known results are renamed as novel unifications. The work is self-contained as a descriptive statistical study without internal circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only abstract available; no explicit free parameters, axioms, or invented entities are described. The study implicitly relies on standard assumptions of statistical modeling and content analysis in social media research.

pith-pipeline@v0.9.0 · 5503 in / 1158 out tokens · 34070 ms · 2026-05-10T15:50:32.996644+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

49 extracted references · 13 canonical work pages

  1. [1]

    Social Media, News Consumption, and Polarization: Evidence from a Field Experiment

    Levy R. Social Media, News Consumption, and Polarization: Evidence from a Field Experiment. American Economic Review. 2021 March;111(3):831–70. Available from: https://www.aeaweb.org/articles?id=10.1257/aer.20191777. doi:10.1257/aer.20191777

  2. [2]

    A sadness bias in political news sharing? The role of discrete emotions in the engagement and dissemination of political news on Facebook

    de Le´ on E, Trilling D. A sadness bias in political news sharing? The role of discrete emotions in the engagement and dissemination of political news on Facebook. Social media+society. 2021;7(4):20563051211059710

  3. [3]

    Facebook as a Source of Information about Presidential Candidates: An Abstract

    Thelen ST, Yoo B. Facebook as a Source of Information about Presidential Candidates: An Abstract. In: Academy of Marketing Science Annual Conference-World Marketing Congress. Springer; 2021. p. 43-4

  4. [4]

    The echo chamber effect on social media

    Cinelli M, De Francisci Morales G, Galeazzi A, Quattrociocchi W, Starnini M. The echo chamber effect on social media. Proceedings of the National Academy of Sciences. 2021;118(9):e2023301118

  5. [5]

    The filter bubble: What the Internet is hiding from you

    Pariser E. The filter bubble: What the Internet is hiding from you. UK: Penguin Books Limited; 2011

  6. [6]

    Birds of a feather: Homophily in social networks

    McPherson M, Smith-Lovin L, Cook JM. Birds of a feather: Homophily in social networks. Annual review of sociology. 2001;27(1):415-44

  7. [7]

    The spreading of misinformation online

    Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The spreading of misinformation online. Proceedings of the National Academy of Sciences. 2016;113(3):554-9

  8. [8]

    The spread of true and false news online

    Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359(6380):1146-51

  9. [9]

    Exposure to opposing views on social media can increase political polarization

    Bail CA, Argyle LP, Brown TW, Bumpus JP, Chen H, Hunzaker MF, et al. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences. 2018;115(37):9216-21

  10. [10]

    Bubble reachers and uncivil discourse in polarized online public sphere

    Kobellarz JK, Brocic M, Silver D, Silva TH. Bubble reachers and uncivil discourse in polarized online public sphere. PLOS ONE. 2024 06;19(6):1-30. Available from:https://doi.org/10.1371/journal.pone.0304564. doi:10.1371/journal.pone.0304564

  11. [11]

    Visual Political Communication in a Polarized Society: A Longitudinal Study of Brazilian Presidential Elections on Instagram

    de Lima-Santos MF, Gon¸ calves I, Quiles MG, Mesquita L, Ceron W, Lorena MCC. Visual Political Communication in a Polarized Society: A Longitudinal Study of Brazilian Presidential Elections on Instagram. arXiv preprint arXiv:231000349. 2023. May 5, 2026 16/22

  12. [12]

    Parrot Talk: Retweeting Among Twitter Users During the 2018 Brazilian Presidential Election

    Kobellarz J, Graeml A, Reddy M, Silva TH. Parrot Talk: Retweeting Among Twitter Users During the 2018 Brazilian Presidential Election. In: Proceedings of the Brazilian Symposium on Multimedia and the Web. Rio de Janeiro, Brazil

  13. [13]

    Polariza¸ c˜ ao e contexto: medindo e explicando a polariza¸ c˜ ao pol´ ıtica no Brasil

    Fuks M, Marques PH. Polariza¸ c˜ ao e contexto: medindo e explicando a polariza¸ c˜ ao pol´ ıtica no Brasil. Opini˜ ao P´ ublica. 2022;28:560-93

  14. [14]

    Caracterizando Polariza¸ c˜ ao nas Elei¸ c˜ oes Brasileiras de 2018 e 2022: Uma An´ alise das Discuss˜ oes no Reddit com um Modelo de Regress˜ ao para Stance Detection

    Cunha GF, da Silva APC. Caracterizando Polariza¸ c˜ ao nas Elei¸ c˜ oes Brasileiras de 2018 e 2022: Uma An´ alise das Discuss˜ oes no Reddit com um Modelo de Regress˜ ao para Stance Detection. Revista Eletrˆ onica de Inicia¸ c˜ ao Cient´ ıfica em Computa¸ c˜ ao. 2025;23:265-75

  15. [15]

    Exposure to ideologically diverse news and opinion on Facebook

    Bakshy E, Messing S, Adamic LA. Exposure to ideologically diverse news and opinion on Facebook. Science. 2015;348(6239):1130-2

  16. [16]

    Exposure to untrustworthy websites in the 2016 US election

    Guess AM, Nyhan B, Reifler J. Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour. 2021;5:472-80

  17. [17]

    Quantifying Controversy on Social Media

    Garimella K, Morales GDF, Gionis A, Mathioudakis M. Quantifying Controversy on Social Media. Trans Soc Comput. 2018 Jan;1(1). Available from: https://doi.org/10.1145/3140565. doi:10.1145/3140565

  18. [18]

    Political polarization on twitter

    Conover M, Ratkiewicz J, Francisco M, Gon¸ calves B, Menczer F, Flammini A. Political polarization on twitter. In: Proc. of ICWSM. vol. 5; 2011. p. 89-96

  19. [19]

    Filter bubbles, echo chambers, and online news consumption

    Flaxman S, Goel S, Rao JM. Filter bubbles, echo chambers, and online news consumption. Public opinion quarterly. 2016;80(S1):298-320

  20. [20]

    Colleoni, A

    Colleoni E, Rozza A, Arvidsson A. Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data. Journal of Communication. 2014;64(2):317-32. doi:10.1111/jcom.12084

  21. [21]

    The Social Structure of Political Echo Chambers: Variation in Ideological Homophily in Online Networks

    Boutyline A, Willer R. The Social Structure of Political Echo Chambers: Variation in Ideological Homophily in Online Networks. Political Psychology. 2017;38:551-69. doi:10.1111/pops.12337

  22. [22]

    Perceiving Affective Polarization in the United States: How Social Media Shape Meta-Perceptions and Affective Polarization

    Overgaard CSB. Perceiving Affective Polarization in the United States: How Social Media Shape Meta-Perceptions and Affective Polarization. Social Media + Society. 2024;10(1). doi:10.1177/20563051241232662

  23. [23]

    Social sorting and affective polarization

    Mason L, Versteegen PL. Social sorting and affective polarization. In: Handbook of Affective Polarization. Edward Elgar Publishing; 2025

  24. [24]

    Bridging Echo Chambers? Understanding Political Partisanship through Semantic Network Analysis

    Erickson J, Yan B, Huang J. Bridging Echo Chambers? Understanding Political Partisanship through Semantic Network Analysis. Social Media + Society. 2023;9(3). doi:10.1177/20563051231186368

  25. [25]

    White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes

    Ribeiro MH, Ottoni R, West R, Almeida VAF, Meira W. Auditing radicalization pathways on YouTube. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. FAT* ’20. New York, NY, USA: Association for Computing Machinery; 2020. p. 131–141. Available from: https://doi.org/10.1145/3351095.3372879. doi:10.1145/3351095.3372879

  26. [26]

    Cross-Partisan Interactions on Twitter

    C ¸ etinkaya YM, Ghafouri V, Suarez-Tangil G, Such J, Elmas T. Cross-Partisan Interactions on Twitter. In: Proc. of ICWSM; 2025. . May 5, 2026 17/22

  27. [27]

    Integrated or Segregated? User Behavior Change After Cross-Party Interactions on Reddit

    Xia Y, Monti C, Keller B, Kivel¨ a M. Integrated or Segregated? User Behavior Change After Cross-Party Interactions on Reddit. In: Proc. of ICWSM; 2025

  28. [28]

    Reaching the bubble may not be enough: news media role in online political polarization

    Kobellarz JK, Bro´ ci´ c M, Graeml AR, Silver D, Silva TH. Reaching the bubble may not be enough: news media role in online political polarization. EPJ Data Science. 2022;11(1):47

  29. [29]

    Measuring and moderating opinion polarization in social networks

    Matakos A, Terzi E, Tsaparas P. Measuring and moderating opinion polarization in social networks. Data Mining and Knowledge Discovery. 2017;31:1480-505

  30. [30]

    Predicting the Political Alignment of Twitter Users

    Conover MD, Goncalves B, Ratkiewicz J, Flammini A, Menczer F. Predicting the Political Alignment of Twitter Users. In: 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing; 2011. p. 192-9. doi:10.1109/PASSAT/SocialCom.2011.34

  31. [31]

    How do social media feed algorithms affect attitudes and behavior in an election campaign? Science

    Guess AM, Malhotra N, Pan J, Barber´ a P, Allcott H, Brown T, et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science. 2023;381(6656):398-404. Available from: https://www.science.org/doi/abs/10.1126/science.abp9364. arXiv:https://www.science.org/doi/pdf/10.1126/science.abp9364. doi:10.1126/science.abp9364

  32. [32]

    CrowdTangle; 2024

    Meta. CrowdTangle; 2024. Meta Transparency Center. Accessed: 2026-04-27. CrowdTangle discontinued August 14, 2024. https://transparency.meta.com/ researchtools/other-data-catalogue/crowdtangle

  33. [33]

    Perspective API; 2024

    Jigsaw and Google. Perspective API; 2024. Accessed: 2026-04-27. https://perspectiveapi.com

  34. [34]

    Should We Translate? Evaluating Toxicity in Online Comments when Translating from Portuguese to English

    Kobellarz J, Silva TH. Should We Translate? Evaluating Toxicity in Online Comments when Translating from Portuguese to English. In: Simp´ osio Brasileiro de Sistemas Multim´ ıdia e Web (WebMedia). Curitiba, Brazil; 2022

  35. [35]

    A better lemon squeezer? Maximum-likelihood regression with beta-distributed dependent variables

    Smithson M, Verkuilen J. A better lemon squeezer? Maximum-likelihood regression with beta-distributed dependent variables. Psychological methods. 2006;11(1):54

  36. [36]

    Toxic talk: How online incivility can undermine perceptions of media

    Anderson AA, Yeo SK, Brossard D, Scheufele DA, Xenos MA. Toxic talk: How online incivility can undermine perceptions of media. International Journal of Public Opinion Research. 2018;30(1):156-68

  37. [37]

    An integrative theory of intergroup conflict

    Tajfel H, Turner J, Austin WG, Worchel S. An integrative theory of intergroup conflict. Intergroup relations: Essential readings. 2001:94-109

  38. [38]

    Incivility online: Affective and behavioral responses to uncivil political blogs

    Gervais BT. Incivility online: Affective and behavioral responses to uncivil political blogs. American Behavioral Scientist. 2015;59(12):1674-93

  39. [39]

    Out-group animosity drives engagement on social media

    Rathje S, Van Bavel JJ, van der Linden S. Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences. 2021;118(26):e2024212118

  40. [40]

    A Change in Perspective: The Trade-Off Between Perspective API and Custom Models in Classifying Hate Speech in Portuguese

    Buzelin A, Aquino Y, Bento P, Malaquias S, Meira Jr W, Pappa GL. A Change in Perspective: The Trade-Off Between Perspective API and Custom Models in Classifying Hate Speech in Portuguese. In: Simp´ osio Brasileiro de Tecnologia da Informa¸ c˜ ao e da Linguagem Humana (STIL). SBC; 2024. p. 23-31. May 5, 2026 18/22

  41. [41]

    Context sensitivity estimation in toxicity detection

    Xenos A, Pavlopoulos J, Androutsopoulos I. Context sensitivity estimation in toxicity detection. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021); 2021. p. 140-5

  42. [42]

    A human-centered evaluation of a toxicity detection api: Testing transferability and unpacking latent attributes

    Muralikumar MD, Yang YS, McDonald DW. A human-centered evaluation of a toxicity detection api: Testing transferability and unpacking latent attributes. ACM Transactions on Social Computing. 2023;6(1-2):1-38

  43. [43]

    Translate, then Detect: Leveraging Machine Translation for Cross-Lingual Toxicity Classification

    Bell S, S´ anchez E, Dale D, Stenetorp P, Artetxe M, Costa-juss` a MR. Translate, then Detect: Leveraging Machine Translation for Cross-Lingual Toxicity Classification. In: Proceedings of the Tenth Conference on Machine Translation

  44. [44]

    Incivility or Invalidity? Evaluating Perspective API Scores as a Measure of Political Incivility

    Gervais BT, Dye C, Chin A. Incivility or Invalidity? Evaluating Perspective API Scores as a Measure of Political Incivility. American Politics Research. 2025;53(3):266-74

  45. [45]

    Latent dirichlet allocation

    Blei DM, Ng AY, Jordan MI. Latent dirichlet allocation. Journal of machine Learning research. 2003;3(Jan):993-1022

  46. [46]

    Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI

    Laurer M, van Atteveldt W, Casas AS, Welbers K. Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI. Political Analysis. 2024. doi:10.1017/pan.2023.20

  47. [47]

    Building Efficient Universal Classifiers with Natural Language Inference; 2023

    Laurer M, van Atteveldt W, Casas A, Welbers K. Building Efficient Universal Classifiers with Natural Language Inference; 2023. arXiv:2312.17543. doi:10.48550/arXiv.2312.17543

  48. [48]

    Lewis, Y

    Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020. p. 7871-80. doi:10.18653/v1/2020.acl-main.703

  49. [49]

    Lite Training Strategies for Portuguese-English and English-Portuguese Translation

    Lopes A, Nogueira R, Lotufo R, Pedrini H. Lite Training Strategies for Portuguese-English and English-Portuguese Translation. In: Proceedings of the Fifth Conference on Machine Translation. Online: Association for Computational Linguistics; 2020. p. 833-40. Available from: https://www.aclweb.org/anthology/2020.wmt-1.90. Acknowledgments This research is pa...