Ideological discrepancy between publishers and news content is linked with audience engagement and consensus on Facebook
Pith reviewed 2026-05-10 15:50 UTC · model grok-4.3
The pith
Ideological mismatch between publishers and their stories is associated with reduced audience consensus on Facebook, with consensus declining at both extreme mismatch and, in the data, very high alignment.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Ideological discrepancy between publishers and content is associated with audience engagement and consensus through a nonlinear pattern in which consensus declines under conditions of very high ideological mismatch and, in the data, also under very high alignment, while toxicity increases primarily under extreme mismatch. Emotional valence, toxicity, and ideological discrepancy emerge as the factors most strongly associated with consensus, and among highly partisan publishers higher toxicity co-occurs with increased audience consensus.
What carries the argument
Ideological discrepancy, measured as the difference between a publisher's overall political leaning and the leaning of individual news items it shares, used as a predictor in models of engagement metrics including consensus and toxicity.
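The discrepancy measure described above can be sketched in a few lines. This is an illustrative assumption, not the paper's exact operationalization: it takes publisher and item leanings as scores in [-1, +1] (left to right) and returns their normalized absolute difference.

```python
def ideological_discrepancy(publisher_leaning: float, item_leaning: float) -> float:
    """Absolute distance between a publisher's overall leaning and the
    leaning of one shared news item, normalized to [0, 1].
    Leaning scores are assumed to lie in [-1, +1] (left to right);
    the scale and function name are illustrative, not the paper's."""
    for score in (publisher_leaning, item_leaning):
        if not -1.0 <= score <= 1.0:
            raise ValueError("leanings must lie in [-1, 1]")
    return abs(publisher_leaning - item_leaning) / 2.0

# A right-leaning publisher (+0.8) sharing a left-leaning story (-0.6)
# yields a high discrepancy of about 0.7; sharing a story at its own
# position yields 0.
high = ideological_discrepancy(0.8, -0.6)
none = ideological_discrepancy(0.5, 0.5)
print(high, none)
```

In the models, scores like these would enter as a predictor of consensus and toxicity, per post.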
If this is right
- Audience consensus drops when publishers post stories far from their usual ideological position.
- Very close ideological alignment between publisher and story can also reduce consensus.
- Toxicity in posts rises mainly when mismatch is extreme.
- For strongly partisan publishers, higher toxicity is linked to greater reader agreement.
- Emotional valence and toxicity remain the dominant correlates of consensus once discrepancy is accounted for.
Where Pith is reading between the lines
- Publishers might increase agreement by moderating the ideological distance of the stories they choose to share.
- The pattern suggests that in-group reinforcement can tolerate or even benefit from hostile language when the source is seen as reliably aligned.
- Similar discrepancy measures could be tested on other platforms to check whether the nonlinear consensus effect generalizes beyond Facebook and Brazilian politics.
Load-bearing premise
Ideological discrepancy between publisher and content can be reliably quantified from metadata and text, and the observed statistical links reflect general patterns rather than features unique to the Brazilian election or Facebook's data.
What would settle it
Re-running the same statistical models on posts from a later election cycle or different platform where ideological discrepancy shows no significant association with consensus after controlling for valence and toxicity.
Original abstract
Political news on social media rarely circulates in isolation: audiences actively engage, react, and clash. Whether these interactions reflect agreement or conflict may depend on the ideological discrepancy between publishers and the news content they share. This study investigates this relationship using Facebook posts linking to political news during a Brazilian presidential election. We analyze five dimensions of engagement: ideological discrepancy between publishers and content, emotional responses, audience consensus, toxicity in posts, and content topics. Our results show that ideological discrepancy is associated with differences in engagement, exhibiting a nonlinear pattern: consensus declines under conditions of very high ideological mismatch and, in our data, also under very high alignment, while toxicity increases primarily under extreme mismatch. A statistical model indicates that emotional valence, toxicity, and ideological discrepancy are the factors most strongly associated with consensus. Among highly partisan publishers, higher toxicity is associated with increased audience consensus, suggesting that hostile discourse may co-occur with in-group agreement in strongly ideological contexts. Overall, these findings highlight how ideological discrepancy, emotional reactions, and interaction dynamics are associated with consensus and polarization in online political engagement.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript investigates the association between ideological discrepancy between Facebook publishers and the political news content they shared during a Brazilian presidential election and various engagement metrics, including audience consensus, emotional valence, toxicity, and topics. Using observational data from posts, the authors report a nonlinear relationship in which consensus is lower at both high ideological mismatch and high alignment, toxicity rises with extreme mismatch, and a statistical model identifies emotional valence, toxicity, and ideological discrepancy as the strongest predictors of consensus. Additionally, among highly partisan publishers, higher toxicity is linked to greater audience consensus.
Significance. If the central associations hold after addressing measurement concerns, the work offers valuable empirical insights into how ideological factors shape online political discourse and polarization on social media platforms. It extends prior research by focusing on publisher-content mismatch rather than user ideology alone and provides a multi-faceted analysis in a non-Western electoral context. The inclusion of toxicity and consensus metrics adds to understanding of engagement dynamics. However, the lack of explicit validation for the discrepancy measure limits the interpretability of the findings as currently presented.
Major comments (3)
- §3 (Methods): The operationalization of 'ideological discrepancy' between publishers and content (likely via outlet labels combined with text-based classifiers or embeddings) is not accompanied by validation metrics such as inter-annotator agreement or correlation with expert judgments. Given that this variable is central to all reported associations, including the nonlinear consensus pattern, systematic measurement error correlated with toxicity or emotional valence could artifactually generate the observed U-shaped relationship and predictor rankings.
- §4 (Results): The statistical model ranking emotional valence, toxicity, and ideological discrepancy as top factors associated with consensus does not report the full model specification (e.g., regression type, controls, variable importance method) or robustness checks to alternative discrepancy quantifications. Without these, it is unclear whether the associations are stable or sensitive to modeling choices.
- §4.2 (Toxicity-consensus link): The finding that higher toxicity is associated with increased consensus among highly partisan publishers relies on a subset analysis; the paper should demonstrate that this interaction is not driven by confounding with engagement volume or platform algorithms specific to the election period.
Minor comments (3)
- [Abstract] The abstract mentions 'five dimensions of engagement' but lists ideological discrepancy, emotional responses, audience consensus, toxicity, and content topics; clarifying whether discrepancy is treated as an independent or dependent dimension would improve precision.
- [Figure 3] The visualization of the nonlinear consensus pattern would benefit from confidence intervals or bootstrapped error bands to assess the reliability of the U-shape at the extremes.
- [References] Several key works on social media polarization and Brazilian politics (e.g., on echo chambers or fact-checking during elections) appear to be missing; adding them would better situate the contribution.
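The bootstrapped error bands suggested for the nonlinear consensus figure are straightforward to produce. A stdlib-only sketch, under the assumption that each discrepancy bin holds a list of per-post consensus scores (function and variable names are illustrative, not the paper's):

```python
import random

def bootstrap_band(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean consensus
    within one discrepancy bin. Returns (lower, upper) bounds of the
    (1 - alpha) band."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(values)
    # Resample with replacement n_boot times and record each mean.
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n for _ in range(n_boot)
    )
    lower = means[int((alpha / 2) * n_boot)]
    upper = means[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper

# Illustrative bin of consensus scores (hypothetical data).
bin_scores = [0.42, 0.51, 0.60, 0.48, 0.55, 0.45, 0.58, 0.50]
lo, hi = bootstrap_band(bin_scores)
print(lo, hi)
```

Bands like these, drawn per discrepancy bin, would show whether the U-shape at the extremes survives resampling noise.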
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed comments, which have prompted us to strengthen the methodological transparency and robustness of our analysis on ideological discrepancy, audience consensus, and toxicity during the Brazilian election. We address each major comment below and have revised the manuscript accordingly.
Point-by-point responses
- Referee: §3 (Methods): The operationalization of 'ideological discrepancy' between publishers and content (likely via outlet labels combined with text-based classifiers or embeddings) is not accompanied by validation metrics such as inter-annotator agreement or correlation with expert judgments. Given that this variable is central to all reported associations, including the nonlinear consensus pattern, systematic measurement error correlated with toxicity or emotional valence could artifactually generate the observed U-shaped relationship and predictor rankings.
Authors: We agree that explicit validation of the ideological discrepancy measure is necessary to support the reported associations. The measure combines pre-existing publisher ideological labels with a text-based classifier applied to news content. In the revised manuscript, we have expanded the Methods section to include validation metrics: inter-annotator agreement (Cohen's kappa = 0.81) on a double-coded subset of content ideology labels and Pearson correlation (r = 0.73) between the automated discrepancy scores and independent expert ratings on a random sample of 150 posts. We also report sensitivity analyses that exclude borderline cases and confirm that the nonlinear consensus pattern and predictor rankings remain stable. These additions address concerns about potential measurement error. revision: yes
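The validation metrics cited in this response are standard and easy to reproduce; a stdlib-only sketch with illustrative labels and scores (not the paper's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two annotators'
    categorical labels (e.g. content-ideology codes)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    categories = set(a) | set(b)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    p_expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

def pearson_r(x, y):
    """Pearson correlation, e.g. between automated discrepancy scores
    and expert ratings on a validation sample."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

# Perfect agreement on balanced labels gives kappa = 1.0;
# perfectly proportional scores give r = 1.0.
print(cohens_kappa(["L", "R", "L", "R"], ["L", "R", "L", "R"]))
print(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```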
- Referee: §4 (Results): The statistical model ranking emotional valence, toxicity, and ideological discrepancy as top factors associated with consensus does not report the full model specification (e.g., regression type, controls, variable importance method) or robustness checks to alternative discrepancy quantifications. Without these, it is unclear whether the associations are stable or sensitive to modeling choices.
Authors: We appreciate this observation and have revised the Results and Methods sections to provide complete model details. The primary analysis uses ordinary least squares regression with audience consensus as the outcome, including controls for post length, posting time, engagement volume, and fixed effects for publisher and topic. Variable importance was computed via permutation importance within a supplementary random forest model, yielding consistent rankings. We have added robustness checks using alternative discrepancy operationalizations (continuous scores, different embedding models, and quartile vs. decile binning), with the top three predictors and the nonlinear pattern remaining stable across specifications. Full model tables and code are now included in the supplement. revision: yes
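The permutation-importance procedure named in this response can be sketched generically. This is a minimal stdlib-only version, assuming a fitted `predict` function and a feature matrix of rows; it measures how much shuffling one column degrades mean squared error (the function and data here are illustrative, not the authors' random-forest pipeline):

```python
import random

def permutation_importance(predict, X, y, col, n_repeats=20, seed=0):
    """Average increase in MSE when feature `col` is shuffled across rows,
    breaking its association with the outcome while keeping its marginal
    distribution. Larger values mean the model leans harder on that feature."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    increases = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]          # copy rows
        column = [row[col] for row in shuffled]
        rng.shuffle(column)                        # permute one feature
        for row, v in zip(shuffled, column):
            row[col] = v
        increases.append(mse(shuffled) - baseline)
    return sum(increases) / n_repeats

# Toy model: the outcome depends only on feature 0, so shuffling
# feature 0 hurts while shuffling feature 1 changes nothing.
X = [[float(i % 7), float(i % 3)] for i in range(30)]
y = [2.0 * row[0] for row in X]
predict = lambda r: 2.0 * r[0]
print(permutation_importance(predict, X, y, col=0))
print(permutation_importance(predict, X, y, col=1))
```

Ranking features by this quantity is what yields statements like "valence, toxicity, and discrepancy are the strongest correlates of consensus."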
- Referee: §4.2 (Toxicity-consensus link): The finding that higher toxicity is associated with increased consensus among highly partisan publishers relies on a subset analysis; the paper should demonstrate that this interaction is not driven by confounding with engagement volume or platform algorithms specific to the election period.
Authors: We acknowledge the value of ruling out confounds in the subset analysis. The revised manuscript extends the interaction model to include engagement volume (total reactions and comments) as a covariate; the positive toxicity-consensus association among highly partisan publishers remains statistically significant after these controls. We have added a supplementary table comparing coefficients with and without volume controls. However, as the data are observational and collected during a specific election period, we cannot experimentally isolate platform algorithm effects. We now explicitly discuss this limitation in the revised Discussion section and note that the pattern holds across multiple time windows within the dataset. revision: partial
Circularity Check
No circularity in empirical statistical analysis
Full rationale
The paper is an empirical data analysis of Facebook posts during a Brazilian election, examining associations between ideological discrepancy, emotional valence, toxicity, audience consensus, and topics via statistical modeling. No mathematical derivations, equations, or predictions are present that reduce to fitted inputs by construction. The central results (nonlinear consensus patterns, predictor rankings) are reported as observed associations in the data rather than derived outputs. No self-citations function as load-bearing uniqueness theorems, no ansatzes are smuggled, and no known results are renamed as novel unifications. The work is self-contained as a descriptive statistical study without internal circular steps.
Reference graph
Works this paper leans on
- [1] Levy R. Social Media, News Consumption, and Polarization: Evidence from a Field Experiment. American Economic Review. 2021;111(3):831-70. doi:10.1257/aer.20191777
- [2] de León E, Trilling D. A sadness bias in political news sharing? The role of discrete emotions in the engagement and dissemination of political news on Facebook. Social Media + Society. 2021;7(4):20563051211059710
- [3] Thelen ST, Yoo B. Facebook as a Source of Information about Presidential Candidates: An Abstract. In: Academy of Marketing Science Annual Conference-World Marketing Congress. Springer; 2021. p. 43-4
- [4] Cinelli M, De Francisci Morales G, Galeazzi A, Quattrociocchi W, Starnini M. The echo chamber effect on social media. Proceedings of the National Academy of Sciences. 2021;118(9):e2023301118
- [5] Pariser E. The filter bubble: What the Internet is hiding from you. UK: Penguin Books Limited; 2011
- [6] McPherson M, Smith-Lovin L, Cook JM. Birds of a feather: Homophily in social networks. Annual Review of Sociology. 2001;27(1):415-44
- [7] Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, et al. The spreading of misinformation online. Proceedings of the National Academy of Sciences. 2016;113(3):554-9
- [8] Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. 2018;359(6380):1146-51
- [9] Bail CA, Argyle LP, Brown TW, Bumpus JP, Chen H, Hunzaker MF, et al. Exposure to opposing views on social media can increase political polarization. Proceedings of the National Academy of Sciences. 2018;115(37):9216-21
- [10] Kobellarz JK, Brocic M, Silver D, Silva TH. Bubble reachers and uncivil discourse in polarized online public sphere. PLOS ONE. 2024;19(6):1-30. doi:10.1371/journal.pone.0304564
- [11] de Lima-Santos MF, Gonçalves I, Quiles MG, Mesquita L, Ceron W, Lorena MCC. Visual Political Communication in a Polarized Society: A Longitudinal Study of Brazilian Presidential Elections on Instagram. arXiv preprint arXiv:2310.00349. 2023
- [12] Kobellarz J, Graeml A, Reddy M, Silva TH. Parrot Talk: Retweeting Among Twitter Users During the 2018 Brazilian Presidential Election. In: Proceedings of the Brazilian Symposium on Multimedia and the Web. Rio de Janeiro, Brazil
- [13] Fuks M, Marques PH. Polarização e contexto: medindo e explicando a polarização política no Brasil [Polarization and context: measuring and explaining political polarization in Brazil]. Opinião Pública. 2022;28:560-93
- [14] Cunha GF, da Silva APC. Caracterizando Polarização nas Eleições Brasileiras de 2018 e 2022: Uma Análise das Discussões no Reddit com um Modelo de Regressão para Stance Detection [Characterizing Polarization in the 2018 and 2022 Brazilian Elections: An Analysis of Reddit Discussions with a Regression Model for Stance Detection]. Revista Eletrônica de Iniciação Científica em Computação. 2025;23:265-75
- [15] Bakshy E, Messing S, Adamic LA. Exposure to ideologically diverse news and opinion on Facebook. Science. 2015;348(6239):1130-2
- [16] Guess AM, Nyhan B, Reifler J. Exposure to untrustworthy websites in the 2016 US election. Nature Human Behaviour. 2021;5:472-80
- [17] Garimella K, Morales GDF, Gionis A, Mathioudakis M. Quantifying Controversy on Social Media. Transactions on Social Computing. 2018;1(1). doi:10.1145/3140565
- [18] Conover M, Ratkiewicz J, Francisco M, Gonçalves B, Menczer F, Flammini A. Political polarization on twitter. In: Proc. of ICWSM. vol. 5; 2011. p. 89-96
- [19] Flaxman S, Goel S, Rao JM. Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly. 2016;80(S1):298-320
- [20] Colleoni E, Rozza A, Arvidsson A. Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data. Journal of Communication. 2014;64(2):317-32. doi:10.1111/jcom.12084
- [21] Boutyline A, Willer R. The Social Structure of Political Echo Chambers: Variation in Ideological Homophily in Online Networks. Political Psychology. 2017;38:551-69. doi:10.1111/pops.12337
- [22] Overgaard CSB. Perceiving Affective Polarization in the United States: How Social Media Shape Meta-Perceptions and Affective Polarization. Social Media + Society. 2024;10(1). doi:10.1177/20563051241232662
- [23] Mason L, Versteegen PL. Social sorting and affective polarization. In: Handbook of Affective Polarization. Edward Elgar Publishing; 2025
- [24] Erickson J, Yan B, Huang J. Bridging Echo Chambers? Understanding Political Partisanship through Semantic Network Analysis. Social Media + Society. 2023;9(3). doi:10.1177/20563051231186368
- [25] Ribeiro MH, Ottoni R, West R, Almeida VAF, Meira W. Auditing radicalization pathways on YouTube. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. FAT* '20. New York, NY, USA: Association for Computing Machinery; 2020. p. 131-141. doi:10.1145/3351095.3372879
- [26] Çetinkaya YM, Ghafouri V, Suarez-Tangil G, Such J, Elmas T. Cross-Partisan Interactions on Twitter. In: Proc. of ICWSM; 2025
- [27] Xia Y, Monti C, Keller B, Kivelä M. Integrated or Segregated? User Behavior Change After Cross-Party Interactions on Reddit. In: Proc. of ICWSM; 2025
- [28] Kobellarz JK, Broćić M, Graeml AR, Silver D, Silva TH. Reaching the bubble may not be enough: news media role in online political polarization. EPJ Data Science. 2022;11(1):47
- [29] Matakos A, Terzi E, Tsaparas P. Measuring and moderating opinion polarization in social networks. Data Mining and Knowledge Discovery. 2017;31:1480-505
- [30] Conover MD, Goncalves B, Ratkiewicz J, Flammini A, Menczer F. Predicting the Political Alignment of Twitter Users. In: 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing; 2011. p. 192-9. doi:10.1109/PASSAT/SocialCom.2011.34
- [31] Guess AM, Malhotra N, Pan J, Barberá P, Allcott H, Brown T, et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science. 2023;381(6656):398-404. doi:10.1126/science.abp9364
- [32] Meta. CrowdTangle; 2024. Meta Transparency Center. Accessed: 2026-04-27. CrowdTangle discontinued August 14, 2024. https://transparency.meta.com/researchtools/other-data-catalogue/crowdtangle
- [33] Jigsaw and Google. Perspective API; 2024. Accessed: 2026-04-27. https://perspectiveapi.com
- [34] Kobellarz J, Silva TH. Should We Translate? Evaluating Toxicity in Online Comments when Translating from Portuguese to English. In: Simpósio Brasileiro de Sistemas Multimídia e Web (WebMedia). Curitiba, Brazil; 2022
- [35] Smithson M, Verkuilen J. A better lemon squeezer? Maximum-likelihood regression with beta-distributed dependent variables. Psychological Methods. 2006;11(1):54
- [36] Anderson AA, Yeo SK, Brossard D, Scheufele DA, Xenos MA. Toxic talk: How online incivility can undermine perceptions of media. International Journal of Public Opinion Research. 2018;30(1):156-68
- [37] Tajfel H, Turner J, Austin WG, Worchel S. An integrative theory of intergroup conflict. Intergroup Relations: Essential Readings. 2001:94-109
- [38] Gervais BT. Incivility online: Affective and behavioral responses to uncivil political blogs. American Behavioral Scientist. 2015;59(12):1674-93
- [39] Rathje S, Van Bavel JJ, van der Linden S. Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences. 2021;118(26):e2024212118
- [40] Buzelin A, Aquino Y, Bento P, Malaquias S, Meira Jr W, Pappa GL. A Change in Perspective: The Trade-Off Between Perspective API and Custom Models in Classifying Hate Speech in Portuguese. In: Simpósio Brasileiro de Tecnologia da Informação e da Linguagem Humana (STIL). SBC; 2024. p. 23-31
- [41] Xenos A, Pavlopoulos J, Androutsopoulos I. Context sensitivity estimation in toxicity detection. In: Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021); 2021. p. 140-5
- [42] Muralikumar MD, Yang YS, McDonald DW. A human-centered evaluation of a toxicity detection api: Testing transferability and unpacking latent attributes. ACM Transactions on Social Computing. 2023;6(1-2):1-38
- [43] Bell S, Sánchez E, Dale D, Stenetorp P, Artetxe M, Costa-jussà MR. Translate, then Detect: Leveraging Machine Translation for Cross-Lingual Toxicity Classification. In: Proceedings of the Tenth Conference on Machine Translation
- [44] Gervais BT, Dye C, Chin A. Incivility or Invalidity? Evaluating Perspective API Scores as a Measure of Political Incivility. American Politics Research. 2025;53(3):266-74
- [45] Blei DM, Ng AY, Jordan MI. Latent dirichlet allocation. Journal of Machine Learning Research. 2003;3(Jan):993-1022
- [46] Laurer M, van Atteveldt W, Casas AS, Welbers K. Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI. Political Analysis. 2024. doi:10.1017/pan.2023.20
- [47] Laurer M, van Atteveldt W, Casas A, Welbers K. Building Efficient Universal Classifiers with Natural Language Inference; 2023. arXiv:2312.17543. doi:10.48550/arXiv.2312.17543
- [48] Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics; 2020. p. 7871-80. doi:10.18653/v1/2020.acl-main.703
- [49] Lopes A, Nogueira R, Lotufo R, Pedrini H. Lite Training Strategies for Portuguese-English and English-Portuguese Translation. In: Proceedings of the Fifth Conference on Machine Translation. Online: Association for Computational Linguistics; 2020. p. 833-40