pith. machine review for the scientific record.

arxiv: 2604.09465 · v1 · submitted 2026-04-10 · 💻 cs.SI · cs.HC

Recognition: unknown

Silence and Noise: Self-censorship and Opinion Expression on Social Media

Bruce Desmarais, Emma Carpenetti, Sarah Rajtmajer, Xinyu Wang

Pith reviewed 2026-05-10 16:21 UTC · model grok-4.3

classification 💻 cs.SI · cs.HC
keywords self-censorship · opinion expression · social media · community context · public discourse · polarization · mixed methods · echo chambers

The pith

Social media users self-censor more in larger audiences with low perceived support and adjust their expressed views to match group norms.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how users hold back or reshape their opinions online instead of sharing private beliefs openly. Using survey responses from 390 participants and interviews with 20 others, it ties self-censorship to community surroundings rather than only personal traits. Users who see bigger audiences, post less often, and sense little backing speak up less and conform more when they do post. This pattern matters because it shapes what reaches public view in divided settings and leaves parts of actual opinion hidden behind the visible activity on platforms.

Core claim

Self-censorship is associated with community context; social media users embedded within larger audiences, with lower posting frequency and perceived support, are less likely to express their opinions, and those who do speak often adjust their expressed views to align with perceived group norms. The study complements the rich literature on echo chambers and opinion reinforcement on social media platforms, highlighting the silence within the noise and its potential consequences for public discourse, which have become increasingly pertinent in an era where online platforms are pivotal to social and political narratives.

What carries the argument

Community-context factors (audience size, posting frequency, and perceived support) that shape whether users express their opinions or align them with perceived group norms.

If this is right

  • Public discourse on social media misses a portion of privately held views due to restraint in larger or less supportive groups.
  • Visible opinions tend to shift toward perceived group norms among users who choose to post.
  • This dynamic affects participation in polarized topics where platforms influence broader narratives.
  • Attention to self-censorship adds a layer beyond echo chamber reinforcement by showing active silencing within ongoing activity.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Measured public opinion from social media alone may understate the range of actual beliefs in a community.
  • Platform designs that make audience size less visible or boost signals of support could narrow the gap between private and expressed views.
  • Mixed-method self-report studies like this one point to the need for direct behavioral measures to test how often private views stay entirely offline.

Load-bearing premise

Participants' self-reported differences between publicly shared opinions and privately held beliefs accurately reflect actual behavior without substantial social desirability bias or recall errors.

What would settle it

Longitudinal tracking of actual social media posts, compared against separately collected private beliefs, would settle it: the claim would fail if no systematic gap appeared linked to audience size, posting frequency, or perceived support.

Figures

Figures reproduced from arXiv: 2604.09465 by Bruce Desmarais, Emma Carpenetti, Sarah Rajtmajer, Xinyu Wang.

Figure 1. Participants confirmed that there exists a discrepancy between their privately held …
Figure 1. Participants’ agreement with statements intended to capture: …
Figure 2. Perceived polarization and willingness to express true opinion, by topic (Pol=Polarization, OE=Opinion …)
original abstract

Unlike the more observable phenomenon of group opinion reinforcement, self-censorship online has received comparatively less attention. Our goal in this work is to dissect the phenomena of self-censorship and to examine the implications of restrained expression for participation in public discourse, particularly in polarized contexts. We explore how social media users express their opinions online through analyses of 390 survey responses and 20 semi-structured interviews using a mixed-methods approach. We ask social media users about the differences between their publicly shared opinions and privately held beliefs, highlighting the influence of contextual factors on self-expression. Our findings show that self-censorship is associated with community context; social media users embedded within larger audiences, with lower posting frequency and perceived support, are less likely to express their opinions, and those who do speak often adjust their expressed views to align with perceived group norms. The study complements the rich literature on echo chambers and opinion reinforcement on social media platforms, highlighting the silence within the noise and its potential consequences for public discourse, which have become increasingly pertinent in an era where online platforms are pivotal to social and political narratives.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript reports a mixed-methods study based on 390 survey responses and 20 semi-structured interviews that examines self-censorship on social media. It claims that self-censorship is associated with community context: users embedded in larger audiences, with lower posting frequency and lower perceived support, are less likely to express opinions and, when they do, often adjust expressed views to align with perceived group norms. The work positions these findings as complementing echo-chamber research by highlighting restrained participation and its consequences for public discourse.

Significance. If the associations prove robust after addressing measurement and validation concerns, the study would usefully extend the literature on opinion expression by documenting contextual correlates of self-censorship and underscoring the potential for silence to shape online public debate in polarized settings.

major comments (2)
  1. [Methods] The central associations rest entirely on self-reported differences between privately held beliefs and publicly posted opinions. No behavioral trace data (posting histories, platform metrics, or logged activity) are used to corroborate these reports, leaving the findings vulnerable to social-desirability bias, inaccurate recall, or post-hoc rationalization.
  2. [Abstract and Results] No details are supplied on statistical controls, the precise operationalization of private beliefs, sample representativeness, or handling of post-hoc adjustments. These omissions make it difficult to assess whether the reported links between audience size, posting frequency, perceived support, and self-censorship are robust.
minor comments (2)
  1. [Abstract] 'dissect the phenomena' should read 'dissect the phenomenon'.
  2. The manuscript would be strengthened by explicitly stating the social-media platforms examined and the recruitment and demographic characteristics of the 390 respondents.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for their constructive comments, which help strengthen the manuscript. We respond to each major point below and have revised the paper to improve transparency and acknowledge limitations where appropriate.

point-by-point responses
  1. Referee: [Methods] The central associations rest entirely on self-reported differences between privately held beliefs and publicly posted opinions. No behavioral trace data (posting histories, platform metrics, or logged activity) are used to corroborate these reports, leaving the findings vulnerable to social-desirability bias, inaccurate recall, or post-hoc rationalization.

    Authors: We agree that the study relies on self-reported data from surveys and interviews without behavioral trace data, which leaves room for biases such as social desirability or recall error. Self-censorship is inherently unobservable through platform logs, as it manifests as non-expression; our mixed-methods design uses quantitative associations from 390 responses alongside 20 interviews to explore contextual influences and mechanisms. In the revised manuscript we have expanded the Methods and Limitations sections to discuss these biases explicitly, justify the self-report approach, and note that trace data would require a separate study. We cannot add behavioral validation without new data collection. revision: partial

  2. Referee: [Abstract and Results] No details are supplied on statistical controls, the precise operationalization of private beliefs, sample representativeness, or handling of post-hoc adjustments. These omissions make it difficult to assess whether the reported links between audience size, posting frequency, perceived support, and self-censorship are robust.

    Authors: We have revised both the abstract and Results section to supply the requested details. The abstract now notes the key predictors and outcome measures. The Results section now specifies the regression controls (demographics, platform type, and usage frequency), the exact survey items operationalizing private beliefs versus expressed opinions, the convenience sampling frame and its limits on representativeness, and confirmation that analyses followed pre-registered plans with no post-hoc adjustments. These additions should allow readers to better evaluate robustness. revision: yes
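The analysis the rebuttal describes (a binary expression outcome regressed on audience size, posting frequency, and perceived support, with demographic and usage controls) can be sketched on synthetic data. Everything below is illustrative: the variable names, coding, and effect sizes are assumptions for the sketch, not the paper's actual survey items or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical, standardized stand-ins for the predictors the rebuttal names.
audience_size     = rng.normal(size=n)   # e.g., log follower/friend count
posting_freq      = rng.normal(size=n)
perceived_support = rng.normal(size=n)
age               = rng.normal(size=n)   # one demographic control

# Assumed data-generating process mirroring the reported signs:
# larger audience -> less expression; more posting and support -> more.
logit_true = (-0.8 * audience_size + 0.6 * posting_freq
              + 0.7 * perceived_support + 0.1 * age)
expressed = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

# Fit a logistic regression (intercept + 4 predictors) by Newton-Raphson.
X = np.column_stack([np.ones(n), audience_size, posting_freq,
                     perceived_support, age])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))        # predicted P(expressed)
    grad = X.T @ (expressed - p)           # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
    beta += np.linalg.solve(hess, grad)

print(dict(zip(["const", "audience", "freq", "support", "age"],
               beta.round(2))))
```

With the assumed effects, the fitted coefficients recover the negative sign on audience size and positive signs on posting frequency and perceived support; in the real study these would come with the pre-registered controls and survey-item coding the revised Results section reports.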

standing simulated objections not resolved
  • We do not have behavioral trace data available and cannot retroactively validate self-reported self-censorship with platform logs or posting histories.

Circularity Check

0 steps flagged

No circularity: empirical claims rest on primary survey and interview data

full rationale

The paper reports results from 390 survey responses and 20 semi-structured interviews analyzed via mixed methods. No equations, fitted parameters, predictive models, or derivations appear in the provided text. Central claims about associations between self-censorship and community context are presented as direct outputs of the collected responses rather than reductions of prior fits or self-citations. No self-definitional loops, imported uniqueness theorems, or ansatz smuggling are present. The derivation chain is therefore self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests primarily on the validity of self-reported data from surveys and interviews; no free parameters or invented entities are introduced, and the single key assumption is a standard domain assumption in social science research.

axioms (1)
  • domain assumption Self-reported differences between publicly shared opinions and privately held beliefs in surveys and interviews accurately reflect actual self-censorship behavior.
    The study design and findings depend on participants providing honest accounts of their private beliefs versus public expressions.

pith-pipeline@v0.9.0 · 5501 in / 1238 out tokens · 49215 ms · 2026-05-10T16:21:13.543048+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

137 extracted references · 4 canonical work pages
