When Transparency Falls Short: Auditing Platform Moderation During a High-Stakes Election
Pith reviewed 2026-05-10 01:13 UTC · model grok-4.3
The pith
Social media platforms showed no significant change in content moderation around the 2024 European Parliament elections.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Platforms did not exhibit meaningful signs of adaptation in their moderation strategies: their self-reported enforcement patterns did not change significantly around the 2024 European Parliament elections. This finding, based on an analysis of over 1.5 billion actions, raises the question of whether platforms made any concrete adjustments or whether the database's structure obscured them, and it notes that concerns about platform transparency persist one year after the database's launch.
What carries the argument
The Digital Services Act Transparency Database, which collects self-reported moderation actions from platforms and enables statistical comparison of enforcement patterns before, during, and after the elections.
Load-bearing premise
The self-reported moderation actions accurately reflect actual platform behavior, and the database's structure permits detection of any meaningful adaptations during high-stakes periods.
What would settle it
Detection of statistically significant shifts in the volume, type, or targeting of moderation actions specifically tied to the election dates when using non-self-reported data sources or more granular platform records.
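As an illustration of the volume criterion, a shift in daily moderation-action counts could be probed with a minimal mean-shift scan over the time series. The sketch below is illustrative only: `best_split` is a hypothetical helper, and the counts are synthetic placeholders, not values from the paper or the Transparency Database.

```python
def best_split(series):
    """Return (index, cost_drop): the split point that most reduces total
    squared error versus fitting a single constant mean to the series."""
    n = len(series)

    def sse(xs):
        if not xs:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)

    base = sse(series)
    best_i, best_drop = None, 0.0
    for i in range(2, n - 1):  # keep at least two points per segment
        drop = base - (sse(series[:i]) + sse(series[i:]))
        if drop > best_drop:
            best_i, best_drop = i, drop
    return best_i, best_drop

if __name__ == "__main__":
    # Synthetic daily counts: flat at 100, stepping up to 130 on day 10.
    counts = [100] * 10 + [130] * 10
    idx, drop = best_split(counts)
    print(idx)  # detected shift point; compare against the election date
```

A real audit would use a formal changepoint method (such as the PELT algorithm cited in the reference graph) and test whether detected breakpoints coincide with the election window.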
read the original abstract
During major political events, social media platforms encounter increased systemic risks. However, it is still unclear if and how they adjust their moderation practices in response. The Digital Services Act Transparency Database provides, for the first time, an opportunity to systematically examine content moderation at scale, allowing researchers and policymakers to evaluate platforms' compliance and effectiveness, especially at high-stakes times. Here we analyze 1.58 billion self-reported moderation actions by the eight largest social media platforms in Europe over an eight-month period surrounding the 2024 European Parliament elections. We found that platforms did not exhibit meaningful signs of adaptation in moderation strategies as their self-reported enforcement patterns did not change significantly around the elections. This raises questions about whether platforms made any concrete adjustments, or whether the structure of the database may have masked them. On top of that, we reveal that initial concerns regarding platforms' transparency and accountability still persist one year after the launch of the Transparency Database. Our findings highlight the limits of current self-regulatory approaches and point to the need for stronger enforcement and better data access mechanisms to ensure that online platforms meet their responsibilities in protecting the democratic processes.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper analyzes 1.58 billion self-reported content moderation actions from the eight largest social media platforms in Europe over an eight-month window surrounding the 2024 European Parliament elections, drawing on the Digital Services Act Transparency Database. It reports no statistically detectable shifts in enforcement patterns around the election period and concludes that platforms exhibited no meaningful adaptation in moderation strategies. The authors interpret this as raising questions about whether concrete adjustments occurred or whether database limitations (coarse categories, reporting lags) prevented detection, while also documenting persistent transparency and accountability shortfalls one year after the database launch.
Significance. If the null result on adaptation is robust, the work constitutes the first large-scale, public-data audit of platform behavior during a high-stakes election under the DSA transparency regime. It also provides concrete evidence that current self-reporting mechanisms may be too coarse to reveal responsiveness to systemic risks, thereby informing debates on enforcement, data-access requirements, and the adequacy of self-regulation for protecting democratic processes.
major comments (2)
- [Results] Results section: the central claim that enforcement patterns 'did not change significantly' is presented without any description of the statistical tests, effect-size thresholds, confidence intervals, multiple-comparison corrections, or controls for secular trends and confounders. This omission directly undermines evaluation of the null finding that underpins the paper's main conclusion.
- [Discussion] Discussion section: the authors correctly flag that database structure could mask adaptations, yet provide no sensitivity analysis, simulation of undetectable change types, or quantification of the granularity limits (e.g., action-category coarseness or reporting lags). Without such material the interpretation that the finding 'raises questions' rather than demonstrates non-adaptation remains speculative.
minor comments (2)
- [Abstract] Abstract: the exact list of the eight platforms and the precise start and end dates of the eight-month window should be stated explicitly rather than left implicit.
- [Methods] Methods: data-exclusion criteria, handling of missing values, and any platform-specific reporting differences are not summarized, complicating reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments on our manuscript. These observations highlight important areas for improving the statistical transparency and interpretive rigor of our analysis. We address each major comment below and will revise the paper accordingly.
read point-by-point responses
-
Referee: [Results] Results section: the central claim that enforcement patterns 'did not change significantly' is presented without any description of the statistical tests, effect-size thresholds, confidence intervals, multiple-comparison corrections, or controls for secular trends and confounders. This omission directly undermines evaluation of the null finding that underpins the paper's main conclusion.
Authors: We agree that the Results section requires a more explicit account of the statistical procedures used to evaluate changes around the election period. In the revised manuscript we will add a dedicated subsection describing the tests (including any interrupted time-series or difference-in-differences specifications), effect-size metrics, confidence intervals, multiple-comparison adjustments where relevant, and controls for secular trends and other potential confounders. This addition will allow readers to assess the robustness of the null result directly. revision: yes
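One simple test such a subsection might specify is a nonparametric permutation test on the difference in mean daily action counts across the election date. The sketch below is a hedged illustration, not the authors' actual procedure: `perm_test` is a hypothetical helper and the daily series are synthetic placeholders.

```python
import random

def perm_test(pre, post, n_iter=10_000, seed=0):
    """Two-sided permutation p-value for the difference in mean daily counts."""
    rng = random.Random(seed)
    observed = abs(sum(post) / len(post) - sum(pre) / len(pre))
    pooled = list(pre) + list(post)
    k = len(pre)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # break any real pre/post structure
        diff = abs(sum(pooled[k:]) / len(post) - sum(pooled[:k]) / k)
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)  # add-one smoothing avoids p = 0

if __name__ == "__main__":
    # Synthetic daily moderation-action counts (not real DSA data).
    pre = [102, 98, 101, 99, 100, 103, 97]    # week before the election
    post = [101, 99, 100, 102, 98, 100, 101]  # week after the election
    print(f"p = {perm_test(pre, post):.3f}")  # large p: no detectable shift
```

An interrupted time-series or difference-in-differences specification, as the rebuttal proposes, would additionally control for secular trends rather than assuming exchangeable days.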
-
Referee: [Discussion] Discussion section: the authors correctly flag that database structure could mask adaptations, yet provide no sensitivity analysis, simulation of undetectable change types, or quantification of the granularity limits (e.g., action-category coarseness or reporting lags). Without such material the interpretation that the finding 'raises questions' rather than demonstrates non-adaptation remains speculative.
Authors: We accept that our discussion of database limitations would be strengthened by quantitative sensitivity checks. We will incorporate sensitivity analyses and targeted simulations in the revised version to illustrate the minimum detectable shifts given the coarseness of action categories and known reporting lags. These additions will clarify the boundary between what the data can and cannot reveal, moving the interpretation from qualitative caution to a more evidence-based assessment. revision: yes
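A minimal version of such a sensitivity check, assuming a Gaussian approximation to daily action counts and purely illustrative parameters (baseline volume, noise level, window length), injects step shifts of varying relative size and records how often a simple z-test flags them; the smallest reliably detected shift approximates the minimum detectable effect.

```python
import math
import random

def detects(baseline_mean, shift_frac, days, sigma, rng):
    """Simulate one pre/post window pair and apply a two-sided z-test."""
    pre = [rng.gauss(baseline_mean, sigma) for _ in range(days)]
    post = [rng.gauss(baseline_mean * (1 + shift_frac), sigma) for _ in range(days)]
    se = sigma * math.sqrt(2 / days)  # std. error of the difference in means
    z = (sum(post) / days - sum(pre) / days) / se
    return abs(z) > 1.96  # nominal 5% significance level

def detection_rate(shift_frac, trials=500, seed=1):
    """Fraction of simulated windows in which the injected shift is flagged."""
    rng = random.Random(seed)
    hits = sum(detects(1000.0, shift_frac, days=30, sigma=100.0, rng=rng)
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    for frac in (0.0, 0.02, 0.05, 0.10):
        print(f"injected shift {frac:.0%}: detected in {detection_rate(frac):.0%} of runs")
```

In this toy setup small shifts go largely undetected while a 10% step is flagged almost always, which is exactly the kind of boundary the proposed revision would quantify against the real action-category coarseness and reporting lags.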
Circularity Check
No circularity: purely observational empirical analysis
full rationale
The paper performs a statistical examination of 1.58 billion self-reported moderation actions from a public database over an eight-month window. The central claim (no significant change in enforcement patterns around the 2024 elections) is obtained directly by comparing observed frequencies and distributions in the data before, during, and after the election period. No equations, fitted parameters, predictions, or derivations are present that could reduce the result to prior inputs by construction. The authors explicitly flag database limitations (coarse categories, possible reporting lags) and frame the finding as raising questions rather than proving non-adaptation. No self-citation chains or ansatzes are load-bearing; the analysis is self-contained against the external database.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Self-reported moderation actions in the DSA Transparency Database accurately and completely reflect platforms' actual enforcement practices.
Reference graph
Works this paper leans on
-
[1]
Rho, E.H.R., Mazmanian, M.: Political hashtags & the lost art of democratic discourse. In: ACM CHI (2020)
2020
-
[2]
Papakyriakopoulos, O., Engelmann, S., Winecoff, A.: Upvotes? downvotes? no votes? understanding the relationship between reaction mechanisms and political discourse on reddit. In: ACM CHI (2023)
2023
-
[3]
Shahi, G.K., Basyurt, A.S., Stieglitz, S., Neuberger, C.: Agenda formation and prediction of voting tendencies for european parliament election using textual, social and network features. Information Systems Frontiers (2024)
2024
-
[4]
Cresci, S., Di Pietro, R., Petrocchi, M., Spognardi, A., Tesconi, M.: A criticism to society (as seen by Twitter analytics). In: IEEE ICDCS Workshops (2014)
2014
-
[5]
Fatema, S., Yanbin, L., Fugui, D.: Social media influence on politicians' and citizens' relationship through the moderating effect of political slogans. Frontiers in Communication 7 (2022)
2022
-
[6]
Kim, D.H., Ellison, N.B.: From observation on social media to offline political participation: The social media affordances approach. New Media & Society 24(12) (2022)
2022
-
[7]
Bene, M., Ceron, A., Fenoll, V., Haßler, J., Kruschinski, S., Larsson, A.O., Magin, M., Schlosser, K., Wurst, A.-K.: Keep them engaged! investigating the effects of self-centered social media communication style on user engagement in 12 european countries. Political Communication (2022)
2022
-
[8]
Chen, L., Chen, J., Xia, C.: Social network behavior and public opinion manipulation. Journal of Information Security and Applications 64, 103060 (2022)
2022
-
[9]
Cinelli, M., Cresci, S., Galeazzi, A., Quattrociocchi, W., Tesconi, M.: The limited reach of fake news on twitter during 2019 european elections. PloS One 15(6) (2020)
2020
-
[10]
Tardelli, S., Avvenuti, M., Tesconi, M., Cresci, S.: Characterizing social bots spreading financial disinformation. In: HCII (2020)
2020
-
[11]
Matatov, H., Naaman, M., Amir, O.: Stop the [image] steal: The role and dynamics of visual content in the 2020 us election misinformation campaign. In: ACM CSCW (2022)
2022
-
[12]
Mazza, M., Avvenuti, M., Cresci, S., Tesconi, M.: Investigating the difference between trolls, social bots, and humans on Twitter. Computer Communications 196 (2022)
2022
-
[13]
Haq, E.-U., Zhu, Y., Hui, P., Tyson, G.: History in making: Political campaigns in the era of artificial intelligence-generated content. In: ACM WebConf Companion (2024)
2024
-
[14]
Diakopoulos, N., Johnson, D.: Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Media & Society 23(7) (2021)
2021
-
[15]
Hua, Y., Naaman, M., Ristenpart, T.: Characterizing Twitter users who engage in adversarial interactions against political candidates. In: ACM CHI (2020)
2020
-
[16]
Bär, D., Pierri, F., De Francisci Morales, G., Feuerriegel, S.: Systematic discrepancies in the delivery of political ads on Facebook and Instagram. PNAS Nexus 3(7) (2024)
2024
-
[17]
Grimmelmann, J.: The virtues of moderation. Yale Journal of Law & Technology 17, 42 (2015)
2015
-
[18]
Gillespie, T.: Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press, New Haven (2018)
2018
-
[19]
European Parliament and Council: Regulation on a Single Market For Digital Services (Digital Services Act) and Amending Directive. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32022R2065
-
[20]
Trujillo, A., Fagni, T., Cresci, S.: The dsa transparency database: Auditing self-reported moderation actions by social media. In: ACM CSCW (2025)
2025
-
[21]
Kaushal, R., Van De Kerkhof, J., Goanta, C., Spanakis, G., Iamnitchi, A.: Automated transparency: A legal and empirical analysis of the Digital Services Act Transparency Database. In: ACM FAccT (2024)
2024
-
[22]
Dergacheva, D., Kuznetsova, V., Scharlach, R., Katzenbach, C.: One day in content moderation: Analyzing 24h of social media platforms' content decisions through the DSA Transparency Database. Technical report, Lab Platform Governance, Media, and Technology (PGMT), Centre for Media, Communication and Information Research (ZeMKI), University of Bremen (2023)
2023
-
[23]
Aspromonte, M., Ferraris, A., Galli, F., Contissa, G.: LLMs to the rescue: Explaining DSA statements of reason with platform's terms of services. In: NNLP (2024)
2024
-
[24]
Drolsbach, C.P., Pröllochs, N.: Content moderation on social media in the EU: Insights from the DSA Transparency Database. In: ACM WebConf Companion (2024)
2024
-
[25]
Shahi, G.K., Tessa, B., Trujillo, A., Cresci, S.: A year of the DSA Transparency Database: What it (does not) reveal about platform moderation during the 2024 European parliament election. In: AAAI COMPASS (2025)
2025
-
[26]
Eßer, L., Spanakis, G.: Linking transparency and accountability: Analysing the connection between TikTok's terms of service and moderation decisions. In: NLLP (2025)
2025
-
[27]
Gorwa, R., Binns, R., Katzenbach, C.: Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society 7(1), 2053951719897945 (2020)
2020
-
[28]
Trujillo, M.Z., Rosenblatt, S.F., Jauregui, G.D.A., Moog, E., Samson, B.P.V., Hébert-Dufresne, L., Roth, A.M.: When the echo chamber shatters: Examining the use of community-specific language post-subreddit ban. arXiv:2106.16207 (2021)
2021
-
[29]
Jhaver, S., Boylston, C., Yang, D., Bruckman, A.: Evaluating the effectiveness of deplatforming as a moderation strategy on twitter. Proceedings of the ACM on Human-Computer Interaction 5(CSCW2), 1–30 (2021)
2021
-
[30]
Horta Ribeiro, M., Jhaver, S., Cluet-i-Martinell, J., Reignier-Tayar, M., West, R.: Deplatforming norm-violating influencers on social media reduces overall online attention toward them. Proceedings of the ACM on Human-Computer Interaction 9(2), 1–25 (2025)
2025
-
[31]
Cima, L., Trujillo, A., Avvenuti, M., Cresci, S.: The great ban: Efficacy and unintended consequences of a massive deplatforming operation on reddit. In: Companion Publication of the 16th ACM Web Science Conference, pp. 85–93 (2024)
2024
-
[32]
Jhaver, S.: Bans vs. warning labels: Examining support for community-wide moderation interventions. arXiv preprint arXiv:2307.11880 (2023)
2023
-
[33]
Zannettou, S.: "i won the election!": An empirical analysis of soft moderation interventions on twitter. In: Proceedings of the International AAAI Conference on Web and Social Media, vol. 15, pp. 865–876 (2021)
2021
-
[34]
Chandrasekharan, E., Jhaver, S., Bruckman, A., Gilbert, E.: Quarantined! examining the effects of a community-wide moderation intervention on reddit. ACM Transactions on Computer-Human Interaction (TOCHI) (2022)
2022
-
[35]
Trujillo, A., Cresci, S.: Make Reddit Great Again: Assessing community effects of moderation interventions on r/The_Donald. Proceedings of the ACM on Human-Computer Interaction 6(CSCW2), 1–28 (2022)
2022
-
[36]
Trujillo, A., Cresci, S.: One of many: Assessing user-level effects of moderation interventions on r/The_Donald. In: Proceedings of the 15th ACM Web Science Conference 2023, pp. 55–64 (2023)
2023
-
[37]
Pennycook, G., Bear, A., Collins, E.T., Rand, D.G.: The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Management Science (2020)
2020
-
[38]
Tessa, B., Cima, L., Trujillo, A., Avvenuti, M., Cresci, S.: Beyond trial-and-error: Predicting user abandonment after a moderation intervention. Engineering Applications of Artificial Intelligence 162, 112375 (2025)
2025
-
[39]
Tessa, B., Moreo, A., Cresci, S., Fagni, T., Sebastiani, F.: Quantifying feature importance for online content moderation. arXiv preprint arXiv:2510.19882 (2025)
2025
-
[40]
Niverthi, M., Verma, G., Kumar, S.: Characterizing, detecting, and predicting online ban evasion. In: ACM Web Conf. (2022)
2022
-
[41]
Habib, H., Musa, M.B., Zaffar, M.F., Nithyanand, R.: Are proactive interventions for Reddit communities feasible? In: AAAI ICWSM (2022)
2022
-
[42]
European Commission: Commission opens formal proceedings against X under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6709. Accessed: 25 March 2025 (2023)
2023
-
[43]
Horta Ribeiro, M., Cheng, J., West, R.: Automated content moderation increases adherence to community guidelines. In: Proceedings of WWW '23, pp. 2666–. Association for Computing Machinery, New York, NY, USA (2023). https://doi.org/10.1145/3543507.3583275
2023
-
[45]
Killick, R., Fearnhead, P., Eckley, I.A.: Optimal detection of changepoints with a linear computational cost. Journal of the American Statistical Association 107(500) (2012)
2012
-
[46]
Truong, C., Oudre, L., Vayatis, N.: Selective review of offline change point detection methods. Signal Processing 167 (2020)
2020
-
[47]
Xu, W., Sasahara, K., Chu, J., Wang, B., Fan, W., Hu, Z.: Social media warfare: Investigating human-bot engagement in English, Japanese and German during the Russo-Ukrainian war on Twitter and Reddit. EPJ Data Science 14(1), 10 (2025)
2025
-
[48]
DeCook, J.R.: r/WatchRedditDie and the politics of Reddit's bans and quarantines. Internet Histories 6(1-2) (2022)
2022
-
[49]
Papaevangelou, C., Votta, F., et al.: Content moderation and platform observability in the digital services act (2024)
2024
-
[50]
Tessa, B., Amram, D., Monreale, A., Cresci, S.: Improving regulatory oversight in online content moderation. In: ECML-PKDD Workshops (2025)
2025
-
[51]
Mirsky, Y., Lee, W.: The creation and detection of deepfakes: A survey. ACM Computing Surveys (CSUR) 54(1), 1–41 (2021)
2021
-
[52]
Global Disinformation Index: Deep Dive: European Elections Aftermath. Accessed January 2026. https://www.disinformationindex.org/research/2024-07-08-disinformation-in-the-european-parliamentary-elections-analysis-and-policy-context/
2024
-
[53]
Casero-Ripollés, A., Alonso-Muñoz, L., Moret-Soler, D.: Spreading false content in political campaigns: Disinformation in the 2024 European Parliament elections. Media and Communication 13 (2025)
2025
-
[54]
Kausche, K., Weiss, M.: Platform power and regulatory capture in digital governance. Business and Politics (2024)
2024
-
[55]
Cai, J., Patel, A., Naderi, A., Wohn, D.Y.: Content moderation justice and fairness on social media: Comparisons across different contexts and platforms. In: ACM CHI (2024)
2024
-
[56]
Jhaver, S., Bruckman, A., Gilbert, E.: Does transparency in moderation really matter? user behavior after content removal explanations on Reddit. In: ACM CSCW (2019)
2019
-
[57]
European Commission: Commission opens formal proceedings against Facebook and Instagram under the Digital Services Act. https://ec.europa.eu/commission/presscorner/detail/en/ip_24_2373. Accessed: 25 March 2025 (2024)
2024
-
[58]
Groesch, S., Birrer, A., Just, N., Saurwein, F.: Big data, small answers: How the DSA Transparency Database falls short of its regulatory objectives. Telecommunications Policy 50(1), 103088 (2026)
2026