Israel-Hamas War on X: A Case Study of Coordinated Campaigns and Information Integrity
Pith reviewed 2026-05-10 16:13 UTC · model grok-4.3
The pith
Coordinated groups on X during the Israel-Hamas war concentrate misleading claims in just three of eleven identified clusters, with claim integrity uncorrelated to toxicity or emotional tone.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Established coordination detection methods applied to 4.5 million tweets identify 11 groups involving 541 accounts that rely on low-complexity tactics such as retweet amplification and copy-paste diffusion. These groups advance distinct narratives in a fragmented landscape without centralized control. Widely amplified misleading claims are concentrated in only three of the groups; the remaining groups mainly conduct advocacy, religious solidarity, or humanitarian mobilization. Claim-level integrity, toxicity, and emotional signals are mutually uncorrelated, so no single behavioral signal serves as a reliable proxy for the others.
What carries the argument
Detection of 11 coordinated groups via established methods, followed by multimodal characterization of their topics, amplification, toxicity, emotional tone, visual themes, and misleading claims.
If this is right
- Targeting the most prolific spreaders of misleading claims for moderation would reduce the volume of such content.
- Targeting prolific amplifiers in general would not achieve comparable reduction in misleading content.
- Coordinated campaigns during crises operate through low-complexity tactics and fragmented narratives rather than centralized direction.
- No single metric such as toxicity or emotional tone can reliably indicate the presence of misleading claims.
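The "no single metric" point can be made concrete with a pairwise correlation check across per-group signals. The sketch below uses synthetic scores (the paper's actual per-group values are not reproduced here) to show the kind of computation that would reveal mutually uncorrelated signals:

```python
# Illustrative check of the "mutually uncorrelated signals" finding.
# All numbers below are hypothetical placeholders, NOT the paper's data.
from itertools import combinations
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-group scores for the 11 detected groups:
# share of misleading claims, mean toxicity, mean emotional intensity.
signals = {
    "misleading": [0.6, 0.1, 0.0, 0.7, 0.1, 0.0, 0.1, 0.5, 0.0, 0.1, 0.0],
    "toxicity":   [0.2, 0.5, 0.1, 0.1, 0.6, 0.2, 0.4, 0.1, 0.3, 0.2, 0.5],
    "emotion":    [0.5, 0.4, 0.8, 0.3, 0.2, 0.7, 0.5, 0.4, 0.6, 0.3, 0.4],
}
for a, b in combinations(signals, 2):
    print(f"r({a}, {b}) = {pearson(signals[a], signals[b]):+.2f}")
```

If the paper's finding holds, all three pairwise coefficients would sit near zero on the real data, which is why a moderation pipeline keyed to toxicity alone would miss most misleading-claim activity.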
Where Pith is reading between the lines
- Moderation systems could benefit from combining coordination detection with separate claim-verification steps rather than relying on activity volume alone.
- The pattern of uncorrelated signals may appear in other fast-moving crisis discussions, suggesting a need to test the same multimodal analysis on additional events.
- Simple repeated-content detectors might serve as an early filter for the low-complexity tactics observed here.
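The repeated-content idea in the last bullet can be sketched as a minimal copy-paste filter. The normalization rules and the `min_accounts` threshold below are assumptions for illustration, not the paper's published parameters:

```python
# Minimal sketch of a repeated-content triage filter for copy-paste
# diffusion. Normalization choices and thresholds are assumptions.
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase, strip URLs and @mentions, collapse whitespace."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def copy_paste_clusters(tweets, min_accounts=3):
    """Group (account, text) pairs by normalized text and return the
    texts pushed by at least `min_accounts` distinct accounts."""
    by_text = defaultdict(set)
    for account, text in tweets:
        by_text[normalize(text)].add(account)
    return {t: accts for t, accts in by_text.items()
            if len(accts) >= min_accounts}

tweets = [
    ("a1", "Share this now! https://t.co/x"),
    ("a2", "share this NOW!"),
    ("a3", "Share this now!  "),
    ("b1", "unrelated post"),
]
print(copy_paste_clusters(tweets))  # flags the text shared by a1, a2, a3
```

A filter like this would only surface candidates; per the paper's own finding, flagged clusters would still need separate claim verification before any moderation action.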
Load-bearing premise
The coordination detection methods accurately flag real coordinated groups without substantial false positives or negatives, and the 4.5 million tweet sample covers the main coordinated activity at the conflict onset.
What would settle it
Re-running coordination detection on a larger or differently sampled dataset and finding many additional groups that spread substantial misleading content would undermine the claim that such content concentrates in only three groups.
Original abstract
Coordinated campaigns on social media play a critical role in shaping crisis information environments, particularly during the onset of conflicts when uncertainty is high and verified information is scarce. We study the interplay between coordinated campaigns and information integrity through a case study of the 2023 Israel-Hamas War on Twitter (X). We analyze 4.5 million tweets and employ established coordination detection methods to identify 11 coordinated groups involving 541 accounts. We characterize these groups through a multimodal analysis that includes topics, account amplification, toxicity, emotional tone, visual themes, and misleading claims. Our analysis reveal that coordinated campaigns rely predominantly on low-complexity tactics, such as retweet amplification and copy-paste diffusion, and promote distinct narratives consistent with a fragmented manipulation landscape, without centralized control. Widely amplified misleading claims concentrate within just three of the identified coordinated groups; the remaining groups primarily engage in advocacy, religious solidarity, or humanitarian mobilization. Claim-level integrity, toxicity, and emotional signals are mutually uncorrelated: no single behavioral signal is a reliable proxy for the others. Targeting the most prolific spreaders of misleading content for moderation would be effective in reducing such content. However, targeting prolific amplifiers in general would not achieve the same mitigation effect. These findings suggest that evaluating coordination structures jointly with their specific content footprints is needed to effectively prioritize moderation interventions.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper analyzes 4.5 million tweets from the onset of the 2023 Israel-Hamas War on X to identify 11 coordinated groups involving 541 accounts via established coordination detection methods. It performs a multimodal characterization of these groups across topics, amplification patterns, toxicity, emotional tone, visual themes, and misleading claims, finding that campaigns rely on low-complexity tactics such as retweet amplification and copy-paste diffusion, promote distinct narratives in a fragmented landscape, and that misleading claims concentrate in only three groups. Claim-level integrity, toxicity, and emotional signals are mutually uncorrelated, with the implication that moderation should target prolific spreaders of misleading content rather than general amplifiers.
Significance. If the results hold, the work offers a valuable large-scale empirical contribution to understanding coordinated influence operations in high-uncertainty crisis environments. The multimodal approach and the finding that no single behavioral signal proxies for misleading content provide concrete guidance for platform moderation strategies. The data-driven nature of the analysis, drawing on prior coordination detection literature without circularity, strengthens its potential impact on information integrity research.
major comments (2)
- [Methods] Methods section on coordination detection: The identification of the 11 groups (and thus the claim that misleading claims concentrate in only three of them) rests on applying 'established coordination detection methods' to the 4.5M-tweet dataset, yet no implementation details, parameter settings, thresholds, false-positive/negative estimates, or robustness checks against alternative detectors are provided. This is load-bearing for the central moderation-targeting recommendation.
- [Data Collection] Data collection subsection: The criteria used to assemble the 4.5 million tweets (keywords, time bounds, network sampling, or other filters) are unspecified. Without these, it is impossible to evaluate whether the sample adequately captures coordinated activity at conflict onset, which directly affects the observed concentration of misleading content and the characterization of group tactics.
minor comments (1)
- [Abstract] Abstract: The sentence 'Our analysis reveal that' contains a subject-verb agreement error and should read 'Our analysis reveals that'.
Simulated Author's Rebuttal
We thank the referee for their constructive review and positive assessment of the manuscript's potential contribution. We agree that greater methodological transparency is needed to support the central claims and will make the requested revisions. We respond to each major comment below.
Point-by-point responses
- Referee: [Methods] Methods section on coordination detection: The identification of the 11 groups (and thus the claim that misleading claims concentrate in only three of them) rests on applying 'established coordination detection methods' to the 4.5M-tweet dataset, yet no implementation details, parameter settings, thresholds, false-positive/negative estimates, or robustness checks against alternative detectors are provided. This is load-bearing for the central moderation-targeting recommendation.
Authors: We agree that the current description of the coordination detection procedure lacks sufficient implementation details to ensure full reproducibility and to underwrite the key finding that misleading claims are concentrated in only three groups. In the revised manuscript we will expand the Methods section to specify the exact algorithms and similarity metrics drawn from the cited prior literature, all parameter values and thresholds applied to the 4.5 million tweets, the procedure used to derive the 11 groups from the 541 accounts, any internal validation steps for false-positive and false-negative rates, and explicit robustness checks against at least one alternative detector. These additions will directly address the load-bearing nature of the coordination step for the moderation recommendation. revision: yes
- Referee: [Data Collection] Data collection subsection: The criteria used to assemble the 4.5 million tweets (keywords, time bounds, network sampling, or other filters) are unspecified. Without these, it is impossible to evaluate whether the sample adequately captures coordinated activity at conflict onset, which directly affects the observed concentration of misleading content and the characterization of group tactics.
Authors: We concur that the data-collection criteria must be stated explicitly so that readers can judge the sample's coverage of coordinated activity at the conflict's onset. In the revised version we will insert a dedicated data-collection subsection that details the precise keywords, hashtags, and phrases used; the exact temporal window (beginning October 7, 2023); any network-based or other sampling filters applied to reach the final 4.5 million tweets; and the rationale for these choices. This information will allow assessment of whether the dataset adequately represents the coordinated landscape and will strengthen the interpretation of both the concentration of misleading claims and the low-complexity tactics observed. revision: yes
Circularity Check
No significant circularity: empirical observations from data analysis
Full rationale
The paper is a case study applying established coordination detection methods from prior literature to 4.5 million tweets, identifying 11 groups, and then reporting empirical characterizations of topics, amplification, toxicity, emotions, visuals, and misleading claims. It contains no equations, no fitted parameters renamed as predictions, no self-definitional loops, and no load-bearing self-citations that would reduce the central claims (e.g., the concentration of misleading content in three groups, or the mutually uncorrelated signals) to inputs by construction. The findings are direct data-driven observations conditional on the applied methods, with no derivation chain that collapses into tautology or unverified self-reference.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Established coordination detection methods from prior literature accurately identify coordinated groups in social media data.
- domain assumption The collected 4.5 million tweets provide a representative view of coordinated activity during the conflict onset.