pith. machine review for the scientific record.

arxiv: 2605.12613 · v1 · submitted 2026-05-12 · 💻 cs.HC · cs.IR

Recognition: unknown

Creating Group Rules with AI: Human-AI Collaboration in WhatsApp Moderation

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 20:15 UTC · model grok-4.3

classification 💻 cs.HC cs.IR
keywords WhatsApp · group moderation · human-AI collaboration · speculative design · AI governance · privacy concerns · social media rules

The pith

WhatsApp admins value AI help drafting group rules yet resist full delegation over trust and privacy concerns.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how WhatsApp group administrators collaborate with AI to create and manage rules in the absence of platform support. In a two-phase speculative design study with twenty admins in India, participants used Meta AI to generate rules and responded to probes about AI moderation features. They welcomed the AI for identifying overlooked rules and easing their workload. At the same time they expressed strong reservations tied to relational trust, data privacy, appropriate tone, and group-specific social context. The study shows that willingness to delegate authority depends on group type and individual admin style, and it highlights limits in current chatbot interfaces for this kind of collaborative work.

Core claim

Admins appreciated the AI's ability to surface overlooked rules and reduce their moderation burden, but they were highly sensitive to issues of relational trust, data privacy, tone, and social context, with willingness to delegate shaped by group type and admin style.

What carries the argument

A two-phase speculative design study in which admins interacted with Meta AI to co-create rules and evaluated a series of design probes illustrating AI-assisted moderation features.

Load-bearing premise

That responses to speculative design probes and interactions with Meta AI accurately predict real-world willingness to delegate moderation authority in actual WhatsApp groups.

What would settle it

A field deployment that tracks how often admins actually accept, edit, or reject AI-suggested rules inside live WhatsApp groups over multiple weeks.

Figures

Figures reproduced from arXiv: 2605.12613 by Aditya Vashistha, Farhana Shahid, Gauri Nayak, Kiran Garimella.

Figure 1: Design probes illustrating different AI-assisted scenarios for group rule creation, enforcement, and maintenance.
read the original abstract

WhatsApp is one of the most widely used messaging platforms globally, with billions of users sharing information in private groups. Yet, it offers little infrastructure to support moderation and group governance. In the absence of platform-level oversight, group admins bear the responsibility of governing group behavior. In this paper, we explore how WhatsApp group admins collaborate with AI tools to create, enforce, and maintain group rules. Drawing on a two-phase speculative design study with 20 admins in India, we examine how participants interacted with an AI assistant (Meta AI) to co-create rules and responded to a series of probes illustrating AI-assisted moderation features. Our findings show that while admins appreciated the AI's ability to surface overlooked rules and reduce their moderation burden, they were highly sensitive to issues of relational trust, data privacy, tone, and social context. We identify how group type and admin style shaped their willingness to delegate authority, and surface the limitations of current chatbot interfaces in supporting collaborative rule-making. We conclude with design implications for building moderation tools that center human judgment, relational nuance, contextual adaptability, and collective governance.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity check, and an axiom & free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper reports findings from a two-phase speculative design study with 20 WhatsApp group admins in India. Participants first interacted with Meta AI to co-create group rules and then responded to design probes illustrating AI-assisted moderation features. Key results indicate that admins value AI for surfacing overlooked rules and reducing moderation burden, but remain sensitive to relational trust, data privacy, tone, and social context; willingness to delegate varies by group type and admin style, and current chatbot interfaces show limitations for collaborative rule-making. The work concludes with design implications for moderation tools that prioritize human judgment, relational nuance, and collective governance.

Significance. If the reported participant reactions hold, the study offers timely empirical grounding for human-AI collaboration in private-group moderation, an area with sparse prior HCI work. It surfaces concrete sensitivities around delegation that could inform more context-sensitive tools on platforms like WhatsApp, while underscoring the value of speculative methods for early-stage exploration of governance features.

major comments (1)
  1. [§5] The design implications about willingness to delegate authority (abstract and §5) rest entirely on responses to speculative probes and Meta AI interactions. No data on actual rule adoption, override rates, or emergent conflicts in live WhatsApp groups is reported, so the mapping from hypothetical reactions to real-world behavior remains untested, even though it is load-bearing for the central claim about delegation.
minor comments (2)
  1. [§3] The methods section would benefit from an explicit statement of how the 20 participants were recruited and screened for diversity across group types, to allow readers to assess the range of admin styles represented.
  2. [§4] Figure captions and probe descriptions could be expanded with one additional sentence each to clarify the exact AI output shown to participants, improving reproducibility of the speculative probes.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive review and recommendation for minor revision. We address the single major comment below by clarifying the scope of our speculative design study and strengthening the framing of our claims.

read point-by-point responses
  1. Referee: [§5] The design implications about willingness to delegate authority (abstract and §5) rest entirely on responses to speculative probes and Meta AI interactions. No data on actual rule adoption, override rates, or emergent conflicts in live WhatsApp groups is reported, so the mapping from hypothetical reactions to real-world behavior remains untested, even though it is load-bearing for the central claim about delegation.

    Authors: We appreciate the referee's point and agree that the study provides no direct observational data from live WhatsApp groups. Our work is explicitly framed as a two-phase speculative design study (Sections 3 and 4) whose purpose is to surface early user perceptions, sensitivities, and design considerations rather than to measure real-world adoption or conflict rates. The willingness-to-delegate findings are reported as participant attitudes elicited through Meta AI interactions and design probes; the design implications in Section 5 are presented as hypotheses for future tool development, not as validated behavioral predictions. We will make a partial revision: (1) add an explicit limitations paragraph in Section 5 and the abstract that qualifies the claims as speculative and calls for longitudinal field studies; (2) rephrase the abstract and Section 5 to avoid any implication of direct real-world mapping. No new empirical data will be collected, as that would require a fundamentally different study design.

    revision: partial

Circularity Check

0 steps flagged

No significant circularity in qualitative empirical findings

full rationale

The paper reports findings from a two-phase speculative design study involving direct participant responses to AI interactions and design probes. No mathematical derivations, equations, fitted parameters, or predictions are present. Central claims about admin appreciation for AI rule-surfacing and sensitivities to trust/privacy derive straightforwardly from the collected interview and probe data rather than reducing to self-citations, ansatzes, or internal redefinitions. The derivation chain is self-contained empirical reporting with no load-bearing self-referential steps.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The central claim rests on qualitative data from a small sample of participants in one country, with assumptions about the representativeness of the sample and the validity of speculative design probes.

pith-pipeline@v0.9.0 · 5503 in / 1037 out tokens · 23891 ms · 2026-05-14T20:15:27.090094+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

67 extracted references · 25 canonical work pages

  1. [1]

    Dhruv Agarwal, Mor Naaman, and Aditya Vashistha. 2025. AI suggestions homogenize writing toward western styles and diminish cultural nuances. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–21

  2. [2]

    Abdullah Alrhmoun, Charlie Winter, and János Kertész. 2024. Automating terror: The role and impact of telegram bots in the Islamic State’s online ecosystem. Terrorism and Political Violence 36, 4 (2024), 409–424

  3. [3]

    Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. ISBN 978-1-4503-8309-7. doi:10.114...

  4. [4]

    Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77

  5. [5]

    Eleftheria Briakou, Zhongtao Liu, Colin Cherry, and Markus Freitag. 2024. On the Implications of Verbose LLM Outputs: A Case Study in Translation Evaluation. arXiv:2410.00863 [cs.CL] https://arxiv.org/abs/2410.00863

  6. [6]

    Jie Cai and Donghee Yvette Wohn. 2022. Coordination and Collaboration: How do Volunteer Moderators Work as a Team in Live Streaming Communities?. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 300

  7. [7]

    Jess Cartner-Morley. 2022. Group chat overload: have we reached peak WhatsApp? Retrieved May 23, 2023 from https://www.theguardian.com/lifeandstyle/2022/aug/19/whatsapp-group-chat-overload-have-we-reached-peak-whatsapp

  8. [8]

    Andrew Chadwick, Natalie-Anne Hall, and Cristian Vaccari. 2025. Misinformation rules!? Could “group rules” reduce misinformation in online personal messaging? New Media & Society 27, 1 (2025), 106–126

  9. [9]

    Eshwar Chandrasekharan, Chaitrali Gandhi, Matthew Wortley Mustelier, and Eric Gilbert. 2019. Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 174 (Nov. 2019), 30 pages

  10. [10]

    Eshwar Chandrasekharan, Mattia Samory, Shagun Jhaver, Hunter Charvat, Amy Bruckman, Cliff Lampe, Jacob Eisenstein, and Eric Gilbert. 2018. The Internet’s Hidden Rules: An Empirical Study of Reddit Norm Violations at Micro, Meso, and Macro Scales. Proc. ACM Hum.-Comput. Interact. 2, CSCW, Article 32 (Nov. 2018), 25 pages. doi:10.1145/3274301

  11. [11]

    Frederick Choi, Tanvi Bajpai, Sowmya Pratipati, and Eshwar Chandrasekharan. 2023. ConvEx: A Visual Conversation Exploration System for Discord Moderators. Proc. ACM Hum.-Comput. Interact. 7, CSCW2, Article 262 (Oct. 2023), 30 pages

  12. [12]

    Amanda L. L. Cullen and Sanjay R. Kairam. 2022. Practicing Moderation: Community Moderation as Reflective Practice. Proc. ACM Hum.-Comput. Interact. 6, CSCW1, Article 111 (April 2022), 32 pages

  13. [13]

    Discord. 2024. AutoMod FAQ. https://support.discord.com/hc/en-us/articles/4421269296535-AutoMod-FAQ Accessed: 2025-05-12

  14. [14]

    Paul Dourish and Scott D. Mainwaring. 2012. Ubicomp’s colonial impulse. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (Pittsburgh, Pennsylvania) (UbiComp ’12). Association for Computing Machinery, New York, NY, USA, 133–142. doi:10.1145/2370216.2370238

  15. [15]

    Cynthia Dwork, Chris Hays, Jon Kleinberg, and Manish Raghavan. 2024. Content Moderation and the Formation of Online Communities: A Theoretical Framework. In Proceedings of the ACM Web Conference 2024 (Singapore, Singapore) (WWW ’24). Association for Computing Machinery, New York, NY, USA, 1307–1317

  16. [16]

    Facebook. 2025. Add, edit or delete rules for a Facebook group you admin. https://www.facebook.com/help/462230500886400 Accessed: 2025-05-12

  17. [17]

    Facebook. 2025. Pin a post or group rules to the Featured section of your Facebook group. https://www.facebook.com/help/1395974820512040 Accessed: 2025-05-12

  18. [18]

    Facebook. 2025. Set up Admin Assist to automatically manage your Facebook group. https://www.facebook.com/help/436275657385753 Accessed: 2025-05-12

  19. [19]

    Anna Fang, Wenjie Yang, and Haiyi Zhu. 2023. Shaping Online Dialogue: Examining How Community Rules Affect Discussion Structures on Reddit. arXiv:2308.01257 [cs.SI]

  20. [20]

    Kiran Garimella and Simon Chauchard. 2025. WhatsApp Explorer: A data donation tool to facilitate research on WhatsApp. Mobile Media & Communication 13, 3 (2025), 481–503. doi:10.1177/20501579251326809

  21. [21]

    Thomas F. Gieryn. 1983. Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists. American Sociological Review 48, 6 (1983), 781–795

  22. [22]

    Tarleton Gillespie. 2020. Content Moderation, AI, and the Question of Scale. Big Data & Society 7, 2 (2020). doi:10.1177/2053951720943234

  23. [23]

    Robert Gorwa, Reuben Binns, and Christian Katzenbach. 2020. Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance. Big Data & Society 7, 1 (2020). doi:10.1177/2053951719897945

  24. [24]

    Alice Hunsberger. 2025. Can LLMs moderate nuanced policies? Retrieved May 10, 2025 from https://www.notion.so/musubi-labs/Can-LLMs-moderate-nuanced-policies-public-1ebb7b1503c980acbd99ddd5c7b178ef

  25. [25]

    Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B. Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, Nicolas Roussel, and Björn Eiderbäck. 2003. Technology probes: inspiring design for and with families (CHI ’03). Association for Computing Machinery, New York, NY, USA, 17–24

  26. [26]

    Juliane Jarke and Ulrike Gerhard. 2018. Using probes for sharing (tacit) knowing in participatory design: Facilitating perspective making and perspective taking. i-com 17, 2 (2018), 137–152

  27. [27]

    Shagun Jhaver, Iris Birman, Eric Gilbert, and Amy Bruckman. 2019. Human-Machine Collaboration for Content Regulation: The Case of Reddit Automoderator. ACM Trans. Comput.-Hum. Interact. 26, 5, Article 31 (July 2019), 35 pages. doi:10.1145/3338243

  28. [28]

    Jialun ’Aaron’ Jiang, Skyler Middler, Jed R. Brubaker, and Casey Fiesler. 2020. Characterizing Community Guidelines on Social Media Platforms. In Companion Publication of the 2020 Conference on Computer Supported Cooperative Work and Social Computing (Virtual Event, USA) (CSCW ’20 Companion). Association for Computing Machinery, New York, NY, USA, 287–291

  29. [29]

    Prerna Juneja, Deepika Rama Subramanian, and Tanushree Mitra. 2020. Through the Looking Glass: Study of Transparency in Reddit’s Moderation Practices. Proc. ACM Hum.-Comput. Interact. 4, GROUP, Article 17 (Jan. 2020), 35 pages

  30. [30]

    Aman Khullar, Paramita Panjal, Rachit Pandey, Abhishek Burnwal, Prashit Raj, Ankit Akash Jha, Priyadarshi Hitesh, R Jayanth Reddy, Himanshu Himanshu, and Aaditeshwar Seth. 2022. Experiences with the Introduction of AI-based Tools for Moderation Automation of Voice-based Participatory Media Forum. In Proceedings of the 12th Indian Conference on Human-Compu...

  31. [31]

    Charles Kiene and Benjamin Mako Hill. 2020. Who Uses Bots? A Statistical Analysis of Bot Usage in Moderation Teams. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8

  32. [32]

    Vinay Koshy, Frederick Choi, Yi-Shyuan Chiang, Hari Sundaram, Eshwar Chandrasekharan, and Karrie Karahalios. 2024. Venire: A Machine Learning-Guided Panel Review System for Community Content Moderation. arXiv preprint arXiv:2410.23448 (2024)

  34. [34]

    Udo Kuckartz. 2019. Qualitative text analysis: A systematic approach. Compendium for early career researchers in mathematics education (2019), 181–197

  35. [35]

    Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng Zhang, and Chenhao Tan. 2022. Human-AI Collaboration via Conditional Delegation: A Case Study of Content Moderation. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article...

  36. [36]

    Cliff Lampe and Paul Resnick. 2004. Slash(dot) and burn: distributed moderation in a large online conversation space. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria) (CHI ’04). Association for Computing Machinery, New York, NY, USA, 543–550

  37. [37]

    Pantelitsa Leonidou, Nicolas Kourtellis, Nikos Salamanos, and Michael Sirivianos. 2023. Privacy-Preserving Online Content Moderation: A Federated Learning Use Case. In Companion Proceedings of the ACM Web Conference 2023 (Austin, TX, USA) (WWW ’23 Companion). Association for Computing Machinery, New York, NY, USA, 280–289. doi:10.1145/3543873.3587604

  38. [38]

    Travis Lloyd, Jennah Gosciak, Tung Nguyen, and Mor Naaman. 2025. AI Rules? Characterizing Reddit Community Policies Towards AI-Generated Content. In CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, 19 pages

  39. [39]

    Alfred Lua. 2023. 21 Top Social Media Sites to Consider for Your Brand in 2023. Retrieved May 23, 2023 from https://buffer.com/library/social-media-sites/

  40. [40]

    Pranav Malhotra and Katy Pearce. 2022. Facing Falsehoods: Strategies for Polite Misinformation Correction. International Journal of Communication 16, 0 (2022), 2303–2324

  41. [41]

    J. Nathan Matias. 2019. Preventing harassment and increasing group participation through social norms in 2,190 online science discussions. Proceedings of the National Academy of Sciences 116, 20 (2019), 9785–9789. doi:10.1073/pnas.1813486116 arXiv:https://www.pnas.org/doi/pdf/10.1073/pnas.1813486116

  42. [42]

    Rahul Mukherjee. 2019. Imagining cellular India. Global Digital Cultures: Perspectives from South Asia (2019), 76–99

  43. [43]

    Sheryl Ng and Taberez Neyazi. 2022. Self- and Social Corrections on Instant Messaging Platforms. International Journal of Communication 17, 0 (2022), 426–446

  44. [44]

    Norwanto Norwanto and Faizal Risdianto. 2022. The norm establishment in WhatsApp group conversations. Journal of Language and Literature 22, 2 (2022), 504–516

  45. [45]

    Naresh R Pandit. 1996. The creation of theory: A recent application of the grounded theory method. The Qualitative Report 2, 4 (1996), 1–15

  46. [46]

    Reddit. 2024. Getting Started with Post Guidance. https://redditforcommunity.com/blog/getting-started-post-guidance Accessed: 2025-05-12

  47. [47]

    Reddit. 2025. Automoderator. https://support.reddithelp.com/hc/en-us/articles/15484574206484-Automoderator Accessed: 2025-05-12

  48. [48]

    Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, and Vinodkumar Prabhakaran. 2021. Re-imagining Algorithmic Fairness in India and Beyond. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 315–328. doi:10.1145/344218...

  49. [49]

    Charlotte Schluger, Jonathan P. Chang, Cristian Danescu-Niculescu-Mizil, and Karen Levy. 2022. Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 370 (Nov. 2022), 27 pages

  50. [50]

    Ana M. Schöpke-Gonzalez, Sarthak Atreja, H. N. Shin, Naeem Ahmed, and Libby Hemphill. 2024. Why Do Volunteer Content Moderators Quit? Burnout, Conflict, and Harmful Behaviors. New Media & Society 26, 10 (2024), 5677–5701. doi:10.1177/14614448221138529. Online first published in 2022

  51. [51]

    Joseph Seering, Juan Pablo Flores, Saiph Savage, and Jessica Hammer. 2018. The social roles of bots: evaluating impact of bots on discussions in online communities. Proceedings of the ACM on Human-Computer Interaction 2, CSCW (2018), 1–29

  52. [52]

    Joseph Seering, Tony Wang, Jina Yoon, and Geoff Kaufman. 2019. Moderator engagement and community development in the age of algorithms. New Media & Society 21, 7 (2019), 1417–1443

  53. [53]

    Farhana Shahid, Dhruv Agarwal, and Aditya Vashistha. 2024. One Style Does Not Regulate All: Moderation Practices in Public and Private WhatsApp Groups

  54. [54]

    Farhana Shahid, Mona Elswah, and Aditya Vashistha. 2025. Think Outside the Data: Colonial Biases and Systemic Issues in Automated Moderation Pipelines for Low-Resource Languages. ArXiv abs/2501.13836 (2025). https://api.semanticscholar.org/CorpusID:275820012

  55. [55]

    Farhana Shahid and Aditya Vashistha. 2023. Decolonizing Content Moderation: Does Uniform Global Community Standard Resemble Utopian Equality or Western Power Hegemony?. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–18

  56. [56]

    Jagmeet Singh. 2025. WhatsApp’s Biggest Market Is Becoming Its Toughest Test. TechCrunch. https://techcrunch.com/2025/12/14/whatsapps-biggest-market-is-becoming-its-toughest-test/

  57. [57]

    Susan Leigh Star and James R. Griesemer. 1989. Institutional Ecology, “Translations” and Boundary Objects: Amateurs and Professionals in Berkeley’s Museum of Vertebrate Zoology, 1907–39. Social Studies of Science 19, 3 (1989), 387–420. doi:10.1177/030631289019003001

  58. [58]

    Nicholas Sukiennik, Chen Gao, Fengli Xu, and Yong Li. 2025. An evaluation of cultural value alignment in LLM. arXiv preprint arXiv:2504.08863 (2025)

  59. [59]

    Sharifa Sultana, Pratyasha Saha, Shaid Hasan, S.M. Raihanul Alam, Rokeya Akter, Md Mirajul Islam, Raihan Islam Arnob, A.K.M. Najmul Islam, Mahdi Nasrullah Al-Ameen, and Syed Ishtiaque Ahmed. 2022. Imagined Online Communities: Communionship, Sovereignty, and Inclusiveness in Facebook Groups. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 407 (Nov. 2022),...

  60. [60]

    Twitch. 2024. How to Use AutoMod. https://help.twitch.tv/s/article/how-to-use-automod?language=en_US Accessed: 2025-05-12

  61. [61]

    Sahana Udupa. 2024. Shadow politics: commercial digital influencers, “data,” and disinformation in India. Social Media + Society 10, 1 (2024), 1–10

  62. [62]

    Rama Adithya Varanasi, Joyojeet Pal, and Aditya Vashistha. 2022. Accost, Accede, or Amplify: Attitudes towards COVID-19 Misinformation on WhatsApp in India. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ACM, NY, USA, Article 256, 17 pages

  63. [63]

    WhatsApp. 2024. How to chat with Meta AI in an individual or group chat. https://faq.whatsapp.com/203220822537614/

  64. [64]

    WhatsApp. 2025. About end-to-end encryption. https://faq.whatsapp.com/820124435853543 Accessed: 2025-05-12

  65. [65]

    Donghee Yvette Wohn. 2019. Volunteer Moderators in Twitch Micro Communities: How They Get Involved, the Roles They Play, and the Emotional Labor They Experience. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland UK) (CHI ’19). Association for Computing Machinery, New York, NY, USA, 1–13. doi:10.1145/3290605.3300390

  66. [66]

    Lucas Wright. 2022. Automated Platform Governance Through Visibility and Scale: On the Transformational Power of AutoModerator. Social Media + Society 8, 1 (2022). doi:10.1177/20563051221077020

  67. [67]

    Yan Xiang, Qianhui Fan, Kejiang Qian, Jiajie Li, Yuying Tang, and Ze Gao. 2023. Decentralized Governance for Virtual Community (DeGov4VC): Optimal Policy Design of Human-plant Symbiosis Co-creation. In Companion Publication of the 2023 ACM Designing Interactive Systems Conference (Pittsburgh, PA, USA) (DIS ’23 Companion). Association for Computing Machinery, ...