pith. machine review for the scientific record.

arxiv: 2605.12888 · v1 · submitted 2026-05-13 · 💻 cs.HC

Recognition: 1 theorem link · Lean Theorem

Seed Bank, Co-op, Stoop Swap: Metaphors for Governing Language Model Data for Creative Writing

Alicia Guo, Carly Schnitzler, Katy Gero

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 18:58 UTC · model grok-4.3

classification 💻 cs.HC
keywords language model governance · creative writing · metaphors · data consent · community models · AI participation · writer control

The pith

Creative writers' metaphors for language model governance favor small community-controlled systems over large corporate ones.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper describes workshops with over one hundred creative writers who generated more than two hundred metaphors for how language models should handle their writing data. These metaphors surface four themes: the need for explicit consent, ways to set community boundaries, methods for recognizing contributors, and trade-offs around model scale. The authors conclude that the metaphors support building smaller open models that reflect the specific values of writer groups rather than broad commercial systems. A sympathetic reader would see this as a way to restore writer agency in AI development through familiar governance ideas drawn from real-world collectives.

Core claim

Workshops yielded metaphors such as seed banks, co-ops, and stoop swaps that writers used to reason about consent, boundaries, recognition, and scale. These point toward smaller, open language models that encode group values instead of large proprietary systems.

What carries the argument

The metaphors themselves, treated as objects, places, processes, groups, and infrastructure that let writers articulate concrete rules for data use and model ownership.

If this is right

  • Language models could incorporate explicit opt-in consent flows before including any writer's text.
  • Community boundaries could be defined by shared genres or values, limiting who contributes data and who accesses the model.
  • Contributor recognition could take the form of attribution, royalties, or governance votes in the model's operation.
  • Scale choices would favor smaller models that stay aligned with group norms rather than maximizing generality.
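The consent, boundary, and recognition mechanics above can be sketched as a minimal data-admission check. This is a hypothetical illustration, not an implementation from the paper; the `CommunityModelPolicy` class and all names in it are invented for this sketch.

```python
from dataclasses import dataclass, field


@dataclass
class Submission:
    author: str
    text: str
    genres: set[str] = field(default_factory=set)


@dataclass
class CommunityModelPolicy:
    """Hypothetical policy encoding consent, boundaries, and recognition."""
    allowed_genres: set[str]                                # community boundary
    opted_in: set[str] = field(default_factory=set)         # explicit consent records
    contributors: list[str] = field(default_factory=list)   # recognition ledger

    def grant_consent(self, author: str) -> None:
        self.opted_in.add(author)

    def admit(self, s: Submission) -> bool:
        # 1. Explicit opt-in consent: no consent record, no inclusion.
        if s.author not in self.opted_in:
            return False
        # 2. Community boundary: submission must share a genre with the group.
        if not (s.genres & self.allowed_genres):
            return False
        # 3. Recognition: record the contributor for attribution or votes.
        self.contributors.append(s.author)
        return True


policy = CommunityModelPolicy(allowed_genres={"poetry", "flash fiction"})
policy.grant_consent("ada")
print(policy.admit(Submission("ada", "…", {"poetry"})))  # True: consented, in bounds
print(policy.admit(Submission("ben", "…", {"poetry"})))  # False: no consent record
```

The point of the sketch is only that the four themes compose as independent gates: consent is checked before boundaries, and recognition is recorded only for data that clears both.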

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same workshop method could be adapted to other creative communities, such as visual artists or musicians, to generate domain-specific governance metaphors.
  • Technical prototypes might test data-isolation techniques that enforce the consent and boundary rules described in the metaphors.
  • Policy discussions on AI training data could reference these writer-generated models as examples of consent-first alternatives to current scraping practices.

Load-bearing premise

The metaphors and themes from these workshops reflect what creative writers broadly want and can be directly translated into working governance mechanisms.

What would settle it

A larger survey of creative writers showing majority preference for unrestricted data use in large models or rejection of community consent processes would undermine the central claim.

Figures

Figures reproduced from arXiv: 2605.12888 by Alicia Guo, Carly Schnitzler, Katy Gero.

Figure 1. Six domains from which participants drew metaphors when reasoning about community governance of language models.
Figure 2. Four clusters from the in-workshop clustering activity.
Figure 3. Worksheet provided to participant groups in the Miro board; this one is filled out by a group.
Figure 4. Expansion of the community garden metaphor.
read the original abstract

How might we govern a language model run for and by creative writers? While generative AI use is on the rise, many language models are created and owned in ways that limit writers' consent, participation, and control. We report on four workshops where over one hundred creative writers came up with and analyzed metaphors for language model governance, resulting in over two hundred metaphors: objects, places, processes, groups, and infrastructure that support reasoning about language model governance. What if a language model was like a community garden? Or a seed bank? Or the bathroom in a dive bar? We report on four themes: (1) the importance of consent, (2) how to define community boundaries, (3) ways to give contributor recognition, and (4) trade-offs in scale of language models. These metaphors point towards smaller, open models that encode group values. We discuss concrete ways to make community language models a reality.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper reports on four workshops with over 100 creative writers who generated and analyzed more than 200 metaphors for governing language models in creative writing. These are grouped into four themes—consent, community boundaries, contributor recognition, and scale trade-offs—which the authors interpret as pointing toward smaller, open models that encode group values, with discussion of concrete implementation steps for community language models.

Significance. If the metaphors-to-recommendation mapping is substantiated, the work provides a participatory, empirical contribution to HCI and AI governance research by surfacing creative writers' perspectives on data and model control. The scale of the workshops (>100 participants, >200 metaphors) offers a solid empirical foundation that could inform more inclusive alternatives to current large-scale proprietary models.

major comments (1)
  1. [Discussion / Implications] The central inference in the discussion section—that the four themes necessitate smaller, open models encoding group values rather than other governance forms—is not supported by an explicit analytical chain. No coding scheme, counter-example analysis, or participant validation step is described linking the workshop outputs directly to reduced scale and openness; this post-hoc synthesis is load-bearing for the strongest claim and requires transparent justification.
minor comments (2)
  1. [Abstract and Methods] The abstract and methods sections would benefit from a brief overview of the theming process (e.g., how metaphors were categorized into the four themes) to improve clarity and reproducibility.
  2. [Results] A figure or table summarizing the >200 metaphors by theme would aid readers in assessing the distribution and strength of evidence for each theme.

Simulated Authors' Rebuttal

1 response · 0 unresolved

We thank the referee for their positive assessment of the empirical foundation and participatory approach in our workshops. We address the major comment on the discussion section below and have made revisions to improve transparency.

read point-by-point responses
  1. Referee: The central inference in the discussion section—that the four themes necessitate smaller, open models encoding group values rather than other governance forms—is not supported by an explicit analytical chain. No coding scheme, counter-example analysis, or participant validation step is described linking the workshop outputs directly to reduced scale and openness; this post-hoc synthesis is load-bearing for the strongest claim and requires transparent justification.

    Authors: We agree that the discussion would benefit from a more explicit description of the analytical process. The four themes emerged from iterative thematic analysis of the 200+ metaphors, with participants actively discussing implications for consent, boundaries, recognition, and scale during the workshops themselves. In the revised manuscript, we will add a dedicated subsection detailing the coding approach (including how metaphors were grouped and validated through participant input), provide specific examples of metaphors that directly informed the preference for smaller open models (e.g., seed bank and co-op metaphors emphasizing community control over scale), and outline the interpretive steps from themes to recommendations. This will make the chain transparent while preserving the original findings. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical workshop outputs thematically analyzed without self-referential reduction

full rationale

The paper reports four workshops in which >100 creative writers generated and analyzed >200 metaphors for LM governance. These metaphors are grouped into four themes (consent, community boundaries, contributor recognition, scale trade-offs) via standard qualitative synthesis. The interpretive claim that the metaphors 'point towards smaller, open models that encode group values' is presented as a direct reading of the participant-generated material rather than any equation, fitted parameter, or self-citation chain that reduces to prior inputs. No mathematical derivations, predictions, or uniqueness theorems appear; the work is self-contained as an empirical report on workshop data.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the assumption that participatory workshops reliably surface meaningful governance preferences from creative writers; no free parameters or invented entities are introduced.

axioms (1)
  • domain assumption: Workshops with creative writers can generate valid insights into governance preferences
    Invoked in the study design and interpretation of results

pith-pipeline@v0.9.0 · 5463 in / 1149 out tokens · 28682 ms · 2026-05-14T18:58:22.244775+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

68 extracted references · 36 canonical work pages · 1 internal anchor

  1. [1] [n. d.]. New Authors Guild AI Survey Reveals That Authors Overwhelmingly Want Consent and Compensation for Use of Their Works. https://authorsguild.org/news/ag-ai-survey-reveals-authors-overwhelmingly-want-consent-and-compensation-for-use-of-their-works/. Accessed: 2024-09-11.

  2. [2] Philip E. Agre. 1997. Computation and Human Experience. Cambridge University Press, Cambridge.

  3. [3] Imanol Arrieta-Ibarra, Leonard Goff, Diego Jiménez-Hernández, Jaron Lanier, and E. Glen Weyl. 2018. Should We Treat Data as Labor? Moving Beyond "Free". 108 (2018), 38–42. doi:10.1257/pandp.20181003

  4. [4] Leah Asmelash. 2023. These books are being used to train AI. No one told the authors. CNN (Oct 2023). https://www.cnn.com/2023/10/08/style/ai-books3-authors-nora-roberts-cec/index.html

  5. [5] James Auger. 2013. Speculative Design: Crafting the Speculation. Digital Creativity 24, 1 (March 2013), 11–35. doi:10.1080/14626268.2013.767276

  6. [6] Amna Batool, Didar Zowghi, and Muneera Bano. 2025. AI Governance: A Systematic Literature Review. 5, 3 (2025), 3265–3279. doi:10.1007/s43681-024-00653-w

  7. [7] Anne Beaufort. 2007. College Writing and Beyond: A New Framework for University Writing Instruction. Utah State University Press, Logan, UT. https://upcolorado.com/utah-state-university-press/college-writing-and-beyond Imprint of University Press of Colorado.

  8. [8] Jordan Beck and Hamid R. Ekbia. 2018. The Theory-Practice Gap as Generative Metaphor. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada) (CHI '18). Association for Computing Machinery, New York, NY, USA, 1–11. doi:10.1145/3173574.3174194

  9. [9] Mark Blythe, Siân Lindley, and Dave Murray-Rust. 2025. Artificial Intelligence and other Speculative Metaphors. In Proceedings of the 2025 ACM Designing Interactive Systems Conference (DIS '25). Association for Computing Machinery, New York, NY, USA, 347–356. doi:10.1145/3715336.3735714

  10. [10] Monica Boţa-Moisin. 2017. The 3Cs – Consent, Credit, Compensation. Cultural Intellectual Property Rights Initiative®. https://www.culturalintellectualproperty.com/the-3cs Accessed 4 Feb. 2026; framework for fair and equitable engagement with Indigenous Peoples, ethnic groups, and Local Communities.

  11. [11] Virginia Braun and Victoria Clarke. 2012. Thematic Analysis. In APA Handbook of Research Methods in Psychology, Vol 2: Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological, Harris Cooper, Paul M. Camic, Debra L. Long, A. T. Panter, David Rindskopf, and Kenneth J. Sher (Eds.). American Psychological Association, Washington, 57–71.

  12. [12] Laura Braunstein and Michelle R. Warren. 2021. Zombies in the Library Stacks. Dartmouth Library Staff Publications 41 (2021). https://digitalcommons.dartmouth.edu/dlstaffpubs/41 Accessed 4 Feb. 2026.

  13. [13] Kenneth Burke. 1941. Four master tropes. The Kenyon Review 3, 4 (1941), 421–438.

  14. [14] Leshem Choshen, Ryan Cotterell, Mustafa Omer Gul, Jaap Jumelet, Tal Linzen, Aaron Mueller, Suchir Salhan, Raj Sanjay Shah, Alex Warstadt, and Ethan Gotlieb Wilcox. 2026. BabyLM Turns 4 and Goes Multilingual: Call for Papers for the 2026 BabyLM Workshop. arXiv:2602.20092 [cs.CL] https://arxiv.org/abs/2602.20092

  15. [15] Nick Couldry and Ulises A. Mejias. 2019. Data Colonialism: Rethinking Big Data's Relation to the Contemporary Subject. Television & New Media 20, 4 (May 2019), 336–349. doi:10.1177/1527476418796632

  16. [16] Sylvie Delacroix and Neil D Lawrence. 2019. Bottom-up Data Trusts: Disturbing the 'One Size Fits All' Approach to Data Governance. (2019), ipz014. doi:10.1093/idpl/ipz014

  17. [17] Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. 2023. The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice. In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (Boston, MA, USA) (EAAMO '23). Association for Computing Machinery, New York, NY, USA, …

  18. [18] Smit Desai and Michael Twidale. 2023. Metaphors in Voice User Interfaces: A Slippery Fish. ACM Trans. Comput.-Hum. Interact. 30, 6, Article 89 (Sept. 2023), 37 pages. doi:10.1145/3609326

  19. [19] Jennifer Ding, Eva Jäger, Victoria Ivanova, and Mercedes Bunz. 2024. My Voice, Your Voice, Our Voice: Attitudes Towards Collective Governance of a Choral AI Dataset. arXiv:2412.01433 [cs] doi:10.48550/arXiv.2412.01433

  20. [20] Ronen Eldan and Yuanzhi Li. 2023. TinyStories: How Small Can Language Models Be and Still Speak Coherent English? arXiv:2305.07759 [cs.CL] https://arxiv.org/abs/2305.07759

  21. [21] Casey Fiesler, Jialun Jiang, Joshua McCann, Kyle Frye, and Jed Brubaker. 2018. Reddit Rules! Characterizing an Ecosystem of Governance. Proceedings of the International AAAI Conference on Web and Social Media 12, 1 (Jun. 2018). doi:10.1609/icwsm.v12i1.15033

  22. [22] Andrea Forte, Vanesa Larco, and Amy Bruckman. 2009. Decentralization in Wikipedia Governance. Journal of Management Information Systems 26, 1 (July 2009), 49–72. doi:10.2753/MIS0742-1222260103

  23. [23] FutureSum AI, Inc. 2026. Use Cases — Latimer.ai. Latimer.ai. https://www.latimer.ai/#use-cases Accessed: 2026-02-06.

  24. [24] Katy Ilonka Gero, Payel Das, Pierre Dognin, Inkit Padhi, Prasanna Sattigeri, and Kush R. Varshney. 2023. The Incentive Gap in Data Work in the Era of Large Models. Nature Machine Intelligence 5, 6 (June 2023), 565–567. doi:10.1038/s42256-023-00673-x

  25. [25] Katy Ilonka Gero, Meera Desai, Carly Schnitzler, Nayun Eom, Jack Cushman, and Elena L. Glassman. 2025. Creative Writers' Attitudes on Writing as Training Data for Large Language Models. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, 1–16. doi:10.1…

  26. [26] Alicia Guo, Shreya Sathyanarayanan, Leijie Wang, Jeffrey Heer, and Amy X. Zhang. 2025. From Pen to Prompt: How Creative Writers Integrate AI into Their Writing Practice. In Proceedings of the 2025 Conference on Creativity and Cognition. ACM, 527–545. doi:10.1145/3698061.3726910

  27. [27] Joo-Wha Hong and Nathaniel Ming Curran. 2019. Artificial Intelligence, Artists, and Art: Attitudes Toward Artwork Produced by Humans vs. Artificial Intelligence. ACM Trans. Multimedia Comput. Commun. Appl. 15, 2s, Article 58 (July 2019), 16 pages. doi:10.1145/3326337

  28. [28] Yacine Jernite, Huu Nguyen, Stella Biderman, Anna Rogers, Maraim Masoud, Valentin Danchev, Samson Tan, Alexandra Sasha Luccioni, Nishant Subramani, Isaac Johnson, Gerard Dupont, Jesse Dodge, Kyle Lo, Zeerak Talat, Dragomir Radev, Aaron Gokaslan, Somaieh Nikpoor, Peter Henderson, Rishi Bommasani, and Margaret Mitchell. 2022. Data Governance in the Age of L…

  29. [29] Harry H. Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and Its Impact on Artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Montréal, QC, Canada) (AIES '23). ACM, 363–374. doi:10.1145/3600211.3604681

  30. [30] Kate Knibbs. 2023. Why the Great AI Backlash Came for a Tiny Startup You've Probably Never Heard Of. https://www.wired.com/story/prosecraft-backlash-writers-ai/

  31. [31] Lin Kyi, Amruta Mahuli, M. Six Silberman, Reuben Binns, Jun Zhao, and Asia J. Biega. 2025. Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, 1–16. doi:10.1145/3706598.3713799

  32. [32] George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago.

  33. [33] Charlotte P. Lee. 2007. Boundary Negotiating Artifacts: Unbinding the Routine of Boundary Objects and Embracing Chaos in Collaborative Work. Computer Supported Cooperative Work (CSCW) 16, 3 (June 2007), 307–339. doi:10.1007/s10606-007-9044-5

  34. [34] Weixin Liang, Yaohui Zhang, Mihai Codreanu, Jiayu Wang, Hancheng Cao, and James Zou. 2025. The Widespread Adoption of Large Language Model-Assisted Writing Across Society. arXiv:2502.09747 [cs] doi:10.48550/arXiv.2502.09747

  35. [35] Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Yuguang Yao, Chris Yuhao Liu, Xiaojun Xu, Hang Li, Kush R. Varshney, Mohit Bansal, Sanmi Koyejo, and Yang Liu. 2025. Rethinking machine unlearning for large language models. Nature Machine Intelligence 7 (2025), 181–194. doi:10.1038/s42256-025-00985-0

  36. [36] Dan Lockton, Devika Singh, Saloni Sabnis, Michelle Chou, Sarah Foley, and Alejandro Pantoja. 2019. New Metaphors: A Workshop Method for Generating Ideas and Reframing Problems in Design and Beyond. In Proceedings of the 2019 on Creativity and Cognition. ACM, San Diego CA USA, 319–332. doi:10.1145/3325480.3326570

  37. [37] Shayne Longpre, Robert Mahari, Ariel Lee, Campbell Lund, Hamidah Oderinwale, William Brannon, Nayan Saxena, Naana Obeng-Marnu, Tobin South, Cole Hunter, Kevin Klyman, Christopher Klamm, Hailey Schoelkopf, Nikhil Singh, Manuel Cherep, Ahmad Mustafa Anis, An Dinh, Caroline Chitongo, Da Yin, Damien Sileo, Deividas Mataciunas, Diganta Misra, Emad Alghamdi, En…

  38. [38] Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Katy Ilonka Gero, Alex Pentland, and Jad Kabbara. 2024. Position: Data Authenticity, Consent, & Provenance for AI are all broken: what will it take to fix them?. In Forty-first International Conference on Machine Learning.

  39. [39] Juniper Lovato, Julia Witte Zimmerman, Isabelle Smith, Peter Dodds, and Jennifer L. Karson. 2024. Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art. 7 (2024), 905–916. doi:10.1609/aies.v7i1.31691

  40. [40] Qinghua Lu, Liming Zhu, Xiwei Xu, Jon Whittle, Didar Zowghi, and Aurelie Jacquet. 2024. Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering. 56, 7 (2024), 1–35. doi:10.1145/3626234

  41. [41] Kevin Madigan. 2026. AI Copyright Lawsuit Developments in 2025: A Year in Review. Copyright Alliance. https://copyrightalliance.org/ai-copyright-lawsuit-developments-2025/ Accessed 4 Feb. 2026.

  42. [42] Marina Micheli, Marisa Ponti, Max Craglia, and Anna Berti Suman. 2020. Emerging Models of Data Governance in the Age of Datafication. 7, 2 (2020), 2053951720948087. doi:10.1177/2053951720948087

  43. [43] Jeffery Scott Mio. 1997. Metaphor and politics. Metaphor and Symbol 12, 2 (1997), 113–133.

  44. [44] Decca Muldowney. 2025. Fanfiction Writers Battle AI, One Scrape at a Time. https://www.theverge.com/ai-artificial-intelligence/688640/fanfiction-ai

  45. [45] Michael Muller and Allison Druin. 2002. Participatory Design: The Third Space in HCI. Handbook of HCI (01 2002).

  46. [46] Naja Holten Møller, Claus Bossen, Kathleen H. Pine, Trine Rask Nielsen, and Gina Neff. 2020. Who Does the Work of Data? 27, 3 (2020), 52–55. doi:10.1145/3386389

  47. [47] Shakked Noy and Whitney Zhang. 2023. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science 381, 6654 (July 2023), 187–192. doi:10.1126/science.adh2586

  48. [48] Olmo 3. Team Olmo, Allyson Ettinger, Amanda Bertsch, Bailey Kuehl, David Graham, David Heineman, Dirk Groeneveld, Faeze Brahman, Finbarr Timbers, Hamish Ivison, Jacob Morrison, Jake Poznanski, Kyle Lo, Luca Soldaini, Matt Jordan, Mayee Chen, Michael Noukhovitch, Nathan Lambert, Pete Walsh, Pradeep Dasigi, Robert Berry, Saumya Malik, Saurabh Shah, Scott Geng, S…

  49. [49] Siobhán O'Mahony and Fabrizio Ferraro. 2007. The Emergence of Governance in an Open Source Community. Academy of Management Journal 50, 5 (2007), 1079–1106. http://www.jstor.org/stable/20159914

  50. [50] Dejan Ravšelj, Damijana Keržič, Nina Tomaževič, Lan Umek, Nejc Brezovar, Noorminshah A. Iahad, Ali Abdulla Abdulla, Anait Akopyan, Magdalena Waleska Aldana Segura, Jehan AlHumaid, Mohamed Farouk Allam, Maria Alló, Raphael Papa Kweku Andoh, Octavian Andronic, Yarhands Dissou Arthur, Fatih Aydın, Amira Badran, Roxana Balbontín-Alvarado, Helmi Ben Saad, An…

  51. [51] Donald A. Schön. 1979. Generative Metaphor: A Perspective on Problem-Setting in Social Policy. In Metaphor and Thought, Andrew Ortony (Ed.). Cambridge University Press, Cambridge.

  52. [52] Tanusree Sharma, Yihao Zhou, and Visar Berisha. 2025. PRAC3 (Privacy, Reputation, Accountability, Consent, Credit, Compensation): Long Tailed Risks of Voice Actors in AI Data-Economy. arXiv:2507.16247 [cs] doi:10.48550/arXiv.2507.16247

  53. [53] Daniel D. Slate, Chaoran Chen, Yaxing Yao, and Toby Jia-Jun Li. 2025. Iterative Contextual Consent: AI-enabled Data Privacy Contracts. In Proceedings of the 2025 Workshop on Human-Centered AI Privacy and Security (HAIPS '25). Association for Computing Machinery, New York, NY, USA, 84–91. doi:10.1145/3733816.3760757

  54. [54] Robin Sloan. 2016. Writing with the machine. https://www.robinsloan.com/notes/writing-with-the-machine/ Accessed: 2026-02-06.

  55. [55] Spawning AI. 2025. Have I Been Trained? https://haveibeentrained.com/ Accessed: 2025-02-06.

  56. [56] Harini Suresh, Emily Tseng, Meg Young, Mary Gray, Emma Pierson, and Karen Levy. 2024. Participation in the Age of Foundation Models. In The 2024 ACM Conference on Fairness, Accountability, and Transparency. ACM, Rio de Janeiro Brazil, 1609–1621. doi:10.1145/3630106.3658992

  57. [57] John M. Swales. 1990. Genre Analysis: English in Academic and Research Settings. Cambridge University Press, Cambridge, UK.

  58. [58] Marton Szep, Daniel Rueckert, Rüdiger Von Eisenhart-Rothe, and Florian Hinterwimmer. 2026. Fine-Tuning Large Language Models with Limited Data: A Survey and Practical Guide. Transactions of the Association for Computational Linguistics 14 (April 2026), 341–377. doi:10.1162/TACL.a.627

  59. [59] The Authors Guild. 2025. Bartz v. Anthropic Settlement: What Authors Need to Know. Authors Guild. https://authorsguild.org/advocacy/artificial-intelligence/what-authors-need-to-know-about-the-anthropic-settlement/ Accessed 4 Feb. 2026.

  60. [60] The Editors. 2025. Large Language Muddle. n+1 Issue 51, Force Majeure (2025). https://www.nplusonemag.com/issue-51/the-intellectual-situation/large-language-muddle/ Published Fall 2025.

  61. [61] Emily Tseng, Meg Young, Marianne Aubin Le Quéré, Aimee Rinehart, and Harini Suresh. 2025. "Ownership, Not Just Happy Talk": Co-Designing a Participatory Large Language Model for Journalism. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25). Association for Computing Machinery, New York, NY, USA, 3119–3130…

  62. [62] United States District Court. 2025. Order in Bartz v. Anthropic, Inc. U.S. District Court Order. https://copyrightalliance.org/wp-content/uploads/2025/06/Bartz-v.-Anthropic-Order.pdf No. C 24-05417 WHA (N.D. Cal.), PDF document.

  63. [63] United States District Court. 2025. Order in Kadrey v. Meta Platforms, Inc. U.S. District Court Order. https://media.npr.org/assets/artslife/arts/2025/order1.pdf No. 3:23-cv-03417-VC, N.D. Cal.

  64. [64] Vauhini Vara. 2023. Confessions of a Viral AI Writer. https://www.wired.com/story/confessions-viral-ai-writer-chatgpt/

  65. [65] Nicholas Vincent, Hanlin Li, Nicole Tilly, Stevie Chancellor, and Brent Hecht.

  66. [66] Data Leverage: A Framework for Empowering the Public in Its Relationship with Technology Companies. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event Canada, 2021-03-03). ACM, 215–227. doi:10.1145/3442188.3445885

  67. [67] Vincent Acovino and Halimah Abdullah. 2023. Sci-Fi magazine stops submissions after flood of AI generated stories. https://www.npr.org/2023/02/23/1159118948/sci-fi-magazine-stops-submissions-after-flood-of-ai-generated-stories

  68. [68] Tiannan Wang, Jiamin Chen, Qingrui Jia, Shuai Wang, Ruoyu Fang, Huilin Wang, Zhaowei Gao, Chunzhao Xie, Chuou Xu, Jihong Dai, Yibin Liu, Jialong Wu, Shengwei Ding, Long Li, Zhiwei Huang, Xinle Deng, Teng Yu, Gangan Ma, Han Xiao, Zixin Chen, Danjun Xiang, Yunxia Wang, Yuanyuan Zhu, Yi Xiao, Jing Wang, Yiru Wang, Siran Ding, Jiayang Huang, Jiayi Xu, Yiliham…