pith. machine review for the scientific record.

arxiv: 2605.03295 · v1 · submitted 2026-05-05 · 💻 cs.CY · cs.HC

Recognition: unknown

Cheap Expertise: Mapping and Challenging Industry Perspectives in the Expert Data Gig Economy

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 13:38 UTC · model grok-4.3

classification 💻 cs.CY cs.HC
keywords expert data gig economy · AI expertise · data annotation · cheap expertise · human expertise · institutional expertise · industry perspectives

The pith

Data annotation organizations portray AI expertise as cheaper and more efficient than human expertise, with human knowledge treated as extractable and institutional sources needing reform.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines public statements from five data annotation companies and their leaders on social media and podcasts to identify their vision for expertise in an AI-driven economy. It establishes that these organizations present AI as delivering cheap expertise that yields better returns than relying on human experts. Human expertise is framed as a resource that can be extracted and assessed by comparison to AI outputs. Institutional expertise from universities and corporations is depicted as requiring liberation or reform to feed into AI systems. This matters to readers because the vision could reshape how experts are employed and how knowledge institutions function in society.

Core claim

Demand for expert-annotated data has spurred an expert gig economy. Public communications from the studied organizations show AI expertise envisioned as cheap, offering a better return on investment than human expertise. Human expertise is viewed as an extractable resource whose value is judged relative to AI. Institutional expertise is seen as in need of liberation or reform so that it can be incorporated into artificial intelligence systems.

What carries the argument

Mapping of industry perspectives via analysis of social media feeds and podcast appearances from five data annotation organizations and their CEOs, which surfaces the promoted model of cheap AI expertise versus extractable human and institutional expertise.

If this is right

  • Human experts may experience transformed professional roles and revalued contributions relative to AI capabilities.
  • Societal institutions that create or hold expertise could encounter pressure to adapt or release knowledge for AI incorporation.
  • The expert gig economy may expand and alter white-collar work along with broader societal views of expertise.
  • Society will need approaches to manage an AI-driven expert gig economy and the cheap expertise it seeks to produce.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Public promotion of this vision could influence hiring and compensation practices in fields beyond data annotation.
  • Educational and research institutions might face new expectations to align resources or outputs with AI training demands.
  • Comparing these public statements to private company practices or worker accounts could highlight differences in how expertise is actually handled.

Load-bearing premise

The public social media feeds and podcast appearances from five data annotation organizations accurately capture the industry's collective vision for the future of expertise.

What would settle it

A survey of statements from additional data annotation organizations, or internal documents from the studied ones, presenting AI expertise as not cheaper or human expertise as not extractable, would challenge the mapped vision.

read the original abstract

Demand for expert-annotated data on the part of leading AI labs has created an expert gig economy with the potential to reshape white collar work and society's understanding of expertise. In this research, we study the vision for the future of expertise described in the public communication of five industry data annotation organizations and their CEOs, as reflected on social media feeds and public appearances on podcasts. We find that the industry envisions AI expertise as cheap, meaning that it can offer a better return on investment than human expertise. Human expertise, meanwhile, is viewed as an extractable resource, the value of which can be judged relative to AI expertise. Finally, institutional expertise (such as that created or possessed by universities and corporations) is viewed as in need of liberation or reform, such that it can be incorporated into the latest artificial intelligence systems. Our findings have implications for human experts, whose professional lives may be transformed and revalued by this industry, as well as for societal institutions that mediate expertise. We close this work with a series of provocations intended to elicit consideration of how society can best approach an AI-driven expert gig economy and the cheap expertise it intends to produce.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript analyzes public communications (social media feeds and podcast appearances) from five data annotation organizations and their CEOs. It claims that these sources reveal an industry vision in which AI expertise is positioned as cheap (offering better ROI than human expertise), human expertise is treated as an extractable resource whose value is assessed relative to AI, and institutional expertise (from universities and corporations) is framed as needing liberation or reform to be incorporated into AI systems. The paper discusses implications for human experts and societal institutions and ends with provocations for approaching the AI-driven expert gig economy.

Significance. If the sampled statements accurately reflect sector-wide perspectives, the work offers a timely interpretive account of how the expert data annotation gig economy is reshaping conceptions of expertise, with potential effects on labor valuation, professional identities, and the role of traditional knowledge institutions. The provocations provide a constructive entry point for policy and ethical discussion. The grounding in publicly available sources is a strength, but the small non-probabilistic sample constrains the reach of the industry-level conclusions.

major comments (2)
  1. [Abstract and Methods (implied)] The manuscript provides no details on the sampling strategy used to select the five organizations, the process for identifying and coding relevant statements from their public feeds and appearances, or any inter-rater reliability procedures. Without this information, it is difficult to evaluate whether the extracted themes are robust or systematically derived.
  2. [Findings and Discussion] The central claim that 'the industry envisions AI expertise as cheap' and the related characterizations of human and institutional expertise generalize from the public outputs of only five organizations. No justification is given for the selection of these firms, no comparison to other data annotation companies is presented, and no triangulation with internal documents or additional sources is described. This leaves the industry-level generalization unsupported by the evidence provided.
minor comments (2)
  1. [Abstract] The abstract introduces the term 'cheap expertise' without a concise definition; a brief parenthetical gloss would improve immediate clarity for readers.
  2. [Conclusion] The provocations in the conclusion are engaging but would be strengthened by explicit cross-references to specific statements or themes identified in the analyzed materials.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for their constructive comments, which identify key areas where the manuscript can be strengthened through greater transparency and more precise scoping of claims. We address each point below.

read point-by-point responses
  1. Referee: [Abstract and Methods (implied)] The manuscript provides no details on the sampling strategy used to select the five organizations, the process for identifying and coding relevant statements from their public feeds and appearances, or any inter-rater reliability procedures. Without this information, it is difficult to evaluate whether the extracted themes are robust or systematically derived.

    Authors: We agree that the original manuscript lacked sufficient methodological detail. In the revised version, we will add a dedicated Methods section that describes the sampling strategy (selection of five leading expert data annotation organizations based on their prominence in the AI data gig economy and public engagement), data collection (archival review of social media feeds and podcast appearances), statement identification (targeted searches using terms such as expertise, AI systems, human experts, and institutional knowledge), and thematic analysis process (iterative coding to surface the three core themes). As this is a single-team interpretive qualitative study, formal inter-rater reliability was not calculated; we will instead document the reflexive approach and provide example coded excerpts to demonstrate how themes were derived. revision: yes

  2. Referee: [Findings and Discussion] The central claim that 'the industry envisions AI expertise as cheap' and the related characterizations of human and institutional expertise generalize from the public outputs of only five organizations. No justification is given for the selection of these firms, no comparison to other data annotation companies is presented, and no triangulation with internal documents or additional sources is described. This leaves the industry-level generalization unsupported by the evidence provided.

    Authors: We accept that the manuscript's phrasing risks overstating generalizability. The five organizations were chosen as influential actors whose public communications actively shape discourse in the expert annotation sector. In revision, we will (1) add explicit selection justification to the Methods section, (2) revise abstract and findings language to frame results as the vision articulated by these organizations rather than 'the industry' writ large, and (3) insert a limitations paragraph acknowledging the small non-probabilistic sample, lack of exhaustive comparisons, and reliance on public sources only. Triangulation with internal documents is not possible without proprietary access, but the public record still provides meaningful insight into projected industry perspectives. revision: partial

standing simulated objections (unresolved)
  • Triangulation with internal documents and exhaustive comparison to additional data annotation firms, which the authors cannot provide because these sources are not publicly available.

Circularity Check

0 steps flagged

No circularity: interpretive analysis grounded in external sources

full rationale

The paper conducts a qualitative interpretive study of public social media and podcast content from five data annotation organizations. No equations, fitted parameters, predictions, or self-citations appear in the derivation of its central claims. The findings are presented as direct readings of external public statements rather than reductions of any internal inputs by construction, satisfying the self-contained criterion for a score of 0.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the interpretive reading of selected public communications; no free parameters or invented entities are introduced, but the analysis depends on domain assumptions about what public statements reveal.

axioms (1)
  • domain assumption Public communications on social media and podcasts from five organizations reflect the industry's strategic vision for expertise.
    The mapping treats these materials as representative without internal documents or broader sampling justification.

pith-pipeline@v0.9.0 · 5501 in / 1315 out tokens · 77125 ms · 2026-05-07T13:38:35.998749+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

136 extracted references · 19 canonical work pages · 4 internal anchors

  1. [1]

    Mark S Ackerman. 2000. The intellectual challenge of CSCW: the gap between social requirements and technical feasibility. Human–Computer Interaction 15, 2–3 (2000), 179–203

  2. [2]

    Mark S Ackerman, Juri Dachtera, Volkmar Pipek, and Volker Wulf. 2013. Sharing knowledge and expertise: The CSCW view of knowledge management. Computer Supported Cooperative Work (CSCW) 22, 4 (2013), 531–573

  3. [3]

    Neil M Agnew, Kenneth M Ford, and Patrick J Hayes. 1997. Expertise In Context: Personally Constructed, Socially Selected and Reality-Relevant? International Journal of Expert Systems 7, 1 (1997), 65

  4. [4]

    Ahmed Ahmed, A Feder Cooper, Sanmi Koyejo, and Percy Liang. 2026. Extracting books from production language models. arXiv preprint arXiv:2601.02671 (2026)

  5. [5]

    Surge AI. 2026. What made Hemingway, Kahlo, and von Neumann extraordinary? https://surgehq.ai/ Accessed: 2026-02-01

  6. [6]

    Mousumi Akter, Erion Çano, Erik Weber, Dennis Dobler, and Ivan Habernal. 2025. A comprehensive survey on legal summarization: Challenges and future directions. Comput. Surveys 58, 7 (2025), 1–32

  8. [8]

    Ana Alacovska, Eliane Bucher, and Christian Fieseler. 2025. Algorithmic paranoia: gig workers’ affective experience of abusive algorithmic management. New Technology, Work and Employment 40, 3 (2025), 421–435

  9. [9]

    Mohammed Almutairi, Charles Chiang, Yuxin Bai, and Diego Gomez-Zara. 2025. taifa: Enhancing team effectiveness and cohesion with ai-generated automated feedback. In Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work. 1–25

  10. [10]

    Taghreed Alshehri, Reuben Kirkham, and Patrick Olivier. 2020. Scenario co-creation cards: A culturally sensitive tool for eliciting values. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14

  11. [11]

    Tawfiq Ammari, Meilun Chen, SM Zaman, and Kiran Garimella. 2025. How Students (Really) Use ChatGPT: Uncovering Experiences Among Undergraduate Students. arXiv preprint arXiv:2505.24126 (2025)

  12. [12]

    Abdolghader Assarroudi, Fatemeh Heshmati Nabavi, Mohammad Reza Armat, Abbas Ebadi, and Mojtaba Vaismoradi. 2018. Directed qualitative content analysis: the description and elaboration of its underpinning methods and data analysis process. Journal of research in nursing 23, 1 (2018), 42–55

  13. [13]

    Blair Attard-Frost and David Gray Widder. 2025. The ethics of AI value chains. Big Data & Society 12, 2 (2025), 20539517251340603

  14. [14]

    Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862 (2022)

  16. [16]

    Christopher A Bail, D Sunshine Hillygus, Alexander Volfovsky, Max Allamong, Fatima Alqabandi, Diana ME Jordan, Graham Tierney, Christina Tucker, Andrew Trexler, and Austin van Loon. 2023. Do We Need a Social Media Accelerator? SocArXiv doi10 (2023)

  17. [17]

    Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 610–623

  18. [18]

    Thor Berger, Carl Benedikt Frey, Guy Levin, and Santosh Rao Danda. 2019. Uber happy? Work and well-being in the ‘gig economy’. Economic Policy 34, 99 (2019), 429–477

  19. [19]

    Dorothy Lee Blyth, Mohammad Hossein Jarrahi, Christoph Lutz, and Gemma Newlands. 2024. Self-branding strategies of online freelancers on Upwork. new media & society 26, 7 (2024), 4008–4033

  20. [20]

    Tiziano Bonini and Emiliano Treré. 2024. Algorithms of resistance: The everyday fight against platform power. MIT Press

  21. [21]

    Lyle E Bourne Jr, James A Kole, and Alice F Healy. 2014. Expertise: defined, described, explained. 186 pages

  22. [22]

    Michelle Brachman, Amina El-Ashry, Casey Dugan, and Werner Geyer. 2025. Current and future use of large language models for knowledge work. Proceedings of the ACM on Human-Computer Interaction 9, 7 (2025), 1–24

  23. [23]

    Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. Qualitative research in sport, exercise and health 11, 4 (2019), 589–597

  24. [24]

    Dan Breznitz, Karen Levy, Kenneth Lipartito, and Amos Zehavi. 2025. An equity-focused research agenda for workplace surveillance. Industrial and Corporate Change (2025), dtaf038

  25. [25]

    Levin Brinkmann, Fabian Baumann, Jean-François Bonnefon, Maxime Derex, Thomas F Müller, Anne-Marie Nussberger, Agnieszka Czaplicka, Alberto Acerbi, Thomas L Griffiths, Joseph Henrich, et al. 2023. Machine culture. Nature Human Behaviour 7, 11 (2023), 1855–1868

  26. [26]

    Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901

  27. [27]

    Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712 (2023)

  29. [29]

    Ashley Capoot. 2025. Scale AI plans to promote strategy chief Droege to CEO as founder Wang heads for Meta. https://www.cnbc.com/2025/06/12/scale-ai-promotes-strategy-chief-droege-to-ceo-as-wang-heads-for-meta.html Accessed: 2026-02-01

  30. [30]

    E Summerson Carr. 2010. Enactments of expertise. Annual review of anthropology 39 (2010), 17–32

  31. [31]

    Xinyue Chen, Lev Tankelevitch, Rishi Vanukuru, Ava Elizabeth Scott, Payod Panda, and Sean Rintel. 2025. Are We On Track? AI-Assisted Active and Passive Goal Reflection During Meetings. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–22

  32. [32]

    Amit K Chopra and Munindar P Singh. 2018. Sociotechnical systems and ethics in the large. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. 48–53

  33. [33]

    Ishita Chordia, Leya Breanna Baltaxe-Admony, Ashley Boone, Alyssa Sheehan, Lynn Dombrowski, Christopher A Le Dantec, Kathryn E Ringland, and Angela DR Smith. 2024. Social justice in HCI: A systematic literature review. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–33

  34. [34]

    Ishita Chordia, Robert Wolfe, Jason Yip, and Alexis Hiniker. 2025. Building the Beloved Community: Designing Technologies for Neighborhood Safety. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–18

  35. [35]

    Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma. 2025. Sft memorizes, rl generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161 (2025)

  36. [36]

    Jennifer Conrad. 2025. Surge AI, the Hot Tech Startup You’ve Probably Never Heard of, Is Already Outpacing Rivals. https://www.inc.com/jennifer-conrad/surge-ai-edwin-chen-scale-ai-meta-alexandr-wang/91204563 Accessed: 2026-02-01

  37. [37]

    Marios Constantinides and Daniele Quercia. 2025. AI, Jobs, and the Automation Trap: Where Is HCI? In Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work. 1–8

  38. [38]

    Marios Constantinides, Himanshu Verma, Shadan Sadeghian, and Abdallah El Ali. 2025. The Future of Work is Blended, Not Hybrid. In Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work. 1–13

  39. [39]

    Hannes Cools and Nicholas Diakopoulos. 2024. Uses of generative AI in the newsroom: Mapping journalists’ perceptions of perils and possibilities. Journalism Practice (2024), 1–19

  40. [40]

    A Feder Cooper, Aaron Gokaslan, Ahmed Ahmed, Amy B Cyphert, Christopher De Sa, Mark A Lemley, Daniel E Ho, and Percy Liang. 2025. Extracting memorized pieces of (copyrighted) books from open-weight language models. arXiv preprint arXiv:2505.12546 (2025)

  41. [41]

    Aayushi Dangol, Smriti Kotiyal, Robert Wolfe, Alex J Bowers, Antonio Vigil, Jason Yip, Julie A Kientz, Suleman Shahid, Tom Yeh, Vincent Cho, et al. 2025. Relief or displacement? How teachers are negotiating generative AI’s role in their professional practice. arXiv preprint arXiv:2510.18296 (2025)

  42. [42]

    Anubrata Das, Houjiang Liu, Venelin Kovatchev, and Matthew Lease. 2023. The state of human-centered NLP technology for fact-checking. Information processing & management 60, 2 (2023), 103219

  43. [43]

    Vedant Das Swain and Koustuv Saha. 2024. Teacher, trainer, counsel, spy: how generative AI can bridge or widen the gaps in worker-centric digital phenotyping of wellbeing. In Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work. 1–13

  44. [44]

    Meredith Dedema and Howard Rosenbaum. 2024. Socio-technical issues in the platform-mediated gig economy: A systematic literature review: An Annual Review of Information Science and Technology (ARIST) paper. Journal of the Association for Information Science and Technology 75, 3 (2024), 344–374

  45. [45]

    Ozge Demirci, Jonas Hannane, and Xinrong Zhu. 2025. Who is AI replacing? The impact of generative AI on online freelancing platforms. Management Science (2025)

  46. [46]

    Mark Diaz and Angela DR Smith. 2024. What Makes An Expert? Reviewing How ML Researchers Define "Expert". In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Vol. 7. 358–370

  47. [47]

    Ian Drosos, Advait Sarkar, Neil Toronto, et al. 2025. "It makes you think": Provocations Help Restore Critical Thinking to AI-Assisted Knowledge Work. arXiv preprint arXiv:2501.17247 (2025)

  48. [48]

    Gordon Ebanks. 2025. AI hiring is here. It’s making companies — and job seekers — miserable. https://www.cnn.com/2025/12/21/economy/ai-hiring-complication Accessed: 2026-02-01

  49. [49]

    Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. 2024. GPTs are GPTs: Labor market impact potential of LLMs. Science 384, 6702 (2024), 1306–1308

  50. [50]

    Virginia Eubanks. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s (2018)

  51. [51]

    Molly Q Feldman and Carolyn Jane Anderson. 2024. Non-expert programmers in the generative AI future. In Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work. 1–19

  52. [52]

    Avigail Ferdman. 2025. AI Deskilling is a structural problem. AI & SOCIETY (2025), 1–13

  53. [53]

    Jennifer Fereday and Eimear Muir-Cochrane. 2006. Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. International journal of qualitative methods 5, 1 (2006), 80–92

  54. [54]

    Simon Friederich and Leonard Dung. 2025. Against the Manhattan project framing of AI alignment. Mind & Language (2025)

  55. [55]

    Batya Friedman. 1996. Value-sensitive design. interactions 3, 6 (1996), 16–23

  56. [56]

    Batya Friedman, David G Hendry, and Alan Borning. 2017. A survey of value sensitive design methods. Foundations and Trends® in Human–Computer Interaction 11, 2 (2017), 63–125

  57. [57]

    Zachary Fulker and Christoph Riedl. 2024. Cooperation in the gig economy: insights from upwork freelancers. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1 (2024), 1–20

  58. [58]

    Mark Graham and Mohammad Amir Anwar. 2019. The global gig economy: Toward a planetary labor market. In The digital transformation of labor. Routledge, 213–234

  59. [59]

    Reiner Grundmann. 2017. The problem of expertise in knowledge societies. Minerva 55, 1 (2017), 25–48

  60. [60]

    Michael M Grynbaum and Ryan Mac. 2023. The Times sues OpenAI and Microsoft over AI use of copyrighted work. The New York Times 27, 1 (2023)

  61. [61]

    Longjie Guo, Chenjie Yuan, Mingyuan Zhong, Robert Wolfe, Ruican Zhong, Yue Xu, Bingbing Wen, Hua Shen, Lucy Lu Wang, and Alexis Hiniker. 2026. Susbench: An online benchmark for evaluating dark pattern susceptibility of computer-use agents. In Proceedings of the 31st International Conference on Intelligent User Interfaces. 1917–1937

  62. [62]

    Anthony Ha. 2026. OpenAI is reportedly asking contractors to upload real work from past jobs. https://techcrunch.com/2026/01/10/openai-is-reportedly-asking-contractors-to-upload-real-work-from-past-jobs/ Accessed: 2026-02-01

  63. [63]

    Bin Han, Robert Wolfe, Anat Caspi, and Bill Howe. 2025. Can Large Language Models Integrate Spatial Data? Empirical Insights into Reasoning Strengths and Computational Weaknesses. arXiv preprint arXiv:2508.05009 (2025)

  64. [64]

    Handshake. 2026. The career network for the AI economy. https://joinhandshake.com/about/ Accessed: 2026-02-01

  65. [65]

    Jessica He, Stephanie Houde, Gabriel E Gonzalez, Darío Andrés Silva Moran, Steven I Ross, Michael Muller, and Justin D Weisz. 2024. AI and the Future of Collaborative Work: Group Ideation with an LLM in a Virtual Canvas. In Proceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction for Work. 1–14

  66. [66]

    Heiner Heiland. 2021. Controlling space, controlling labour? Contested space in food delivery gig work. New Technology, Work and Employment 36, 1 (2021), 1–16

  67. [67]

    Tomasz Hollanek, Dorian Peters, Eleanor Drage, and Raphael Hernandes. 2025. AI, journalism, and critical AI literacy: exploring journalists’ perspectives on AI and responsible reporting. AI & SOCIETY (2025), 1–13

  68. [68]

    Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, Zahra Ashktorab, Casey Dugan, Werner Geyer, and Diyi Yang. 2025. Helping the helper: Supporting peer counselors via ai-empowered practice and feedback. Proceedings of the ACM on Human-Computer Interaction 9, 2 (2025), 1–45

  69. [69]

    Ram Iyer. 2025. Mercor quintuples valuation to $10B with $350M Series C. https://techcrunch.com/2025/10/27/mercor-quintuples-valuation-to-10b-with-350m-series-c/ Accessed: 2026-02-01

  70. [70]

    Harry H Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and its Impact on Artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. 363–374

  71. [71]

    Harang Ju and Sinan Aral. 2025. Collaborating with ai agents: Field experiments on teamwork, productivity, and performance. arXiv preprint arXiv:2503.18238 (2025)

  72. [72]

    Rashima Kachari. 2026. Integrating large language models into spatial analysis: experimental insights from GIS interpolation. Spatial Information Research 34, 3 (2026), 22

  73. [73]

    Chris Katje. 2025. Scale AI’s Alexandr Wang Went From MIT Dropout To AI Billionaire — 5 Things You Might Not Know. https://finance.yahoo.com/news/scale-ais-alexandr-wang-went-023055270.html Accessed: 2026-02-01

  74. [74]

    Corin Katzke and Gideon Futerman. 2024. The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating. arXiv preprint arXiv:2501.14749 (2024)

  75. [75]

    Abdul Ghafoor Kazi, Rosman Md Yusoff, Anwar Khan, and Shazia Kazi. 2014. The freelancer: A conceptual review. Sains Humanika 2, 3 (2014)

  76. [76]

    Zixuan Ke, Fangkai Jiao, Yifei Ming, Xuan-Phi Nguyen, Austin Xu, Do Xuan Long, Minzhi Li, Chengwei Qin, Peifeng Wang, Silvio Savarese, et al. 2025. A survey of frontiers in llm reasoning: Inference scaling, learning to reason, and agentic systems. arXiv preprint arXiv:2504.09037 (2025)

  77. [77]

    Pyeonghwa Kim, Charis Asante-Agyei, Isabel Munoz, Michael Dunn, and Steve Sawyer. 2025. Decoding the Meaning of Success on Digital Labor Platforms: Worker-Centered Perspectives. Proceedings of the ACM on Human-Computer Interaction 9, 2 (2025), 1–29

  78. [78]

    Will Knight, Maxwell Zeff, and Zoë Schiffer. 2026. OpenAI Is Asking Contractors to Upload Work From Past Jobs to Evaluate the Performance of AI Agents. https://www.wired.com/story/openai-contractor-upload-real-work-documents-ai-agents/ Accessed: 2026-02-01

  79. [79]

    Charlotte Kobiella, Teodora Mitrevska, Albrecht Schmidt, and Fiona Draxler. When Efficiency Meets Fulfillment: Understanding Long-Term LLM Integration in Knowledge Work. In Proceedings of the 4th Annual Symposium on Human-Computer Interaction for Work. 1–15

Showing first 80 references.