pith. machine review for the scientific record.

arxiv: 2604.26956 · v1 · submitted 2026-04-03 · 💻 cs.CY · cs.AI · cs.HC

Recognition: no theorem link

Can AI be a moral victim? The role of moral patiency and ownership perceptions in ethical judgments of using AI-generated content

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 18:46 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.HC
keywords AI ethics · moral patiency · plagiarism · ownership perceptions · generative AI · ethical judgments · moral disengagement

The pith

Copying AI-generated content is judged less unethical, less plagiaristic, and less guilt-inducing than copying human-authored work because people see AI as having lower moral patiency and the human reuser as having greater ownership.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tests how source type shapes ethical judgments about reusing written content. Participants read descriptions of two similar manuscripts and rated the ethics of copying from a human author, an AI system, or an AI agent given a human-like name. Judgments were more lenient toward AI sources, with mediation showing the effect runs through reduced perceptions that AI can be harmed and increased sense that the copier owns the output. Anthropomorphic naming influenced ownership perceptions but did not directly alter moral patiency ratings. The work shows how people morally disengage from AI-created material in ways they do not for human material.

Core claim

In the experiment, participants evaluated reuse of substantively identical manuscripts whose original source was labeled human, AI system, or AI agent with a human-like name. Copying from either AI condition was rated less unethical, less plagiaristic, and less guilt-inducing than copying from the human author. Mediation analyses indicated that lower moral patiency assigned to AI and higher ownership assigned to the human reuser accounted for the leniency. Anthropomorphic cues affected judgments indirectly by lowering perceived ownership of the AI output.

What carries the argument

Moral patiency (perceived capacity of the original source to suffer harm) and ownership perceptions (attributed to the human who reuses the content) as mediators of ethical leniency toward AI-generated material.
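
To make this machinery concrete, the sketch below shows a parallel two-mediator bootstrap of the general kind the paper describes (regression-based mediation in the style of Hayes, reference [28] below). Everything here is a stand-in: the data are simulated, and the variable names (source_ai, patiency, ownership, unethical) and effect sizes are illustrative, not the paper's measures.

# Minimal sketch of a parallel two-mediator bootstrap; hypothetical data only.
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated condition and ratings: source_ai = 1 if the source was labeled AI.
source_ai = rng.integers(0, 2, n).astype(float)
patiency = 4.0 - 1.2 * source_ai + rng.normal(0, 1, n)   # AI rated less harmable
ownership = 3.0 + 0.9 * source_ai + rng.normal(0, 1, n)  # copier claims AI output more
unethical = 2.0 + 0.6 * patiency - 0.5 * ownership + 0.1 * source_ai + rng.normal(0, 1, n)

def ols(y, X):
    # Least-squares coefficients with an intercept column prepended.
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect_effects(x, m1, m2, y):
    a1 = ols(m1, x[:, None])[1]               # path a1: condition -> patiency
    a2 = ols(m2, x[:, None])[1]               # path a2: condition -> ownership
    b = ols(y, np.column_stack([x, m1, m2]))  # outcome on condition + both mediators
    return a1 * b[2], a2 * b[3]               # indirect effects a1*b1 and a2*b2

# Percentile bootstrap; a 95% CI excluding zero suggests a reliable indirect path.
boot = np.array([
    indirect_effects(source_ai[i], patiency[i], ownership[i], unethical[i])
    for i in (rng.integers(0, n, n) for _ in range(5000))
])
for name, col in zip(["via patiency", "via ownership"], boot.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"indirect effect {name}: 95% CI [{lo:.3f}, {hi:.3f}]")

Note what the sketch leaves out, because the paper's design does too: nothing controls for co-varying perceptions such as agency or capability, which is exactly the referee's first major objection below.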

If this is right

  • Under current perceptions, plagiarism and attribution standards will be applied more leniently to AI-created text than to human-created text.
  • Anthropomorphic naming of AI tools indirectly shifts judgments of reuse by lowering the ownership users claim over the output.
  • Moral disengagement from AI output occurs because people do not assign it the same capacity to be victimized as human work.
  • Judgments of guilt and plagiarism scale with perceived patiency rather than with the objective similarity of the copied material.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If future AI systems are routinely described as having internal states or suffering, the current leniency in reuse judgments may shrink.
  • Platform policies that require disclosure of AI use could alter ownership perceptions and thereby tighten ethical standards for reuse.
  • Legal copyright doctrines that hinge on human authorship may need explicit rules for mixed human-AI content to avoid relying on shifting moral intuitions.

Load-bearing premise

Brief labels describing the source as human, AI system, or named AI agent cleanly separate moral patiency and ownership perceptions without introducing other uncontrolled differences in how participants picture the content or its author.

What would settle it

A replication in which participants are told the AI system has the same capacity to experience harm as a human author, yet ethical ratings of copying still show no difference, would undermine the mediation account.

Figures

Figures reproduced from arXiv: 2604.26956 by Hyesun Choung and Soojong Kim.

Figure 1: Effects of content source on guilt, unethicality, and plagiarism perceptions through mediators.
Figure 2: (a) Effects of AI source on feelings of guilt through mediators.
Figure 3: (a) Effects of anthropomorphic cue on feelings of guilt through mediators.
Original abstract

The growing use of generative AI raises ethical concerns about authorship and plagiarism. This study examines how people judge the reuse of AI-generated content, focusing on moral patiency and ownership perceptions. In an experiment, participants evaluated two substantively similar manuscripts in which the original source was described as authored by a human, an AI system, or an AI agent with a human-like name. Results showed that copying AI-generated work was judged less unethical, less plagiaristic, and less guilt-inducing than copying human-authored work. Mediation analyses revealed that this leniency stemmed from lower perceptions of AI's capacity to suffer harm (moral patiency) and greater ownership attributed to the human writer reusing AI-generated content. Anthropomorphic cues shaped moral evaluations indirectly by reducing perceived ownership. These findings shed light on how people morally disengage when using AI-generated work and highlight differences in how ethical judgments are applied to human versus AI-created content.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper reports an experiment in which participants judged the reuse of substantively identical manuscripts whose original source was described as human-authored, AI-system-authored, or AI-agent-authored with a human-like name. Copying AI-generated content was rated less unethical, less plagiaristic, and less guilt-inducing than copying human-authored content; mediation analyses attributed this leniency to lower perceived moral patiency of AI and higher ownership attributed to the human re-user. Anthropomorphic cues were said to affect judgments indirectly via ownership perceptions.

Significance. If the mediation results hold after addressing design and reporting issues, the work supplies empirical evidence on moral disengagement mechanisms in AI content reuse, linking patiency and ownership perceptions to ethical leniency. This is relevant to AI ethics, plagiarism policy, and human-AI interaction research, and the mediation approach strengthens the explanatory claim over simple main-effect comparisons.

major comments (2)
  1. [Methods] Methods section: the three source-description conditions (human, AI system, AI agent with human-like name) are unlikely to isolate moral patiency and ownership cleanly. Brief labels can simultaneously alter perceived agency, intentionality, and capability; without explicit measures or statistical controls for these co-varying constructs, the mediation paths cannot be unambiguously attributed to patiency and ownership alone. This is load-bearing for the central explanatory claim.
  2. [Results] Results section: the abstract and reported mediation analyses provide no information on sample size, power analysis, exclusion criteria, or robustness checks (e.g., alternative models controlling for agency perceptions). These omissions prevent evaluation of whether the reported indirect effects are statistically reliable or sensitive to analytic choices.
minor comments (1)
  1. [Abstract] Abstract: the description of the stimuli could be expanded to note whether participants saw full manuscripts or only brief source labels, as this affects interpretation of how the manipulations were experienced.

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which help clarify the scope and robustness of our findings. We address each major point below and indicate the revisions we will make to the manuscript.

Point-by-point responses
  1. Referee: [Methods] Methods section: the three source-description conditions (human, AI system, AI agent with human-like name) are unlikely to isolate moral patiency and ownership cleanly. Brief labels can simultaneously alter perceived agency, intentionality, and capability; without explicit measures or statistical controls for these co-varying constructs, the mediation paths cannot be unambiguously attributed to patiency and ownership alone. This is load-bearing for the central explanatory claim.

    Authors: We agree that brief source labels can influence multiple related perceptions and that our design does not include direct measures of agency or intentionality. The conditions were selected to vary moral patiency via anthropomorphic cues while measuring ownership perceptions as the key mediator, consistent with prior work on AI anthropomorphism. However, because we lack explicit controls for co-varying constructs, we cannot claim the mediation paths are fully isolated. In the revision we will add an explicit discussion of this limitation, report any available correlations with related constructs if present in the data, and note that future studies should include dedicated agency measures. This addresses the concern without altering the reported mediation results. revision: partial

  2. Referee: [Results] Results section: the abstract and reported mediation analyses provide no information on sample size, power analysis, exclusion criteria, or robustness checks (e.g., alternative models controlling for agency perceptions). These omissions prevent evaluation of whether the reported indirect effects are statistically reliable or sensitive to analytic choices.

    Authors: We acknowledge the omission of these details from the abstract and mediation reporting. The full manuscript contains the raw sample size and exclusion criteria, but we will expand the Results section to include a power analysis, explicit exclusion criteria, and robustness checks (including alternative mediation models that control for any available agency-related items). We will also update the abstract to reference the sample size and key analytic decisions. These additions will allow readers to assess the reliability of the indirect effects. revision: yes
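
As an editorial illustration of the power analysis committed to above (not material from the manuscript), a minimal a priori calculation for the human-vs-AI main-effect contrast could look like the following; the effect sizes are hypothetical placeholders, not the paper's estimates.

# Normal-approximation a priori power calculation; effect sizes are illustrative.
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    # Sample size per group for a two-group mean comparison at Cohen's d.
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / d) ** 2

for d in (0.3, 0.5):  # small-to-medium standardized effects
    print(f"d = {d}: about {n_per_group(d):.0f} participants per condition")

Power for the indirect effects themselves is not covered by this closed-form contrast; it would require Monte Carlo simulation over the full mediation model.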

Circularity Check

0 steps flagged

No circularity: purely empirical mediation study with no derivations or self-referential reductions

Full rationale

This paper reports an experiment measuring participant judgments of ethical reuse of content under different source descriptions (human vs. AI), followed by standard statistical mediation analysis on moral patiency and ownership perceptions. No equations, fitted parameters renamed as predictions, self-definitional constructs, or derivation chains appear in the abstract or described methods. Mediation paths are computed directly from collected data rather than forced by prior inputs or self-citations. Any self-citations (if present) support background literature and are not load-bearing for the reported results, which rest on the experimental design and participant responses. The skeptical concern about confounding across conditions is a validity issue, not a circularity problem. The analysis rests on collected data rather than on its own outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The study rests on standard assumptions from moral psychology that perceptions of suffering capacity and ownership can be manipulated via short descriptions and that mediation analysis validly identifies causal pathways; no free parameters or invented entities are introduced.

axioms (2)
  • domain assumption Moral patiency (capacity to suffer harm) is a measurable and causally relevant construct for ethical judgments
    Invoked in the mediation analysis section of the abstract as the explanatory mechanism
  • domain assumption Brief textual descriptions of authorship source can isolate ownership perceptions without major confounds
    Underlying the three experimental conditions described in the abstract

pith-pipeline@v0.9.0 · 5467 in / 1298 out tokens · 46105 ms · 2026-05-13T18:46:28.876378+00:00 · methodology


Reference graph

Works this paper leans on

62 extracted references · 62 canonical work pages

  1. [1]

    Airenti, G. 2015. The Cognitive Bases of Anthropomorphism: From Relatedness to Empathy. International Journal of Social Robotics. 7, 1 (Feb. 2015), 117–127. https://doi.org/10.1007/s12369-014-0263-x

  2. [2]

    Akbulut, C. et al. 2025. All Too Human? Mapping and Mitigating the Risks from Anthropomorphic AI. Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society (San Jose, California, USA, Feb. 2025), 13–26

  3. [3]

    Anthis, J.R. et al. 2025. Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–22

  4. [4]

    Avey, J.B. et al. 2009. Psychological ownership: theoretical extensions, measurement and relation to work outcomes. Journal of Organizational Behavior. 30, 2 (Feb. 2009), 173–191. https://doi.org/10.1002/job.583

  5. [5]

    Aylett, M.P. et al. 2019. The right kind of unnatural: designing a robot voice. Proceedings of the 1st International Conference on Conversational User Interfaces (Dublin, Ireland, Aug. 2019), 1–2

  6. [6]

    Balle, S.N. 2022. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. AI & SOCIETY. 37, 2 (June 2022), 535–548. https://doi.org/10.1007/s00146-021-01211-2

  7. [7]

    Bandura, A. 2011. Moral Disengagement. The Encyclopedia of Peace Psychology. D.J. Christie, ed. Wiley

  8. [8]

    Banks, J. 2021. From Warranty Voids to Uprising Advocacy: Human Action and the Perceived Moral Patiency of Social Robots. Frontiers in Robotics and AI. 8, (May 2021). https://doi.org/10.3389/frobt.2021.670503

  9. [9]

    Banks, J. and Bowman, N.D. 2023. Perceived Moral Patiency of Social Robots: Explication and Scale Development. International Journal of Social Robotics. 15, 1 (Jan. 2023), 101–113. https://doi.org/10.1007/s12369-022-00950-6

  10. [10]

    Brave, S. et al. 2005. Computers that care: investigating the effects of orientation of emotion exhibited by an embodied computer agent. International Journal of Human-Computer Studies. 62, 2 (Feb. 2005), 161–178. https://doi.org/10.1016/j.ijhcs.2004.11.002

  11. [11]

    Burgoon, J.K. and Hale, J.L. 1984. The fundamental topoi of relational communication. Communication Monographs. 51, 3 (Sept. 1984), 193–214. https://doi.org/10.1080/03637758409390195

  12. [12]

    Chan, C.K.Y. 2023. Is AI Changing the Rules of Academic Misconduct? An In-depth Look at Students’ Perceptions of “AI-giarism.” arXiv

  13. [13]

    Congressional Research Service 2023. Generative artificial intelligence and copyright law. Technical Report #LSB10922. Library of Congress

  14. [14]

    Cotte, J. et al. 2005. Enhancing or disrupting guilt: the role of ad credibility and perceived manipulative intent. Journal of Business Research. 58, 3 (Mar. 2005), 361–368. https://doi.org/10.1016/S0148-2963(03)00102-4

  15. [15]

    Darling, K. 2021. The new breed: how to think about robots. Allen Lane

  16. [16]

    Davis, J.L. 2020. How Artifacts Afford: The Power and Politics of Everyday Things. The MIT Press

  17. [17]

    Deterding, S. et al. 2017. Mixed-Initiative Creative Interfaces. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA, May 2017), 628–635

  18. [18]

    Epley, N. et al. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review. 114, 4 (2007), 864–886. https://doi.org/10.1037/0033-295X.114.4.864

  19. [19]

    Frazer, R. et al. 2022. Moral Disengagement Cues and Consequences for Victims in Entertainment Narratives: An Experimental Investigation. Media Psychology. 25, 4 (July 2022), 619–637. https://doi.org/10.1080/15213269.2022.2034020

  20. [20]

    Fyfe, P. 2023. How to cheat on your final paper: Assigning AI for student writing. AI & SOCIETY. 38, 4 (Aug. 2023), 1395–1405. https://doi.org/10.1007/s00146-022-01397-z

  21. [21]

    Giroux, M. et al. 2022. Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI. Journal of Business Ethics. 178, 4 (July 2022), 1027–1041. https://doi.org/10.1007/s10551-022-05056-7

  22. [22]

    Gouldner, A.W. 1960. The Norm of Reciprocity: A Preliminary Statement. American Sociological Review. 25, 2 (1960), 161–178. https://doi.org/10.2307/2092623

  23. [23]

    Gray, K. et al. 2012. Mind Perception Is the Essence of Morality. Psychological Inquiry. 23, 2 (Apr. 2012), 101–124. https://doi.org/10.1080/1047840X.2012.651387

  24. [24]

    Guingrich, R.E. and Graziano, M.S.A. 2024. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Frontiers in Psychology. 15, (Mar. 2024), 1322781. https://doi.org/10.3389/fpsyg.2024.1322781

  25. [25]

    Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 4 (2001), 814–834. https://doi.org/10.1037/0033-295X.108.4.814

  26. [26]

    Haidt, J. ed. 2013. The righteous mind: why good people are divided by politics and religion. Vintage Books

  27. [27]

    Hancock, J.T. et al. 2020. AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication. 25, 1 (Mar. 2020), 89–100. https://doi.org/10.1093/jcmc/zmz022

  28. [28]

    Hayes, A.F. 2018. Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. Guilford Press

  29. [29]

    Hwang, A.H.-C. et al. 2025. “It was 80% me, 20% AI”: Seeking Authenticity in Co-Writing with Large Language Models. Proc. ACM Hum.-Comput. Interact. 9, 2 (May 2025), CSCW122:1-CSCW122:41. https://doi.org/10.1145/3711020

  30. [30]

    Jelson, A. and Lee, S.W. 2024. An empirical study to understand how students use ChatGPT for writing essays and how it affects their ownership. arXiv

  31. [31]

    Jensen, T. and Khan, M.M.H. 2022. I’m Only Human: The Effects of Trust Dampening by Anthropomorphic Agents. HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence (Cham, 2022), 285–306

  32. [32]

    Jia, H. et al. 2022. Do We Blame it on the Machine? Task Outcome and Agency Attribution in Human-Technology Collaboration. Proceedings of the 55th Hawaii International Conference on System Sciences (2022)

  33. [33]

    Joshi, N. and Vogel, D. 2025. Writing with AI Lowers Psychological Ownership, but Longer Prompts Can Help. Proceedings of the 7th ACM Conference on Conversational User Interfaces (Waterloo ON Canada, July 2025), 1–17

  34. [34]

    Kadoma, K. et al. 2024. The Role of Inclusion, Control, and Ownership in Workplace AI-Mediated Communication. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2024), 1–10

  35. [35]

    Khalaf, M.A. 2024. Does attitude towards plagiarism predict aigiarism using ChatGPT? AI and Ethics. (Feb. 2024). https://doi.org/10.1007/s43681-024-00426-5

  36. [36]

    Kim, T. et al. 2023. AI increases unethical consumer behavior due to reduced anticipatory guilt. Journal of the Academy of Marketing Science. 51, 4 (July 2023), 785–801. https://doi.org/10.1007/s11747-021-00832-9

  37. [37]

    Kim, Y. and Sundar, S.S. 2012. Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior. 28, 1 (Jan. 2012), 241–250. https://doi.org/10.1016/j.chb.2011.09.006

  38. [38]

    Kyi, L. et al. 2025. Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–16

  39. [39]

    Ladak, A. et al. 2025. Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–19

  40. [40]

    Ladak, A. et al. 2024. The Moral Psychology of Artificial Intelligence. Current Directions in Psychological Science. 33, 1 (Feb. 2024), 27–34. https://doi.org/10.1177/09637214231205866

  41. [41]

    Longoni, C. et al. 2023. Plagiarizing AI-generated Content Is Seen As Less Unethical and More Permissible. Preprint. https://doi.org/10.31234/osf.io/na3wb

  42. [42]

    Loomis, J.L. 1959. Communication, the Development of Trust, and Cooperative Behavior. Human Relations. 12, 4 (Nov. 1959), 305–315. https://doi.org/10.1177/001872675901200402

  43. [43]

    Lovato, J. et al. 2024. Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 7, 1 (Oct. 2024), 905–916. https://doi.org/10.1609/aies.v7i1.31691

  44. [44]

    Maeda, T. and Quan-Haase, A. 2024. When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (New York, NY, USA, June 2024), 1068–1077

  45. [45]

    Malle, B.F. et al. 2025. People’s judgments of humans and robots in a classic moral dilemma. Cognition. 254, (Jan. 2025), 105958. https://doi.org/10.1016/j.cognition.2024.105958

  46. [46]

    Mildner, T. et al. 2024. Listening to the Voices: Describing Ethical Caveats of Conversational User Interfaces According to Experts and Frequent Users. Proceedings of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA, May 2024), 1–18

  47. [47]

    Nass, C. and Moon, Y. 2000. Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues. 56, 1 (Jan. 2000), 81–103. https://doi.org/10.1111/0022-4537.00153

  48. [48]

    OECD 2025. Intellectual property issues in artificial intelligence trained on scraped data

  49. [49]

    Reeves, B. and Nass, C.I. 1996. The media equation: How people treat computers, television, and new media like real people and places. CSLI Publ

  50. [50]

    Schecter, A. and Richardson, B. 2025. How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–15

  51. [51]

    Schmidt, P. and Loidolt, S. 2023. Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner? Philosophy & Technology. 36, 3 (Sept. 2023), 55. https://doi.org/10.1007/s13347-023-00656-1

  52. [52]

    Schroeder, D.A. and Linder, D.E. 1976. Effects of actor’s causal role, outcome severity, and knowledge of prior accidents upon attributions of responsibility. Journal of Experimental Social Psychology. 12, 4 (Jan. 1976), 340–356. https://doi.org/10.1016/S0022-1031(76)80003-0

  53. [53]

    Shaver, K.G. 1985. The attribution of blame: Causality, responsibility, and blameworthiness. Springer-Verlag

  54. [54]

    Sidoti, O. et al. 2025. About a quarter of U.S. teens have used ChatGPT for schoolwork – double the share in 2023. Pew Research Center

  55. [55]

    Sullivan, Y.W. and Fosso Wamba, S. 2022. Moral Judgments in the Age of Artificial Intelligence. Journal of Business Ethics. 178, 4 (July 2022), 917–943. https://doi.org/10.1007/s10551-022-05053-w

  56. [56]

    Sundar, S.S. 2020. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated Communication. 25, 1 (Mar. 2020), 74–88. https://doi.org/10.1093/jcmc/zmz026

  57. [57]

    Swanepoel, D. 2021. Does Artificial Intelligence Have Agency? The Mind-Technology Problem. R.W. Clowes et al., eds. Springer International Publishing. 83–104

  58. [58]

    Tanibe, T. et al. 2017. We perceive a mind in a robot when we help it. PLOS ONE. 12, 7 (July 2017), e0180952. https://doi.org/10.1371/journal.pone.0180952

  59. [59]

    Turkle, S. 2011. Alone together: why we expect more from technology and less from each other. Basic books

  60. [60]

    Wasi, A.T. et al. 2024. LLMs as Writing Assistants: Exploring Perspectives on Sense of Ownership and Reasoning. arXiv

  61. [61]

    Xu, Y. et al. 2024. What Makes It Mine? Exploring Psychological Ownership over Human-AI Co-Creations. Nova Scotia. (2024)

  62. [62]

    Yannakakis, G.N. et al. 2014. Mixed-initiative co-creativity. (2014)