Can AI be a moral victim? The role of moral patiency and ownership perceptions in ethical judgments of using AI-generated content
Pith reviewed 2026-05-13 18:46 UTC · model grok-4.3
The pith
Copying AI-generated content is judged less unethical, less plagiaristic, and less guilt-inducing than copying human-authored work because people see AI as having lower moral patiency and the human reuser as having greater ownership.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
In the experiment, participants evaluated reuse of substantively identical manuscripts whose original source was labeled human, AI system, or AI agent with a human-like name. Copying from either AI condition was rated less unethical, less plagiaristic, and less guilt-inducing than copying from the human author. Mediation analyses indicated that lower moral patiency assigned to AI and higher ownership assigned to the human reuser accounted for the leniency. Anthropomorphic cues affected judgments indirectly by lowering perceived ownership of the AI output.
What carries the argument
Moral patiency (the perceived capacity of the original source to suffer harm) and ownership perceptions (attributed to the human who reuses the content) act as the mediators of ethical leniency toward AI-generated material.
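To make the claimed mechanism concrete, here is a minimal sketch of a Hayes-style parallel-mediation analysis of the kind the paper reports (cf. Hayes 2018, reference [28]), with percentile-bootstrap confidence intervals for the indirect effects. The column names (`condition` coded 0 = human source, 1 = AI source; `patiency`; `ownership`; `unethical`) are illustrative assumptions, not the authors' actual variables or model.

```python
# Sketch of a parallel-mediation analysis with bootstrapped indirect effects.
# Column names are hypothetical; this is not the authors' analysis script.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effects(df: pd.DataFrame):
    # a-paths: source condition -> each mediator
    a1 = smf.ols("patiency ~ condition", data=df).fit().params["condition"]
    a2 = smf.ols("ownership ~ condition", data=df).fit().params["condition"]
    # b-paths: mediators -> ethical judgment, holding condition constant
    b = smf.ols("unethical ~ condition + patiency + ownership", data=df).fit()
    # indirect effect through each mediator = a * b
    return a1 * b.params["patiency"], a2 * b.params["ownership"]

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 5000, seed: int = 0):
    # Percentile-bootstrap 95% CIs for both indirect effects.
    rng = np.random.default_rng(seed)
    draws = np.array([
        indirect_effects(df.sample(len(df), replace=True, random_state=rng))
        for _ in range(n_boot)
    ])
    return np.percentile(draws, [2.5, 97.5], axis=0)  # columns: patiency, ownership
```

Under the paper's account, both indirect effects should pull ethical ratings down in the AI conditions, with bootstrap intervals excluding zero; the referee's alternative-model check below amounts to adding agency-perception covariates to these regressions.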
If this is right
- Ethical standards for plagiarism and attribution will treat AI-created text more leniently than human-created text under current perceptions.
- Anthropomorphic naming of AI tools indirectly increases acceptance of reuse by shifting ownership attributions to the human user.
- Moral disengagement from AI output occurs because people do not assign it the same capacity to be victimized as human work.
- Judgments of guilt and plagiarism scale with perceived patiency rather than with the objective similarity of the copied material.
Where Pith is reading between the lines
- If future AI systems are routinely described as having internal states or suffering, the current leniency in reuse judgments may shrink.
- Platform policies that require disclosure of AI use could alter ownership perceptions and thereby tighten ethical standards for reuse.
- Legal copyright doctrines that hinge on human authorship may need explicit rules for mixed human-AI content to avoid relying on shifting moral intuitions.
Load-bearing premise
Brief labels describing the source as human, AI system, or named AI agent cleanly separate moral patiency and ownership perceptions without introducing other uncontrolled differences in how participants picture the content or its author.
What would settle it
A replication in which participants are told the AI system has the same capacity to experience harm as a human author, yet still show no difference in ethical ratings of copying, would undermine the mediation account.
original abstract
The growing use of generative AI raises ethical concerns about authorship and plagiarism. This study examines how people judge the reuse of AI-generated content, focusing on moral patiency and ownership perceptions. In an experiment, participants evaluated two substantively similar manuscripts in which the original source was described as authored by a human, an AI system, or an AI agent with a human-like name. Results showed that copying AI-generated work was judged less unethical, less plagiaristic, and less guilt-inducing than copying human-authored work. Mediation analyses revealed that this leniency stemmed from lower perceptions of AI's capacity to suffer harm (moral patiency) and greater ownership attributed to the human writer reusing AI-generated content. Anthropomorphic cues shaped moral evaluations indirectly by reducing perceived ownership. These findings shed light on how people morally disengage when using AI-generated work and highlight differences in how ethical judgments are applied to human versus AI-created content.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports an experiment in which participants judged the reuse of substantively identical manuscripts whose original source was described as human-authored, AI-system-authored, or AI-agent-authored with a human-like name. Copying AI-generated content was rated less unethical, less plagiaristic, and less guilt-inducing than copying human-authored content; mediation analyses attributed this leniency to lower perceived moral patiency of AI and higher ownership attributed to the human reuser. Anthropomorphic cues were said to affect judgments indirectly via ownership perceptions.
Significance. If the mediation results hold after addressing design and reporting issues, the work supplies empirical evidence on moral disengagement mechanisms in AI content reuse, linking patiency and ownership perceptions to ethical leniency. This is relevant to AI ethics, plagiarism policy, and human-AI interaction research, and the mediation approach strengthens the explanatory claim over simple main-effect comparisons.
major comments (2)
- [Methods] Methods section: the three source-description conditions (human, AI system, AI agent with human-like name) are unlikely to isolate moral patiency and ownership cleanly. Brief labels can simultaneously alter perceived agency, intentionality, and capability; without explicit measures or statistical controls for these co-varying constructs, the mediation paths cannot be unambiguously attributed to patiency and ownership alone. This is load-bearing for the central explanatory claim.
- [Results] Results section: the abstract and reported mediation analyses provide no information on sample size, power analysis, exclusion criteria, or robustness checks (e.g., alternative models controlling for agency perceptions). These omissions prevent evaluation of whether the reported indirect effects are statistically reliable or sensitive to analytic choices.
minor comments (1)
- [Abstract] Abstract: the description of the stimuli could be expanded to note whether participants saw full manuscripts or only brief source labels, as this affects interpretation of how the manipulations were experienced.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help clarify the scope and robustness of our findings. We address each major point below and indicate the revisions we will make to the manuscript.
point-by-point responses
Referee: [Methods] Methods section: the three source-description conditions (human, AI system, AI agent with human-like name) are unlikely to isolate moral patiency and ownership cleanly. Brief labels can simultaneously alter perceived agency, intentionality, and capability; without explicit measures or statistical controls for these co-varying constructs, the mediation paths cannot be unambiguously attributed to patiency and ownership alone. This is load-bearing for the central explanatory claim.
Authors: We agree that brief source labels can influence multiple related perceptions and that our design does not include direct measures of agency or intentionality. The conditions were selected to vary moral patiency via anthropomorphic cues while measuring ownership perceptions as the key mediator, consistent with prior work on AI anthropomorphism. However, because we lack explicit controls for co-varying constructs, we cannot claim the mediation paths are fully isolated. In the revision we will add an explicit discussion of this limitation, report any available correlations with related constructs if present in the data, and note that future studies should include dedicated agency measures. This addresses the concern without altering the reported mediation results. revision: partial
Referee: [Results] Results section: the abstract and reported mediation analyses provide no information on sample size, power analysis, exclusion criteria, or robustness checks (e.g., alternative models controlling for agency perceptions). These omissions prevent evaluation of whether the reported indirect effects are statistically reliable or sensitive to analytic choices.
Authors: We acknowledge the omission of these details from the abstract and mediation reporting. The full manuscript contains the raw sample size and exclusion criteria, but we will expand the Results section to include a power analysis, explicit exclusion criteria, and robustness checks (including alternative mediation models that control for any available agency-related items). We will also update the abstract to reference the sample size and key analytic decisions. These additions will allow readers to assess the reliability of the indirect effects. revision: yes
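For concreteness, a minimal sketch of the promised a priori power analysis follows, assuming a one-way three-condition between-subjects design with a conventional medium effect size (Cohen's f = 0.25), alpha = .05, and 80% power; these targets are assumptions for illustration, not values reported in the paper.

```python
# Hedged sketch of an a priori power analysis for a one-way,
# three-condition between-subjects design (human, AI system, named AI agent).
from statsmodels.stats.power import FTestAnovaPower

# Solve for the total sample size that achieves 80% power to detect a
# medium omnibus effect (Cohen's f = 0.25) at alpha = .05 across 3 groups.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f; "medium" by convention (assumed, not reported)
    alpha=0.05,
    power=0.80,
    k_groups=3,
)
print(f"total N = {n_total:.0f} (about {n_total / 3:.0f} per condition)")
```

Under these conventional assumptions the required total sample is roughly 160 participants; whether the study meets that bar is exactly what the expanded reporting would let readers check.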
Circularity Check
No circularity: purely empirical mediation study with no derivations or self-referential reductions
full rationale
This paper reports an experiment measuring participant judgments of the ethical reuse of content under different source descriptions (human vs. AI), followed by standard statistical mediation analysis of moral patiency and ownership perceptions. No equations, fitted parameters renamed as predictions, self-definitional constructs, or derivation chains appear in the abstract or described methods. Mediation paths are computed directly from collected data rather than forced by prior inputs or self-citations. Any self-citations (if present) support background literature and are not load-bearing for the reported results, which rest on the experimental design and participant responses. The skeptic concern about confounding across conditions is a validity issue, not a circularity problem: the results stand on collected data rather than on a self-referential derivation chain.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption: Moral patiency (capacity to suffer harm) is a measurable and causally relevant construct for ethical judgments
- domain assumption: Brief textual descriptions of authorship source can isolate ownership perceptions without major confounds
Reference graph
Works this paper leans on
[1] Airenti, G. 2015. The Cognitive Bases of Anthropomorphism: From Relatedness to Empathy. International Journal of Social Robotics. 7, 1 (Feb. 2015), 117–127. https://doi.org/10.1007/s12369-014-0263-x
[2] Akbulut, C. et al. 2025. All Too Human? Mapping and Mitigating the Risks from Anthropomorphic AI. Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society (San Jose, California, USA, Feb. 2025), 13–26.
[3] Anthis, J.R. et al. 2025. Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–22.
[4] Avey, J.B. et al. 2009. Psychological ownership: theoretical extensions, measurement and relation to work outcomes. Journal of Organizational Behavior. 30, 2 (Feb. 2009), 173–191. https://doi.org/10.1002/job.583
[5] Aylett, M.P. et al. 2019. The right kind of unnatural: designing a robot voice. Proceedings of the 1st International Conference on Conversational User Interfaces (Dublin, Ireland, Aug. 2019), 1–2.
[6] Balle, S.N. 2022. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. AI & SOCIETY. 37, 2 (June 2022), 535–548. https://doi.org/10.1007/s00146-021-01211-2
[7] Bandura, A. 2011. Moral Disengagement. The Encyclopedia of Peace Psychology. D.J. Christie, ed. Wiley.
[8] Banks, J. 2021. From Warranty Voids to Uprising Advocacy: Human Action and the Perceived Moral Patiency of Social Robots. Frontiers in Robotics and AI. 8, (May 2021). https://doi.org/10.3389/frobt.2021.670503
[9] Banks, J. and Bowman, N.D. 2023. Perceived Moral Patiency of Social Robots: Explication and Scale Development. International Journal of Social Robotics. 15, 1 (Jan. 2023), 101–113. https://doi.org/10.1007/s12369-022-00950-6
[10] Brave, S. et al. 2005. Computers that care: investigating the effects of orientation of emotion exhibited by an embodied computer agent. International Journal of Human-Computer Studies. 62, 2 (Feb. 2005), 161–178. https://doi.org/10.1016/j.ijhcs.2004.11.002
[11] Burgoon, J.K. and Hale, J.L. 1984. The fundamental topoi of relational communication. Communication Monographs. 51, 3 (Sept. 1984), 193–214. https://doi.org/10.1080/03637758409390195
[12] Chan, C.K.Y. 2023. Is AI Changing the Rules of Academic Misconduct? An In-depth Look at Students' Perceptions of "AI-giarism." arXiv.
[13] Congressional Research Service 2023. Generative artificial intelligence and copyright law. Technical Report #LSB10922. Library of Congress.
[14] Cotte, J. et al. 2005. Enhancing or disrupting guilt: the role of ad credibility and perceived manipulative intent. Journal of Business Research. 58, 3 (Mar. 2005), 361–368. https://doi.org/10.1016/S0148-2963(03)00102-4
[15] Darling, K. 2021. The new breed: how to think about robots. Allen Lane.
[16] Davis, J.L. 2020. How Artifacts Afford: The Power and Politics of Everyday Things. The MIT Press.
[17] Deterding, S. et al. 2017. Mixed-Initiative Creative Interfaces. Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (Denver, Colorado, USA, May 2017), 628–635.
[18] Epley, N. et al. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review. 114, 4 (2007), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
[19] Frazer, R. et al. 2022. Moral Disengagement Cues and Consequences for Victims in Entertainment Narratives: An Experimental Investigation. Media Psychology. 25, 4 (July 2022), 619–637. https://doi.org/10.1080/15213269.2022.2034020
[20] Fyfe, P. 2023. How to cheat on your final paper: Assigning AI for student writing. AI & SOCIETY. 38, 4 (Aug. 2023), 1395–1405. https://doi.org/10.1007/s00146-022-01397-z
[21] Giroux, M. et al. 2022. Artificial Intelligence and Declined Guilt: Retailing Morality Comparison Between Human and AI. Journal of Business Ethics. 178, 4 (July 2022), 1027–1041. https://doi.org/10.1007/s10551-022-05056-7
[22] Gouldner, A.W. 1960. The Norm of Reciprocity: A Preliminary Statement. American Sociological Review. 25, 2 (1960), 161–178. https://doi.org/10.2307/2092623
[23] Gray, K. et al. 2012. Mind Perception Is the Essence of Morality. Psychological Inquiry. 23, 2 (Apr. 2012), 101–124. https://doi.org/10.1080/1047840X.2012.651387
[24] Guingrich, R.E. and Graziano, M.S.A. 2024. Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Frontiers in Psychology. 15, (Mar. 2024), 1322781. https://doi.org/10.3389/fpsyg.2024.1322781
[25] Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review. 108, 4 (2001), 814–834. https://doi.org/10.1037/0033-295X.108.4.814
[26] Haidt, J. ed. 2013. The righteous mind: why good people are divided by politics and religion. Vintage Books.
[27] Hancock, J.T. et al. 2020. AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations. Journal of Computer-Mediated Communication. 25, 1 (Mar. 2020), 89–100. https://doi.org/10.1093/jcmc/zmz022
[28] Hayes, A.F. 2018. Introduction to mediation, moderation, and conditional process analysis: a regression-based approach. Guilford Press.
[29] Hwang, A.H.-C. et al. 2025. "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models. Proc. ACM Hum.-Comput. Interact. 9, 2 (May 2025), CSCW122:1-CSCW122:41. https://doi.org/10.1145/3711020
[30] Jelson, A. and Lee, S.W. 2024. An empirical study to understand how students use ChatGPT for writing essays and how it affects their ownership. arXiv.
[31] Jensen, T. and Khan, M.M.H. 2022. I'm Only Human: The Effects of Trust Dampening by Anthropomorphic Agents. HCI International 2022 – Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence (Cham, 2022), 285–306.
[32] Jia, H. et al. 2022. Do We Blame it on the Machine? Task Outcome and Agency Attribution in Human-Technology Collaboration. Proceedings of the 55th Hawaii International Conference on System Sciences (2022).
[33] Joshi, N. and Vogel, D. 2025. Writing with AI Lowers Psychological Ownership, but Longer Prompts Can Help. Proceedings of the 7th ACM Conference on Conversational User Interfaces (Waterloo, ON, Canada, July 2025), 1–17.
[34] Kadoma, K. et al. 2024. The Role of Inclusion, Control, and Ownership in Workplace AI-Mediated Communication. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2024), 1–10.
[35] Khalaf, M.A. 2024. Does attitude towards plagiarism predict aigiarism using ChatGPT? AI and Ethics. (Feb. 2024). https://doi.org/10.1007/s43681-024-00426-5
[36] Kim, T. et al. 2023. AI increases unethical consumer behavior due to reduced anticipatory guilt. Journal of the Academy of Marketing Science. 51, 4 (July 2023), 785–801. https://doi.org/10.1007/s11747-021-00832-9
[37] Kim, Y. and Sundar, S.S. 2012. Anthropomorphism of computers: Is it mindful or mindless? Computers in Human Behavior. 28, 1 (Jan. 2012), 241–250. https://doi.org/10.1016/j.chb.2011.09.006
[38] Kyi, L. et al. 2025. Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–16.
[39] Ladak, A. et al. 2025. Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–19.
[40] Ladak, A. et al. 2024. The Moral Psychology of Artificial Intelligence. Current Directions in Psychological Science. 33, 1 (Feb. 2024), 27–34. https://doi.org/10.1177/09637214231205866
[41] Longoni, C. et al. 2023. Plagiarizing AI-generated Content Is Seen As Less Unethical and More Permissible. Preprint. https://doi.org/10.31234/osf.io/na3wb
[42] Loomis, J.L. 1959. Communication, the Development of Trust, and Cooperative Behavior. Human Relations. 12, 4 (Nov. 1959), 305–315. https://doi.org/10.1177/001872675901200402
[43] Lovato, J. et al. 2024. Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. 7, 1 (Oct. 2024), 905–916. https://doi.org/10.1609/aies.v7i1.31691
[44] Maeda, T. and Quan-Haase, A. 2024. When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (New York, NY, USA, June 2024), 1068–1077.
[45] Malle, B.F. et al. 2025. People's judgments of humans and robots in a classic moral dilemma. Cognition. 254, (Jan. 2025), 105958. https://doi.org/10.1016/j.cognition.2024.105958
[46] Mildner, T. et al. 2024. Listening to the Voices: Describing Ethical Caveats of Conversational User Interfaces According to Experts and Frequent Users. Proceedings of the CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA, May 2024), 1–18.
[47] Nass, C. and Moon, Y. 2000. Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues. 56, 1 (Jan. 2000), 81–103. https://doi.org/10.1111/0022-4537.00153
[48] OECD 2025. Intellectual property issues in artificial intelligence trained on scraped data.
[49] Reeves, B. and Nass, C.I. 1996. The media equation: How people treat computers, television, and new media like real people and places. CSLI Publ.
[50] Schecter, A. and Richardson, B. 2025. How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2025), 1–15.
[51] Schmidt, P. and Loidolt, S. 2023. Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner? Philosophy & Technology. 36, 3 (Sept. 2023), 55. https://doi.org/10.1007/s13347-023-00656-1
[52] Schroeder, D.A. and Linder, D.E. 1976. Effects of actor's causal role, outcome severity, and knowledge of prior accidents upon attributions of responsibility. Journal of Experimental Social Psychology. 12, 4 (Jan. 1976), 340–356. https://doi.org/10.1016/S0022-1031(76)80003-0
[53] Shaver, K.G. 1985. The attribution of blame: Causality, responsibility, and blameworthiness. Springer-Verlag.
[54] Sidoti, O. et al. 2025. About a quarter of U.S. teens have used ChatGPT for schoolwork – double the share in 2023. Pew Research Center.
[55] Sullivan, Y.W. and Fosso Wamba, S. 2022. Moral Judgments in the Age of Artificial Intelligence. Journal of Business Ethics. 178, 4 (July 2022), 917–943. https://doi.org/10.1007/s10551-022-05053-w
[56] Sundar, S.S. 2020. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated Communication. 25, 1 (Mar. 2020), 74–88. https://doi.org/10.1093/jcmc/zmz026
[57] Swanepoel, D. 2021. Does Artificial Intelligence Have Agency? The Mind-Technology Problem. R.W. Clowes et al., eds. Springer International Publishing. 83–104.
[58] Tanibe, T. et al. 2017. We perceive a mind in a robot when we help it. PLOS ONE. 12, 7 (July 2017), e0180952. https://doi.org/10.1371/journal.pone.0180952
[59] Turkle, S. 2011. Alone together: why we expect more from technology and less from each other. Basic Books.
[60] Wasi, A.T. et al. 2024. LLMs as Writing Assistants: Exploring Perspectives on Sense of Ownership and Reasoning. arXiv.
[61] Xu, Y. et al. 2024. What Makes It Mine? Exploring Psychological Ownership over Human-AI Co-Creations. (Nova Scotia, 2024).
[62] Yannakakis, G.N. et al. 2014. Mixed-initiative co-creativity. (2014).