AI and Consciousness: Shifting Focus Towards Tractable Questions
Pith reviewed 2026-05-11 00:49 UTC · model grok-4.3
The pith
The direct question of whether AI systems can be conscious is currently intractable, so research should focus on how people perceive consciousness in them.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The fundamental problem of whether AI systems can be conscious is currently intractable in its direct form, given the absence of a universally accepted scientific theory of consciousness, as well as the historical open-endedness of the philosophical mind-body problem. In contrast, questions around the adjacent subject of perceived AI consciousness are tractable, timely, and highly consequential for society.
What carries the argument
The distinction between actual AI consciousness (intractable) and perceived AI consciousness (tractable), with the latter already driving observable shifts in user experience, ethical standards, and linguistic norms.
If this is right
- Clear communication about the uncertainties of AI consciousness becomes a practical obligation for developers and decision-makers.
- Research can target concrete drivers such as anthropomorphic language and interface design to understand their effects on human self-understanding.
- Ethical standards and regulatory discussions will need to account for widespread public perceptions rather than waiting for resolution of the direct consciousness question.
- Societal shifts in how humans view their own subjective experience relative to artificial entities will continue regardless of whether machines actually possess consciousness.
Where Pith is reading between the lines
- Treating perceived consciousness as the primary target could accelerate empirical work in human-AI interaction psychology without requiring resolution of longstanding philosophical debates.
- Policy proposals for AI rights or protections might emerge from perception data alone, creating a feedback loop where public beliefs shape legal categories before scientific clarity arrives.
- Long-term studies tracking changes in human self-description when people interact with increasingly anthropomorphic systems would test whether the predicted shifts in self-understanding actually occur.
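The longitudinal tracking in the last point could, in a minimal form, amount to measuring the rate of mentalistic vocabulary in people's self-descriptions over time. The sketch below is a toy illustration under stated assumptions: the term list, the yearly samples, and the tokenizer are invented for demonstration, not taken from the paper.

```python
from collections import Counter

# Hypothetical lexicon of mentalistic / experience terms; a real study
# would need a validated vocabulary list, not this illustrative set.
MENTAL_TERMS = {"feel", "feels", "felt", "think", "thinks",
                "believe", "believes", "experience", "aware"}

def mentalistic_rate(text: str) -> float:
    """Fraction of tokens drawn from the mentalistic lexicon."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[t] for t in MENTAL_TERMS)
    return hits / len(tokens)

# Invented yearly samples of self-description text, for illustration.
samples = {
    2023: "I think the assistant is useful but I do not feel attached.",
    2025: "I feel it thinks like me and I believe it is aware of my mood.",
}
rates = {year: mentalistic_rate(text) for year, text in samples.items()}
```

A rising rate across years would be one concrete, falsifiable signature of the predicted shift in self-understanding; a flat rate would count against it.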
Load-bearing premise
That questions about perceived AI consciousness are both tractable and will deliver clearer societal benefits than continued direct work on whether machines can actually be conscious.
What would settle it
A controlled study showing that public attributions of subjective experience to AI produce no measurable changes in ethical judgments, language use, or interaction patterns would falsify the claim that perceived consciousness is highly consequential.
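The falsifying study described above could be analyzed with a simple permutation test on ethical-judgment ratings from two framing conditions (AI presented as possibly conscious vs. neutral framing). The ratings below are fabricated for illustration; the design, scale, and sample size are assumptions, not results.

```python
import random

# Hypothetical 1-7 ethical-concern ratings from two framing conditions.
conscious_framing = [5, 6, 5, 7, 6, 5, 6, 4, 6, 5]
neutral_framing   = [3, 4, 2, 4, 3, 5, 3, 4, 3, 2]

def perm_test(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(a) - sum(pb) / len(b)) >= observed:
            extreme += 1
    return extreme / n_iter

p = perm_test(conscious_framing, neutral_framing)
# A small p rejects the "no measurable changes" null; a large p across
# well-powered studies would be the falsifying outcome described above.
```

The falsification claim maps onto the null hypothesis here: perceived consciousness is inconsequential only if such tests repeatedly fail to reject it.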
read the original abstract
As language-based AI systems become more anthropomorphic, the question of whether they can have subjective experience is increasingly pressing. I focus here on the tractability of research questions in the space of AI consciousness. I argue that the fundamental problem of whether AI systems can be conscious is currently intractable in its direct form, given the absence of a universally accepted scientific theory of consciousness, as well as the historical open-endedness of the philosophical mind-body problem. In contrast, questions around the adjacent subject of perceived AI consciousness are tractable, timely, and highly consequential for society. The general public is increasingly open to the possibility of consciousness in AI systems and routinely adopts the vocabulary of human cognition and subjective experience to describe them. This phenomenon is already driving societal shifts across user experience, ethical standards, and linguistic norms. I therefore propose an increased research focus on uncovering the causes and effects of perceived AI consciousness, which ultimately shape how we see our own human subjective experience relative to artificial entities. To support this, I map the current landscape of AI consciousness perception and discuss its key potential drivers and societal consequences. Finally, I urge developers, decision-makers, and the broader scientific community to commit to clear and accurate communication regarding the topic of AI consciousness, explicitly acknowledging its inherent uncertainties.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that direct questions about whether AI systems can possess subjective consciousness are currently intractable, owing to the absence of a consensus scientific theory of consciousness and the longstanding open status of the mind-body problem. In contrast, it argues that questions concerning perceived AI consciousness—driven by increasing anthropomorphism in language-based systems—are tractable, timely, and societally consequential. The manuscript maps the landscape of public perceptions, identifies potential drivers and effects on user experience, ethics, and linguistic norms, and recommends that researchers, developers, and policymakers prioritize these questions while committing to clear communication that acknowledges uncertainties.
Significance. If the normative recommendation holds, the paper could usefully redirect interdisciplinary attention in computer science and society toward empirically addressable questions about human-AI interaction, potentially informing more grounded approaches to AI ethics, interface design, and public communication. As a position paper it contributes by explicitly framing its argument around acknowledged philosophical and scientific gaps rather than advancing untestable claims, and by highlighting observable societal shifts (anthropomorphic language use, shifting ethical intuitions) as motivation for further study.
major comments (1)
- [proposal and concluding sections] The central claim that perceived-consciousness questions are tractable and yield clearer societal benefits than direct consciousness questions (final paragraphs and proposal section) is asserted without outlining concrete research methods, existing empirical studies, or falsifiable predictions that would demonstrate tractability; this weakens the comparative argument for the proposed shift.
minor comments (2)
- [Abstract and introduction] The abstract and introduction would benefit from a brief, explicit definition or operationalization of 'perceived AI consciousness' to distinguish it clearly from direct consciousness attributions and to anchor the subsequent landscape mapping.
- [landscape mapping section] Societal observations (public openness to AI consciousness, adoption of cognitive vocabulary) are presented as motivation; adding one or two specific citations or examples from recent surveys or usage studies would strengthen the grounding without altering the position-paper character.
Simulated Author's Rebuttal
We thank the referee for their constructive review and recommendation of minor revision. We address the single major comment below by agreeing to strengthen the manuscript with concrete examples.
read point-by-point responses
Referee: [proposal and concluding sections] The central claim that perceived-consciousness questions are tractable and yield clearer societal benefits than direct consciousness questions (final paragraphs and proposal section) is asserted without outlining concrete research methods, existing empirical studies, or falsifiable predictions that would demonstrate tractability; this weakens the comparative argument for the proposed shift.
Authors: We agree that the proposal section would be strengthened by explicit illustrations of tractable methods and supporting evidence. The manuscript already maps public perceptions and discusses drivers and consequences at a high level, but we acknowledge that this leaves the tractability claim somewhat abstract. In the revised manuscript we will expand the proposal and concluding sections to include: (1) concrete empirical methods such as large-scale surveys measuring correlations between AI interaction frequency and perceived consciousness, controlled experiments testing effects of anthropomorphic language on user trust and ethical judgments, and analysis of linguistic corpora to detect norm shifts; (2) references to existing studies from HCI and psychology on anthropomorphism in conversational agents; and (3) example falsifiable predictions, such as measurable increases in perceived-consciousness scores following exposure to more human-like interfaces. These additions will make the comparative advantages over direct consciousness research more demonstrable while preserving the position-paper framing.
Revision: yes
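Method (1) in the response, correlating interaction frequency with perceived-consciousness scores, reduces to a standard Pearson correlation. A minimal sketch with invented survey data; the variable names, scales, and values are illustrative assumptions, not findings from any cited study.

```python
import math

# Hypothetical survey responses: weekly AI-interaction hours and a
# 0-100 perceived-consciousness score per participant.
hours  = [1, 2, 4, 5, 8, 10, 12, 15]
scores = [10, 15, 30, 28, 45, 50, 62, 70]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hours, scores)
```

A strong positive r would support the authors' predicted link between exposure and attribution, though a real survey would also need controls for prior beliefs and demographics.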
Circularity Check
No significant circularity
full rationale
The paper is a normative position piece that explicitly premises its recommendation on the acknowledged absence of a consensus scientific theory of consciousness and the open-ended status of the mind-body problem. It advances no equations, derivations, fitted parameters, models, or self-referential definitions that could reduce a claimed result to its own inputs by construction. The central argument is a call to redirect research priorities toward perceived-AI-consciousness questions on grounds of tractability and societal impact; these premises are stated outright rather than smuggled in via self-citation chains or ansatzes. No load-bearing step relies on prior work by the same author to forbid alternatives or to rename an empirical pattern as a novel unification. The manuscript is therefore self-contained against external benchmarks and receives the default non-circularity finding.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption: There is no universally accepted scientific theory of consciousness.
- domain assumption: The philosophical mind-body problem is historically open-ended.