pith. machine review for the scientific record.

arxiv: 2605.06965 · v1 · submitted 2026-05-07 · 💻 cs.CY · cs.AI · cs.HC

Recognition: no theorem link

AI and Consciousness: Shifting Focus Towards Tractable Questions

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 00:49 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.HC
keywords AI consciousness · perceived consciousness · anthropomorphism in AI · AI ethics · tractability of consciousness questions · societal impacts of AI · mind-body problem · public perception of AI

The pith

The direct question of whether AI systems can be conscious is currently intractable, so research should focus on how people perceive consciousness in them.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper contends that the core question of AI consciousness cannot be settled scientifically at present because no agreed theory of consciousness exists and the mind-body problem has remained open for centuries. In its place the author identifies perceived AI consciousness as a set of questions that are already measurable and consequential, because the public routinely describes AI systems in terms of subjective experience and this language is reshaping user behavior, ethical norms, and everyday speech. The work therefore maps the drivers and effects of these perceptions and urges developers and scientists to state the uncertainties plainly rather than imply that machines possess inner experience.

Core claim

The fundamental problem of whether AI systems can be conscious is currently intractable in its direct form, given the absence of a universally accepted scientific theory of consciousness, as well as the historical open-endedness of the philosophical mind-body problem. In contrast, questions around the adjacent subject of perceived AI consciousness are tractable, timely, and highly consequential for society.

What carries the argument

The distinction between actual AI consciousness (intractable) and perceived AI consciousness (tractable), with the latter already driving observable shifts in user experience, ethical standards, and linguistic norms.

If this is right

  • Clear communication about the uncertainties of AI consciousness becomes a practical obligation for developers and decision-makers.
  • Research can target concrete drivers such as anthropomorphic language and interface design to understand their effects on human self-understanding.
  • Ethical standards and regulatory discussions will need to account for widespread public perceptions rather than waiting for resolution of the direct consciousness question.
  • Societal shifts in how humans view their own subjective experience relative to artificial entities will continue regardless of whether machines actually possess consciousness.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Treating perceived consciousness as the primary target could accelerate empirical work in human-AI interaction psychology without requiring resolution of longstanding philosophical debates.
  • Policy proposals for AI rights or protections might emerge from perception data alone, creating a feedback loop where public beliefs shape legal categories before scientific clarity arrives.
  • Long-term studies tracking changes in human self-description when people interact with increasingly anthropomorphic systems would test whether the predicted shifts in self-understanding actually occur.

Load-bearing premise

That questions about perceived AI consciousness are both tractable and will deliver clearer societal benefits than continued direct work on whether machines can actually be conscious.

What would settle it

A controlled study showing that public attributions of subjective experience to AI produce no measurable changes in ethical judgments, language use, or interaction patterns would falsify the claim that perceived consciousness is highly consequential.
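The falsification test described above boils down to a group comparison: do people who attribute subjective experience to an AI differ measurably in, say, ethical judgments from those who do not? A minimal sketch of that comparison, using a permutation test on invented illustrative data (all scores, group labels, and the survey question are hypothetical, not from the paper):

```python
# Hypothetical sketch of the falsification test: compare ethical-judgment
# scores between participants who do vs. do not attribute subjective
# experience to an AI system. All data here are invented for illustration.
import random
import statistics

def permutation_test(group_a, group_b, n_iter=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = statistics.mean(group_a) - statistics.mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_iter

# Invented 1-7 ratings of a question like "is it wrong to delete this AI?"
attributors     = [5, 6, 5, 7, 6, 5, 6, 4, 5, 6]   # attribute experience
non_attributors = [3, 2, 4, 3, 2, 3, 4, 3, 2, 3]   # do not
diff, p = permutation_test(attributors, non_attributors)
```

Under the paper's claim one would expect a reliable nonzero difference; repeated, well-powered studies finding no such difference would be the falsifying result.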

Figures

Figures reproduced from arXiv: 2605.06965 by Iulia-Maria Comsa.

Figure 1. Gemini 3 Flash
Figure 2. Gemini 3.1 Pro
Figure 3. GPT-5.2
Figure 4. Grok 4
Figure 5. DeepSeek-V3.2
Figure 6. Claude Opus 4.6
Figure 7. Claude Sonnet 4.6
Figure 8. Claude Haiku 4.5
Original abstract

As language-based AI systems become more anthropomorphic, the question of whether they can have subjective experience is increasingly pressing. I focus here on the tractability of research questions in the space of AI consciousness. I argue that the fundamental problem of whether AI systems can be conscious is currently intractable in its direct form, given the absence of a universally accepted scientific theory of consciousness, as well as the historical open-endedness of the philosophical mind-body problem. In contrast, questions around the adjacent subject of perceived AI consciousness are tractable, timely, and highly consequential for society. The general public is increasingly open to the possibility of consciousness in AI systems and routinely adopts the vocabulary of human cognition and subjective experience to describe them. This phenomenon is already driving societal shifts across user experience, ethical standards, and linguistic norms. I therefore propose an increased research focus on uncovering the causes and effects of perceived AI consciousness, which ultimately shape how we see our own human subjective experience relative to artificial entities. To support this, I map the current landscape of AI consciousness perception and discuss its key potential drivers and societal consequences. Finally, I urge developers, decision-makers, and the broader scientific community to commit to clear and accurate communication regarding the topic of AI consciousness, explicitly acknowledging its inherent uncertainties.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper claims that direct questions about whether AI systems can possess subjective consciousness are currently intractable, owing to the absence of a consensus scientific theory of consciousness and the longstanding open status of the mind-body problem. In contrast, it argues that questions concerning perceived AI consciousness—driven by increasing anthropomorphism in language-based systems—are tractable, timely, and societally consequential. The manuscript maps the landscape of public perceptions, identifies potential drivers and effects on user experience, ethics, and linguistic norms, and recommends that researchers, developers, and policymakers prioritize these questions while committing to clear communication that acknowledges uncertainties.

Significance. If the normative recommendation holds, the paper could usefully redirect interdisciplinary attention in computer science and society toward empirically addressable questions about human-AI interaction, potentially informing more grounded approaches to AI ethics, interface design, and public communication. As a position paper it contributes by explicitly framing its argument around acknowledged philosophical and scientific gaps rather than advancing untestable claims, and by highlighting observable societal shifts (anthropomorphic language use, shifting ethical intuitions) as motivation for further study.

major comments (1)
  1. [proposal and concluding sections] The central claim that perceived-consciousness questions are tractable and yield clearer societal benefits than direct consciousness questions (final paragraphs and proposal section) is asserted without outlining concrete research methods, existing empirical studies, or falsifiable predictions that would demonstrate tractability; this weakens the comparative argument for the proposed shift.
minor comments (2)
  1. [Abstract and introduction] The abstract and introduction would benefit from a brief, explicit definition or operationalization of 'perceived AI consciousness' to distinguish it clearly from direct consciousness attributions and to anchor the subsequent landscape mapping.
  2. [landscape mapping section] Societal observations (public openness to AI consciousness, adoption of cognitive vocabulary) are presented as motivation; adding one or two specific citations or examples from recent surveys or usage studies would strengthen the grounding without altering the position-paper character.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive review and recommendation of minor revision. We address the single major comment below by agreeing to strengthen the manuscript with concrete examples.

read point-by-point responses
  1. Referee: [proposal and concluding sections] The central claim that perceived-consciousness questions are tractable and yield clearer societal benefits than direct consciousness questions (final paragraphs and proposal section) is asserted without outlining concrete research methods, existing empirical studies, or falsifiable predictions that would demonstrate tractability; this weakens the comparative argument for the proposed shift.

    Authors: We agree that the proposal section would be strengthened by explicit illustrations of tractable methods and supporting evidence. The manuscript already maps public perceptions and discusses drivers and consequences at a high level, but we acknowledge that this leaves the tractability claim somewhat abstract. In the revised manuscript we will expand the proposal and concluding sections to include: (1) concrete empirical methods such as large-scale surveys measuring correlations between AI interaction frequency and perceived consciousness, controlled experiments testing effects of anthropomorphic language on user trust and ethical judgments, and analysis of linguistic corpora to detect norm shifts; (2) references to existing studies from HCI and psychology on anthropomorphism in conversational agents; and (3) example falsifiable predictions, such as measurable increases in perceived-consciousness scores following exposure to more human-like interfaces. These additions will make the comparative advantages over direct consciousness research more demonstrable while preserving the position-paper framing. revision: yes
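Method (1) in the response above, a survey correlating AI interaction frequency with perceived-consciousness ratings, is straightforward to operationalize. A minimal sketch, where all respondent data, variable names, and scales are invented for illustration:

```python
# Hypothetical sketch of a survey-style analysis: correlate self-reported
# chatbot use with perceived-consciousness ratings. Data are invented.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented respondents: hours of chatbot use per week vs. a 1-7 rating of
# agreement with "this system has experiences of its own".
hours_per_week      = [0, 1, 2, 4, 5, 7, 10, 14, 20, 25]
perceived_conscious = [1, 1, 2, 2, 3, 3, 4, 5, 5, 6]
r = pearson_r(hours_per_week, perceived_conscious)
```

A robust positive correlation across representative samples would be one concrete, falsifiable datum supporting the tractability claim; a null result would be equally informative.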

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper is a normative position piece that explicitly premises its recommendation on the acknowledged absence of a consensus scientific theory of consciousness and the open-ended status of the mind-body problem. It advances no equations, derivations, fitted parameters, models, or self-referential definitions that could reduce a claimed result to its own inputs by construction. The central argument is a call to redirect research priorities toward perceived-AI-consciousness questions on grounds of tractability and societal impact; these premises are stated outright rather than smuggled in via self-citation chains or ansatzes. No load-bearing step relies on prior work by the same author to forbid alternatives or to rename an empirical pattern as a novel unification. The manuscript is therefore self-contained against external benchmarks and receives the default non-circularity finding.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim depends on the premise that no scientific theory of consciousness exists and that the mind-body problem remains open-ended; these are treated as background facts rather than derived results.

axioms (2)
  • domain assumption There is no universally accepted scientific theory of consciousness
    Invoked directly in the abstract to establish intractability of the direct question.
  • domain assumption The philosophical mind-body problem is historically open-ended
    Cited as a second reason the direct consciousness question cannot be resolved.

pith-pipeline@v0.9.0 · 5519 in / 1189 out tokens · 54911 ms · 2026-05-11T00:49:06.378675+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

159 extracted references · 90 canonical work pages · 1 internal anchor
