Beyond Tools and Persons: Who Are They? Classifying Robots and AI Agents for Proportional Governance
Pith reviewed 2026-05-10 19:12 UTC · model grok-4.3
The pith
Autonomous systems need a three-tier classification by CPST integration depth to replace the tool-person binary with proportional governance rules.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper proposes a classification framework grounded in Cyber-Physical-Social-Thinking (CPST) space theory, which categorizes autonomous entities by their degree of integration across four interconnected dimensions: computational, embodied, relational, and cognitive. The resulting three-tier taxonomy -- Confined Actors, Socially-Aware Interactors, and CPST-Integrated Agents -- provides principled scaffolding for proportional governance: enhanced product liability for isolated systems, relational duties of care for interactive companions, and qualified legal personhood for deeply integrated agents.
What carries the argument
The CPST space theory that evaluates integration across computational, embodied, relational, and cognitive dimensions to generate the three-tier taxonomy and guide matching governance rules.
If this is right
- Isolated systems face enhanced product liability standards.
- Interactive companions incur relational duties of care toward users and others.
- Deeply integrated agents can receive qualified legal personhood with corresponding rights and responsibilities.
- Regulators can apply a composite assessment protocol built from existing metrics in robotics, human-robot interaction, and cognitive science.
- Entities may transition between tiers as their capabilities change, requiring institutional mechanisms to reclassify them.
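The composite assessment protocol described above can be sketched in code. This is a minimal illustration only: the paper identifies candidate metrics but does not publish weights, scoring rules, or tier cutoffs, so every numeric value below (the equal weights, the 0.35/0.7 thresholds, the example scores) is a hypothetical placeholder, not the paper's proposal.

```python
from dataclasses import dataclass

# Illustrative sketch of a composite CPST assessment. All weights and
# thresholds are hypothetical placeholders, not values from the paper.

TIERS = ["Confined Actor", "Socially-Aware Interactor", "CPST-Integrated Agent"]

@dataclass
class CPSTProfile:
    computational: float  # 0.0-1.0, e.g. normalized task-autonomy benchmarks
    embodied: float       # 0.0-1.0, e.g. physical-interaction capability
    relational: float     # 0.0-1.0, e.g. HRI social-awareness scales
    cognitive: float      # 0.0-1.0, e.g. reasoning/generalization benchmarks

def composite_score(p: CPSTProfile, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Weighted mean of the four CPST integration scores."""
    dims = (p.computational, p.embodied, p.relational, p.cognitive)
    return sum(w * d for w, d in zip(weights, dims))

def classify(p: CPSTProfile, low=0.35, high=0.7) -> str:
    """Map a composite score to one of the three tiers (illustrative cutoffs)."""
    s = composite_score(p)
    if s < low:
        return TIERS[0]
    if s < high:
        return TIERS[1]
    return TIERS[2]

# Example: a warehouse robot with little social or cognitive integration
# scores (0.6 + 0.5 + 0.1 + 0.1) / 4 = 0.325 and lands in the lowest tier.
robot = CPSTProfile(computational=0.6, embodied=0.5, relational=0.1, cognitive=0.1)
print(classify(robot))  # -> Confined Actor
```

The referee's first major comment is visible even in this toy version: shifting the `low` cutoff by a few hundredths reclassifies the borderline example, which is why explicit, justified decision boundaries matter.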
Where Pith is reading between the lines
- Companies could design future systems to reach or avoid particular tiers by controlling integration levels in the four dimensions.
- Borderline cases may require new dispute-resolution processes when metrics produce close or contested results.
- The approach could extend to hybrid systems that combine humans with AI components by treating the combined entity as a single assessed unit.
- Adoption might reduce regulatory fragmentation across jurisdictions by providing a shared set of classification criteria.
Load-bearing premise
The four CPST dimensions can be measured reliably enough to produce consistent category assignments that stay stable as systems evolve.
What would settle it
Two independent regulatory teams applying the proposed metrics to the same humanoid robot assign it to different tiers, or a minor software update moves an entity across tiers without clear justification.
read the original abstract
The rapid commercialization of humanoid robots and generative AI agents is outpacing legal frameworks built on a binary distinction between "tools" and "persons." Current regulations, including the EU AI Act, classify systems by risk level but lack a foundational ontology for determining what kind of entity an autonomous system is -- and what governance follows from that determination. We propose a classification framework grounded in Cyber-Physical-Social-Thinking (CPST) space theory, which categorizes autonomous entities by their degree of integration across four interconnected dimensions: computational, embodied, relational, and cognitive. The resulting three-tier taxonomy -- Confined Actors, Socially-Aware Interactors, and CPST-Integrated Agents -- provides principled scaffolding for proportional governance: enhanced product liability for isolated systems, relational duties of care for interactive companions, and qualified legal personhood for deeply integrated agents. We operationalize this taxonomy by identifying standardized assessment metrics drawn from robotics, human-robot interaction research, social computing, and cognitive science, and we propose a composite assessment protocol for regulatory use. We further address temporal dynamics -- how entities transition between categories as they evolve -- and the institutional design necessary for credible classification. We call for international standardization of this taxonomy before the 2027 review of the EU AI Act, and outline three concrete policy steps toward implementation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes a classification framework for autonomous robots and AI agents grounded in Cyber-Physical-Social-Thinking (CPST) space theory. It defines a three-tier taxonomy—Confined Actors, Socially-Aware Interactors, and CPST-Integrated Agents—based on degrees of integration across computational, embodied, relational, and cognitive dimensions. The framework maps these categories to proportional governance regimes (enhanced product liability, relational duties of care, and qualified legal personhood) and operationalizes the taxonomy via metrics from robotics/HRI/cognitive science, a composite assessment protocol, handling of temporal transitions, and institutional design recommendations, while calling for international standardization ahead of the 2027 EU AI Act review.
Significance. If the taxonomy can be made operational with reliable, stable classifications, the work would address a genuine gap in legal ontologies for humanoid robots and generative AI by offering a structured alternative to binary tool/person distinctions. The grounding in CPST theory, the explicit mapping to governance outcomes, and the attention to assessment protocols and temporal dynamics represent constructive contributions. The proposal's policy relevance is timely, though its significance is tempered by the absence of validation data.
major comments (3)
- [Operationalization and assessment protocol] The central claim that the taxonomy supplies 'principled scaffolding for proportional governance' rests on the assumption that the four CPST dimensions can be assessed to yield consistent three-tier assignments. However, the operationalization section identifies metrics and proposes a composite protocol without defining thresholds, scoring rules, or decision boundaries for category membership, leaving the taxonomy's boundaries undefined and open to subjective application.
- [Temporal dynamics] The discussion of temporal dynamics addresses how entities may transition between categories as they evolve, but provides no mechanisms, safeguards, or re-assessment procedures to prevent ambiguous, contested, or oscillating classifications (e.g., an interactor acquiring cognitive integration). This directly undermines the stability required for the proposed governance regimes to function reliably over time.
- [Assessment protocol and institutional design] No empirical validation, inter-rater reliability data, or worked examples on real systems are presented to demonstrate that the chosen dimensions and protocol produce non-arbitrary, reproducible classifications. Without such grounding, the mapping from CPST integration levels to distinct legal duties remains a definitional proposal rather than a tested framework.
minor comments (1)
- [Abstract] The abstract would be strengthened by including one brief, concrete example of how a specific system (e.g., a current humanoid robot or generative agent) would be scored and classified under the proposed protocol.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback, which identifies key areas where the manuscript can be strengthened to enhance the practicality of the proposed taxonomy. We address each major comment in turn, committing to revisions where appropriate to improve clarity and applicability.
read point-by-point responses
-
Referee: The central claim that the taxonomy supplies 'principled scaffolding for proportional governance' rests on the assumption that the four CPST dimensions can be assessed to yield consistent three-tier assignments. However, the operationalization section identifies metrics and proposes a composite protocol without defining thresholds, scoring rules, or decision boundaries for category membership, leaving the taxonomy's boundaries undefined and open to subjective application.
Authors: We concur that the absence of explicit thresholds and scoring rules in the operationalization section leaves room for subjective interpretation. The manuscript aims to provide a high-level framework grounded in CPST theory and existing metrics from the literature. In the revised manuscript, we will incorporate specific illustrative thresholds and decision boundaries, for example by adapting scales from HRI studies on social awareness and cognitive benchmarks from AI research. This will make the classification more operational while inviting community input for refinement. revision: yes
-
Referee: The discussion of temporal dynamics addresses how entities may transition between categories as they evolve, but provides no mechanisms, safeguards, or re-assessment procedures to prevent ambiguous, contested, or oscillating classifications (e.g., an interactor acquiring cognitive integration). This directly undermines the stability required for the proposed governance regimes to function reliably over time.
Authors: This is a fair critique. The original discussion of temporal dynamics is primarily descriptive. To address the need for stability, we will revise this section to include proposed re-assessment procedures, such as annual reviews by certified assessors, criteria for category transitions based on capability milestones, and dispute resolution mechanisms involving expert panels. These additions will draw from regulatory practices in adjacent fields to ensure the governance regimes remain reliable. revision: yes
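The stability safeguard the authors commit to can be made concrete with a hysteresis rule: a tier change is confirmed only when consecutive reviews agree and clear the boundary by a margin. This is one possible mechanism, sketched under assumed cutoffs and margin; none of these values appear in the manuscript.

```python
# Hypothetical re-assessment rule with hysteresis, to keep borderline
# systems from oscillating between tiers. Cutoffs, margin, and the
# two-consecutive-reviews requirement are illustrative assumptions.

def tier_of(score: float, cutoffs=(0.35, 0.7)) -> int:
    """Raw tier index (0-2) implied by a single composite score."""
    if score < cutoffs[0]:
        return 0
    if score < cutoffs[1]:
        return 1
    return 2

def reassess(current_tier: int, recent_scores: list,
             cutoffs=(0.35, 0.7), margin=0.05) -> int:
    """Confirm a tier change only if the last two periodic reviews agree
    on the new tier AND each score clears every cutoff by `margin`."""
    if len(recent_scores) < 2:
        return current_tier
    last_two = recent_scores[-2:]
    proposed = {tier_of(s, cutoffs) for s in last_two}
    if len(proposed) != 1:
        return current_tier  # reviews disagree: classification stands
    new_tier = proposed.pop()
    if new_tier == current_tier:
        return current_tier
    # each score must sit at least `margin` away from the nearest boundary
    if all(min(abs(s - c) for c in cutoffs) >= margin for s in last_two):
        return new_tier
    return current_tier

# An interactor hovering just above the upper cutoff stays put...
print(reassess(1, [0.72, 0.71]))  # -> 1
# ...but a clear, sustained capability jump is promoted.
print(reassess(1, [0.78, 0.80]))  # -> 2
```

Cases that repeatedly fail the margin test are precisely the contested borderline classifications the rebuttal proposes to route to expert panels.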
-
Referee: No empirical validation, inter-rater reliability data, or worked examples on real systems are presented to demonstrate that the chosen dimensions and protocol produce non-arbitrary, reproducible classifications. Without such grounding, the mapping from CPST integration levels to distinct legal duties remains a definitional proposal rather than a tested framework.
Authors: We recognize that the manuscript lacks empirical validation and worked examples, consistent with its nature as a conceptual proposal for a new classification ontology. We will add a new subsection with worked examples applying the protocol to representative systems, such as a current-generation humanoid robot and a generative AI agent, using available capability data. While inter-rater reliability testing and large-scale validation are not feasible within this paper, we will expand the institutional design recommendations to explicitly call for such studies as part of the standardization process ahead of the EU AI Act review. revision: partial
Circularity Check
No circularity: taxonomy is an application of externally cited CPST theory without self-referential reduction or fitted inputs.
full rationale
The paper proposes a three-tier taxonomy grounded in CPST space theory by categorizing entities along four dimensions and mapping them to governance regimes. No equations, parameters, or predictions are present that reduce the output back to the inputs by construction. The framework draws on metrics from external fields (robotics, HRI, cognitive science) and cites CPST as prior theory rather than deriving it internally or via self-citation chains that bear the central load. The derivation chain remains independent of the taxonomy itself, satisfying the criteria for a non-circular conceptual contribution.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Cyber-Physical-Social-Thinking (CPST) space theory provides a valid and sufficient four-dimensional model for determining the legal status of autonomous entities.
invented entities (1)
- Confined Actors, Socially-Aware Interactors, and CPST-Integrated Agents (no independent evidence)
Reference graph
Works this paper leans on
- [1] International Federation of Robotics, World Robotics 2025: Service Robots, IFR, 2025.
- [2] C. Qu, S. Dai, X. Wei, H. Cai, S. Wang, D. Yin, J. Xu, and J.-R. Wen, “Tool learning with large language models: A survey,” arXiv:2405.17935, 2024.
- [3] European Parliament, Regulation (EU) 2024/1689 on harmonised rules on artificial intelligence (AI Act), 2024.
- [4] European Parliament, Regulation (EU) 2023/1230 on machinery products (Machinery Regulation), 2023.
- [5] European Parliament, Directive (EU) 2024/2853 on liability for defective products (Revised Product Liability Directive), 2024.
- [6] National Conference of State Legislatures, “Artificial Intelligence 2025 Legislation,” NCSL, 2025. Available: https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation
- [7] J. J. Bryson, M. E. Diamantis, and T. D. Grant, “Of, for, and by the people: the legal lacuna of synthetic persons,” Artif. Intell. Law, vol. 25, pp. 273–291, 2017.
- [8] D. J. Gunkel, “The other question: can and should robots have rights?,” Ethics Inf. Technol., vol. 20, pp. 87–99, 2018.
- [9] H. Ning et al., “Cyberism: The Fourth Paradigm for the Digital Age,” Computer, vol. 59, no. 4, pp. 130–134, 2026.
- [10] H. Ning, Y. Lin, W. Wang, H. Wang, F. Shi, X. Zhang, and M. Daneshmand, “Cyberology: Cyber-Physical-Social-Thinking spaces based discipline and inter-discipline hierarchy for metaverse (general cyberspace),” IEEE Internet Things J., vol. 10, no. 5, pp. 4420–4430, 2023.
- [11] F. Chollet et al., “ARC Prize 2024: Technical report,” arXiv:2412.04604, 2024.
- [12] C. Torras, “Ethics of Social Robotics: Individual and Societal Concerns and Opportunities,” Annu. Rev. Control Robot. Auton. Syst., vol. 7, pp. 1–18, 2024.
- [13] A. Henschel, R. Hortensius, and E. S. Cross, “Social robots on a global stage: Establishing a role for culture during human–robot interaction,” Int. J. Soc. Robot., vol. 13, pp. 1625–1654, 2021.
- [14] A. Sharkey and N. Sharkey, “We need to talk about deception in social robotics!,” Ethics Inf. Technol., vol. 23, no. 3, pp. 309–316, 2021.
- [15] C. Novelli, L. Floridi, G. Sartor, and G. Teubner, “AI as legal persons: past, patterns, and prospects,” J. Law Soc., vol. 52, no. 4, pp. 533–555, 2025, doi: 10.1111/jols.70021.
- [16] H. J. Alexander, J. Simon et al., “How Should the Law Treat Future AI Systems? Fictional Legal Personhood versus Legal Identity,” arXiv:2511.14964, 2025.
- [17] SAE International, Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (J3016), 2021.
- [18] ISO, ISO 8373:2021 Robotics — Vocabulary; ISO/TR 23482 series on Safety for Personal Care Robots, 2021–2023.
- [19] C. Bartneck et al., “Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots,” Int. J. Soc. Robot., vol. 1, pp. 71–81, 2009.
- [20] METR, “Measuring AI ability to complete long tasks,” arXiv:2503.14499, 2025.
- [21] G. Malgieri et al., Law-Following AI: Designing AI Agents to Obey Human Laws, Institute for Law & AI, 2025.
- [22] S. Kalantry, “Legal Personhood of Potential People: AI and Embryos,” Calif. Law Rev. Online, 2025.
- [23] A. F. Ashery, L. M. Aiello, and A. Baronchelli, “Emergent social conventions and collective bias in LLM populations,” Sci. Adv., vol. 11, no. 20, p. eadu9368, 2025.
- [24] Ministry of Trade, Industry and Energy, Republic of Korea, Intelligent Robot Development and Distribution Promotion Act, revised 2023.
- [25] K. Werbach, “The Centripetal Network: How the Internet Holds Itself Together, and the Forces Tearing It Apart,” U. Pa. L. Rev., vol. 172, pp. 1233–1320, 2024.
- [26] L. Floridi and M. Taddeo, “Romans would have denied robots legal personhood,” Nature, vol. 557, p. 309, 2018.
- [27] I. Rahwan et al., “Machine behaviour,” Nature, vol. 568, pp. 477–486, 2019.
- [28] K. Dautenhahn, “Socially intelligent robots: dimensions of human–robot interaction,” Phil. Trans. R. Soc. B, vol. 362, pp. 679–704, 2007.
- [29] M. M. A. de Graaf, S. Ben Allouch, and T. Klamer, “Sharing a life with Harvey: Exploring the acceptance of and relationship building with a social robot,” Comput. Hum. Behav., vol. 43, pp. 1–14, 2015.
- [30] W. Wallach and C. Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009.
- [31] European Medicines Agency, “Guideline on good pharmacovigilance practices (GVP) — Module VIII: Post-authorisation safety studies,” EMA/813938/2011 Rev. 3, 2017.