pith. machine review for the scientific record.

arxiv: 2604.17960 · v1 · submitted 2026-04-20 · 🧬 q-bio.NC · cs.LG

Recognition: unknown

The Umwelt Representation Hypothesis: Rethinking Universality

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 03:46 UTC · model grok-4.3

classification: 🧬 q-bio.NC · cs.LG
keywords: Umwelt Representation Hypothesis · representational alignment · ecological constraints · artificial neural networks · biological brains · universality claims · cognitive neuroscience

The pith

Representational alignment between artificial neural networks and biological brains arises from overlapping ecological constraints rather than convergence on universal representations.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that recent observations of similar internal representations in brains and AI models do not indicate that all capable systems settle on the same optimal view of reality. Instead, the authors introduce the Umwelt Representation Hypothesis, which attributes such alignments to shared pressures from the environments in which the systems develop. Evidence from different species, individual humans, and varied neural networks shows systematic differences in representations that match each system's specific needs. This perspective makes claims of universality appear premature. It also suggests treating comparisons between models as a way to identify groups of systems that share similar constraints rather than searching for one best representation.

Core claim

The Umwelt Representation Hypothesis proposes that alignment in representations between systems such as biological brains and artificial neural networks results from overlap in the ecological constraints under which those systems develop, rather than from convergence toward a single global optimum or universal model of reality. Empirical observations indicate that representational differences across species, individuals, and different ANNs are systematic and reflect adaptations to particular environmental demands, which is difficult to reconcile with the idea of universality.

What carries the argument

The Umwelt Representation Hypothesis (URH), which accounts for observed representational alignments by reference to shared ecological constraints during development rather than convergence on universal representations.

If this is right

  • Comparisons between ANNs and brains should be reframed as mapping clusters of alignment within ecological constraint space.
  • Systematic differences in representations are expected and may be adaptive when systems face distinct constraints.
  • Model evaluation in AI can shift from seeking a single optimal world model toward identifying matches in constraint profiles.
  • Future experiments could test whether altering developmental constraints predictably shifts representations away from or toward those of other systems.
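
The last bullet has a concrete computational core. As a hedged sketch only (the synthetic "systems" and all parameters below are hypothetical; representational similarity analysis, or RSA, is one standard alignment measure in the literature the paper reviews, not necessarily the authors' choice), comparing two systems' representations over a shared stimulus set might look like:

```python
import numpy as np

def rdm(activations):
    # Representational dissimilarity matrix over a shared stimulus set:
    # 1 - Pearson correlation between response patterns (stimuli x units),
    # with the upper triangle flattened into a vector.
    c = np.corrcoef(activations)
    return 1.0 - c[np.triu_indices_from(c, k=1)]

def rsa_alignment(acts_a, acts_b):
    # Second-order similarity: correlation between the two systems' RDMs.
    return np.corrcoef(rdm(acts_a), rdm(acts_b))[0, 1]

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(50, 20))            # 50 shared stimuli
# Two hypothetical systems: random linear encoders of the same input
# statistics, standing in for partially overlapping constraints.
sys_a = stimuli @ rng.normal(size=(20, 100))   # 100-unit system
sys_b = stimuli @ rng.normal(size=(20, 80))    # 80-unit system
print(f"RSA alignment: {rsa_alignment(sys_a, sys_b):.2f}")
```

Under URH, the prediction would be that this score tracks overlap in the two systems' developmental constraints, not capability per se.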

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This view could inform the design of training environments for AI models to produce representations suited to particular real-world niches.
  • It raises questions about whether interventions that change an individual's or model's ecological exposure would reliably alter their representations.
  • The hypothesis may connect to broader questions about how perceptual differences across organisms arise from their distinct sensory and behavioral demands.

Load-bearing premise

The reviewed empirical evidence must demonstrate that representational differences between species, individuals, and ANNs are systematic and adaptive in ways that cannot be explained by universality claims.

What would settle it

A finding that two systems with non-overlapping ecological constraints and developmental histories nevertheless produce highly aligned representations across multiple tasks would challenge the central claim.

Original abstract

Recent studies reveal striking representational alignment between artificial neural networks (ANNs) and biological brains, leading to proposals that all sufficiently capable systems converge on universal representations of reality. Here, we argue that this claim of Universality is premature. We introduce the Umwelt Representation Hypothesis (URH), proposing that alignment arises not from convergence toward a single global optimum, but from overlap in ecological constraints under which systems develop. We review empirical evidence showing that representational differences between species, individuals, and ANNs are systematic and adaptive, which is difficult to reconcile with Universality. Finally, we reframe ANN model comparison as a method for mapping clusters of alignment in ecological constraint space rather than searching for a single optimal world model.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes the Umwelt Representation Hypothesis (URH) as an alternative interpretation of observed representational alignment between artificial neural networks (ANNs) and biological brains. Rather than attributing alignment to convergence on universal representations of reality, URH posits that it arises from overlap in the ecological constraints under which systems develop. The paper reviews empirical evidence of systematic and adaptive representational differences across species, individuals, and ANNs, arguing these are difficult to reconcile with strong universality claims, and reframes ANN model comparison as a means to map clusters of alignment within ecological constraint space.

Significance. If the central interpretive claim holds, the URH provides a useful conceptual framework for reconciling alignment findings with documented variation in representations. It shifts focus from searching for a single optimal world model toward understanding how shared ecological niches produce partial overlaps, which could inform more targeted experiments in systems neuroscience and AI interpretability. The hypothesis is presented as compatible with existing alignment data rather than falsifying it outright, offering a non-circular alternative lens that may guide future work on constraint mapping.

major comments (2)
  1. [Review of empirical evidence] The central claim that representational differences 'are systematic and adaptive, which is difficult to reconcile with Universality' rests on a qualitative literature synthesis. Specific examples (e.g., species differences in sensory tuning or individual variability in ANN feature selectivity) are invoked but not accompanied by quantitative measures of alignment strength versus divergence, making it difficult to evaluate whether the evidence truly challenges universality or remains compatible with weaker forms of it.
  2. [Implications for model comparison] The proposal to treat ANN comparisons as 'mapping clusters of alignment in ecological constraint space' is load-bearing for the practical implications of URH, yet no operational definition, metric, or illustrative case study is supplied for how constraint space would be constructed or how cluster mapping would differ methodologically from existing similarity analyses.
minor comments (2)
  1. [Abstract and introduction] The term 'Umwelt' is introduced without a brief definition or reference to its original biological usage, which may reduce accessibility for readers outside ethology or philosophy of biology.
  2. [Throughout] Distinctions between different possible meanings of 'universality' (e.g., low-level feature convergence versus high-level semantic equivalence) are mentioned but could be stated more explicitly when contrasting URH with prior proposals.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and insightful comments, which have helped us clarify the scope and implications of the Umwelt Representation Hypothesis. We address each major comment below and have revised the manuscript accordingly to incorporate quantitative context and operational details where feasible.

Point-by-point responses
  1. Referee: Review of empirical evidence: The central claim that representational differences 'are systematic and adaptive, which is difficult to reconcile with Universality' rests on a qualitative literature synthesis. Specific examples (e.g., species differences in sensory tuning or individual variability in ANN feature selectivity) are invoked but not accompanied by quantitative measures of alignment strength versus divergence, making it difficult to evaluate whether the evidence truly challenges universality or remains compatible with weaker forms of it.

    Authors: We agree that quantitative anchoring strengthens the presentation. The manuscript is a conceptual hypothesis paper rather than a meta-analysis, but the cited studies do report specific metrics (e.g., RSA correlations, CCA scores, and decoding accuracies). In revision we will expand the relevant paragraphs to explicitly quote representative quantitative values from the primary sources, thereby clarifying the magnitude of observed divergences relative to within-condition alignment and their tension with strong universality claims. revision: yes

  2. Referee: Reframing of model comparison: The proposal to treat ANN comparisons as 'mapping clusters of alignment in ecological constraint space' is load-bearing for the practical implications of URH, yet no operational definition, metric, or illustrative case study is supplied for how constraint space would be constructed or how cluster mapping would differ methodologically from existing similarity analyses.

    Authors: We accept that greater operational specificity is needed. We will add a new subsection that (i) defines ecological constraint space along three explicit axes (sensory statistics, task demands, and developmental priors), (ii) proposes a concrete metric (normalized alignment distance in a multi-dimensional embedding of RSA/CCA matrices), and (iii) provides a worked illustrative example using publicly available vision-model and primate IT datasets to show how clusters emerge and how the procedure differs from standard pairwise similarity searches. revision: yes
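
The promised subsection is not in the reviewed manuscript, so the following is an illustrative sketch only: the phrase "normalized alignment distance" comes from the rebuttal, but the operationalization, the synthetic systems, and the two "niches" below are all assumptions, not the authors' method. One way cluster mapping over RDMs could differ from a pairwise similarity search is by comparing within-niche to between-niche distances:

```python
import numpy as np

def rdm(acts):
    # Condensed RDM: 1 - correlation between stimulus response patterns.
    c = np.corrcoef(acts)
    return 1.0 - c[np.triu_indices_from(c, k=1)]

rng = np.random.default_rng(1)
raw = rng.normal(size=(40, 12))                 # shared stimulus set

# Two hypothetical constraint profiles: systems in niche B receive a
# nonlinearly transformed view of the same stimuli (different sensory
# statistics), so their representational geometry differs systematically.
niche_a = [raw @ rng.normal(size=(12, 60)) for _ in range(3)]
niche_b = [np.abs(raw) @ rng.normal(size=(12, 60)) for _ in range(3)]

rdms = np.array([rdm(s) for s in niche_a + niche_b])
align = np.corrcoef(rdms)                       # pairwise RDM alignment
dist = 1.0 - align                              # "normalized alignment distance"

# Each 3x3 block has zero diagonal, so dividing the block sums by the
# 12 off-diagonal ordered pairs gives the mean within-niche distance.
within = (dist[:3, :3].sum() + dist[3:, 3:].sum()) / 12.0
between = dist[:3, 3:].mean()
print(f"within-niche: {within:.2f}, between-niche: {between:.2f}")
```

If within-niche distances are reliably smaller than between-niche distances, the systems cluster by constraint profile rather than by any single shared optimum, which is the pattern URH predicts.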

Circularity Check

0 steps flagged

No significant circularity detected

Full rationale

The paper is a conceptual hypothesis proposal that introduces the Umwelt Representation Hypothesis as an alternative interpretive lens on the reviewed empirical literature regarding representational alignment. It contains no equations, fitted parameters, or formal derivations that could reduce to self-definition or construction. The central claim rests on external evidence summaries and a logical reframing of alignment studies as compatible with multiple interpretations, without load-bearing self-citations or ansatzes that loop back to the paper's own inputs. The argument is grounded in external benchmarks and does not reduce its predictions or uniqueness claims to the authors' own prior work.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The claim depends on interpreting prior empirical studies as showing adaptive, systematic differences inconsistent with universality, plus the introduction of URH as an explanatory lens without new derivations or independent tests.

axioms (1)
  • domain assumption: Representational differences across systems are systematic and adaptive rather than random or suboptimal
    Invoked to argue that such differences contradict universality.
invented entities (1)
  • Umwelt Representation Hypothesis (URH): no independent evidence
    purpose: Alternative explanation for representational alignment based on ecological constraint overlap
    New conceptual framework proposed in the paper; no independent empirical validation or falsifiable prediction detailed in the abstract.

pith-pipeline@v0.9.0 · 5419 in / 1302 out tokens · 52801 ms · 2026-05-10T03:46:38.837855+00:00

