Recognition: 2 theorem links
· Lean TheoremQuantifying Spacetime Integration across a Partition with Synergy
Pith reviewed 2026-05-12 02:19 UTC · model grok-4.3
The pith
Synergy-based measures from partial information decomposition better quantify spacetime integration for IIT than current practice.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that four measures of integration derived from partial information decomposition, particularly those centered on synergy, provide a more suitable quantification of spacetime integration across partitions than the measures currently employed in IIT, as shown by explicit comparisons in simple deterministic networks. This holds because synergy captures the non-redundant, integrated contributions that IIT seeks to quantify, while current practice does not isolate these as effectively in the tested cases.
What carries the argument
Partial information decomposition framework, which decomposes joint information into unique, redundant, and synergistic atoms to isolate synergy as the basis for integration measures.
If this is right
- Synergy-based measures are more suitable for IIT's use-case than current practice.
- These measures serve as useful complexity metrics for discrete dynamical systems beyond IIT.
- Integration is quantified by isolating synergistic information contributions across space and time.
- The four new measures provide concrete alternatives that can be computed directly from system states.
Where Pith is reading between the lines
- If the preference for synergy holds more generally, these measures could be used to compare integration levels across different classes of dynamical systems in neuroscience.
- The framework might allow direct tests of whether adding synergistic structure increases a system's integration score in a predictable way.
- Outside consciousness research, the measures offer a way to track how integration changes under system perturbations without relying on IIT-specific assumptions.
Load-bearing premise
That the advantage of synergy-based measures seen in simple deterministic networks will extend to the broader, more varied networks and use cases relevant to IIT.
What would settle it
Applying the synergy-based measures and current IIT measures side-by-side to a larger stochastic or biological network and finding that current measures better match IIT's intended integration values.
Figures
read the original abstract
In service to the mathematical underpinnings of the Information Integration Theory of Consciousness (IIT), we introduce four measures of integration based on the partial information decomposition framework. We compare our measures to current IIT practice in simple deterministic networks. We find synergy-based measures more suitable for IIT's use-case than current practice. Outside IIT, these measures could also be useful as measures of complexity for discrete dynamical systems.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces four measures of integration derived from the partial information decomposition (PID) framework, motivated by the needs of the Information Integration Theory of Consciousness (IIT). It performs comparisons against existing IIT integration measures exclusively in small, fully deterministic networks and concludes that the synergy-based PID measures are more suitable for IIT's use-case; it also suggests the measures may serve as complexity quantifiers for discrete dynamical systems.
Significance. If the reported advantage of the synergy measures survives beyond the tested regime, the work would supply a principled alternative to current IIT integration quantifiers and a new tool for analyzing integration/complexity in discrete systems. The explicit grounding in PID is a methodological strength, as is the focus on spacetime partitions.
major comments (2)
- [Abstract and Methods] Abstract and Methods: The suitability claim for IIT is drawn directly from comparisons performed only in small, fully deterministic networks. IIT's core applications routinely involve recurrent, stochastic, or larger-scale systems; the manuscript contains no scaling analysis, analytic argument, or additional experiments showing that the observed advantage persists when determinism is relaxed or network size increases. This extrapolation is load-bearing for the central conclusion.
- [Results] Results: Without reported details on the precise network topologies, state-space sizes, or quantitative effect sizes (e.g., how much better the synergy measures capture integration relative to baselines), it is impossible to judge whether the advantage is robust or merely an artifact of the deterministic setting.
minor comments (2)
- [Methods] The four PID-based measures are introduced without explicit equations or algorithmic pseudocode in the main text; placing these in an appendix or dedicated subsection would improve reproducibility.
- [Notation] Notation for the synergy atoms and the spacetime partition is introduced without a consolidated table of symbols, making cross-references between sections harder to follow.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address each major comment below, agreeing where revisions are needed to strengthen the manuscript while defending the scope and contributions of the current work.
read point-by-point responses
-
Referee: [Abstract and Methods] Abstract and Methods: The suitability claim for IIT is drawn directly from comparisons performed only in small, fully deterministic networks. IIT's core applications routinely involve recurrent, stochastic, or larger-scale systems; the manuscript contains no scaling analysis, analytic argument, or additional experiments showing that the observed advantage persists when determinism is relaxed or network size increases. This extrapolation is load-bearing for the central conclusion.
Authors: We agree that the comparisons are limited to small, fully deterministic networks, as stated in the abstract. The manuscript's central finding is that synergy-based measures outperform existing IIT quantifiers within this controlled regime; we do not assert that the advantage necessarily holds universally. We will revise the abstract, introduction, and discussion to include explicit caveats emphasizing that further validation in stochastic, recurrent, and larger networks is required before broader claims can be made. No scaling analysis or analytic generalization argument is included because the paper focuses on introducing the PID-derived measures and demonstrating their behavior in the simplest cases where ground-truth integration can be unambiguously defined. revision: partial
-
Referee: [Results] Results: Without reported details on the precise network topologies, state-space sizes, or quantitative effect sizes (e.g., how much better the synergy measures capture integration relative to baselines), it is impossible to judge whether the advantage is robust or merely an artifact of the deterministic setting.
Authors: We accept that additional quantitative detail would improve interpretability. The manuscript describes the networks as small deterministic systems (e.g., chains, cycles, and fully connected topologies with binary node states), but we will expand the results section and add a supplementary table specifying exact topologies, node counts (3–5 nodes, yielding state spaces of size 8–32), and numerical effect sizes, including the magnitude of improvement of each synergy measure over the IIT baseline for every tested network. revision: yes
Circularity Check
No circularity; new measures defined from PID and evaluated empirically
full rationale
The paper introduces four PID-based synergy measures of integration, defines them mathematically, and compares their behavior to existing IIT practice via direct computation on small deterministic networks. No step equates a claimed prediction or result to its own inputs by construction, no parameter is fitted on a subset and then renamed as a prediction, and no load-bearing premise reduces to a self-citation whose content is unverified. The suitability conclusion is presented as an empirical observation within the tested regime rather than a deductive necessity, leaving the derivation self-contained.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.leanwashburn_uniqueness_aczel unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
We introduce four measures of integration based on the partial information decomposition framework... synergy-based measures more suitable for IIT's use-case
-
IndisputableMonolith/Foundation/RealityFromDistinction.leanreality_from_one_distinction unclear?
unclearRelation between the paper passage and the cited Recognition theorem.
ϕS2s(s, θ) ≡ SID(Ωθ → s) ... total intrinsic spacetime integrative glue
What do these tags mean?
- matches
- The paper's claim is directly supported by a theorem in the formal canon.
- supports
- The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends
- The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses
- The paper appears to rely on the theorem as machinery.
- contradicts
- The paper's claim conflicts with a theorem or certificate in the canon.
- unclear
- Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
-
[1]
Giulio Tononi and Gerald M. Edelman. “Consciousness and Complexity”. In:Science282.5395 (1998), pp. 1846–1851.DOI:10.1126/science.282.5395.1846. eprint:https: //www.science.org/doi/pdf/10.1126/science.282.5395.1846.URL: https : / / www . science . org / doi / abs / 10 . 1126 / science . 282 . 5395 . 1846
-
[2]
An information integration theory of consciousness
Giulio Tononi. “An information integration theory of consciousness”. In:BMC Neuroscience 5 (2004), pp. 42–42.URL:https://api.semanticscholar.org/CorpusID: 6987007
work page 2004
-
[3]
Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework
David Balduzzi and Giulio Tononi. “Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework”. In:PLOS Computational Biology4.6 (June 2008), pp. 1–18.DOI:10.1371/journal.pcbi.1000091.URL:https://doi.org/ 10.1371/journal.pcbi.1000091
work page doi:10.1371/journal.pcbi.1000091.url:https://doi.org/ 2008
-
[4]
Masafumi Oizumi, Larissa Albantakis, and Giulio Tononi. “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0”. In:PLOS Computa- tional Biology10.5 (May 2014), pp. 1–25.DOI:10.1371/journal.pcbi.1003588. URL:https://doi.org/10.1371/journal.pcbi.1003588
-
[5]
Larissa Albantakis et al. “Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms”. In:PLOS Computational Biology19.10 (Oct. 2023), pp. 1–45.DOI:10.1371/journal.pcbi.1011465.URL:https://doi. org/10.1371/journal.pcbi.1011465
work page doi:10.1371/journal.pcbi.1011465.url:https://doi 2023
-
[6]
On the non-uniqueness problem in integrated informa- tion theory
Jake R Hanson and Sara I Walker. “On the non-uniqueness problem in integrated informa- tion theory”. In:Neuroscience of Consciousness2023.1 (June 2023), niad014.ISSN: 2057- 2107.DOI:10.1093/nc/niad014. eprint:https://academic.oup.com/nc/ article - pdf / 2023 / 1 / niad014 / 57145957 / niad014 . pdf.URL:https : //doi.org/10.1093/nc/niad014. 12
-
[7]
The strength of weak integrated information theory
Pedro A.M. Mediano et al. “The strength of weak integrated information theory”. In:Trends in Cognitive Sciences26.8 (2022), pp. 646–655.ISSN: 1364-6613.DOI:https://doi. org/10.1016/j.tics.2022.04.008.URL:https://www.sciencedirect. com/science/article/pii/S1364661322000924
work page doi:10.1016/j.tics.2022.04.008.url:https://www.sciencedirect 2022
-
[8]
Complexity as Causal Information Integration
Carlotta Langer and Nihat Ay. “Complexity as Causal Information Integration”. In:Entropy 22.10 (2020).ISSN: 1099-4300.DOI:10.3390/e22101107.URL:https://www. mdpi.com/1099-4300/22/10/1107
-
[9]
Nonnegative Decomposition of Multivariate Information
Paul L. Williams and Randall D. Beer. “Nonnegative Decomposition of Multivariate Infor- mation”. In:CoRRabs/1004.2515 (2010). arXiv:1004.2515.URL:http://arxiv. org/abs/1004.2515
work page Pith review arXiv 2010
- [10]
-
[11]
Morphological Computation: Syn- ergy of Body and Brain
Keyan Ghazi-Zahedi, Carlotta Langer, and Nihat Ay. “Morphological Computation: Syn- ergy of Body and Brain”. In:Entropy19.9 (2017).ISSN: 1099-4300.DOI:10 . 3390 / e19090456.URL:https://www.mdpi.com/1099-4300/19/9/456
work page 2017
-
[12]
Giulio Tononi, Olaf Sporns, and Gerald M Edelman. “A measure for brain complexity: relating functional segregation and integration in the nervous system”. In:Proceedings of the National Academy of Sciences91.11 (1994), pp. 5033–5037.DOI:10.1073/pnas.91. 11.5033.URL:https://www.pnas.org/doi/10.1073/pnas.91.11.5033
-
[13]
William G. P. Mayner, William Marshall, and Giulio Tononi.Intrinsic cause-effect power: the tradeoff between differentiation and specification. 2025. arXiv:2510.03881 [q-bio.NC]. URL:https://arxiv.org/abs/2510.03881
work page internal anchor Pith review Pith/arXiv arXiv 2025
-
[14]
A measure for intrinsic information
Leonardo Barbosa et al. “A measure for intrinsic information”. In:Scientific Reports10 (Nov. 2020).DOI:10.1038/s41598-020-75943-4
-
[15]
Spivak.Category theory for scientists (Old version)
David I. Spivak.Category theory for scientists (Old version). 2013. arXiv:1302 . 6946 [math.CT].URL:https://arxiv.org/abs/1302.6946
-
[16]
A Novel Approach to the Partial Information Decomposition
Artemy Kolchinsky. “A Novel Approach to the Partial Information Decomposition”. In: Entropy24.3 (Mar. 2022), p. 403.ISSN: 1099-4300.DOI:10.3390/e24030403.URL: http://dx.doi.org/10.3390/e24030403
-
[17]
An operational information decomposition via synergistic disclo- sure
Fernando Rosas et al. “An operational information decomposition via synergistic disclo- sure”. In:CoRRabs/2001.10387 (2020). arXiv:2001.10387.URL:https://arxiv. org/abs/2001.10387
-
[18]
Partial Information Decomposition via Defi- ciency for Multivariate Gaussians
Gabriel Schamberg and Praveen Venkatesh. “Partial Information Decomposition via Defi- ciency for Multivariate Gaussians”. In:CoRRabs/2105.00769 (2021). arXiv:2105.00769. URL:https://arxiv.org/abs/2105.00769
-
[19]
Abdullah Makkeh, Dirk Oliver Theis, and Raul Vicente. “BROJA-2PID: A robust estima- tor for Bertschinger et al.’s bivariate partial information decomposition”. In:Entropy20.4 (2018), p. 271. 13
work page 2018
-
[20]
MAXENT3D_PID: An Estimator for the Maximum-Entropy Trivari- ate Partial Information Decomposition
Abdullah Makkeh et al. “MAXENT3D_PID: An Estimator for the Maximum-Entropy Trivari- ate Partial Information Decomposition”. In:Entropy21.9 (2019).ISSN: 1099-4300.DOI: 10.3390/e21090862.URL:https://www.mdpi.com/1099- 4300/21/9/ 862
work page doi:10.3390/e21090862.url:https://www.mdpi.com/1099- 2019
-
[21]
Samer A. Abdallah and Mark D. Plumbley.A measure of statistical complexity based on pre- dictive information. 2010. arXiv:1012.1890 [math.ST].URL:https://arxiv. org/abs/1012.1890
-
[22]
Why Does Space Feel the Way it Does? Towards a Prin- cipled Account of Spatial Experience
Andrew Haun and Giulio Tononi. “Why Does Space Feel the Way it Does? Towards a Prin- cipled Account of Spatial Experience”. In:Entropy21.12 (2019).ISSN: 1099-4300.DOI: 10.3390/e21121160.URL:https://www.mdpi.com/1099- 4300/21/12/ 1160
work page doi:10.3390/e21121160.url:https://www.mdpi.com/1099- 2019
- [23]
-
[24]
A Principled Infotheoreticφ-like Measure
Virgil Griffith. “A Principled Infotheoreticφ-like Measure”. In:CoRRabs/1401.0978 (2014). arXiv:1401.0978.URL:http://arxiv.org/abs/1401.0978
-
[25]
Jürgen Jost et al. “An information theoretic approach to system differentiation on the basis of statistical dependencies between subsystems”. In:Physica A: Statistical Mechanics and its Applications378.1 (Jan. 2007), pp. 1–10.DOI:10.1016/j.physa.2006.11.043. URL:https://ideas.repec.org/a/eee/phsmap/v378y2007i1p1- 10. html
-
[26]
Maps of random walks on complex networks reveal community structure
Martin Rosvall and Carl T. Bergstrom. “Maps of random walks on complex networks reveal community structure”. In:Proceedings of the National Academy of Sciences105.4 (Jan. 2008), pp. 1118–1123.ISSN: 1091-6490.DOI:10 . 1073 / pnas . 0706851105.URL: http://dx.doi.org/10.1073/pnas.0706851105
-
[27]
Computational Mechanics: Pattern and Prediction, Structure and Simplicity
Cosma Rohilla Shalizi and James P. Crutchfield. “Computational Mechanics: Pattern and Prediction, Structure and Simplicity”. In:Journal of Statistical Physics104.3–4 (Aug. 2001), pp. 817–879.ISSN: 1572-9613.DOI:10 . 1023 / a : 1010388907793.URL:http : //dx.doi.org/10.1023/A:1010388907793
-
[28]
Quantifying structure in networks
E. Olbrich et al. “Quantifying structure in networks”. In:The European Physical Journal B77.2 (July 2010), pp. 239–247.ISSN: 1434-6036.DOI:10 . 1140 / epjb / e2010 - 00209-0.URL:http://dx.doi.org/10.1140/epjb/e2010-00209-0
-
[29]
Computing the Integrated Informa- tion of a Quantum Mechanism
Larissa Albantakis, Robert Prentner, and Ian Durham. “Computing the Integrated Informa- tion of a Quantum Mechanism”. In:Entropy25.3 (Mar. 2023), p. 449.ISSN: 1099-4300. DOI:10.3390/e25030449.URL:http://dx.doi.org/10.3390/e25030449
work page doi:10.3390/e25030449.url:http://dx.doi.org/10.3390/e25030449 2023
-
[30]
PyPhi: A toolbox for integrated information theory
William G. P. Mayner et al. “PyPhi: A toolbox for integrated information theory”. In:PLOS Computational Biology14.7 (July 2018). Ed. by Kim T. Blackwell, e1006343.ISSN: 1553- 7358.DOI:10.1371/journal.pcbi.1006343.URL:http://dx.doi.org/ 10.1371/journal.pcbi.1006343. 14 Appendix A Another Way of Computing the Union Distribution When computing the union info...
work page doi:10.1371/journal.pcbi.1006343.url:http://dx.doi.org/ 2018
-
[31]
And until proven it is, it is unclear how to robustly numerically find the minimum of eq
Foremost, ID is seemingly a non-convex function. And until proven it is, it is unclear how to robustly numerically find the minimum of eq. (19)
-
[32]
(6) is reasonably well studied in the Partial Informa- tion Decomposition literature
The mathematical object ofp ∪ in eq. (6) is reasonably well studied in the Partial Informa- tion Decomposition literature. Therefore, unless there’s strong theoretical justification for preferringp 2 ∪, it’s sensible to err on the side of choosingp∪. That said, if the IIT world prefersp 2 ∪ overp ∪, further research into the minimization of eq. (19) could...
work page 2023
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.