pith. machine review for the scientific record.

arxiv: 2605.13589 · v1 · submitted 2026-05-13 · 📊 stat.ML · cs.LG


Causal Learning with the Invariance Principle

Francesco Locatello, Francesco Montagna


Pith reviewed 2026-05-14 17:51 UTC · model grok-4.3

classification 📊 stat.ML cs.LG
keywords causal discovery · structural causal models · invariance principle · identifiability · nonlinear mechanisms · counterfactual inference · acyclic graphs

The pith

Assuming acyclicity and invariance, only two auxiliary environments suffice to identify the causal graph for arbitrary nonlinear mechanisms.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that causal discovery is no longer ill-posed once causal relations are required to be acyclic and to stay the same across environments. Under those conditions, data collected in just two additional settings are enough to recover the full directed graph even when every mechanism can be nonlinear. Because the graph is recovered, the functional forms inside the structural causal model also become identifiable, which in turn guarantees that counterfactual queries can be answered correctly. The argument is developed inside the language of structural causal models and is supported by experiments on synthetic data.

Core claim

Assuming that the causal relations are acyclic and invariant across multiple environments, only two auxiliary environments are sufficient to infer the causal graph for arbitrary nonlinear mechanisms. Moreover, this implies identifiability of the SCM functional mechanisms, so that two auxiliary environments guarantee correct counterfactual inference.

What carries the argument

The invariance of causal mechanisms across environments inside structural causal models, which supplies the extra constraints needed to pin down the graph from only two auxiliary data sets.
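The mechanism above can be sketched as a toy invariance test. The following is a minimal, hypothetical Python sketch, not the paper's MCD algorithm: a three-variable chain X1 → X2 → Y with linear mechanisms for simplicity (the paper's claim covers arbitrary nonlinear ones), where the auxiliary environments shift only the noise scale of X2. Regressing Y on each candidate parent within each environment, the residual distribution stays (near-)invariant only for the true parent set {X2}. All variable names, coefficients, and noise scales are illustrative.

```python
import random
import statistics

random.seed(0)

def sample_env(n, x2_noise_scale):
    # Toy SCM X1 -> X2 -> Y; environments shift only the X2 noise scale,
    # while the mechanism generating Y is held invariant.
    rows = []
    for _ in range(n):
        x1 = random.gauss(0, 1.0)
        x2 = 0.8 * x1 + random.gauss(0, x2_noise_scale)
        y = 1.5 * x2 + random.gauss(0, 1.0)  # invariant mechanism for Y
        rows.append((x1, x2, y))
    return rows

# One observational environment plus two auxiliary shifted ones.
envs = [sample_env(4000, s) for s in (1.0, 2.0, 3.0)]

def residual_variance(rows, col):
    # Least-squares fit of Y on one candidate parent; return residual variance.
    xs = [r[col] for r in rows]
    ys = [r[2] for r in rows]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return statistics.pvariance(
        [(y - my) - beta * (x - mx) for x, y in zip(xs, ys)]
    )

# The true parent {X2} yields a near-constant residual variance across
# environments; the non-parent {X1} does not.
spreads = {}
for name, col in (("X1", 0), ("X2", 1)):
    per_env = [residual_variance(env, col) for env in envs]
    spreads[name] = max(per_env) - min(per_env)
    print(name, [round(v, 2) for v in per_env])
```

In this sketch the spread for {X1} grows with the strength of the environment shift, which is the same lever the referee's regularity concern targets: a vanishing shift would make the two candidate sets empirically indistinguishable.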

If this is right

  • The causal graph is uniquely recoverable from the original environment plus two auxiliary environments.
  • The functional mechanisms inside the structural causal model become identifiable.
  • Counterfactual predictions are guaranteed to be correct once the two auxiliary environments have been observed.
  • The result applies to arbitrary nonlinear mechanisms, not merely linear or additive ones.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Collecting data from two distinct contexts may be sufficient for many applied causal questions that currently demand far larger or more controlled data sets.
  • If invariance holds only approximately, the same algebraic argument might still yield a small set of candidate graphs whose disagreement can be quantified.
  • The construction suggests a practical experimental design: deliberately sample or intervene in two environments chosen to break the remaining symmetries.

Load-bearing premise

The causal relations must be acyclic and must remain exactly the same across the observed environments.

What would settle it

A concrete acyclic SCM with nonlinear mechanisms, together with data from two auxiliary environments, that nonetheless admits two or more distinct causal graphs would falsify the identifiability claim.

Figures

Figures reproduced from arXiv: 2605.13589 by Francesco Locatello, Francesco Montagna.

Figure 1. Natural experiments as instances of our multi-environment framework.

Figure 2. Graph for the structural equation model of Example …

Figure 3. Samples of the random nonlinear mechanisms (one parent, one source) sampled from a …

Figure 4. Mean Dtop (20 seeds; lower is better) on linear and nonlinear Gaussian data that are non-identifiable from pure observations. Error bars are 95% confidence intervals. Given three environments, MCD can infer the causal direction where all other state-of-the-art methods for observational causal discovery are no better than a baseline that randomly selects the causal order.

Figure 5. Average execution time (20 seeds) of the MCD algorithm, by number of samples and number of nodes. Experiments are run on CPU on a Lenovo ThinkPad T14 Gen 5 laptop.

Figure 6. Average normalized Dtop (lower is better) and 95% confidence interval (20 seeds) of MCD at varying dataset sizes; n is the number of samples observed in each environment. For n = 1000 MCD already infers causality better than random (expected accuracy 0.5); at 2000 samples accuracy increases significantly and remains almost stable at n = 3000.

Figure 7. MCD average normalized Dtop (lower is better) when inference uses 3, 5, or 7 environments; in line with Theorem 1, three datasets are sufficient for causal discovery. Up to finite-sample effects, adding environments does not clearly improve inference accuracy. Error bars denote 95% confidence intervals over 20 seeds.

Figure 8. MCD average Dtop (lower is better) on synthetic datasets whose causal mechanisms are only linear or only nonlinear (parametrized with a ResNet, as described in Section 4.2). MCD accuracy is substantially unaffected by the nonlinearity, as predicted by the theory. Error bars denote 95% confidence intervals over 20 seeds.
Original abstract

Causal discovery, the problem of inferring the direction of causality, is generally ill-posed. We use the language of structural causal models (SCM) to show that assuming that the causal relations are acyclic and invariant across multiple environments (e.g., the way minimum wage affects employment rate is stable across different geographical regions), only two auxiliary environments are sufficient to infer the causal graph for arbitrary nonlinear mechanisms. Moreover, we demonstrate that this implies identifiability of the SCM functional mechanisms: as a corollary, we show that two auxiliary environments are sufficient to guarantee correct counterfactual inference. We empirically support our theoretical results on synthetic data.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper claims that, assuming acyclicity and invariance of causal mechanisms across environments in structural causal models (SCMs), only two auxiliary environments suffice to uniquely identify the causal graph even for arbitrary nonlinear mechanisms. As a corollary, the functional mechanisms become identifiable, enabling correct counterfactual inference. The result is supported by synthetic data experiments.

Significance. If the central identifiability result holds with the stated generality, it would substantially advance causal discovery by showing that a minimal number of environments (two) suffices for nonlinear SCMs under standard invariance and acyclicity assumptions, with direct implications for counterfactual reasoning. The parameter-free nature of the claim and the reduction to two environments are notable strengths if the proof excludes degeneracies.

major comments (3)
  1. [§3] §3, Theorem 1 (or equivalent identifiability statement): the claim that two environments suffice for arbitrary nonlinear mechanisms requires that, for any non-parent set S, the conditional P(Y | S) differs across the two environments while the true-parent conditional remains invariant. No explicit regularity condition on the environment shifts (e.g., support overlap or non-degeneracy of the shift distribution) or measure-theoretic genericity argument is supplied to rule out compensatory nonlinearities that could make a spurious conditioning set produce identical conditionals in precisely those two environments.
  2. [§3.2] §3.2 (proof of uniqueness): the argument that invariance plus acyclicity implies the true parents are the only set whose conditional is stable across environments appears to rely on the assumption that any deviation in the non-parent conditional must be detectable in at least one of the two environments. This step is load-bearing for the 'arbitrary nonlinear' claim but lacks a concrete test or counterexample exclusion for cases where the two chosen environments happen to lie in a lower-dimensional subspace of possible shifts.
  3. [Corollary] Corollary on counterfactual identifiability: the reduction from graph identification to mechanism identification assumes that once the parents are known, the functional form is recoverable from the two environments. This step needs explicit verification that the invariance constraint plus two environments pins down the nonlinear function uniquely, rather than up to a measure-zero set of equivalent functions.
minor comments (2)
  1. [Abstract] The abstract and introduction should clarify whether the two auxiliary environments are assumed to be chosen adversarially or generically; the current wording leaves open whether the result is for almost-all pairs of environments or for some specific pair.
  2. [Experiments] Synthetic data experiments should include at least one constructed near-degenerate case (e.g., carefully chosen nonlinear compensations) to empirically probe the boundary of the identifiability claim.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the careful and constructive review. The comments highlight important points on regularity conditions and proof details that we will address in revision to strengthen the presentation of the identifiability result.

Point-by-point responses
  1. Referee: [§3] §3, Theorem 1 (or equivalent identifiability statement): the claim that two environments suffice for arbitrary nonlinear mechanisms requires that, for any non-parent set S, the conditional P(Y | S) differs across the two environments while the true-parent conditional remains invariant. No explicit regularity condition on the environment shifts (e.g., support overlap or non-degeneracy of the shift distribution) or measure-theoretic genericity argument is supplied to rule out compensatory nonlinearities that could make a spurious conditioning set produce identical conditionals in precisely those two environments.

    Authors: We agree that the theorem statement would benefit from an explicit regularity condition. In the revision we will add Assumption 3, requiring that the pair of environment shifts is generic: for every non-parent set S the induced conditional distributions P(Y|S) differ on a set of positive measure across the two environments. This rules out the measure-zero cases of perfectly compensatory nonlinearities for the specific pair chosen. We will also add a short paragraph on support overlap to ensure the conditionals are well-defined and comparable. revision: yes

  2. Referee: [§3.2] §3.2 (proof of uniqueness): the argument that invariance plus acyclicity implies the true parents are the only set whose conditional is stable across environments appears to rely on the assumption that any deviation in the non-parent conditional must be detectable in at least one of the two environments. This step is load-bearing for the 'arbitrary nonlinear' claim but lacks a concrete test or counterexample exclusion for cases where the two chosen environments happen to lie in a lower-dimensional subspace of possible shifts.

    Authors: The current proof sketch in §3.2 uses acyclicity to propagate the effect of an environment shift to any non-parent conditioning set, but we acknowledge that the argument is informal on the genericity of the two environments. We will revise the proof to include an explicit genericity lemma: for almost every pair of shifts (in the sense of Lebesgue measure on the space of possible interventions), any non-parent set produces a detectable difference in at least one conditional. We do not currently have a concrete counterexample that survives acyclicity and invariance; if the referee can supply one we will incorporate it or strengthen the genericity statement accordingly. revision: partial

  3. Referee: [Corollary] Corollary on counterfactual identifiability: the reduction from graph identification to mechanism identification assumes that once the parents are known, the functional form is recoverable from the two environments. This step needs explicit verification that the invariance constraint plus two environments pins down the nonlinear function uniquely, rather than up to a measure-zero set of equivalent functions.

    Authors: We will expand the corollary and its proof to make the mechanism-identification step fully explicit. Once the parent set is known, the same functional mechanism f must hold in both environments. With two distinct distributions of the parents (guaranteed by the new Assumption 3), the equation Y = f(X_pa, N) together with independence of N allows unique recovery of f almost everywhere; we will add a short lemma showing that any two candidate functions agreeing on two sufficiently rich distributions of X_pa must coincide almost surely. This closes the reduction to counterfactual identifiability. revision: yes
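The mechanism-recovery step discussed above can be illustrated under an additive-noise simplification that the rebuttal does not itself impose: if Y = f(X_pa) + N with E[N] = 0, then f(x) = E[Y | X_pa = x], so a nonparametric regression fitted separately in two environments with different parent distributions should agree wherever their supports overlap. A hypothetical Python sketch (the mechanism f, noise scales, and bandwidth are all illustrative choices, not the paper's):

```python
import math
import random
import statistics

random.seed(1)

def sample_env(n, parent_scale, f=lambda x: math.tanh(2.0 * x)):
    # Same mechanism f in every environment; only the parent
    # distribution (its scale) differs across environments.
    return [
        (x, f(x) + random.gauss(0, 0.1))
        for x in (random.gauss(0, parent_scale) for _ in range(n))
    ]

env_a = sample_env(20000, 1.0)  # two environments with different parent
env_b = sample_env(20000, 2.0)  # distributions but the same mechanism f

def local_average(rows, x0, bandwidth=0.15):
    # Crude nonparametric estimate of E[Y | X = x0].
    vals = [y for x, y in rows if abs(x - x0) < bandwidth]
    return statistics.fmean(vals) if vals else None

# Where the supports overlap, the two per-environment estimates of f
# should coincide up to sampling noise.
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
max_gap = max(
    abs(local_average(env_a, x0) - local_average(env_b, x0)) for x0 in grid
)
print("max |f_a - f_b| on grid:", round(max_gap, 3))
```

The small gap is what the proposed lemma formalizes: two candidate functions agreeing on two sufficiently rich parent distributions must coincide almost surely.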

Circularity Check

0 steps flagged

No circularity: result follows from posited SCM assumptions

full rationale

The derivation posits acyclicity and invariance as inputs, then shows that these suffice for graph identification from two environments. This is a standard implication under the stated assumptions rather than a reduction to fitted parameters, self-definition, or self-citation chains. No equations rename known results or smuggle ansatzes; the claim is mathematically derived from the premises without collapsing to them by construction. The empirical support on synthetic data is separate from the theoretical step.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on the standard SCM framework together with the two domain assumptions of acyclicity and cross-environment invariance; no free parameters or new entities are introduced in the abstract.

axioms (2)
  • domain assumption Causal relations are acyclic
    Required for the SCM to define a well-posed joint distribution without feedback loops.
  • domain assumption Causal mechanisms are invariant across environments
    The key invariance principle used to obtain identifiability from multiple environments.
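The acyclicity axiom is what lets an SCM generate its joint distribution by a single ancestral sweep: a causal order exists exactly when the graph is a DAG. A minimal sketch using Kahn's algorithm, with a hypothetical three-node chain and a two-node cycle as inputs:

```python
from collections import deque

def topological_order(children):
    # Kahn's algorithm: returns a causal (topological) order of the
    # nodes, or None if the graph contains a cycle.
    indeg = {v: 0 for v in children}
    for kids in children.values():
        for k in kids:
            indeg[k] += 1
    queue = deque(v for v, d in indeg.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for k in children[v]:
            indeg[k] -= 1
            if indeg[k] == 0:
                queue.append(k)
    return order if len(order) == len(children) else None

chain = {"X1": ["X2"], "X2": ["Y"], "Y": []}  # acyclic: sweep X1, X2, Y
cycle = {"A": ["B"], "B": ["A"]}              # cyclic: no valid sweep
print(topological_order(chain))
print(topological_order(cycle))
```

A cyclic graph returns None: there is no order in which each variable can be sampled after its parents, which is why feedback loops break the well-posedness the axiom supplies.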

pith-pipeline@v0.9.0 · 5396 in / 1178 out tokens · 28285 ms · 2026-05-14T17:51:03.982047+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

97 extracted references · 7 canonical work pages
