Exploiting independence constraints for efficient estimation of bounds on causal effects in the presence of unmeasured confounding

An influence function projection approach exploits graph-implied conditional independences to improve the efficiency of semiparametric estimators for upper and lower bounds on average causal effects under sensitivity models for unmeasured confounding.
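The efficiency mechanism summarized above can be stated abstractly in standard semiparametric terms (a generic sketch of the projection argument, not this paper's exact notation or construction): if a mean-zero function \(\varphi\) is a valid influence function for a bound \(\psi\) under the nonparametric model, and the graph-implied conditional independences restrict attention to a submodel with tangent space \(\mathcal{T}\), then projecting \(\varphi\) onto \(\mathcal{T}\) can only shrink the asymptotic variance:

```latex
% Generic projection argument (assumed framing, not the paper's notation):
% \varphi is a mean-zero influence function for the bound \psi;
% \mathcal{T} is the tangent space of the independence-restricted submodel.
\varphi_{\mathrm{eff}} = \Pi\!\left(\varphi \,\middle|\, \mathcal{T}\right),
\qquad
\operatorname{Var}(\varphi_{\mathrm{eff}})
  = \operatorname{Var}(\varphi) - \mathbb{E}\!\left[\left(\varphi - \varphi_{\mathrm{eff}}\right)^{2}\right]
  \le \operatorname{Var}(\varphi).
```

The variance identity holds because the projection residual \(\varphi - \varphi_{\mathrm{eff}}\) is orthogonal to \(\varphi_{\mathrm{eff}}\) in \(L_2\); the gap \(\mathbb{E}[(\varphi - \varphi_{\mathrm{eff}})^{2}]\) is exactly the efficiency gain the independence constraints buy.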
5 Pith papers cite this work; representative citing papers from 2026 are summarized below.
Citing papers for "Exploiting independence constraints for efficient estimation of bounds on causal effects in the presence of unmeasured confounding":
-
Prediction Bottlenecks Don't Discover Causal Structure (But Here's What They Actually Do)
Prediction bottlenecks do not discover causal structure beyond what linear models, Lasso, and classical Granger/PCMCI methods achieve; intervention benefits are mostly sample-size confounds, leaving a standardized falsification benchmark as the main contribution.
-
Fourier Feature Methods for Nonlinear Causal Discovery: FFML Scoring, TRFF Scoring, and FFCI Testing in Mixed Data
FFML, TRFF, and FFCI are practical RFF-based approximations that replace expensive GP kernel matrices with finite feature maps, delivering competitive precision-recall trade-offs for score-based and constraint-based causal discovery in nonlinear mixed data.
-
TTCD: Transformer Integrated Temporal Causal Discovery from Non-Stationary Time Series Data
TTCD uses a non-stationary feature learner and reconstruction-guided distillation inside a transformer to infer contemporaneous and lagged causal graphs from non-stationary time series without strong noise assumptions.
-
Large Language Models for Causal Relations Extraction in Social Media: A Validation Framework for Disaster Intelligence
The authors introduce a validation framework showing that LLMs can extract causal relations from disaster-related social media, but that extractions must be checked against post-event evidence to avoid reliance on model priors.
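The random-Fourier-feature idea behind the FFML/TRFF/FFCI entry above can be illustrated in a few lines: a finite feature map whose inner products approximate an RBF kernel, so the n-by-n Gram matrix never needs to be formed exactly. This is a generic sketch of Rahimi-Recht random features under an assumed RBF kernel, not code from the cited paper; all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, D=2000, sigma=1.0, rng=rng):
    """Random Fourier features z(x) with E[z(x) @ z(y)] = exp(-||x-y||^2 / (2 sigma^2)).

    W is drawn from the spectral density of the RBF kernel (Gaussian with
    scale 1/sigma); b is a uniform phase. D controls approximation quality.
    """
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Compare the exact RBF Gram matrix (O(n^2) kernel evaluations) with the
# finite-feature approximation Z @ Z.T used in place of the kernel matrix.
X = rng.normal(size=(100, 3))
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / 2.0)      # sigma = 1
Z = rff_features(X, D=2000)
K_approx = Z @ Z.T
err = np.abs(K_exact - K_approx).max()
```

Downstream score- or constraint-based discovery can then work with the D-dimensional features `Z` (linear-algebra cost in D rather than n), which is the trade-off the summary describes.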