arXiv preprint arXiv:2506.10914
3 Pith papers cite this work.
3 representative citing papers
- Amortizing Causal Sensitivity Analysis via Prior Data-Fitted Networks
  A prior-data fitted network amortizes causal sensitivity analysis by generating training labels via Lagrangian scalarization, achieving orders-of-magnitude faster bounds computation than per-instance methods.
- Selecting Feature Interactions for Generalized Additive Models by Distilling Foundation Models
  TabDistill distills feature interactions from tabular foundation models via post-hoc attribution and inserts them into GAMs, yielding consistent predictive gains.
- Real vs. Semi-Simulated: Rethinking Evaluation for Treatment Effect Estimation
  Counterfactual metrics on semi-simulated benchmarks fail to identify the treatment effect estimators preferred by observable metrics on real datasets, with simple meta-learners outperforming specialized causal models.