Causal Persuasion
Pith reviewed 2026-05-09 22:38 UTC · model grok-4.3
The pith
A sender can establish a causal link by disclosing only one or two variables but must disclose every common cause to rule out a perceived link.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that establishing a genuine causal link often succeeds after the sender discloses only one or two well-chosen variables together with their joint distribution, whereas persuading the receiver that no causal link exists requires the sender to disclose every common cause. The model also shows that debunking a receiver's pre-existing subjective causal model is informationally comparable to persuading a receiver who begins with no model at all.
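To see how a single well-chosen disclosure can be enough, consider a minimal sketch (not the paper's model): a binary treatment X, outcome Y, and one common cause Z. If the sender discloses Z alongside X and Y, the receiver can compute the interventional effect by back-door adjustment; showing that no link exists, by contrast, would require certifying that no further common cause has been withheld, which no small disclosure can do. The numbers below are invented for illustration.

```python
# Minimal illustration (assumed numbers, not from the paper): one confounder Z
# drives both treatment X and outcome Y. Disclosing (X, Y, Z) with their joint
# distribution identifies P(Y=1 | do(X=x)) by back-door adjustment.

p_z = {0: 0.6, 1: 0.4}                      # P(Z=z)
p_x1_given_z = {0: 0.2, 1: 0.7}             # P(X=1 | Z=z)
p_y1_given_xz = {(0, 0): 0.1, (1, 0): 0.3,  # P(Y=1 | X=x, Z=z)
                 (0, 1): 0.4, (1, 1): 0.6}

def joint(z, x, y):
    """P(Z=z, X=x, Y=y) under the assumed structure Z -> X, Z -> Y, X -> Y."""
    px = p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]
    py = p_y1_given_xz[(x, z)] if y == 1 else 1 - p_y1_given_xz[(x, z)]
    return p_z[z] * px * py

def p_y1_given_x(x):
    """Observational P(Y=1 | X=x): confounded by Z."""
    num = sum(joint(z, x, 1) for z in (0, 1))
    den = sum(joint(z, x, y) for z in (0, 1) for y in (0, 1))
    return num / den

def p_y1_do_x(x):
    """Interventional P(Y=1 | do(X=x)) via back-door adjustment on Z."""
    return sum(p_y1_given_xz[(x, z)] * p_z[z] for z in (0, 1))

print(f"observational gap: {p_y1_given_x(1) - p_y1_given_x(0):.3f}")  # 0.350
print(f"causal effect:     {p_y1_do_x(1) - p_y1_do_x(0):.3f}")        # 0.200
```

A receiver shown only (X, Y) could not rule out that the entire observational gap is confounding, which is the other half of the claimed asymmetry.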
What carries the argument
The causal persuasion game, in which the sender discloses a chosen subset of variables together with their true joint distribution in support of a proposed causal model, and the receiver accepts that model only if the data uniquely identifies the target causal relationship.
If this is right
- Senders face lower disclosure costs when they aim to create rather than eliminate a causal belief.
- Debunking an existing causal belief requires roughly the same disclosure as persuading a blank-slate receiver of the same conclusion.
- Persuasion succeeds precisely when the chosen variables and their distribution admit no other causal structure consistent with the data.
- The minimal disclosure sets that establish causality are typically far smaller than those that rule it out.
Where Pith is reading between the lines
- Information campaigns in markets or politics can exploit the asymmetry by releasing limited data to imply causation while withholding confounders that would disprove it.
- Regulatory requirements for full disclosure may be especially important when the goal is to prevent false causal beliefs rather than to create true ones.
- The framework predicts observable differences in the amount of data released by advocates versus skeptics in real debates over causation.
Load-bearing premise
The receiver accepts the proposed causal model only when the disclosed data conclusively identifies the causal link of interest and rules out all alternatives.
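One way to make this premise concrete, as a sketch in standard notation rather than the paper's own: let S be the disclosed variables, p_S their disclosed joint distribution, and suppose the proposed model asserts a link (or no link) from X to Y.

```latex
% Hedged formalization of the acceptance rule; S, p_S, X, Y, and
% \mathcal{C}(S, p_S) are illustrative symbols, not the paper's notation.
\[
\mathrm{Accept} \;\iff\;
\text{every causal structure } G \in \mathcal{C}(S, p_S)
\text{ yields the same } P_G\big(Y \mid \mathrm{do}(X)\big),
\]
% where \mathcal{C}(S, p_S) is the set of causal structures over the disclosed
% variables consistent with p_S, and that common interventional distribution
% must agree with the link (or its absence) asserted by the proposed model.
```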
What would settle it
An experiment that presents subjects with either minimal or exhaustive variable disclosures drawn from the same joint distribution and measures whether belief in the causal link rises with the minimal set but falls only with the exhaustive set.
Original abstract
We propose a model of causal persuasion, in which a sender selectively discloses a set of variables together with their true joint distribution and proposes a subjective causal model that binds them. A receiver is persuaded by this model only if the data conclusively identifies the causal link of interest. We characterize when such persuasion succeeds or fails, and how easily it can be achieved. We further show that if the receiver holds a pre-existing subjective model, debunking it is similar to persuading a receiver without one. To establish a true causal link, the sender often needs to disclose only one or two well-chosen variables. But to dispel a perceived link -- to persuade the receiver there is no causal relationship -- every common cause must be disclosed. Our results highlight a fundamental asymmetry in causal persuasion: Establishing causality is often much easier than ruling it out.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes a model of causal persuasion in which a sender selectively discloses variables and their true joint distribution while proposing a subjective causal model. The receiver is persuaded only if the disclosed data conclusively identifies the causal link of interest (or its absence) within the proposed graph. The authors characterize conditions for successful persuasion, show that debunking a receiver's pre-existing model is similar to persuading from scratch, and establish an asymmetry: establishing a non-zero causal link often requires disclosing only one or two well-chosen variables, while ruling out a link requires disclosing every common cause.
Significance. If the characterizations hold, the paper contributes a clean theoretical framework linking selective disclosure to causal identification, with implications for Bayesian persuasion, information economics, and debates over causality in policy or science. The self-contained model (no free parameters or ad-hoc axioms) and the explicit asymmetry result are strengths that could generate falsifiable predictions for empirical work on disclosure strategies. The extension to pre-existing receiver models adds robustness.
major comments (3)
- [§2] Model primitives and persuasion definition (§2): Successful persuasion is defined directly by requiring that the disclosed joint distribution 'conclusively identifies the causal link' inside the sender-proposed graph. This strict identification criterion is imposed rather than derived from receiver utilities, priors, or Bayesian updating; it is load-bearing for the asymmetry, because a weaker threshold rule (e.g., accepting whenever the posterior probability of an effect exceeds a threshold) could alter the minimal disclosure cardinalities.
- [Characterization theorems] Characterization of minimal disclosures (likely Theorem 1 or Proposition 2): The claim that 'often only one or two well-chosen variables' suffice to establish a true causal link requires an explicit statement of the identification strategy (e.g., which variables enable front-door or back-door identification; a sketch of the front-door case follows this list) and of the class of graphs considered. Without the full derivation visible, it is unclear whether the result holds generally or only under unstated restrictions on the true DGP.
- [§4] Debunking result (likely §4): The statement that 'debunking it is similar to persuading a receiver without one' needs verification that the receiver's pre-existing subjective model is correctly folded into the identification check; if the prior model alters how the disclosed joint distribution is interpreted, the similarity may fail to hold.
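As one concrete instance of the kind of identification strategy the report asks to see spelled out, the standard front-door adjustment identifies the effect of X on Y from only one additional mediator M; the symbols here are illustrative and not taken from the paper.

```latex
% Front-door adjustment (standard causal-inference result; illustrative symbols).
% Valid when M intercepts every directed path from X to Y, there is no
% unblocked back-door path from X to M, and X blocks all back-door paths
% from M to Y.
\[
P\big(y \mid \mathrm{do}(x)\big)
  \;=\; \sum_{m} P(m \mid x) \sum_{x'} P\big(y \mid m, x'\big)\, P(x').
\]
```

Under these conditions the sender needs to disclose only (X, M, Y), which is the flavor of result the 'one or two well-chosen variables' claim presumably formalizes.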
minor comments (2)
- [Abstract] The abstract's phrasing 'every common cause must be disclosed' should be qualified as 'every common cause of the treatment and outcome' for precision.
- [§2] Notation for the subjective causal model (e.g., how the proposed graph is formally represented) would benefit from a simple running example with an explicit DAG in the main text; a minimal sketch of one possible representation follows this list.
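A minimal sketch of what such a running example might look like, with a hypothetical encoding and variable names not taken from the paper:

```python
# Hypothetical representation of a sender-proposed subjective causal model:
# a DAG over the disclosed variables, stored as a child -> parents map.
# The names (Z, X, Y) and this encoding are illustrative only.

proposed_model = {
    "Z": [],          # common cause, no parents
    "X": ["Z"],       # treatment, caused by Z
    "Y": ["Z", "X"],  # outcome; the edge X -> Y is the asserted causal link
}

def edges(dag):
    """Return the directed edges of the DAG as (parent, child) pairs."""
    return [(p, child) for child, parents in dag.items() for p in parents]

print(edges(proposed_model))  # [('Z', 'X'), ('Z', 'Y'), ('X', 'Y')]
```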
Circularity Check
Theoretical model self-contained; asymmetry follows directly from persuasion definition
full rationale
The paper defines a model of causal persuasion in which the receiver is persuaded only if the disclosed joint distribution conclusively identifies the causal link of interest (or its absence) within the proposed graph. The claimed asymmetry—that establishing a non-zero causal link often requires disclosing only one or two variables while dispelling a perceived link requires disclosing every common cause—follows immediately from this definition combined with standard causal identification results (e.g., back-door criterion). No step reduces a derived prediction to a fitted parameter, self-citation chain, or ansatz smuggled from prior work by the same authors. The construction is self-contained as pure theory; the persuasion success condition is stipulated rather than derived from receiver utilities or belief updating, but this is an explicit modeling choice, not a circular reduction.
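For reference, the back-door criterion invoked here is the standard one: a disclosed set Z is admissible for the effect of X on Y if no member of Z is a descendant of X and Z blocks every path from X to Y that enters X through an arrow. When it holds, the effect is identified by adjustment over the disclosed variables alone.

```latex
% Standard back-door adjustment formula (not specific to this paper).
\[
P\big(y \mid \mathrm{do}(x)\big) \;=\; \sum_{z} P\big(y \mid x, z\big)\, P(z).
\]
```

This is why a single valid back-door set can establish the link, while concluding that no link exists requires every common cause to appear among the disclosed variables.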
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: A receiver is persuaded by the proposed causal model only if the disclosed data conclusively identifies the causal link of interest.
invented entities (1)
- subjective causal model proposed by the sender (no independent evidence)