pith. machine review for the scientific record.

arxiv: 2604.20664 · v2 · submitted 2026-04-22 · 💰 econ.TH


Causal Persuasion

Anastasia Burkovskaya, Egor Starkov


Pith reviewed 2026-05-09 22:38 UTC · model grok-4.3

classification 💰 econ.TH
keywords causal persuasion · selective disclosure · causal identification · asymmetry in information · common causes · subjective causal models · persuasion with data

The pith

A sender can establish a causal link by disclosing only one or two variables but must disclose every common cause to rule out a perceived link.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper models a sender who selectively reveals variables along with their true joint distribution and offers a proposed causal structure linking them. The receiver adopts this structure only when the revealed data rules out every other possible causal explanation for the observed relationships. Under these rules the authors derive the conditions under which persuasion succeeds and show a sharp asymmetry: confirming an actual causal connection usually requires very little information, while eliminating a false one demands exhaustive disclosure of all potential confounders. The same asymmetry persists when the receiver already holds a prior causal belief. These results matter because they identify when selective information release can reliably shape beliefs about cause and effect.

Core claim

The central claim is that establishing a genuine causal link often succeeds after the sender discloses only one or two well-chosen variables together with their joint distribution, whereas persuading the receiver that no causal link exists requires the sender to disclose every common cause. The model also shows that debunking a receiver's pre-existing subjective causal model is informationally comparable to persuading a receiver who begins with no model at all.

What carries the argument

A causal persuasion game in which the sender chooses a subset of variables, disclosed together with their true joint distribution, in support of a proposed causal model; the receiver accepts the model only if the data uniquely identifies the target causal relationship.
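This acceptance rule is stated abstractly; below is a minimal sketch of one way to operationalize it, assuming the check reduces to enumerating every DAG over the disclosed variables whose d-separation pattern matches the conditional independencies in the disclosed joint distribution. The helper names (`receiver_accepts`, `ci_pattern`, etc.) are ours, not the paper's notation, and the paper's game has more structure (subjective models, receiver awareness) than this captures.

```python
from itertools import combinations, product

def ancestors(dag, nodes):
    """dag maps each node to its parent set; returns nodes plus all ancestors."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, x, y, z):
    """Moralize the ancestral subgraph of {x, y} and z, delete z, and test
    whether x and y are disconnected (the classic criterion, Pearl 1988)."""
    keep = ancestors(dag, {x, y} | z)
    adj = {v: set() for v in keep}
    for v in keep:
        parents = dag[v] & keep
        for p in parents:                      # undirected version of each edge
            adj[v].add(p); adj[p].add(v)
        for a, b in combinations(parents, 2):  # "marry" co-parents
            adj[a].add(b); adj[b].add(a)
    frontier, seen = [x], {x} | z              # z blocks every path through it
    while frontier:
        v = frontier.pop()
        if v == y:
            return False
        for w in adj[v] - seen:
            seen.add(w); frontier.append(w)
    return True

def is_acyclic(dag):
    done, active = set(), set()
    def visit(v):
        if v in done: return True
        if v in active: return False           # back edge: cycle
        active.add(v)
        if not all(visit(p) for p in dag[v]): return False
        active.discard(v); done.add(v)
        return True
    return all(visit(v) for v in dag)

def all_dags(nodes):
    """Every DAG over nodes: each unordered pair is absent, ->, or <-."""
    pairs = list(combinations(sorted(nodes), 2))
    for choice in product((None, 0, 1), repeat=len(pairs)):
        dag = {v: set() for v in nodes}
        for (a, b), c in zip(pairs, choice):
            if c == 0: dag[b].add(a)           # edge a -> b
            elif c == 1: dag[a].add(b)         # edge b -> a
        if is_acyclic(dag):
            yield dag

def ci_pattern(dag, nodes):
    """All statements (x indep y | z) the DAG implies, as a frozenset."""
    out = set()
    for x, y in combinations(sorted(nodes), 2):
        rest = sorted(set(nodes) - {x, y})
        for k in range(len(rest) + 1):
            for z in combinations(rest, k):
                if d_separated(dag, x, y, set(z)):
                    out.add((x, y, frozenset(z)))
    return frozenset(out)

def receiver_accepts(nodes, observed, link):
    """Accept x -> y only if every data-consistent DAG contains x -> y."""
    x, y = link
    consistent = [g for g in all_dags(nodes) if ci_pattern(g, nodes) == observed]
    return bool(consistent) and all(x in g[y] for g in consistent)
```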

If this is right

  • Senders face lower disclosure costs when they aim to create rather than eliminate a causal belief.
  • Debunking an existing causal belief requires the same exhaustive revelation as persuading a blank-slate receiver.
  • Persuasion succeeds precisely when the chosen variables and their distribution admit no other causal structure consistent with the data.
  • The minimal disclosure sets that establish causality are typically far smaller than those that rule it out (see the sketch below).
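
Under the sketch above, the asymmetry shows up already in three-variable worlds: one well-chosen co-parent uniquely orients a true link, while ruling a link out hinges on disclosing the common cause. A hypothetical demonstration reusing those helpers:

```python
# Easy direction: disclosing one well-chosen variable `a` (an independent
# co-parent of y) makes the V-structure a -> y <- x the only DAG matching
# the data, so the link x -> y is established with a single extra variable.
nodes = {"a", "x", "y"}
truth = {"a": set(), "x": set(), "y": {"a", "x"}}        # a -> y <- x
print(receiver_accepts(nodes, ci_pattern(truth, nodes), ("x", "y")))  # True

# Hard direction: suppose x and y are correlated only through a common
# cause. Withholding it leaves both x -> y and y -> x consistent with the
# disclosed data, so "no causal link" cannot be established...
pair = {"x", "y"}
observed = ci_pattern({"x": set(), "y": {"x"}}, pair)    # x, y dependent
print(len([g for g in all_dags(pair)
           if ci_pattern(g, pair) == observed]))         # 2 survivors

# ...but disclosing the common cause c rules the link out: every DAG
# matching (x indep y | c) leaves x and y non-adjacent.
trio = {"c", "x", "y"}
fork = {"c": set(), "x": {"c"}, "y": {"c"}}              # x <- c -> y
survivors = [g for g in all_dags(trio)
             if ci_pattern(g, trio) == ci_pattern(fork, trio)]
print(all("x" not in g["y"] and "y" not in g["x"] for g in survivors))  # True
```

With more confounders the hard direction scales accordingly: each withheld common cause leaves some consistent DAG in which the link survives, which is the paper's "every common cause must be disclosed" condition in miniature.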

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Information campaigns in markets or politics can exploit the asymmetry by releasing limited data to imply causation while withholding confounders that would disprove it.
  • Regulatory requirements for full disclosure may be especially important when the goal is to prevent false causal beliefs rather than to create true ones.
  • The framework predicts observable differences in the amount of data released by advocates versus skeptics in real debates over causation.

Load-bearing premise

The receiver accepts the proposed causal model only when the disclosed data conclusively identifies the causal link of interest and rules out all alternatives.

What would settle it

An experiment that presents subjects with either minimal or exhaustive variable disclosures drawn from the same joint distribution and measures whether belief in the causal link rises with the minimal set but falls only with the exhaustive set.

Figures

Figures reproduced from arXiv: 2604.20664 by Anastasia Burkovskaya and Egor Starkov.

Figure 1: The true DAG and the employee's model after the promo campaign.
Figure 2: Employee's model after the first chat with the employer.
Figure 3: Multiple models consistent with the same data.
Figure 4: Meek (1995) rules. (a) true model; (b) IC algorithm output.
Figure 5: Applying the IC algorithm.
Figure 6: No models are consistent with the data on …
Figure 7: Simple (a and c) and non-simple (b) models.
Figure 8: (Ωt, Ct) with obvious and non-obvious causes.
Figure 9: Misleading persuasion using an obvious cause.
Figure 10: Persuasion without a cause.
Figure 11: Debunking with a non-obvious cause.
Figure 12: Debunking may not lead to persuasion.
Figure 13: Replacing a defective model with a defective model.
Figure 14: Persuasion by nitpicking.
Figure 15: Example: persuasion by nitpicking not possible.
Figure 16: Ruling out a defective link: x ←r y, x ⇒t y.
Figure 17: Ruling out a correct link by nitpicking: …
Figure 18: R2 and non-simple models required to orient a link incorrectly.
Figure 19: R3 and non-simple models required to orient a link incorrectly.
Figure 20: R4 and non-simple models required to orient a link incorrectly.
Figure 21: R1, R2, and models required to orient a link incorrectly, indirect case.
Figure 22: R3, R4, and models required to orient a link incorrectly, indirect case.
Figure 23: Options for orienting link g →t x. (a) (b) (c)
Figure 24: Options for orienting link x →t d via R4.
read the original abstract

We propose a model of causal persuasion, in which a sender selectively discloses a set of variables together with their true joint distribution and proposes a subjective causal model that binds them. A receiver is persuaded by this model only if the data conclusively identifies the causal link of interest. We characterize when such persuasion succeeds or fails, and how easily it can be achieved. We further show that if the receiver holds a pre-existing subjective model, debunking it is similar to persuading a receiver without one. To establish a true causal link, the sender often needs to disclose only one or two well-chosen variables. But to dispel a perceived link -- to persuade the receiver there is no causal relationship -- every common cause must be disclosed. Our results highlight a fundamental asymmetry in causal persuasion: Establishing causality is often much easier than ruling it out.

Editorial analysis

A structured set of objections, weighed in public.

A referee report, a circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes a model of causal persuasion in which a sender selectively discloses variables and their true joint distribution while proposing a subjective causal model. The receiver is persuaded only if the disclosed data conclusively identifies the causal link of interest (or its absence) within the proposed graph. The authors characterize conditions for successful persuasion, show that debunking a receiver's pre-existing model is similar to persuading from scratch, and establish an asymmetry: establishing a non-zero causal link often requires disclosing only one or two well-chosen variables, while ruling out a link requires disclosing every common cause.

Significance. If the characterizations hold, the paper contributes a clean theoretical framework linking selective disclosure to causal identification, with implications for Bayesian persuasion, information economics, and debates over causality in policy or science. The self-contained model (no free parameters or ad-hoc axioms) and the explicit asymmetry result are strengths that could generate falsifiable predictions for empirical work on disclosure strategies. The extension to pre-existing receiver models adds robustness.

major comments (3)
  1. [§2] Model primitives and persuasion definition: successful persuasion is defined directly by the requirement that the joint distribution 'conclusively identifies the causal link' inside the sender-proposed graph. This strict identification criterion is imposed rather than derived from receiver utilities, priors, or Bayesian updating, and it is load-bearing for the asymmetry: a weaker threshold rule (e.g., accept when the posterior probability of an effect exceeds a cutoff) could change the minimal disclosure cardinalities.
  2. [Characterization theorems] Characterization of minimal disclosures (likely Theorem 1 or Proposition 2): The claim that 'often only one or two well-chosen variables' suffice to establish a true causal link requires explicit statement of the identification strategy (e.g., which variables enable front-door or back-door identification) and the class of graphs considered. Without the full derivation visible, it is unclear whether the result holds generally or only under unstated restrictions on the true DGP.
  3. [§4] Debunking result: the statement that 'debunking it is similar to persuading a receiver without one' needs verification that the receiver's pre-existing subjective model is correctly folded into the identification check; if the prior model changes how the disclosed joint distribution is interpreted, the claimed similarity may fail.
minor comments (2)
  1. [Abstract] The abstract's phrasing 'every common cause must be disclosed' should be qualified as 'every common cause of the treatment and outcome' for precision.
  2. [§2] Notation for the subjective causal model (e.g., how the proposed graph is formally represented) would benefit from a simple running example with an explicit DAG in the main text.

Circularity Check

0 steps flagged

Theoretical model self-contained; asymmetry follows directly from persuasion definition

full rationale

The paper defines a model of causal persuasion in which the receiver is persuaded only if the disclosed joint distribution conclusively identifies the causal link of interest (or its absence) within the proposed graph. The claimed asymmetry—that establishing a non-zero causal link often requires disclosing only one or two variables while dispelling a perceived link requires disclosing every common cause—follows immediately from this definition combined with standard causal identification results (e.g., back-door criterion). No step reduces a derived prediction to a fitted parameter, self-citation chain, or ansatz smuggled from prior work by the same authors. The construction is self-contained as pure theory; the persuasion success condition is stipulated rather than derived from receiver utilities or belief updating, but this is an explicit modeling choice, not a circular reduction.
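
The back-door criterion invoked here is mechanical to state. A minimal checker, reusing `d_separated` from the sketch earlier on this page (our naming and framing, not the paper's): a set z satisfies the criterion for (x, y) when no member of z descends from x and z blocks every path into x.

```python
def descendants(dag, x):
    """Nodes reachable from x along child edges."""
    children = {v: {w for w in dag if v in dag[w]} for v in dag}
    seen, stack = {x}, [x]
    while stack:
        for w in children[stack.pop()] - seen:
            seen.add(w); stack.append(w)
    return seen - {x}

def backdoor(dag, x, y, z):
    """Pearl's back-door criterion: no member of z descends from x, and z
    d-separates x from y once x's outgoing edges are removed."""
    if z & descendants(dag, x):
        return False
    trimmed = {v: (dag[v] - {x} if v != x else set(dag[v])) for v in dag}
    return d_separated(trimmed, x, y, z)

g = {"c": set(), "x": {"c"}, "y": {"c", "x"}}   # c -> x, c -> y, x -> y
print(backdoor(g, "x", "y", {"c"}))  # True: conditioning on c closes the back door
print(backdoor(g, "x", "y", set()))  # False: the path x <- c -> y stays open
```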

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on a behavioral assumption about when a receiver accepts a causal model and on the formal definition of conclusive identification from disclosed data.

axioms (1)
  • domain assumption A receiver is persuaded by the proposed causal model only if the disclosed data conclusively identifies the causal link of interest.
    This defines the success condition for persuasion and is stated directly in the abstract.
invented entities (1)
  • subjective causal model proposed by the sender (no independent evidence)
    purpose: Binds the disclosed variables into a causal story that the receiver evaluates against the data.
    Introduced as part of the persuasion mechanism; no independent empirical test is provided in the abstract.

pith-pipeline@v0.9.0 · 5423 in / 1197 out tokens · 123392 ms · 2026-05-09T22:38:44.374526+00:00 · methodology


Reference graph

Works this paper leans on

44 extracted references · 1 canonical work page

[1] Aina, C. (2024). Tailored Stories. Working paper.
[2] Aina, C. and Schneider, F. H. (2024). Weighting Competing Models. Working paper.
[3] Al-Najjar, N., Pomatto, L., and Sandroni, A. (2014). Claim Validation. American Economic Review, 104(11):3725–36.
[4] Ambuehl, S., Bhui, R., and Thysen, H. C. (2026). Mental models of causal structure in economics and psychology.
[5] Bergemann, D. and Morris, S. (2019). Information Design: A Unified Perspective. Journal of Economic Literature, 57(1):44–95.
[6] Card, D. (1999). The Causal Effect of Education on Earnings. Handbook of Labor Economics, 3:1801–1863.
[7] Damsbo-Svendsen, S. and Hansen, K. M. (2023). When the election rains out and how bad weather excludes marginal voters from turning out. Electoral Studies, 81:102573.
[8] Di Tillio, A., Ottaviani, M., and Sørensen, P. N. (2021). Strategic Sample Selection. Econometrica, 89(2):911–953.
[9] Dranove, D. and Jin, G. Z. (2010). Quality Disclosure and Certification: Theory and Practice. Journal of Economic Literature, 48(4):935–963.
[10] Eliaz, K., Galperti, S., and Spiegler, R. (2025). False narratives and political mobilization. Journal of the European Economic Association, 23(3):983–1027.
[11] Eliaz, K. and Rubinstein, A. (2025). Wasonian Persuasion. Working paper.
[12] Eliaz, K. and Spiegler, R. (2020). A Model of Competing Narratives. American Economic Review, 110(12):3786–3816.
[13] Eliaz, K., Spiegler, R., and Weiss, Y. (2021). Cheating with Models. American Economic Review: Insights, 3(4):417–434.
[14] Gottesman, A. (2025). A Limitation of Evidence-Backed Communication. Working paper.
[15] Gratton, G., Lee, B. E., and Yousaf, H. (2025). Bad Democracy Traps. The Economic Journal, page ueaf111.
[16] Grossman, S. J. and Hart, O. D. (1980). Disclosure laws and takeover bids. The Journal of Finance, 35(2):323–334.
[17] Guo, R., Cheng, L., Li, J., Hahn, P. R., and Liu, H. (2021). A Survey of Learning Causality with Data: Problems and Methods. ACM Computing Surveys, 53(4):1–37.
[18] Hume, D. (1748). An Enquiry Concerning Human Understanding. London: Millar.
[19] Ispano, A. (2025). The perils of a coherent narrative. Economic Theory.
[20] Johnson, D. S. (1974). Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9(3):256–278.
[21] Kamenica, E. (2019). Bayesian Persuasion and Information Design. Annual Review of Economics, 11(1):249–272.
[22] Kamenica, E. and Gentzkow, M. (2011). Bayesian Persuasion. American Economic Review, 101(6):2590–2615.
[23] Karp, R. M. (2009). Reducibility among combinatorial problems. In 50 Years of Integer Programming 1958–2008: From the Early Years to the State-of-the-Art, pages 219–241. Springer Berlin Heidelberg.
[24] Kaur, S., Mullainathan, S., Oh, S., and Schilbach, F. (2025). Do Financial Concerns Make Workers Less Productive? The Quarterly Journal of Economics, 140(1):635–689.
[25] Little, A. T. (2023). Bayesian Explanations for Persuasion. Journal of Theoretical Politics, forthcoming.
[26] Lovász, L. (1975). On the ratio of optimal integral and fractional covers. Discrete Mathematics, 13(4):383–390.
[27] Marie, O. and Pinotti, P. (2024). Immigration and Crime: An International Perspective. Journal of Economic Perspectives, 38(1).
[28] Meek, C. (1995). Causal Inference and Causal Explanation with Background Knowledge. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI-95), pages 403–410.
[29] Mehndiratta, M. M., Mehndiratta, P., and Pande, R. (2014). Poliomyelitis: Historical Facts, Epidemiology, and Current Challenges in Eradication. Neurohospitalist, 4(4):223–229.
[30] Milgrom, P. R. (1981). Good news and bad news: Representation theorems and applications. The Bell Journal of Economics, pages 380–391.
[31] Nemhauser, G. L. and Wolsey, L. A. (1988). Integer and Combinatorial Optimization. Wiley, New York.
[32] Nogueira, A. R., Pugnana, A., Ruggieri, S., Pedreschi, D., and Gama, J. (2022). Methods and tools for causal discovery and causal inference. WIREs Data Mining and Knowledge Discovery, 12(2):e1449.
[33] Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann.
[34] Pearl, J. (2009). Causality. Cambridge University Press.
[35] Peppas, P. and Williams, M.-A. (1995). Constructive Modelings for Theory Change. Notre Dame Journal of Formal Logic, 36(1).
[36] Peters, J., Janzing, D., and Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. MIT Press.
[37] Robertson, R. E., Green, J., Ruck, D. J., Ognyanova, K., Wilson, C., and Lazer, D. (2023). Users choose to engage with more partisan news than they are exposed to on Google Search. Nature, 618(7964):342–348.
[38] Schwartzstein, J. and Sunderam, A. (2021). Using Models to Persuade. American Economic Review, 111(1):276–323.
[39] Spiegler, R. (2020). Can Agents with Causal Misperceptions be Systematically Fooled? Journal of the European Economic Association, 18(2):583–617.
[40] Tamborini, C. R., Kim, C., and Sakamoto, A. (2015). Education and Lifetime Earnings in the United States. Demography, 52(4):1383–1407.
[41] Verma, T. and Pearl, J. (1990). Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, pages 255–270.
[42] Verma, T. and Pearl, J. (2022). Equivalence and Synthesis of Causal Models. In Geffner, H., Dechter, R., and Halpern, J. Y., editors, Probabilistic and Causal Inference, pages 221–236. ACM, New York, NY, USA, 1st edition.
[43] Williams, G. (2013). Paralysed with Fear: The Story of Polio. Palgrave Macmillan UK, London.
[44] Zanga, A., Ozkirimli, E., and Stella, F. (2022). A Survey on Causal Discovery: Theory and Practice. International Journal of Approximate Reasoning, 151:101–129. arXiv:2305.10032 [cs].