pith. machine review for the scientific record.
theorem frequentist_vindicated · proved
module: IndisputableMonolith.Philosophy.ProbabilityMeaningStructure
domain: Philosophy
line: 149 · github
papers citing: none yet


formal source

146      Formalization: If the projection function is deterministic (same
147      input → same output), the frequency of outcome v in a sequence
148      depends only on which inputs appear, not on any "randomness." -/
149  theorem frequentist_vindicated (obs : Observer) (x : ℝ) :
150      -- The projection is deterministic: same input → same output
151      ∀ y : ℝ, x = y → project obs x = project obs y := by
152    intro y hxy
153    rw [hxy]
154
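The definitions of `Observer` and `project` sit before line 146 and are not part of this excerpt. As a self-contained illustration of why the proof is only two tactic steps, here is a sketch with a hypothetical stand-in projection (`Observer'` and `project'` are my assumptions, not the module's actual definitions): determinism of any function is bare congruence, so `rw` on the input equality closes the goal.

```lean
import Mathlib

/- Hypothetical stand-in for the module's `Observer`. -/
structure Observer' where
  resolution : ℝ

/- A toy deterministic projection: collapse the state to one bit,
   "does it reach the observer's resolution threshold?"
   (noncomputable because ≤ on ℝ is decided classically). -/
noncomputable def project' (obs : Observer') (x : ℝ) : ℝ :=
  if obs.resolution ≤ x then 1 else 0

/- Same statement shape and same proof as `frequentist_vindicated`:
   substitute the input equality, and the goal closes by reflexivity. -/
example (obs : Observer') (x : ℝ) :
    ∀ y : ℝ, x = y → project' obs x = project' obs y := by
  intro y hxy
  rw [hxy]
```

Any function in place of `project'` would admit the same two-line proof; the theorem's content is exactly that the projection is a function of the state.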
155  /-- **Theorem (Bayesian Vindicated)**:
156      The Bayesian insight is correct: probability is about information states.
157
158      Formalization: Every observer conflates some pair of DISTINCT states
159      x ≠ y, projecting both to the SAME outcome. Which states get conflated
160      is determined by the observer's resolution, so its "probability
161      assignments" reflect its information state (resolution), not just reality.
162
163      This is the formal content of Bayesian subjectivism: same underlying state,
164      different probabilities, because different observers have different resolutions. -/
165  theorem bayesian_vindicated :
166      -- Every observer conflates distinct states (information is lost)
167      -- This means probability depends on the observer's information state
168      ∀ (obs : Observer),
169      ∃ x y : ℝ, x ≠ y ∧ project obs x = project obs y := by
170    intro obs
171    exact projection_lossy obs
172
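The lemma `projection_lossy`, which carries the whole proof above, is also outside this excerpt. A minimal self-contained sketch of what such a lemma asserts, for a hypothetical one-bit projection (`coarse` is my stand-in, not the module's `project`): two distinct real states land on the same outcome, so the outcome cannot determine the state.

```lean
import Mathlib

/- Hypothetical one-bit projection standing in for `project`
   (noncomputable because ≤ on ℝ is decided classically). -/
noncomputable def coarse (x : ℝ) : ℝ := if 0 ≤ x then 1 else 0

/- Lossiness, concretely: the distinct states 0 and 1 are both
   projected to the outcome 1, so information about which state
   actually occurred is destroyed. -/
example : ∃ x y : ℝ, x ≠ y ∧ coarse x = coarse y := by
  refine ⟨0, 1, by norm_num, ?_⟩
  norm_num [coarse]
```

The witnesses 0 and 1 play the role that `projection_lossy obs` plays in the listing: exhibiting one conflated pair is all `bayesian_vindicated` requires.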
173  /-- **Theorem (Resolution Increases Precision)**:
174      Both a low-resolution and a high-resolution observer are lossy,
175      but the high-resolution observer has finer fiber structure.
176      This is the Bayesian "learning = refinement" claim.
177
178      Formalization: Both observers conflate some states, but the number of
179      distinct fibers (= distinct probabilities) is different. -/