pith. machine review for the scientific record.

Explain the Lean def `gapToCapacity` in module `IndisputableMonolith.Information.LDPCCodeRateFromPhi`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.


1. What the declaration says in plain English

The gapToCapacity declaration defines a mathematical curve $g(N) = \frac{1}{\phi N}$, where $\phi$ is the golden ratio and $N$ is the block length. It calculates the theoretical performance penalty (the "gap") for transmitting data in finite blocks rather than infinite ones.

2. Why it matters in Recognition Science

In RS, this equation MODELS the finite-size scaling of information capacity. RS PREDICTS that for optimized Low-Density Parity-Check (LDPC) codes, the empirical gap to Shannon capacity follows this exact $\phi$-suppressed law. The module docstring notes an empirical HYPOTHESIS: at $N \approx 10{,}000$, the predicted gap of $1/(\phi \cdot 10^4)$ aligns with the industry-observed ~0.5 dB penalty. A named falsifier is provided: the framework would be falsified if long-block-length LDPC codes stably exhibited a gap outside the $1/(\phi N)$ prediction.
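As a back-of-envelope sanity check on the arithmetic behind that hypothesis (using $\phi \approx 1.618$; this computation is illustrative, not part of the Lean source):

$$g(10^4) \;=\; \frac{1}{\phi \cdot 10^4} \;\approx\; \frac{1}{1.618 \times 10^4} \;\approx\; 6.18 \times 10^{-5}$$

How this dimensionless number maps onto the ~0.5 dB figure (a ratio of signal-to-noise thresholds on a log scale) belongs to the external empirical hypothesis; the module itself states nothing about decibels.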

3. How to read the formal statement

def gapToCapacity (N : ℝ) : ℝ := 1 / (phi * N)
  • def gapToCapacity: Introduces a new definition named gapToCapacity.
  • (N : ℝ): Takes one parameter, a real number $N$ (the block length).
  • : ℝ: The result is a real number.
  • := 1 / (phi * N): The body is the reciprocal of the product of $\phi$ and $N$.
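To make the reading concrete, here is a self-contained Lean sketch. It is NOT the canon module: `phi` is defined locally as the golden ratio (the actual module may define it elsewhere), and the `example` is the kind of elementary fact one can prove about the curve, not a named theorem from the source.

```lean
import Mathlib

-- Local stand-in for the golden ratio; an assumption of this sketch.
noncomputable def phi : ℝ := (1 + Real.sqrt 5) / 2

lemma phi_pos : 0 < phi := by
  unfold phi
  positivity

-- The definition under discussion, reproduced from the supplied source.
noncomputable def gapToCapacity (N : ℝ) : ℝ := 1 / (phi * N)

-- Elementary consequence: the gap is strictly positive
-- whenever the block length is positive.
example (N : ℝ) (hN : 0 < N) : 0 < gapToCapacity N := by
  unfold gapToCapacity
  have hphi : 0 < phi := phi_pos
  positivity
```

Note that the definition places no constraint on $N$: at $N = 0$ the body is division by zero, which Lean's real-number division evaluates to 0, so meaningful statements about the curve carry a hypothesis like `0 < N`.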

4. Visible dependencies and certificates

The module PROVES fundamental algebraic properties of the defined curve and packages them into a certificate structure instantiated as cert.
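In Lean, such a "certificate" is typically a `structure` whose fields are proofs. The following is a hypothetical sketch of that pattern only: the structure and field names below are invented for illustration and are NOT claimed to match the canon's cert.

```lean
import Mathlib

-- Local stand-ins; assumptions of this sketch, not the canon's definitions.
noncomputable def phi : ℝ := (1 + Real.sqrt 5) / 2

noncomputable def gapToCapacity (N : ℝ) : ℝ := 1 / (phi * N)

/-- Illustrative shape of a certificate: a bundle of proofs of
elementary algebraic properties of the curve. Field names are invented. -/
structure GapCurveCert where
  /-- The gap is positive for positive block length. -/
  pos : ∀ N : ℝ, 0 < N → 0 < gapToCapacity N
  /-- The gap strictly shrinks as the block length grows. -/
  strictAnti : ∀ M N : ℝ, 0 < M → M < N →
    gapToCapacity N < gapToCapacity M
```

An instance of such a structure is then a single term carrying all the proofs at once, which is what makes it convenient to cite as one named object.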

5. What this declaration does not prove

This module does NOT PROVE anything about information theory, Tanner graphs, or belief-propagation decoders. It merely defines a geometric curve and proves its elementary algebraic behavior in Lean. The connection between this curve and the actual performance of telecommunications codes is an external empirical HYPOTHESIS, not a theorem proved in the canon.

cited recognition theorems

outside recognition

Aspects Recognition does not yet address:

  • Information-theoretic properties of actual LDPC codes, Tanner graphs, and belief-propagation decoders are not formalized here; only the geometric algebraic properties of the predicted scaling curve are proved.

recognition modules consulted

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.