1. What the declaration says in plain English
The gapToCapacity declaration defines a mathematical curve $g(N) = \frac{1}{\phi N}$, where $\phi$ is the golden ratio and $N$ is the block length. It calculates the theoretical performance penalty (the "gap") for transmitting data in finite blocks rather than infinite ones.
2. Why it matters in Recognition Science
In RS, this equation MODELS the finite-size scaling of information capacity. RS PREDICTS that for optimized Low-Density Parity-Check (LDPC) codes, the empirical gap to Shannon capacity follows this $\phi$-suppressed law. The module docstring records an empirical HYPOTHESIS: at $N \approx 10{,}000$, the predicted gap of $1/(\phi \cdot 10^4)$ aligns with the industry-observed penalty of roughly 0.5 dB. A named falsifier is provided: the framework would be falsified if long-block-length LDPC codes consistently exhibited a gap outside the $1/(\phi N)$ prediction.
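As a quick numeric sanity check of that hypothesis, one can evaluate the curve at $N = 10^4$. The snippet below is a hypothetical Float shadow of the definition (names phiF and gapF are illustrative; the canonical ℝ-valued version is noncomputable and cannot be #eval'd). Note that translating this raw gap into a dB figure is part of the external empirical claim and is not shown here:

```lean
-- Hypothetical Float mirror of gapToCapacity, for numeric inspection only.
def phiF : Float := (1 + Float.sqrt 5) / 2   -- golden ratio, ≈ 1.6180339887

def gapF (N : Float) : Float := 1 / (phiF * N)

#eval gapF 10000   -- ≈ 6.18e-5, i.e. 1 / (φ · 10⁴)
```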
3. How to read the formal statement
def gapToCapacity (N : ℝ) : ℝ := 1 / (phi * N)
- def gapToCapacity: Defines a new function.
- (N : ℝ): Takes a real number $N$ as its parameter.
- : ℝ: Returns a real number.
- := 1 / (phi * N): The body computes the reciprocal of the product of $\phi$ and $N$.
4. Visible dependencies and certificates
The module PROVES fundamental algebraic properties of this defined curve and packages them into a certificate structure instantiated as cert:
- gap_pos: The gap is strictly positive for $N > 0$.
- gap_decreasing: The gap strictly decreases as $N$ increases.
- gap_doubling_halves: Doubling the block length exactly halves the gap.
- gap_times_N_invariant: The product $g(N) \cdot N$ is always $1/\phi$.
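These properties are elementary field identities. A minimal, self-contained sketch of how such statements might be proved in Lean 4 with Mathlib follows; the definition of phi here and the tactic choices are illustrative, not the canon's actual code:

```lean
import Mathlib

-- Illustrative stand-in for the canon's `phi`; the real module may define it differently.
noncomputable def phi : ℝ := (1 + Real.sqrt 5) / 2

noncomputable def gapToCapacity (N : ℝ) : ℝ := 1 / (phi * N)

-- gap_pos: the gap is strictly positive for N > 0.
theorem gap_pos {N : ℝ} (hN : 0 < N) : 0 < gapToCapacity N := by
  have hphi : 0 < phi := by unfold phi; positivity
  exact div_pos one_pos (mul_pos hphi hN)

-- gap_doubling_halves: doubling the block length exactly halves the gap.
theorem gap_doubling_halves (N : ℝ) :
    gapToCapacity (2 * N) = gapToCapacity N / 2 := by
  unfold gapToCapacity
  ring

-- gap_times_N_invariant: g(N) * N = 1/φ whenever N ≠ 0.
theorem gap_times_N_invariant {N : ℝ} (hN : N ≠ 0) :
    gapToCapacity N * N = 1 / phi := by
  unfold gapToCapacity
  rw [div_mul_eq_mul_div, one_mul, mul_comm phi N, ← div_div, div_self hN]

-- (gap_decreasing, a strict antitonicity statement, is omitted for brevity.)
```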
5. What this declaration does not prove
This module does NOT PROVE anything about information theory, Tanner graphs, or belief-propagation decoders. It merely defines a geometric curve and proves its elementary algebraic behavior in Lean. The connection between this curve and the actual performance of telecommunications codes is an external empirical HYPOTHESIS, not a theorem proved in the canon.