pith. machine review for the scientific record.

binarySymmetricCapacity


The binary symmetric channel capacity is defined as the real-valued expression 1 + p (log p / log 2) + (1-p) (log(1-p) / log 2). Information theorists applying Recognition Science error bounds cite it when quantifying reliable rates under 8-tick phase redundancy. The definition is a direct algebraic expansion of the standard formula C = 1 - H_2(p).

claim

The capacity $C(p)$ of a binary symmetric channel with crossover probability $p$ equals $1 + p (\log p / \log 2) + (1-p) (\log (1-p) / \log 2)$, which is equivalent to $1 - H_2(p)$ for binary entropy $H_2(p) = -p \log_2 p - (1-p) \log_2 (1-p)$.
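The equivalence is the change-of-base identity $\log_2 x = \log x / \log 2$ applied to each entropy term; a short derivation:

```latex
\begin{align*}
C(p) &= 1 - H_2(p) \\
     &= 1 - \bigl(-p \log_2 p - (1-p)\log_2(1-p)\bigr) \\
     &= 1 + p \log_2 p + (1-p)\log_2(1-p) \\
     &= 1 + p\,\frac{\log p}{\log 2} + (1-p)\,\frac{\log(1-p)}{\log 2}.
\end{align*}
```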

background

The Information.ErrorCorrectionBounds module derives error correction bounds from the 8-tick structure, where 8-tick phases supply natural redundancy by encoding bits across phases with recoverable correlations. Shannon's channel capacity theorem supplies the maximum reliable rate C = 1 - H(p) for binary symmetric noise probability p. Upstream results include CostAlgebra.H, the shifted cost H(x) = J(x) + 1 = ½(x + x^{-1}) under which the Recognition Composition Law reduces to the d'Alembert equation, together with structures from PhiForcingDerived and SpectralEmergence that fix the J-cost and force SU(3) × SU(2) × U(1) content with three generations.

proof idea

The declaration is a direct definition that expands the binary entropy formula into real arithmetic. It rewrites $\log_2 x$ as $\log x / \log 2$ and distributes the subtraction in $1 - H_2(p)$: since $\log p$ and $\log(1-p)$ are negative for $0 < p < 1$, the two scaled log terms are negative, so adding them to 1 is the same as subtracting $H_2(p)$.

why it matters in Recognition Science

This definition supplies the capacity function used by capacity_at_10_percent, which shows reliable transmission at 53 percent of raw rate for p = 0.1. It fills the Shannon capacity step inside the INFO-005 module on error correction bounds from 8-tick structure. The result connects to the eight-tick octave by quantifying how phase redundancy supports rates below capacity.
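The 53 percent figure for p = 0.1 can be checked numerically. A minimal sketch in Python, assuming only the formula above (the function name mirrors the Lean definition but is otherwise hypothetical):

```python
import math

def binary_symmetric_capacity(p: float) -> float:
    """Capacity C(p) = 1 - H_2(p) of a binary symmetric channel,
    written exactly as in the Lean definition:
    1 + p * (log p / log 2) + (1-p) * (log(1-p) / log 2)."""
    return (1
            + p * (math.log(p) / math.log(2))
            + (1 - p) * (math.log(1 - p) / math.log(2)))

# For p = 0.1: H_2(0.1) ≈ 0.469, so C ≈ 0.531 bits per channel use,
# i.e. reliable transmission at about 53% of the raw rate.
print(round(binary_symmetric_capacity(0.1), 3))  # → 0.531
```

At p = 0.5 the channel output is independent of the input and the expression evaluates to 0, matching the expected zero capacity.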

scope and limits

Lean usage

binarySymmetricCapacity 0.1

formal statement (Lean)

  noncomputable def binarySymmetricCapacity (p : ℝ) : ℝ :=

proof body

Definition body.

    1 + p * (Real.log p / Real.log 2) + (1-p) * (Real.log (1-p) / Real.log 2)

  /-- For p = 0.1 (10% error rate):
      H(0.1) ≈ 0.47 bits
      C ≈ 0.53 bits per channel use

      Can still transmit reliably at 53% of raw capacity! -/

used by (1)

From the project-wide theorem graph. These declarations reference this one in their bodies.

depends on (7)

Lean names referenced from this declaration's body.