pith. machine review for the scientific record.

languageModelAlignmentFraction


Defines the language model alignment fraction as the rational number 7/8. Researchers auditing the three-substrate validation certificate cite this value when checking J-cost performance against cross-entropy baselines. The definition is a direct constant assignment that requires no lemmas or computation steps.

claim

The language model alignment fraction is the rational number $7/8$.

background

Recognition Science tests J-cost uniqueness across three substrates in the validation certificate: language models, photonic qubits, and magnetized plasma. All three share the fixed point at argument 1 where J-cost vanishes, descent along the gradient, and symmetry under inversion. The language-model substrate contributes the reported 7/8 alignment fraction for MLP-layer comparisons.
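The three shared properties can be sketched in Lean. The concrete form of the J-cost is an assumption here (a form that appears in Recognition Science writing, J(x) = (x + x⁻¹)/2 − 1, not stated on this page); the two examples check the vanishing fixed point at 1 and the inversion symmetry for that assumed form.

```lean
import Mathlib.Tactic

-- Assumed J-cost form (not taken from this page): J(x) = (x + x⁻¹)/2 - 1.
def J (x : ℚ) : ℚ := (x + x⁻¹) / 2 - 1

-- Fixed point at argument 1 where the J-cost vanishes.
example : J 1 = 0 := by norm_num [J]

-- Symmetry under inversion: J(x) = J(x⁻¹).
example (x : ℚ) : J x = J x⁻¹ := by simp [J, inv_inv, add_comm]
```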

proof idea

Direct definition that assigns the constant 7/8 with no lemmas or tactics applied.

why it matters in Recognition Science

Supplies the numerical constant required by the ThreeSubstrateCert structure and its dependent theorems lm_above_threshold, lm_fraction_eq, and seven_eighths_from_F2_cube. The value matches the eight-tick octave relation (2^3 - 1)/2^3 from the forcing chain. The surrounding certificate remains at hypothesis grade.
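The octave relation cited above is elementary arithmetic over ℚ; a one-line check that (2³ − 1)/2³ reduces to 7/8:

```lean
import Mathlib.Tactic

-- Eight-tick octave relation: (2^3 - 1)/2^3 = 7/8 over the rationals.
example : ((2 ^ 3 - 1 : ℚ) / 2 ^ 3) = 7 / 8 := by norm_num
```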

scope and limits

Lean usage

theorem lm_fraction_eq : languageModelAlignmentFraction = 7/8 := rfl
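A minimal sketch of how `lm_fraction_eq` is used downstream, with the page's definition reproduced for self-containment. The threshold 3/4 is hypothetical (the certificate's actual threshold is not stated on this page); `lm_above_threshold` presumably has a statement of this shape.

```lean
import Mathlib.Tactic

-- Reproduced from this page:
def languageModelAlignmentFraction : ℚ := 7 / 8

theorem lm_fraction_eq : languageModelAlignmentFraction = 7 / 8 := rfl

-- Hypothetical threshold 3/4, for illustration only.
example : languageModelAlignmentFraction > 3 / 4 := by
  rw [lm_fraction_eq]; norm_num
```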

formal statement (Lean)

def languageModelAlignmentFraction : ℚ := 7/8

used by (4)

From the project-wide theorem graph: these declarations reference this definition in their bodies.