
Explain the Lean theorem `multiplicative_le_additive_of_sqNorm_le_one` in module `IndisputableMonolith.Cost.Ndim.Bridge`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.


1. Plain English

The theorem multiplicative_le_additive_of_sqNorm_le_one states that for any vector of weights $\alpha$ whose squared length (dot product with itself) is $1$ or less, and for any vector of values $\epsilon$, the "multiplicative" quadratic expression $\frac{1}{2}(\sum \alpha_i \epsilon_i)^2$ is always less than or equal to the "additive" quadratic expression $\frac{1}{2}\sum \epsilon_i^2$.
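
As an illustrative check (numbers chosen here, not taken from the source): with $n = 2$, $\alpha = (0.6, 0.8)$ so that $\alpha \cdot \alpha = 0.36 + 0.64 = 1$, and $\epsilon = (1, 2)$, the multiplicative side is $\frac{1}{2}(0.6 \cdot 1 + 0.8 \cdot 2)^2 = \frac{1}{2}(2.2)^2 = 2.42$, while the additive side is $\frac{1}{2}(1^2 + 2^2) = 2.5$, and indeed $2.42 \le 2.5$.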

2. Why it matters in Recognition Science

Recognition Science derives structural constraints from a central cost function. When analyzing multi-dimensional states, we must bound how costs aggregate. This theorem guarantees that when taking a weighted combination of states (where the weight vector is normalized), the squared combined cost (multiplicative) cannot exceed the sum of the individual squared costs (additive). This establishes that the defined compensatoryQuadratic term is always non-negative, which is formalized immediately after in compensatory_nonneg_of_sqNorm_le_one.
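
If compensatoryQuadratic is the gap between the two models (a reading inferred from the surrounding prose, not quoted from the module), the theorem says precisely that this gap never goes negative:

$$\mathrm{compensatoryQuadratic}(\alpha, \epsilon) \;=\; \frac{1}{2}\sum_i \epsilon_i^2 \;-\; \frac{1}{2}\Big(\sum_i \alpha_i \epsilon_i\Big)^2 \;\ge\; 0 \quad \text{whenever } \alpha \cdot \alpha \le 1.$$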

3. How to read the formal statement

theorem multiplicative_le_additive_of_sqNorm_le_one {n : ℕ}
    (α ε : Vec n) (hα : dot α α ≤ 1) :
    multiplicativeQuadratic α ε ≤ additiveQuadratic ε
  • {n : ℕ}: The dimension is a natural number (implicit parameter).
  • (α ε : Vec n): $\alpha$ and $\epsilon$ are vectors of dimension $n$.
  • (hα : dot α α ≤ 1): The hypothesis that the dot product of $\alpha$ with itself (its squared norm) is $\le 1$.
  • multiplicativeQuadratic α ε ≤ additiveQuadratic ε: The conclusion bounding the multiplicative model by the additive model (a reconstructed sketch of these definitions follows below).
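
The following is a minimal Lean 4 sketch of definitions under which this statement would typecheck, reconstructed from the plain-English reading above; the names match the source, but the bodies and the Mathlib import are assumptions, not the verbatim module:

import Mathlib

-- Vectors encoded as functions from a finite index type to the reals (assumed encoding).
abbrev Vec (n : ℕ) := Fin n → ℝ

-- Euclidean dot product over the n coordinates.
def dot {n : ℕ} (x y : Vec n) : ℝ := ∑ i, x i * y i

-- Half the square of the weighted combination: (1/2)(Σ αᵢεᵢ)².
def multiplicativeQuadratic {n : ℕ} (α ε : Vec n) : ℝ :=
  (1 / 2) * (dot α ε) ^ 2

-- Half the sum of squares: (1/2)Σ εᵢ².
def additiveQuadratic {n : ℕ} (ε : Vec n) : ℝ :=
  (1 / 2) * ∑ i, (ε i) ^ 2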

4. Visible dependencies in the source

The theorem directly depends on definitions and bounds in the same module:

  • additiveQuadratic: Defines the additive quadratic approximation.
  • multiplicativeQuadratic: Defines the multiplicative quadratic approximation.
  • dot_sq_le_sqNorm_mul: The squared Cauchy-Schwarz bound supplying the key proof step $(\alpha \cdot \epsilon)^2 \le (\alpha \cdot \alpha) \sum \epsilon_i^2$; the chain below spells out how it combines with the hypothesis.
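
Under that reading, the proof is a short chain (a reconstruction, not the verbatim Lean proof): Cauchy-Schwarz gives the first inequality, and the hypothesis $\alpha \cdot \alpha \le 1$ together with $\sum_i \epsilon_i^2 \ge 0$ gives the second:

$$\frac{1}{2}\Big(\sum_i \alpha_i \epsilon_i\Big)^2 \;\le\; \frac{1}{2}\,(\alpha \cdot \alpha)\sum_i \epsilon_i^2 \;\le\; \frac{1}{2}\sum_i \epsilon_i^2.$$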

5. What this declaration does not prove

  • It does not prove that the exact universal cost function $J(x)$ is purely quadratic. It only relates the two $n$-dimensional quadratic approximation models to each other.
  • It does not prove that the weighting vector $\alpha$ must have norm $\le 1$ in physical systems; that bound is an assumed hypothesis (hα) of the theorem.
  • It does not establish the reciprocal symmetry $J(r) = J(r^{-1})$ (T5), which is the foundation of the RS cost function; it only provides a structural bridge for working with multi-dimensional extensions of it.

cited recognition theorems

  • multiplicative_le_additive_of_sqNorm_le_one (the subject declaration)
  • dot_sq_le_sqNorm_mul (the Cauchy-Schwarz bound used in the proof)
  • compensatory_nonneg_of_sqNorm_le_one (the corollary formalized immediately after)

outside recognition

Aspects Recognition does not yet address:

  • Proof that the weighting vector $\alpha$ must physically have norm $\le 1$.
  • Proof that the exact recognition cost equals, rather than is merely bounded by, these quadratic approximations.
  • The underlying reciprocal-symmetric uniqueness theorem (T5) for the J-cost, which is not present in this module slice.

recognition modules consulted

  • IndisputableMonolith.Cost.Ndim.Bridge

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.