
Explain the Lean theorem `dot_sq_le_sqNorm_mul` in module `IndisputableMonolith.Cost.Ndim.Bridge`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

1. Plain English Statement

The theorem dot_sq_le_sqNorm_mul is the standard Cauchy–Schwarz inequality, stated for $n$-dimensional vectors. It says that for any two vectors $\alpha$ and $\varepsilon$, the square of their dot product is at most the product of their squared norms (squared lengths).
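In conventional notation, writing $|\cdot|$ for the Euclidean norm, the claim is the familiar bound

$$(\alpha \cdot \varepsilon)^2 \;\le\; |\alpha|^2 \, |\varepsilon|^2.$$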

2. Role in Recognition Science

In the Recognition Science framework, physics and structure are forced by cost. When expanding cost functions into higher dimensions, the framework analyzes both an additiveQuadratic approximation ($\frac{1}{2}\sum_i \varepsilon_i^2$) and a multiplicativeQuadratic approximation ($\frac{1}{2}(\alpha \cdot \varepsilon)^2$).

This purely mathematical inequality is the load-bearing step that allows RS to compare these two costs. Specifically, it acts as a lemma for multiplicative_le_additive_of_sqNorm_le_one, which establishes that if the weighting vector $\alpha$ is normalized ($|\alpha|^2 \le 1$), the multiplicative cost never exceeds the additive cost. This guarantees that the residual difference between them—the compensatoryQuadratic term—is always non-negative, as proved in compensatory_nonneg_of_sqNorm_le_one.
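Spelled out, the comparison is a two-step chain: Cauchy–Schwarz supplies the first inequality, and the normalization $|\alpha|^2 \le 1$ supplies the second. This is a sketch of the mathematics the downstream lemmas formalize, not a quotation of the Lean proofs:

$$\tfrac{1}{2}(\alpha \cdot \varepsilon)^2 \;\le\; \tfrac{1}{2}\,|\alpha|^2 \sum_i \varepsilon_i^2 \;\le\; \tfrac{1}{2}\sum_i \varepsilon_i^2.$$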

3. Reading the Formal Statement

theorem dot_sq_le_sqNorm_mul {n : ℕ} (α ε : Vec n) :
    (dot α ε) ^ 2 ≤ (dot α α) * (∑ i : Fin n, (ε i) ^ 2)
  • {n : ℕ}: The dimension of the space is a natural number $n$. The curly braces mark it as an implicit argument: Lean infers it from the vectors, so callers never supply it explicitly.
  • (α ε : Vec n): $\alpha$ and $\varepsilon$ are $n$-dimensional vectors.
  • (dot α ε) ^ 2: The left-hand side is the square of the dot product $(\alpha \cdot \varepsilon)^2$.
  • ≤: Less than or equal to.
  • (dot α α): The dot product of $\alpha$ with itself, which is its squared norm $|\alpha|^2$.
  • (∑ i : Fin n, (ε i) ^ 2): The sum of the squared components of $\varepsilon$, which is its squared norm $|\varepsilon|^2$.
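To see how the theorem is applied, here is a minimal usage sketch (not taken from the supplied source; it assumes Vec, dot, and dot_sq_le_sqNorm_mul are in scope as in the module):

-- Hypothetical example: apply the theorem at n = 2; Lean infers {n}.
example (α ε : Vec 2) :
    (dot α ε) ^ 2 ≤ (dot α α) * (∑ i : Fin 2, (ε i) ^ 2) :=
  dot_sq_le_sqNorm_mul α ε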

4. Visible Dependencies

In the supplied source, the theorem has a one-line proof using Lean's simpa tactic. It unpacks the local definition of dot and delegates directly to Mathlib's generalized Cauchy-Schwarz theorem for sums of squares: Finset.sum_mul_sq_le_sq_mul_sq. It also implicitly relies on the definitions of Vec and dot from IndisputableMonolith.Cost.Ndim.Core, which is imported at the top of the module.
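For orientation, that Mathlib lemma has roughly the following shape. This paraphrase is from general knowledge of Mathlib, not the supplied source, and binder names or argument order may differ in current versions:

-- Mathlib's discrete Cauchy–Schwarz for finite sums (approximate signature):
theorem Finset.sum_mul_sq_le_sq_mul_sq (s : Finset ι) (f g : ι → ℝ) :
    (∑ i in s, f i * g i) ^ 2 ≤ (∑ i in s, f i ^ 2) * ∑ i in s, g i ^ 2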

5. What This Declaration Does Not Prove

As a theorem, this declaration is strictly a mathematical inequality. It does not:

  • Prove anything about the fundamental RS cost function $J(x)$ or the golden ratio $\phi$.
  • State that the compensatory cost is non-negative on its own; that requires the specific constraint $|\alpha|^2 \le 1$, which is applied in downstream theorems, not here (the counterexample after this list shows why the constraint is needed).
  • Define what $\alpha$ and $\varepsilon$ represent physically. Within the Ndim.Bridge module, they are simply abstract vectors.
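A one-dimensional counterexample (ours, not from the source) shows why the normalization matters: take $\alpha = (2)$ and $\varepsilon = (1)$, so $|\alpha|^2 = 4 > 1$. Then

$$\tfrac{1}{2}(\alpha \cdot \varepsilon)^2 = \tfrac{1}{2}\cdot 2^2 = 2 \;>\; \tfrac{1}{2} = \tfrac{1}{2}\sum_i \varepsilon_i^2,$$

so the multiplicative cost exceeds the additive cost and the compensatory term would be negative. Cauchy–Schwarz itself still holds; only the downstream comparison fails.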


The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.