1. Plain English Statement
The theorem dot_sq_le_sqNorm_mul is the standard Cauchy-Schwarz inequality expressed for $n$-dimensional vectors. It states that for any two vectors $\alpha$ and $\varepsilon$, the square of their dot product is less than or equal to the squared length (norm) of $\alpha$ multiplied by the squared length of $\varepsilon$.
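Written out in components, the inequality is:

```latex
\left( \sum_{i=1}^{n} \alpha_i \varepsilon_i \right)^{2}
  \;\le\;
  \left( \sum_{i=1}^{n} \alpha_i^{2} \right)
  \left( \sum_{i=1}^{n} \varepsilon_i^{2} \right)
```

As a concrete check, take $\alpha = (1, 2)$ and $\varepsilon = (3, 4)$: the left side is $(1\cdot 3 + 2\cdot 4)^2 = 121$, while the right side is $(1 + 4)(9 + 16) = 125$, and indeed $121 \le 125$.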
2. Role in Recognition Science
In the Recognition Science framework, physics and structure are forced by cost. When expanding cost functions into higher dimensions, the framework analyzes both an additiveQuadratic approximation ($\frac{1}{2}\sum \varepsilon_i^2$) and a multiplicativeQuadratic approximation ($\frac{1}{2}(\alpha \cdot \varepsilon)^2$).
This purely mathematical inequality is the load-bearing step that allows RS to compare these two costs. Specifically, it acts as a lemma for multiplicative_le_additive_of_sqNorm_le_one, which establishes that if the weighting vector $\alpha$ is normalized ($|\alpha|^2 \le 1$), the multiplicative cost never exceeds the additive cost. This guarantees that the residual difference between them—the compensatoryQuadratic term—is always non-negative, as proved in compensatory_nonneg_of_sqNorm_le_one.
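The chain of reasoning sketched above can be written in two steps, the first being exactly this theorem and the second using the normalization hypothesis:

```latex
\tfrac{1}{2}(\alpha \cdot \varepsilon)^{2}
  \;\le\; \tfrac{1}{2}\,|\alpha|^{2} \sum_{i} \varepsilon_i^{2}
  \quad\text{(Cauchy-Schwarz)}
  \;\le\; \tfrac{1}{2} \sum_{i} \varepsilon_i^{2}
  \quad\text{(since } |\alpha|^{2} \le 1\text{)}
```

The leftmost term is the multiplicative cost and the rightmost is the additive cost, so their difference, the compensatoryQuadratic term, is non-negative.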
3. Reading the Formal Statement
```lean
theorem dot_sq_le_sqNorm_mul {n : ℕ} (α ε : Vec n) :
    (dot α ε) ^ 2 ≤ (dot α α) * (∑ i : Fin n, (ε i) ^ 2)
```
- {n : ℕ}: The dimension of the space is a natural number $n$.
- (α ε : Vec n): $\alpha$ and $\varepsilon$ are $n$-dimensional vectors.
- (dot α ε) ^ 2: The left-hand side is the square of the dot product $(\alpha \cdot \varepsilon)^2$.
- ≤: Is less than or equal to.
- (dot α α): The dot product of $\alpha$ with itself, which is its squared norm $|\alpha|^2$.
- (∑ i : Fin n, (ε i) ^ 2): The sum of the squared components of $\varepsilon$, which is its squared norm $|\varepsilon|^2$.
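For orientation, definitions consistent with this statement would look like the sketch below. These are hypothetical reconstructions; the actual definitions live in IndisputableMonolith.Cost.Ndim.Core and may differ in detail:

```lean
-- Hypothetical sketch of the definitions the statement depends on.
-- The real versions are in IndisputableMonolith.Cost.Ndim.Core.
def Vec (n : ℕ) : Type := Fin n → ℝ   -- an n-dimensional real vector

def dot {n : ℕ} (α ε : Vec n) : ℝ :=
  ∑ i : Fin n, α i * ε i              -- componentwise dot product
```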
4. Visible Dependencies
In the supplied source, the theorem has a one-line proof using Lean's simpa tactic. It unpacks the local definition of dot and delegates directly to Mathlib's generalized Cauchy-Schwarz theorem for sums of squares: Finset.sum_mul_sq_le_sq_mul_sq. It also implicitly relies on the definitions of Vec and dot from IndisputableMonolith.Cost.Ndim.Core, which is imported at the top of the module.
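A proof in that style might read as follows. This is a sketch, not the verbatim source: it assumes dot unfolds to the componentwise sum $\sum_i \alpha_i \varepsilon_i$, and the exact simp lemmas and argument order may differ from the actual file:

```lean
-- Sketch of a one-line proof delegating to Mathlib's Cauchy-Schwarz for sums.
theorem dot_sq_le_sqNorm_mul {n : ℕ} (α ε : Vec n) :
    (dot α ε) ^ 2 ≤ (dot α α) * (∑ i : Fin n, (ε i) ^ 2) := by
  simpa [dot, sq] using Finset.sum_mul_sq_le_sq_mul_sq Finset.univ α ε
```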
5. What This Declaration Does Not Prove
As a THEOREM, this declaration establishes a purely mathematical inequality. It does not:
- Prove anything about the fundamental RS cost function $J(x)$ or the golden ratio $\phi$.
- State that the compensatory cost is non-negative on its own; that requires the specific constraint that $|\alpha|^2 \le 1$, which is applied in downstream theorems, not here.
- Define what $\alpha$ and $\varepsilon$ represent physically. Within the Ndim.Bridge module, they are simply abstract vectors.