1. Plain English
The theorem multiplicative_le_additive_of_sqNorm_le_one states that for any vector of weights $\alpha$ whose squared length (dot product with itself) is $1$ or less, and for any vector of values $\epsilon$, the "multiplicative" quadratic expression $\frac{1}{2}(\sum \alpha_i \epsilon_i)^2$ is always less than or equal to the "additive" quadratic expression $\frac{1}{2}\sum \epsilon_i^2$.
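A concrete instance (numbers chosen purely for illustration): take $\alpha = (0.6, 0.8)$, so $\alpha \cdot \alpha = 0.36 + 0.64 = 1$, and $\epsilon = (1, 2)$. Then

$$\frac{1}{2}\Big(\sum_i \alpha_i \epsilon_i\Big)^2 = \frac{1}{2}(0.6 \cdot 1 + 0.8 \cdot 2)^2 = \frac{1}{2}(2.2)^2 = 2.42 \;\le\; \frac{1}{2}\sum_i \epsilon_i^2 = \frac{1}{2}(1 + 4) = 2.5.$$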
2. Why it matters in Recognition Science
Recognition Science derives structural constraints from a central cost function. When analyzing multi-dimensional states, we must bound how costs aggregate. This theorem guarantees that when taking a weighted combination of states (where the weight vector is normalized), the squared combined cost (multiplicative) cannot exceed the sum of the individual squared costs (additive). This establishes that the defined compensatoryQuadratic term is always non-negative, which is formalized immediately after in compensatory_nonneg_of_sqNorm_le_one.
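If compensatoryQuadratic is, as the text suggests, the gap between the two models (an assumption here; its source definition is not shown in this excerpt), then the theorem states precisely that this gap is non-negative:

$$\text{compensatoryQuadratic}(\alpha, \epsilon) \;=\; \frac{1}{2}\sum_i \epsilon_i^2 \;-\; \frac{1}{2}\Big(\sum_i \alpha_i \epsilon_i\Big)^2 \;\ge\; 0 \quad \text{whenever } \alpha \cdot \alpha \le 1.$$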
3. How to read the formal statement
theorem multiplicative_le_additive_of_sqNorm_le_one {n : ℕ}
(α ε : Vec n) (hα : dot α α ≤ 1) :
multiplicativeQuadratic α ε ≤ additiveQuadratic ε
- {n : ℕ}: The dimension is a natural number (implicit parameter).
- (α ε : Vec n): $\alpha$ and $\epsilon$ are vectors of dimension $n$.
- (hα : dot α α ≤ 1): The hypothesis that the dot product of $\alpha$ with itself (its squared norm) is at most $1$.
- multiplicativeQuadratic α ε ≤ additiveQuadratic ε: The conclusion bounding the multiplicative model by the additive model.
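The definitions of Vec, dot, and the two quadratic models are not shown in this excerpt. A minimal Lean sketch consistent with the statement (the bodies below are assumptions, not the verbatim source) might read:

```lean
import Mathlib

open BigOperators

noncomputable section

-- Hypothetical definitions matching the statement above;
-- the actual source module may differ.
def Vec (n : ℕ) : Type := Fin n → ℝ

-- Euclidean dot product; `dot α α` is the squared norm.
def dot {n : ℕ} (x y : Vec n) : ℝ := ∑ i, x i * y i

-- "Multiplicative" model: half the square of the weighted sum.
def multiplicativeQuadratic {n : ℕ} (α ε : Vec n) : ℝ :=
  (dot α ε) ^ 2 / 2

-- "Additive" model: half the sum of the individual squares.
def additiveQuadratic {n : ℕ} (ε : Vec n) : ℝ :=
  dot ε ε / 2

end
```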
4. Visible dependencies in the source
The theorem directly depends on definitions and bounds in the same module:
- additiveQuadratic: Defines the additive quadratic approximation.
- multiplicativeQuadratic: Defines the multiplicative quadratic approximation.
- dot_sq_le_sqNorm_mul: The squared Cauchy–Schwarz bound $(\alpha \cdot \epsilon)^2 \le (\alpha \cdot \alpha) \sum_i \epsilon_i^2$, which supplies the key step of the proof.
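Assuming multiplicativeQuadratic α ε = (dot α ε)^2 / 2 and additiveQuadratic ε = dot ε ε / 2 (both bodies are assumptions, as the excerpt does not show them), the proof reduces to a short chain. A hedged sketch, not the verbatim source proof:

```lean
theorem multiplicative_le_additive_of_sqNorm_le_one {n : ℕ}
    (α ε : Vec n) (hα : dot α α ≤ 1) :
    multiplicativeQuadratic α ε ≤ additiveQuadratic ε := by
  -- A sum of squares is non-negative.
  have hε : 0 ≤ dot ε ε :=
    Finset.sum_nonneg fun i _ => mul_self_nonneg (ε i)
  -- Squared Cauchy–Schwarz: (α·ε)² ≤ (α·α)(ε·ε).
  have key : (dot α ε) ^ 2 ≤ dot α α * dot ε ε :=
    dot_sq_le_sqNorm_mul α ε
  -- Combine with hα : α·α ≤ 1, so (α·ε)² ≤ ε·ε.
  have h : (dot α ε) ^ 2 ≤ dot ε ε := by nlinarith
  -- Divide both sides by 2.
  simp only [multiplicativeQuadratic, additiveQuadratic]
  linarith
```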
5. What this declaration does not prove
- It does not prove that the exact universal cost function $J(x)$ is purely quadratic. It only bounds the $n$-dimensional quadratic approximation models.
- It does not prove that the weighting vector $\alpha$ must have norm $\le 1$ in physical systems; this is an assumed hypothesis (hα) of the theorem.
- It does not establish the reciprocal symmetry $J(r) = J(r^{-1})$ (T5), which is the foundation of the RS cost function, though it provides a structural bridge for working with multi-dimensional extensions of it.