(1) What the Declaration Says in Plain English
The theorem compensatory_nonneg_of_sqNorm_le_one establishes a fundamental inequality for quadratic costs in $n$ dimensions. It states that if you have a vector of weights $\alpha$ whose squared magnitude is at most $1$, the "additive" quadratic cost of any state vector $\varepsilon$ will always be greater than or equal to its "multiplicative" quadratic cost. Consequently, the difference between them, defined as the "compensatory" residual, is always non-negative.
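Read symbolically, and assuming that additiveQuadratic ε is the plain sum of squared components while multiplicativeQuadratic α ε is the square of the $\alpha$-weighted sum (a reading consistent with the dependencies listed in section (4), though the source definitions are not reproduced here), the claim is:

$$\sum_{i=1}^{n} \alpha_i^2 \;\le\; 1 \quad\Longrightarrow\quad 0 \;\le\; \sum_{i=1}^{n} \varepsilon_i^2 \;-\; \Big(\sum_{i=1}^{n} \alpha_i\,\varepsilon_i\Big)^{2}.$$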
(2) Why it Matters in Recognition Science
In Recognition Science, physics is forced by cost (a theorem of the framework). When generalizing cost processing from a single scalar to an $n$-dimensional vector of states $\varepsilon$, there are two canonical ways to approximate the cost quadratically:
- Additive: Process the cost of each component independently and sum them.
- Multiplicative: Aggregate the components using weights $\alpha$, then compute the combined cost.
This theorem guarantees that under normalized weights, the additive configuration always bounds the multiplicative configuration from above. The non-negative residual (the compensatory term) represents the structural gap between independent processing and aggregated processing. It ensures that no negative costs spontaneously appear in multi-dimensional generalizations, preserving the strict positivity required by the underlying Law of Logic.
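As a concrete sanity check under the same assumed reading of the two costs: take $n = 2$, weights $\alpha = (0.6, 0.8)$ (so $\alpha_1^2 + \alpha_2^2 = 1$), and states $\varepsilon = (1, 2)$. The additive cost is $1^2 + 2^2 = 5$, the multiplicative cost is $(0.6 \cdot 1 + 0.8 \cdot 2)^2 = 2.2^2 = 4.84$, and the compensatory residual is $5 - 4.84 = 0.16 \ge 0$, as the theorem guarantees.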
(3) How to Read the Formal Statement
```lean
theorem compensatory_nonneg_of_sqNorm_le_one {n : ℕ}
    (α ε : Vec n) (hα : dot α α ≤ 1) :
    0 ≤ compensatoryQuadratic α ε
```
- {n : ℕ}: The theorem applies generically to any natural-number dimension $n$.
- (α ε : Vec n): It accepts two $n$-dimensional real vectors: a weight vector $\alpha$ and a state/error vector $\varepsilon$.
- (hα : dot α α ≤ 1): This is the prerequisite hypothesis. The dot product of $\alpha$ with itself (its squared $L_2$ norm, $|\alpha|^2$) must be $\le 1$.
- 0 ≤ compensatoryQuadratic α ε: The conclusion asserts that the compensatory quadratic term evaluated at $\alpha$ and $\varepsilon$ is greater than or equal to zero.
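The names Vec, dot, additiveQuadratic, multiplicativeQuadratic, and compensatoryQuadratic are defined elsewhere in the source and are not reproduced in this section. The Lean sketch below shows one plausible shape for them, consistent with the behaviour described in sections (1) and (4); it is an illustrative assumption, not the module's actual code.

```lean
import Mathlib

-- Illustrative (assumed) definitions; the real module's versions may differ.

-- Vectors as real-valued functions on Fin n.
def Vec (n : ℕ) : Type := Fin n → ℝ

-- Euclidean inner product.
def dot {n : ℕ} (x y : Vec n) : ℝ := ∑ i : Fin n, x i * y i

-- "Additive" cost: each component's quadratic cost, summed.
def additiveQuadratic {n : ℕ} (ε : Vec n) : ℝ := dot ε ε

-- "Multiplicative" cost: quadratic cost of the α-weighted aggregate.
def multiplicativeQuadratic {n : ℕ} (α ε : Vec n) : ℝ := (dot α ε) ^ 2

-- Compensatory residual: the gap between the two.
def compensatoryQuadratic {n : ℕ} (α ε : Vec n) : ℝ :=
  additiveQuadratic ε - multiplicativeQuadratic α ε
```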
(4) Visible Dependencies in the Source
The proof is extremely tight and routes through three explicit steps within the same module:
- It unfolds the definition of compensatoryQuadratic, replacing it with additiveQuadratic ε - multiplicativeQuadratic α ε.
- It invokes the immediately preceding theorem multiplicative_le_additive_of_sqNorm_le_one, which proves the multiplicative cost is bounded by the additive cost. That underlying theorem relies on a squared Cauchy-Schwarz bound provided by dot_sq_le_sqNorm_mul.
- It uses the linarith tactic to close the proof algebraically (since if $A \ge B$, then $A - B \ge 0$).
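Putting those steps together, the sketch below reconstructs the described proof shape against the illustrative definitions from section (3). Because the preceding lemma is not reproduced here, its conclusion is taken as a hypothesis hle; in the source it would be obtained as multiplicative_le_additive_of_sqNorm_le_one α ε hα. This is a sketch of the shape, not the module's actual proof.

```lean
-- Proof-shape reconstruction (assumes the illustrative definitions above).
example {n : ℕ} (α ε : Vec n)
    (hle : multiplicativeQuadratic α ε ≤ additiveQuadratic ε) :
    0 ≤ compensatoryQuadratic α ε := by
  -- Step 1: unfold compensatoryQuadratic into
  --         additiveQuadratic ε - multiplicativeQuadratic α ε.
  unfold compensatoryQuadratic
  -- Steps 2-3: the bound hle plus linear arithmetic (A ≥ B ⇒ A - B ≥ 0).
  linarith
```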
(5) What this Declaration Does Not Prove
- The Exact RS J-Cost Function: This module establishes bounds for purely quadratic approximations ($x^2$). It does not prove the exact reciprocal-symmetric J-cost functional equation $J(x) = (x + x^{-1})/2 - 1$ that is unique to Recognition Science.
- Weight Normalization: It does not prove that the weight vector $\alpha$ must have squared norm $\le 1$; the bound is strictly conditional on the external hypothesis hα being true.
- Equality: It does not compute the exact value of the compensatory term, merely that it cannot fall below zero.