Here is an explanation of the Lean theorem additive_decomposition, structured for a mathematically literate reader.
1. What the Declaration Says in Plain English
The theorem states a simple algebraic identity: an "additive" quadratic cost (the sum of independent squared errors) can always be exactly split into a "multiplicative" quadratic cost (the square of a single, weighted sum of errors) plus a leftover "compensatory" term.
2. Why it Matters in Recognition Science
In Recognition Science, every physical interaction is governed by a unique, reciprocal-symmetric cost function $J(x)$. When examining small deviations in $n$-dimensional configurations, the cost can be approximated quadratically. This theorem provides a structural bridge between two ways a system might aggregate discrepancies:
- Additive: Treating each dimension's discrepancy independently ($\sum \varepsilon_i^2$).
- Multiplicative: Projecting the $n$-dimensional discrepancy onto a single axis (via weights $\alpha$) and squaring the result ($(\sum \alpha_i \varepsilon_i)^2$).
The decomposition formally ensures that we can always analyze a multi-dimensional quadratic cost in terms of a primary projected axis plus a decoupled residual.
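In symbols (using the halved quadratics defined below), the decomposition asserts:

$$\frac{1}{2}\sum_{i=1}^{n} \varepsilon_i^2 \;=\; \frac{1}{2}\Big(\sum_{i=1}^{n} \alpha_i \varepsilon_i\Big)^2 \;+\; \underbrace{\left[\frac{1}{2}\sum_{i=1}^{n} \varepsilon_i^2 - \frac{1}{2}\Big(\sum_{i=1}^{n} \alpha_i \varepsilon_i\Big)^2\right]}_{\text{compensatory term}}$$

A quick numerical check with $n = 2$, $\alpha = (1, 0)$, $\varepsilon = (3, 4)$: the additive cost is $\frac{1}{2}(9 + 16) = \frac{25}{2}$, the multiplicative cost is $\frac{1}{2}(3)^2 = \frac{9}{2}$, and the compensatory term is $\frac{25}{2} - \frac{9}{2} = 8$, so the identity holds.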
3. How to Read the Formal Statement
```lean
theorem additive_decomposition {n : ℕ} (α ε : Vec n) :
    additiveQuadratic ε = multiplicativeQuadratic α ε + compensatoryQuadratic α ε
```

- {n : ℕ}: The dimension of the system, a natural number.
- (α ε : Vec n): Two vectors of length $n$. Here, $\varepsilon$ represents the deviations/errors, and $\alpha$ represents the projection weights.
- additiveQuadratic: Defined mathematically as $\frac{1}{2} \sum_i \varepsilon_i^2$.
- multiplicativeQuadratic: Defined mathematically as $\frac{1}{2} (\alpha \cdot \varepsilon)^2$.
- compensatoryQuadratic: The residual term.
The theorem explicitly asserts that the additive cost is exactly equal to the multiplicative cost plus the compensatory term.
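The three definitions might be rendered roughly as follows. This is a sketch inferred from the mathematical descriptions above, assuming Vec n behaves like Fin n → ℝ; the actual code in IndisputableMonolith.Cost.Ndim.Core may spell the types and sums differently.

```lean
variable {n : ℕ}

-- Sketch only: plausible shapes of the three definitions.
def additiveQuadratic (ε : Vec n) : ℝ :=
  (1 / 2) * ∑ i, (ε i) ^ 2

def multiplicativeQuadratic (α ε : Vec n) : ℝ :=
  (1 / 2) * (dot α ε) ^ 2

-- Per the source, this one is defined exactly as the difference:
def compensatoryQuadratic (α ε : Vec n) : ℝ :=
  additiveQuadratic ε - multiplicativeQuadratic α ε
```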
4. Visible Dependencies and Certificates
The theorem relies only on the definitions of the three quadratic functions. The proof itself is a short algebraic simplification: it unfolds the definition of compensatoryQuadratic (which is defined precisely as additiveQuadratic ε - multiplicativeQuadratic α ε) and uses Lean's ring tactic to automatically verify the algebraic tautology $A = B + (A - B)$. It also relies on Vec n and dot from the imported IndisputableMonolith.Cost.Ndim.Core module.
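The proof mechanism described above would look roughly like this in Lean. The exact tactic script is an assumption on my part; unfolding the definition and calling ring is one natural way to discharge the tautology $A = B + (A - B)$.

```lean
theorem additive_decomposition {n : ℕ} (α ε : Vec n) :
    additiveQuadratic ε = multiplicativeQuadratic α ε + compensatoryQuadratic α ε := by
  unfold compensatoryQuadratic
  ring  -- closes the goal A = B + (A - B) by commutative-ring normalization
```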
5. What This Declaration Does NOT Prove
Because the theorem is an algebraic rearrangement, it does not prove:
- That the compensatory term is well-behaved on its own. For instance, proving that the residual is non-negative requires a separate theorem, compensatory_nonneg_of_sqNorm_le_one, which only holds if the projection vector $\alpha$ is normalized ($|\alpha|^2 \le 1$).
- Any direct properties of the exact nonlinear RS cost function $J(x)$. This theorem applies strictly to the quadratic approximations of cost.
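For orientation, the companion lemma mentioned above plausibly has a statement of the following shape. This is a sketch only: the actual hypothesis may phrase $|\alpha|^2 \le 1$ differently (e.g. via a dedicated norm function rather than dot α α), and the proof is elided here.

```lean
-- Sketch of the separate theorem's statement; proof elided.
theorem compensatory_nonneg_of_sqNorm_le_one {n : ℕ} (α ε : Vec n)
    (hα : dot α α ≤ 1) :
    0 ≤ compensatoryQuadratic α ε := by
  sorry  -- a real proof would use Cauchy–Schwarz: (α·ε)² ≤ (α·α)(ε·ε) ≤ Σᵢ εᵢ²
```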