Explain the Lean theorem `additive_decomposition` in module `IndisputableMonolith.Cost.Ndim.Bridge`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

Here is an explanation of the Lean theorem additive_decomposition, structured for a mathematically literate reader.

1. What the Declaration Says in Plain English

The theorem states a simple algebraic identity: an "additive" quadratic cost (the sum of independent squared errors) can always be exactly split into a "multiplicative" quadratic cost (the square of a single, weighted sum of errors) plus a leftover "compensatory" term.

2. Why it Matters in Recognition Science

In Recognition Science, every physical interaction is governed by a unique, reciprocal-symmetric cost function $J(x)$. When examining small deviations in $n$-dimensional configurations, the cost can be approximated quadratically. This theorem provides a structural bridge between two ways a system might aggregate discrepancies:

  • Additive: Treating each dimension's discrepancy independently ($\sum \varepsilon_i^2$).
  • Multiplicative: Projecting the $n$-dimensional discrepancy onto a single axis (via weights $\alpha$) and squaring the result ($(\sum \alpha_i \varepsilon_i)^2$).

The decomposition formally ensures that we can always analyze a multi-dimensional quadratic cost in terms of a primary projected axis plus a decoupled residual.
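
Written out with the definitions given in section 3 below, the identity being proved is

$$
\underbrace{\tfrac{1}{2}\sum_{i=1}^{n}\varepsilon_i^{2}}_{\text{additive}}
= \underbrace{\tfrac{1}{2}\Big(\sum_{i=1}^{n}\alpha_i\varepsilon_i\Big)^{2}}_{\text{multiplicative}}
+ \underbrace{\tfrac{1}{2}\sum_{i=1}^{n}\varepsilon_i^{2} - \tfrac{1}{2}\Big(\sum_{i=1}^{n}\alpha_i\varepsilon_i\Big)^{2}}_{\text{compensatory}},
$$

which holds because the compensatory term is defined as exactly the difference of the other two.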

3. How to Read the Formal Statement

```lean
theorem additive_decomposition {n : ℕ} (α ε : Vec n) :
    additiveQuadratic ε = multiplicativeQuadratic α ε + compensatoryQuadratic α ε
```
  • {n : ℕ}: The dimension of the system, a natural number.
  • (α ε : Vec n): Two vectors of length $n$. Here, $\varepsilon$ represents the deviations/errors, and $\alpha$ represents the projection weights.
  • additiveQuadratic: Defined mathematically as $\frac{1}{2} \sum_i \varepsilon_i^2$.
  • multiplicativeQuadratic: Defined mathematically as $\frac{1}{2} (\alpha \cdot \varepsilon)^2$.
  • compensatoryQuadratic: The residual term, defined as the literal difference additiveQuadratic ε - multiplicativeQuadratic α ε.

The theorem asserts that the additive cost equals the multiplicative cost plus the compensatory term, exactly, for every dimension $n$ and every pair of vectors $\alpha$ and $\varepsilon$.
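
A quick numeric check, with values chosen here purely for illustration: take $n = 2$, $\varepsilon = (1, 1)$, and $\alpha = (1, 0)$. Then additiveQuadratic is $\tfrac{1}{2}(1^2 + 1^2) = 1$, the projection is $\alpha \cdot \varepsilon = 1$ so multiplicativeQuadratic is $\tfrac{1}{2} \cdot 1^2 = \tfrac{1}{2}$, and the compensatory term is $1 - \tfrac{1}{2} = \tfrac{1}{2}$. The identity reads $1 = \tfrac{1}{2} + \tfrac{1}{2}$, as asserted.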

4. Visible Dependencies and Certificates

The theorem relies on the definitions of the three quadratic functions, together with Vec n and dot from the imported IndisputableMonolith.Cost.Ndim.Core module. The proof is a short algebraic simplification: it unfolds the definition of compensatoryQuadratic (which is defined strictly as additiveQuadratic ε - multiplicativeQuadratic α ε) and uses Lean's ring tactic to verify the resulting tautology $A = B + (A - B)$.
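
As a guide for readers new to Lean, here is a minimal, self-contained sketch of what such a development could look like. The definitions of Vec, dot, and the three quadratics below are reconstructed from the prose above, not quoted from the IndisputableMonolith source, which may differ in detail.

```lean
import Mathlib

namespace Sketch

-- Reconstructed stand-ins for IndisputableMonolith.Cost.Ndim.Core;
-- the real definitions may differ.
abbrev Vec (n : ℕ) := Fin n → ℝ

def dot {n : ℕ} (a b : Vec n) : ℝ := ∑ i, a i * b i

-- Additive cost: half the sum of squared deviations.
def additiveQuadratic {n : ℕ} (ε : Vec n) : ℝ :=
  (1 / 2) * ∑ i, (ε i) ^ 2

-- Multiplicative cost: half the square of the projected deviation.
def multiplicativeQuadratic {n : ℕ} (α ε : Vec n) : ℝ :=
  (1 / 2) * (dot α ε) ^ 2

-- The residual, defined as a literal difference; this is what makes
-- the decomposition theorem a one-line algebraic fact.
def compensatoryQuadratic {n : ℕ} (α ε : Vec n) : ℝ :=
  additiveQuadratic ε - multiplicativeQuadratic α ε

theorem additive_decomposition {n : ℕ} (α ε : Vec n) :
    additiveQuadratic ε =
      multiplicativeQuadratic α ε + compensatoryQuadratic α ε := by
  -- Unfold the residual; `ring` closes the tautology A = B + (A - B).
  unfold compensatoryQuadratic
  ring

end Sketch
```

The point to notice is that ring treats the two costs as opaque real numbers; nothing about vectors, sums, or dimensions is actually used in the proof.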

5. What This Declaration Does NOT Prove

Because the theorem is an algebraic rearrangement, it does not prove:

  • That the compensatory term is well-behaved on its own. For instance, proving that the residual is non-negative requires a separate theorem, compensatory_nonneg_of_sqNorm_le_one, which holds only when the projection vector $\alpha$ is normalized ($|\alpha|^2 \le 1$); see the inequality sketched after this list.
  • Any direct properties of the exact nonlinear RS cost function $J(x)$. This theorem applies strictly to the quadratic approximations of cost.
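
For the first point, the reason the normalization hypothesis appears is a standard Cauchy–Schwarz bound (this gloss is mine, not a restatement of the cited Lean proof): if $|\alpha|^2 \le 1$, then

$$
(\alpha \cdot \varepsilon)^2 \le |\alpha|^2\, |\varepsilon|^2 \le |\varepsilon|^2 = \sum_{i=1}^{n} \varepsilon_i^2,
$$

so the compensatory term $\tfrac{1}{2}\sum_i \varepsilon_i^2 - \tfrac{1}{2}(\alpha \cdot \varepsilon)^2$ is non-negative. Without that hypothesis the residual can go negative (e.g. $\alpha = (2, 0)$, $\varepsilon = (1, 0)$ gives $\tfrac{1}{2} - 2 < 0$), which is why the decomposition alone says nothing about signs.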

cited recognition theorems

  • additive_decomposition (IndisputableMonolith.Cost.Ndim.Bridge)
  • compensatory_nonneg_of_sqNorm_le_one

outside recognition

Aspects Recognition does not yet address:

  • The formal bridge connecting these $n$-dimensional quadratic approximations back to the exact nonlinear $J(x)$ cost function.
  • The physical interpretation or consequence of the compensatory term in terms of gauge symmetry or internal degrees of freedom.

recognition modules consulted

  • IndisputableMonolith.Cost.Ndim.Bridge
  • IndisputableMonolith.Cost.Ndim.Core

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.