pith. machine review for the scientific record.

Explain the Lean theorem `FApply_sub` in module `IndisputableMonolith.Cost.Ndim.Projector`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

1. Plain English Statement

The theorem FApply_sub states that the almost-product operator $F$ respects vector subtraction: applying $F$ to the difference $v - w$ gives the same result as applying $F$ to $v$ and to $w$ separately and then subtracting the outputs.

2. Role in Recognition Science

In Recognition Science, spatial structure is modeled through operator algebras derived from rank-one tensors. Given a one-form $\beta$ and an inverse metric $h^{-1}$, the framework constructs a base operator $A$, its normalized projector $P$, and an induced almost-product operator $F = 2P - I$. The algebraic behavior of $F$ is structurally necessary because it forms the core of the golden and metallic operator families. FApply_sub guarantees that $F$ respects subtraction, one of the basic linearity properties needed to manipulate these operators in downstream proofs.
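To see why this subtraction rule is expected, a short standard computation (not quoted from the supplied source) shows that $F = 2P - I$ inherits the property from the linearity of $P$:

```latex
F(v - w) = (2P - I)(v - w) = 2P(v) - 2P(w) - v + w = (2P - I)v - (2P - I)w = F(v) - F(w)
```

The only ingredient is that $P$ itself is linear in its argument; the theorem records the corresponding fact for the concrete `FApply` implementation.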

3. Reading the Formal Statement

theorem FApply_sub {n : ℕ}
    (lam : ℝ) (hInv : Fin n → Fin n → ℝ) (β : Vec n)
    (v w : Vec n) :
    FApply lam hInv β (v - w) = FApply lam hInv β v - FApply lam hInv β w
  • {n : ℕ}: The dimension of the underlying space.
  • lam : ℝ: A scalar parameter $\lambda$.
  • hInv : Fin n → Fin n → ℝ: The inverse metric kernel $h^{-1}$.
  • β : Vec n: The defining covector $\beta$.
  • v w : Vec n: The input vectors being subtracted.
  • The conclusion asserts the identity $F(v - w) = F(v) - F(w)$.
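For readers new to Lean, a minimal standalone analogue (a toy `scale` operator, not the source's `FApply`) shows how such a statement is written and proved; everything here is illustrative and independent of the Recognition source:

```lean
import Mathlib

-- Toy illustration (not from the supplied source): an operator defined
-- entrywise by a linear formula automatically satisfies the subtraction rule.
def scale {n : ℕ} (c : ℝ) (v : Fin n → ℝ) : Fin n → ℝ := fun i => c * v i

theorem scale_sub {n : ℕ} (c : ℝ) (v w : Fin n → ℝ) :
    scale c (v - w) = scale c v - scale c w := by
  funext i                 -- reduce equality of functions to equality at each index
  simp [scale, mul_sub]    -- pointwise: c * (v i - w i) = c * v i - c * w i
```

The real `FApply` plays the role of `scale` here; the pattern of extensionality followed by pointwise algebra is typical for operators on `Fin n → ℝ`.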

4. Visible Dependencies and Certificates

The proof is a direct, one-line structural reduction: it rewrites the subtraction as the addition of a negated vector, $v + (-w)$, and then applies the adjacent linearity lemmas for addition and for negation.
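A plausible shape for such a one-line proof, assuming companion lemmas here called `FApply_add` and `FApply_neg` (these names are illustrative and not confirmed by the supplied excerpt):

```lean
-- Sketch only: rewrite v - w as v + (-w), push FApply through addition
-- and negation via the assumed companion lemmas, then fold back to subtraction.
theorem FApply_sub' {n : ℕ}
    (lam : ℝ) (hInv : Fin n → Fin n → ℝ) (β : Vec n) (v w : Vec n) :
    FApply lam hInv β (v - w) = FApply lam hInv β v - FApply lam hInv β w := by
  rw [sub_eq_add_neg, FApply_add, FApply_neg, ← sub_eq_add_neg]
```

The actual proof term in the module may differ, but any proof of this statement must ultimately reduce to these additive-group rewrites.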

5. What This Declaration Does Not Prove

FApply_sub isolates subtraction linearity. It does not prove:

  • That $F$ scales linearly with scalar multiplication. That is handled by FApply_smul.
  • That $F$ acts as a true mathematical involution (i.e., $F^2 = I$). That property requires a non-zero trace assumption and is proven in FApply_square.
  • Any properties of the golden ($G$) or metallic operators themselves, which only use $F$ as a component block.
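For context on the excluded involution property, the standard computation (a textbook identity, not quoted from the source) shows why it hinges on idempotence of $P$, which is what the non-zero trace condition secures:

```latex
F^2 = (2P - I)^2 = 4P^2 - 4P + I = 4P - 4P + I = I \quad \text{when } P^2 = P
```

FApply_sub contributes nothing to this argument; it only supplies the additive bookkeeping used alongside it.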

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat the output as a starting point, not a verified proof.