pith. machine review for the scientific record.

Explain the Lean theorem `FApply_smul` in module `IndisputableMonolith.Cost.Ndim.Projector`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

confidence: high (cached)

1. What the declaration says in plain English

The theorem FApply_smul proves that the "almost-product operator" (denoted as $F$) commutes with scalar multiplication. In mathematical terms, if you scale an input vector $v$ by a real number $c$ and then apply the operator $F$, it is identical to applying $F$ to $v$ first and then scaling the result by $c$. This establishes that $F$ is homogeneous of degree 1.
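In symbols, the claim is:

$$F(c \cdot v) \;=\; c \cdot F(v) \qquad \text{for all } c \in \mathbb{R},\; v \in \mathbb{R}^n.$$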

2. Why it matters in Recognition Science

Within the Recognition Science framework, finite-dimensional operator algebras arise from the underlying cost functional. The operator $F$, defined as $F = 2P - I$ (where $P$ is a normalized projector), is the algebraic foundation for the "golden operator" and the broader "metallic family" of operators. Proving that $F$ respects scalar multiplication (together with additivity) establishes it as a genuine linear operator on the vector space, a prerequisite for its use in describing symmetries, automorphic forms, or the emergence of physical structures.
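The exact source definition is not quoted on this page, but from the relation $F = 2P - I$ a plausible Lean sketch looks as follows. Here `Vec n` is assumed to abbreviate `Fin n → ℝ`, and `PApply` (the projector application) is stubbed as an axiom purely so the snippet is self-contained; the actual module will differ in detail.

```lean
-- Sketch only: `Vec` and the `PApply` signature are assumptions,
-- not quotations from the Recognition source.
abbrev Vec (n : ℕ) := Fin n → ℝ

-- Placeholder for the normalized projector application P.
axiom PApply : ∀ {n : ℕ}, ℝ → (Fin n → Fin n → ℝ) → Vec n → Vec n → Vec n

-- F = 2P - I, applied componentwise to a vector v.
noncomputable def FApply {n : ℕ} (lam : ℝ) (hInv : Fin n → Fin n → ℝ)
    (β : Vec n) (v : Vec n) : Vec n :=
  fun i => 2 * PApply lam hInv β v i - v i
```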

3. How to read the formal statement

  • {n : ℕ}: The dimension of the underlying vector space.
  • (lam : ℝ): A real scalar parameter $\lambda$.
  • (hInv : Fin n → Fin n → ℝ): The inverse metric kernel $h^{-1}$.
  • (β : Vec n): A defining covector $\beta$.
  • (c : ℝ): The real scalar multiplier.
  • (v : Vec n): The input vector.
  • FApply lam hInv β (c • v) = c • FApply lam hInv β v: The core equality asserting that applying $F$ to $c \cdot v$ yields $c \cdot F(v)$, where `•` represents scalar multiplication.
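Assembling the pieces above, the full statement presumably reads as follows (a reconstruction from the breakdown, not a verbatim quote of the source):

```lean
theorem FApply_smul {n : ℕ} (lam : ℝ) (hInv : Fin n → Fin n → ℝ)
    (β : Vec n) (c : ℝ) (v : Vec n) :
    FApply lam hInv β (c • v) = c • FApply lam hInv β v := by
  sorry -- proof omitted here; its structure is discussed in section 4
```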

4. Visible dependencies in the supplied source

The proof is a concise algebraic verification that relies on:

  • The definition of FApply, which expresses the operator in terms of the projector $P$.
  • The lemma PApply_smul, which proves the same homogeneity property for the projector $P$.

The proof expands the definition to vector components (ext i), substitutes the known behavior of $P$ via PApply_smul, and closes with basic ring algebra (mul_comm and the ring tactic) to show both sides match.
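Under the strategy just described, the tactic script would look something like the following. This is illustrative only; the actual proof in the source may use different simp lemmas or rewriting steps.

```lean
theorem FApply_smul {n : ℕ} (lam : ℝ) (hInv : Fin n → Fin n → ℝ)
    (β : Vec n) (c : ℝ) (v : Vec n) :
    FApply lam hInv β (c • v) = c • FApply lam hInv β v := by
  ext i                                      -- reduce to componentwise equality
  simp [FApply, PApply_smul, Pi.smul_apply]  -- unfold F and push • through P
  ring                                       -- close the remaining scalar algebra
```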

5. What this declaration does not prove

This specific declaration only proves homogeneity (scalar multiplication). It does not prove that $F$ is additive across vector addition; that property is handled separately by FApply_add. It also does not prove the defining geometric property of an almost-product operator, which is that it squares to the identity ($F^2 = I$); that requires FApply_square. Finally, it makes no claims about specific physical dimensions (like $D=3$ or spacetime) or fundamental constants.
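For orientation, here are hedged sketches of the companion statements just mentioned. The signatures are reconstructed, not quoted; in particular, FApply_square may in fact carry a normalization hypothesis on $\beta$ and $h^{-1}$ that is not visible from this page.

```lean
-- Additivity across vector addition (handled separately as FApply_add):
theorem FApply_add {n : ℕ} (lam : ℝ) (hInv : Fin n → Fin n → ℝ)
    (β v w : Vec n) :
    FApply lam hInv β (v + w) =
      FApply lam hInv β v + FApply lam hInv β w := by
  sorry

-- Involutivity F² = I (handled separately as FApply_square);
-- likely conditional on a normalization hypothesis omitted here:
theorem FApply_square {n : ℕ} (lam : ℝ) (hInv : Fin n → Fin n → ℝ)
    (β v : Vec n) :
    FApply lam hInv β (FApply lam hInv β v) = v := by
  sorry
```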

cited recognition theorems

PApply_smul, FApply_add, FApply_square

outside recognition

Aspects Recognition does not yet address:

  • Full linearity as a single packaged typeclass instance (only add and smul are proved separately)
  • Direct physical interpretation of the vectors in this specific module (it remains purely algebraic here)

recognition modules consulted

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.