pith. machine review for the scientific record.

Explain the Lean theorem `quadraticHessian_nonneg` in module `IndisputableMonolith.Cost.Ndim.Hessian`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.


(1) What the Declaration Says in Plain English

The theorem establishes that the Hessian matrix of the $n$-dimensional reciprocal cost function is positive semi-definite. Specifically, for any test vector $v$, the quadratic form (often written as $v^T H v$) evaluated at any point $t$ given a state direction $\alpha$ is always non-negative. Geometrically, this means the cost function's curvature is never negative—it either curves upwards or remains flat, ensuring a bowl-like (convex) structure along the active direction.
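
In standard matrix notation, with $H(t)$ the Hessian at the point $t$ (and $\alpha$ fixed), the claim is the usual positive semi-definiteness condition:

$$v^{\top} H(t)\, v \ge 0 \quad \text{for all } v \in \mathbb{R}^n.$$

Positive definiteness would additionally demand strict inequality for every $v \neq 0$; section (5) explains why that stronger property does not hold here.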

(2) Why it Matters in Recognition Science

In Recognition Science, physical structures and dynamics are modeled as driven by the minimization of a unique reciprocal-symmetric cost function (the J-cost). For a recognized minimum to correspond to a stable physical equilibrium, the governing cost must be convex around it. This theorem proves that the generalized $n$-dimensional cost has exactly that property along the active direction. Without a non-negative Hessian, the stationary point would be a saddle rather than a minimum: the cost would decrease along some displacement, and the equilibrium would be unstable.

(3) How to Read the Formal Statement

The Lean declaration reads:

theorem quadraticHessian_nonneg {n : ℕ} (α t v : Vec n) :
    0 ≤ quadraticHessian α t v
  • {n : ℕ}: the dimension is an implicit argument (curly braces), so Lean infers it from the vector arguments; the theorem holds generically in any spatial dimension $n$.
  • (α t v : Vec n): three explicit vector arguments: the state/weight direction α, the evaluation point t, and the test displacement vector v.
  • 0 ≤ quadraticHessian α t v: asserts that the associated quadratic form is always greater than or equal to zero.
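
As a hedged usage sketch (it assumes only that Vec and the theorem above are in scope; the example itself is not from the repository), note how the implicit dimension is inferred:

-- Hedged usage sketch, not from the repository: Lean infers {n := 3}
-- from the explicit vector arguments.
example (α t v : Vec 3) : 0 ≤ quadraticHessian α t v :=
  quadraticHessian_nonneg α t v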

(4) Visible Dependencies in the Source

The proof is extremely tight (just two lines) because the heavy lifting is done by an adjacent theorem:

  • quadraticHessian_eq: This dependency rewrites the raw tensor contraction into a closed-form scalar expression: $\cosh(\alpha \cdot t) \times (\alpha \cdot v)^2$.
  • The proof then concludes via Lean's positivity tactic, since the hyperbolic cosine is strictly positive and the square of any real number is non-negative, making their product $\ge 0$.
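
Assembled, the proof plausibly has the following two-step shape (a hedged reconstruction from the description above, stated under a fresh name; it is not a verbatim quote of the repository source):

-- Hedged reconstruction of the proof shape; the repository script may differ.
theorem quadraticHessian_nonneg_sketch {n : ℕ} (α t v : Vec n) :
    0 ≤ quadraticHessian α t v := by
  rw [quadraticHessian_eq]  -- closed form: cosh (α ⋅ t) * (α ⋅ v) ^ 2
  positivity                -- cosh is strictly positive; squares are ≥ 0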

(5) What this Declaration Does Not Prove

This theorem proves the matrix is positive semi-definite, but it intentionally does not prove it is positive definite (strictly greater than zero for all $v \neq 0$). It does not establish strict convexity across all degrees of freedom. In fact, the source contains another theorem, applyHessian_of_dot_zero, which proves that any vector orthogonal to $\alpha$ lies in the kernel of the Hessian. This indicates the existence of "flat" directions where the cost curvature is exactly zero.
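
Given the closed form from section (4), the flat directions are immediate: whenever $\alpha \cdot v = 0$,

$$v^{\top} H(t)\, v = \cosh(\alpha \cdot t)\,(\alpha \cdot v)^2 = 0,$$

so for any $n \ge 2$ the Hessian has a nontrivial kernel and the quadratic form cannot be positive definite.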

cited recognition theorems

quadraticHessian_nonneg, quadraticHessian_eq, applyHessian_of_dot_zero

outside recognition

Aspects Recognition does not yet address:

  • The broader forcing chain establishing why J-cost defines physics, which requires UnifiedForcingChain modules not present in this slice.

recognition modules consulted

IndisputableMonolith.Cost.Ndim.Hessian

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.