(1) What the Declaration Says in Plain English
The theorem establishes that the Hessian matrix of the $n$-dimensional reciprocal cost function is positive semi-definite. Specifically, for any test vector $v$, the quadratic form (often written as $v^T H v$) evaluated at any point $t$ given a state direction $\alpha$ is always non-negative. Geometrically, this means the cost function's curvature is never negative—it either curves upwards or remains flat, ensuring a bowl-like (convex) structure along the active direction.
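Concretely, using the closed-form expression for the quadratic form derived in the source (via the dependency quadraticHessian_eq, discussed below), the claim can be written as:

```latex
Q(v) \;=\; v^{\top} H(t)\, v \;=\; \cosh(\alpha \cdot t)\,(\alpha \cdot v)^{2} \;\ge\; 0
\quad \text{for all } v,
```

which is non-negative because $\cosh$ is strictly positive and a real square is non-negative.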
(2) Why it Matters in Recognition Science
In Recognition Science, physical structures and dynamics are modeled as arising from the minimization of a unique reciprocal-symmetric cost function (J-cost). For a recognized minimum to correspond to a stable physical equilibrium, the governing cost must be convex. This theorem mathematically proves that the generalized $n$-dimensional cost inherently provides that stability. Without a non-negative Hessian, the equilibrium would be unstable: the cost would decrease along some direction, and the system could lower its cost without bound instead of settling at the recognized minimum.
(3) How to Read the Formal Statement
The Lean declaration reads:
theorem quadraticHessian_nonneg {n : ℕ} (α t v : Vec n) :
0 ≤ quadraticHessian α t v
- {n : ℕ}: The theorem applies generically to any spatial dimension $n$.
- (α t v : Vec n): It quantifies over three vectors: the state/weight direction α, the evaluation point t, and the test displacement vector v.
- 0 ≤ quadraticHessian α t v: Asserts that the associated quadratic form is always greater than or equal to zero.
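Because the dimension argument is implicit, applying the lemma only requires supplying the three vectors; Lean infers $n$ from their type. A minimal usage sketch, assuming the surrounding definitions of Vec and quadraticHessian are in scope:

```lean
-- Usage sketch (hypothetical, not verbatim source): `n = 3` is inferred
-- from the type annotation on the vectors.
example (α t v : Vec 3) : 0 ≤ quadraticHessian α t v :=
  quadraticHessian_nonneg α t v
```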
(4) Visible Dependencies in the Source
The proof is extremely tight (just two lines) because the heavy lifting is done by an adjacent theorem:
- quadraticHessian_eq: This dependency rewrites the raw tensor contraction into a closed-form scalar expression: $\cosh(\alpha \cdot t) \times (\alpha \cdot v)^2$.
- The proof then concludes via Lean's positivity tactic, since the hyperbolic cosine is strictly positive and the square of any real number is non-negative, making their product $\ge 0$.
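The two-line proof described above can be sketched as follows. This is a plausible reconstruction, not verbatim source: the exact statement of quadraticHessian_eq and the dot-product notation are assumptions.

```lean
-- Reconstruction sketch. Assumes a rewrite lemma of the form
-- `quadraticHessian_eq : quadraticHessian α t v = Real.cosh (α ⬝ᵥ t) * (α ⬝ᵥ v) ^ 2`.
theorem quadraticHessian_nonneg {n : ℕ} (α t v : Vec n) :
    0 ≤ quadraticHessian α t v := by
  rw [quadraticHessian_eq]  -- reduce to the closed-form scalar expression
  positivity                -- cosh is strictly positive; a square is non-negative
```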
(5) What this Declaration Does Not Prove
This theorem proves the matrix is positive semi-definite, but it intentionally does not prove it is positive definite (strictly greater than zero for all $v \neq 0$). It does not establish strict convexity across all degrees of freedom. In fact, the source contains another theorem, applyHessian_of_dot_zero, which proves that any vector orthogonal to $\alpha$ lies in the kernel of the Hessian. This indicates the existence of "flat" directions where the cost curvature is exactly zero.
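The flat directions are visible directly in the closed form from quadraticHessian_eq: when $v$ is orthogonal to $\alpha$, the quadratic form collapses to zero, so no strict convexity is available there:

```latex
\alpha \cdot v = 0 \quad\Longrightarrow\quad
v^{\top} H\, v \;=\; \cosh(\alpha \cdot t) \cdot 0^{2} \;=\; 0 .
```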