1. Plain English
The Lean theorem JcostN_nonneg guarantees that the multi-dimensional version of the reciprocal cost function, $J_N(\alpha, x)$, never evaluates to a negative number. No matter the number of dimensions $n$, the weight vector $\alpha$, or the input configuration vector $x$, the calculated cost is always greater than or equal to zero.
2. Why it Matters in Recognition Science
In Recognition Science, physics emerges from the stabilization (minimization) of recognition cost. For cost to define a stable absolute floor, it must be bounded below by zero. While the foundational scalar cost $J(x)$ is defined in 1D, more complex structures require multi-dimensional parameter spaces. This theorem proves that when the scalar cost kernel is lifted into $n$ dimensions via a weighted logarithmic aggregate, it structurally preserves its non-negativity. As a result, multi-dimensional configurations cannot accrue "negative cost" or diverge downward; they remain safely bounded below by zero.
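The precise definition of the lift lives in the source; one plausible shape of a "weighted logarithmic aggregate" (illustrative only, not the verified definition) is:

$$\text{aggregate}(\alpha, x) \;=\; \exp\!\Big(\sum_{i=1}^{n} \alpha_i \log x_i\Big), \qquad J_N(\alpha, x) \;=\; J\big(\text{aggregate}(\alpha, x)\big).$$

Under any lift of this form, non-negativity of $J_N$ follows immediately from non-negativity of the scalar $J$ on positive inputs, since the exponential keeps the aggregate strictly positive.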
3. Reading the Formal Statement
theorem JcostN_nonneg {n : ℕ} (α x : Vec n) : 0 ≤ JcostN α x
- {n : ℕ}: The dimension of the space is a natural number $n$ (curly braces mean Lean infers this implicitly from the vectors).
- (α x : Vec n): The variables $\alpha$ (the weights) and $x$ (the configuration variables) are both $n$-dimensional vectors of real numbers.
- 0 ≤ JcostN α x: The mathematical conclusion asserting that the $n$-dimensional cost evaluation is non-negative.
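Because $n$ is implicit, callers never spell it out; Lean recovers it from the vector arguments. A minimal use-site sketch (assuming only the statement above):

```lean
-- The dimension 3 is inferred from the type of α and x;
-- the theorem is applied with just the two explicit arguments.
example (α x : Vec 3) : 0 ≤ JcostN α x :=
  JcostN_nonneg α x
```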
4. Visible Dependencies in the Source
The proof relies on reducing the multi-dimensional case directly to the scalar case:
- It uses JcostN_eq_Jcost_aggregate to rewrite $J_N(\alpha, x)$ as the 1D scalar cost applied to an aggregated scalar value: $J(\text{aggregate}(\alpha, x))$.
- It applies the underlying 1D non-negativity theorem Jcost_nonneg. To do this, it must prove the input to the 1D cost is strictly positive, which is satisfied by supplying the theorem aggregate_pos.
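Given those two lemma names, the whole proof plausibly reduces to one rewrite and one application. The following is a sketch of that reduction, not the verbatim source proof:

```lean
theorem JcostN_nonneg {n : ℕ} (α x : Vec n) : 0 ≤ JcostN α x := by
  -- Step 1: collapse the n-dimensional cost to the scalar cost
  -- of the aggregated value.
  rw [JcostN_eq_Jcost_aggregate]
  -- Step 2: apply scalar non-negativity; aggregate_pos discharges
  -- the strict-positivity hypothesis on the aggregate.
  exact Jcost_nonneg _ (aggregate_pos α x)
```

This structure is why the theorem is cheap: all analytic content sits in Jcost_nonneg and aggregate_pos, and the lift contributes only the definitional rewrite.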
5. What this Declaration Does NOT Prove
- When the cost is exactly zero: It proves $J_N \ge 0$, but does not characterize the minimum. The conditions under which the cost is exactly zero are handled by a separate theorem, JcostN_eq_zero_iff.
- The base scalar non-negativity: The proof delegates the heavy lifting to Jcost_nonneg (the proof that $J(r) \ge 0$ for $r > 0$). This is a core property of the scalar J-cost model, proved in an imported module outside the provided source slice.
- Physical interpretation: The theorem treats $\alpha$ and $x$ as pure mathematical constructs. Any mapping of these vectors to physical degrees of freedom happens downstream.
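Even without knowing when the minimum is attained, the bound alone is already usable downstream. A sketch (assuming mathlib's standard le_antisymm is in scope) of a typical consumer:

```lean
-- If some external argument shows the cost is at most zero,
-- non-negativity pins it to exactly zero.
example {n : ℕ} (α x : Vec n) (h : JcostN α x ≤ 0) : JcostN α x = 0 :=
  le_antisymm h (JcostN_nonneg α x)
```

Characterizing which $(\alpha, x)$ actually achieve zero remains the job of the separate theorem JcostN_eq_zero_iff.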