pith. machine review for the scientific record

Explain the Lean theorem `JlogN_eq_cosh_sub_one` in module `IndisputableMonolith.Cost.Ndim.Core`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

confidence: high

1. Plain English Translation

The theorem states that the N-dimensional log-coordinate cost of a vector $t$ with weight vector $\alpha$ evaluates exactly to the hyperbolic cosine of their dot product, minus 1.

2. Why it Matters in Recognition Science

In Recognition Science, the foundational scalar cost $J(x) = \frac{x + x^{-1}}{2} - 1$ measures the penalty of distinguishing a ratio $x$ from 1. In logarithmic coordinates $x = e^\theta$, this penalty takes the hyperbolic form $\cosh(\theta) - 1$. The theorem `JlogN_eq_cosh_sub_one` establishes that this geometric structure survives the lift to $N$ dimensions: because the multi-component cost evaluates the scalar cost on the exponentiated dot product, it retains the exact $\cosh - 1$ profile, so the structural bounds of the scalar cost carry over to multi-dimensional variations.
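Substituting $x = e^\theta$ into the scalar cost quoted above makes the hyperbolic form explicit:

$$
J(e^\theta) \;=\; \frac{e^\theta + e^{-\theta}}{2} - 1 \;=\; \cosh(\theta) - 1 .
$$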

3. Reading the Formal Statement

theorem JlogN_eq_cosh_sub_one {n : ℕ} (α t : Vec n) :
    JlogN α t = Real.cosh (dot α t) - 1
  • {n : ℕ}: The dimension of the vector space.
  • α t : Vec n: $\alpha$ and $t$ are $n$-dimensional real vectors.
  • JlogN α t: The log-coordinate cost, i.e. the scalar cost evaluated at the exponentiated dot product $e^{\alpha \cdot t}$, given by the definition `JlogN`.
  • dot α t: The componentwise dot product $\sum_{i} \alpha_i t_i$, given by the definition `dot`.
  • Real.cosh (dot α t) - 1: The mathematical output, $\cosh(\alpha \cdot t) - 1$.

4. Visible Dependencies

The proof is a single line, `simpa [JlogN] using (Jcost_exp_cosh (dot α t))`, and depends on:

  • The definitional lifting `JlogN`, which exponentiates the dot product (forming $e^{\alpha \cdot t}$) before applying the scalar cost.
  • The upstream identity `Jcost_exp_cosh` (from the IndisputableMonolith.Cost module, supplied explicitly to the `simpa` call rather than proved here), which provides the scalar substitution $J(e^z) = \cosh(z) - 1$; a hedged reconstruction of this chain is sketched below.
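For readers who want to see the whole chain in one place, here is a minimal, self-contained Lean 4 sketch (using Mathlib). The definitions of `Vec`, `dot`, `Jcost`, and `JlogN` below are reconstructions inferred from the prose above, not copies of the Recognition source, and the local lemma stands in for the upstream `Jcost_exp_cosh`; only the final theorem statement and its one-line proof mirror the supplied declaration.

```lean
import Mathlib

noncomputable section

-- Assumed shape of n-dimensional real vectors (not taken from the source).
abbrev Vec (n : ℕ) := Fin n → ℝ

-- Assumed componentwise dot product: ∑ i, α i * t i.
def dot {n : ℕ} (α t : Vec n) : ℝ := Finset.univ.sum fun i => α i * t i

-- The scalar recognition cost J(x) = (x + x⁻¹)/2 − 1, as quoted in the text.
def Jcost (x : ℝ) : ℝ := (x + x⁻¹) / 2 - 1

-- Assumed log-coordinate lift: apply J to the exponentiated dot product.
def JlogN {n : ℕ} (α t : Vec n) : ℝ := Jcost (Real.exp (dot α t))

-- Stand-in for the upstream `Jcost_exp_cosh`: J(e^z) = cosh z − 1.
lemma Jcost_exp_cosh (z : ℝ) : Jcost (Real.exp z) = Real.cosh z - 1 := by
  simp only [Jcost, Real.cosh_eq, Real.exp_neg]

-- The declaration under discussion, closed by the quoted one-line proof:
-- unfolding `JlogN` reduces the goal to exactly the scalar identity.
theorem JlogN_eq_cosh_sub_one {n : ℕ} (α t : Vec n) :
    JlogN α t = Real.cosh (dot α t) - 1 := by
  simpa [JlogN] using (Jcost_exp_cosh (dot α t))

end
```

The only facts consumed by the final proof are the definitional unfolding of `JlogN` and the scalar identity, matching the dependency list above.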

5. What this Declaration Does Not Prove

  • Physical semantics: It operates purely mathematically; it does not assign a physical meaning (MODEL or HYPOTHESIS) to the coordinate vector $t$ or the weight vector $\alpha$.
  • Extension Uniqueness: While the scalar $J(x)$ is forced (T5, t5_holds), this theorem does not prove that the exponential weighted dot product is the unique valid multi-dimensional extension.
  • Extrema properties: It establishes an algebraic identity but does not explicitly characterize the minimal cost states of the $N$-dimensional space.
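For orientation only, and not a statement claimed to appear in the supplied source: since $\cosh z \ge 1$ for every real $z$, with equality exactly at $z = 0$, the identity would immediately yield

$$
\mathrm{JlogN}(\alpha, t) = \cosh(\alpha \cdot t) - 1 \;\ge\; 0,
\qquad \mathrm{JlogN}(\alpha, t) = 0 \iff \alpha \cdot t = 0,
$$

but characterizing those zero-cost states, and their physical meaning, lies beyond this declaration.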

cited recognition theorems

`JlogN_eq_cosh_sub_one` (the subject declaration), `t5_holds` (cited for the forcing of the scalar cost $J$), and the definitions `JlogN` and `dot` referenced above.

outside recognition

Aspects Recognition does not yet address:

  • The upstream scalar theorem `Jcost_exp_cosh`
  • Physical interpretations of the vectors α and t
  • Proof that this multidimensional extension is structurally unique

recognition modules consulted

IndisputableMonolith.Cost.Ndim.Core (the module containing the theorem); the upstream IndisputableMonolith.Cost module is referenced but treated as external to the supplied source.

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.