pith. machine review for the scientific record.

Explain the Lean theorem `aggregate_pos` in module `IndisputableMonolith.Cost.Ndim.Core`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.


1. Plain English

The theorem aggregate_pos states that the $n$-dimensional weighted exponential aggregate $R(x) = \exp(\sum_i \alpha_i \log x_i)$ is strictly greater than zero for any weight vector $\alpha$ and coordinate vector $x$.

2. Why it matters in Recognition Science

In the RS model, multi-dimensional cost is constructed by feeding a weighted log-aggregate into the scalar $J$-cost function. Because the fundamental $J$-cost, $J(r) = (r + r^{-1})/2 - 1$, requires a strictly positive input to avoid division by zero and to retain its physical meaning, this theorem is the load-bearing safety guarantee: it proves that the multi-dimensional lift always yields a valid, strictly positive input. This positivity is consumed directly by downstream theorems, such as JcostN_nonneg, to establish that the $n$-dimensional cost is bounded below by zero.
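Concretely, the composition described above has the following shape (notation as in the prose; the exact Lean formulation lives in the supplied source):

```latex
J_N(\alpha, x) \;=\; J\big(R(x)\big), \qquad
R(x) \;=\; \exp\Big(\sum_i \alpha_i \log x_i\Big), \qquad
J(r) \;=\; \frac{r + r^{-1}}{2} - 1 .
```

For $r > 0$, the AM-GM inequality gives $(r + r^{-1})/2 \ge 1$, hence $J(r) \ge 0$; aggregate_pos supplies exactly the hypothesis $r > 0$.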

3. How to read the formal statement

@[simp] theorem aggregate_pos {n : ℕ} (α x : Vec n) : 0 < aggregate α x
  • @[simp]: Marks the theorem as a simplification lemma, so Lean's simp tactic can apply it automatically when discharging positivity goals.
  • {n : ℕ}: Implicit parameter defining the dimension of the vectors.
  • (α x : Vec n): The inputs are two $n$-dimensional real vectors, representing weights $\alpha$ and coordinates $x$.
  • 0 < aggregate α x: Asserts that the output of the aggregate function is strictly positive.

4. Visible dependencies

The proof uses no axioms or external certificates. It relies strictly on definitional expansion (unfold aggregate) and Mathlib's fundamental property of the real exponential function, Real.exp_pos, which asserts $e^y > 0$ for all real $y$.
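The proof pattern just described can be sketched in a few lines. This is a hedged reconstruction, not the verbatim source: the definitions of `Vec` and `aggregate` below are assumptions made to match the prose (the authoritative versions are in IndisputableMonolith.Cost.Ndim.Core), but the final two tactic lines mirror the stated proof strategy.

```lean
import Mathlib

-- Hypothetical reconstruction of the definitions described above;
-- the real ones live in IndisputableMonolith.Cost.Ndim.Core.
abbrev Vec (n : ℕ) := Fin n → ℝ

noncomputable def aggregate {n : ℕ} (α x : Vec n) : ℝ :=
  Real.exp (∑ i, α i * Real.log (x i))

-- The positivity proof: expand the definition, then apply
-- Mathlib's `Real.exp_pos : 0 < Real.exp y`.
@[simp] theorem aggregate_pos {n : ℕ} (α x : Vec n) :
    0 < aggregate α x := by
  unfold aggregate
  exact Real.exp_pos _
```

Note that no hypothesis on $\alpha$ or $x$ is needed: even when some $x_i \le 0$ makes $\log x_i$ junk-valued (Mathlib's `Real.log` returns 0 there), the exponential of the resulting sum is still strictly positive.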

5. What this declaration does not prove

This declaration strictly isolates the positivity of the aggregate argument. It does not prove that the resulting multi-component cost $J_N(\alpha, x)$ is non-negative (proved in JcostN_nonneg), nor does it prove that the total cost is invariant under componentwise inversion (proved in JcostN_reciprocal). It simply guarantees the domain safety for the scalar lift.
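For contrast, here are hedged sketches of the neighboring statements. The names come from the answer above, but the precise formulations are assumptions, not quotations from the source:

```lean
-- Hypothetical statement shapes only; consult the source for the
-- actual formulations.
-- JcostN_nonneg     : ∀ {n} (α x : Vec n), 0 ≤ JcostN α x
-- JcostN_reciprocal : ∀ {n} (α x : Vec n),
--     JcostN α (fun i => (x i)⁻¹) = JcostN α x
```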

cited recognition theorems

  • Core.aggregate_pos: the target theorem, establishing the strict positivity of the multi-dimensional aggregate.
  • Core.JcostN_nonneg: a downstream theorem that directly consumes aggregate_pos to prove the non-negativity of N-dimensional J-cost.
  • Core.aggregate: the definition shown to be positive by aggregate_pos.
  • Core.JcostN_reciprocal: mentioned to contrast what aggregate_pos covers versus what requires separate structural proofs.

recognition modules consulted

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.