1. Plain English
The theorem aggregate_pos states that the $n$-dimensional weighted exponential aggregate $R(\alpha, x) = \exp\big(\sum_i \alpha_i \log x_i\big)$ is strictly positive for every weight vector $\alpha$ and coordinate vector $x$.
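The statement can be reconstructed as a minimal Lean sketch. The definitions of `Vec` and `aggregate` below are hypothetical stand-ins (the RS development's actual definitions may differ), but the proof shape matches the description: positivity is immediate from `Real.exp_pos`, and it holds for *all* $x$ because Mathlib's `Real.log` returns 0 on non-positive inputs, so the exponential's argument is always a well-defined real.

```lean
import Mathlib

-- Hypothetical reconstruction; the actual `Vec` and `aggregate`
-- in the RS development may be defined differently.
def Vec (n : ℕ) : Type := Fin n → ℝ

noncomputable def aggregate {n : ℕ} (α x : Vec n) : ℝ :=
  Real.exp (∑ i, α i * Real.log (x i))

@[simp] theorem aggregate_pos {n : ℕ} (α x : Vec n) :
    0 < aggregate α x := by
  unfold aggregate
  exact Real.exp_pos _   -- e^y > 0 for every real y
```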
2. Why it matters in Recognition Science
In the RS model, multi-dimensional cost is constructed by feeding a weighted log-aggregate into the scalar $J$-cost function. Because the fundamental $J$-cost, $J(r) = (r + r^{-1})/2 - 1$, requires a strictly positive input to avoid division by zero and to retain its physical meaning, this theorem is the load-bearing safety guarantee: it proves that the multi-dimensional lift always yields a valid, strictly positive input. This positivity is consumed directly by downstream theorems, such as JcostN_nonneg, to establish that the $n$-dimensional cost is non-negative.
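To see why strict positivity is exactly the needed precondition, here is the one-line AM-GM argument that the downstream non-negativity result presumably rests on (a sketch of the mathematics, not a claim about how JcostN_nonneg is actually proved):

```latex
\[
  r > 0
  \;\Longrightarrow\;
  \frac{r + r^{-1}}{2} \;\ge\; \sqrt{r \cdot r^{-1}} = 1
  \;\Longrightarrow\;
  J(r) = \frac{r + r^{-1}}{2} - 1 \;\ge\; 0,
\]
```

with equality exactly at $r = 1$. Both steps fail without $r > 0$: for $r = 0$ the term $r^{-1}$ is undefined, and for $r < 0$ the AM-GM bound does not apply.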
3. How to read the formal statement
@[simp] theorem aggregate_pos {n : ℕ} (α x : Vec n) : 0 < aggregate α x
- `@[simp]`: Instructs the Lean simplifier to automatically apply this rule when evaluating positivity conditions.
- `{n : ℕ}`: Implicit parameter defining the dimension of the vectors.
- `(α x : Vec n)`: The inputs are two $n$-dimensional real vectors, representing weights $\alpha$ and coordinates $x$.
- `0 < aggregate α x`: Asserts that the output of the `aggregate` function is strictly positive.
4. Visible dependencies
The proof uses no axioms or external certificates. It relies strictly on definitional expansion (unfold aggregate) and Mathlib's fundamental property of the real exponential function, Real.exp_pos, which asserts $e^y > 0$ for all real $y$.
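The single external fact the proof leans on can be checked in isolation; this is Mathlib's `Real.exp_pos`, stated over all of $\mathbb{R}$:

```lean
import Mathlib

-- Real.exp_pos : 0 < Real.exp y, for every real y.
example (y : ℝ) : 0 < Real.exp y := Real.exp_pos y
```

Because the aggregate is definitionally an application of `Real.exp`, unfolding the definition reduces the goal to precisely this lemma.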
5. What this declaration does not prove
This declaration strictly isolates the positivity of the aggregate argument. It does not prove that the resulting multi-component cost $J_N(\alpha, x)$ is non-negative (proved in JcostN_nonneg), nor does it prove that the total cost is invariant under componentwise inversion (proved in JcostN_reciprocal). It simply guarantees the domain safety for the scalar lift.