The Lean theorem `JcostN_eq_Jcost_aggregate` formally connects multi-dimensional cost to the fundamental scalar cost.
1. Plain English Translation
The declaration states that computing the $n$-dimensional recognition cost of a coordinate vector $x$ given a weight vector $\alpha$ is mathematically identical to first compressing those coordinates into a single scalar value—the logarithmic aggregate—and then applying the standard 1D scalar cost function. The aggregate is equal to the weighted geometric product $\prod_i x_i^{\alpha_i}$.
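Concretely, the compression step is the exponential-of-a-log-sum identity implicit in the definitions cited below (stated here for positive coordinates $x_i$):

$$
\operatorname{aggregate}(\alpha, x) \;=\; \exp\!\Big(\sum_{i} \alpha_i \ln x_i\Big) \;=\; \prod_{i} x_i^{\alpha_i}, \qquad x_i > 0,
$$

so the theorem's content is that $\mathrm{JcostN}(\alpha, x) = J\big(\prod_i x_i^{\alpha_i}\big)$.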
2. Significance in Recognition Science
In RS, all physics derives from a single invariant functional equation for cost. When the framework scales from single scalar recognition events to multi-component ($n$-dimensional) systems, it cannot introduce arbitrary new cost functions without breaking that invariance. This theorem provides a structural guarantee that higher-dimensional cost is strictly tethered to the fundamental 1-dimensional scalar `Jcost`. By routing through a logarithmic aggregate, RS preserves the reciprocal symmetries in multi-component spaces without ad-hoc parameterization.
3. Reading the Formal Statement
```lean
@[simp] theorem JcostN_eq_Jcost_aggregate {n : ℕ} (α x : Vec n) :
    JcostN α x = Jcost (aggregate α x) := by
  rfl
```
- `@[simp]`: A tag telling Lean's automated simplifier to replace the left side with the right side whenever it is encountered.
- `{n : ℕ} (α x : Vec n)`: The theorem applies to any dimension $n$ (a natural number), taking two $n$-dimensional vectors of real numbers: $\alpha$ (the weights) and $x$ (the values).
- `JcostN α x = Jcost (aggregate α x)`: The core equality. It states that the multi-dimensional cost function `JcostN` evaluates exactly to the 1D `Jcost` applied to the aggregate of $\alpha$ and $x$.
- `:= by rfl`: The proof. `rfl` stands for reflexivity, meaning the left and right sides are equal simply by unfolding their definitions.
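As a purely illustrative evaluation (the numbers are chosen here, not taken from the source), take $n = 2$, $\alpha = (\tfrac{1}{2}, \tfrac{1}{2})$, and $x = (2, 8)$, using the scalar cost $J(y) = (y + y^{-1})/2 - 1$ quoted in Section 5:

$$
\operatorname{aggregate}(\alpha, x) = \exp\!\big(\tfrac{1}{2}\ln 2 + \tfrac{1}{2}\ln 8\big) = \exp\!\big(\tfrac{1}{2}\ln 16\big) = 4,
\qquad
\mathrm{JcostN}(\alpha, x) = J(4) = \frac{4 + \tfrac{1}{4}}{2} - 1 = \frac{9}{8}.
$$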
4. Visible Dependencies
Within the supplied slice, the theorem directly depends on:
- `aggregate`: Defined as $\exp(\sum_i \alpha_i \ln x_i)$.
- `JcostN`: The multi-component cost, defined via `JlogN` and `logVec`.
- `Vec`: The $n$-dimensional real vector type, defined as `Fin n → ℝ`.
The proof is closed, containing zero sorry gaps and invoking no external axioms.
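A minimal sketch of how these pieces could fit together, using simplified stand-in definitions (the actual slice routes `JcostN` through `JlogN` and `logVec`, so the real source differs in detail):

```lean
import Mathlib

open BigOperators

-- Sketch only: stand-in definitions consistent with the dependency
-- list above, not the actual source text of the slice.
abbrev Vec (n : ℕ) := Fin n → ℝ

-- aggregate α x = exp (Σ αᵢ · ln xᵢ), the weighted geometric product.
noncomputable def aggregate {n : ℕ} (α x : Vec n) : ℝ :=
  Real.exp (∑ i, α i * Real.log (x i))

-- The fundamental 1D scalar cost quoted in Section 5.
noncomputable def Jcost (y : ℝ) : ℝ := (y + y⁻¹) / 2 - 1

-- If JcostN is definitionally Jcost of the aggregate, the bridge
-- theorem closes by rfl, matching the proof shown above.
noncomputable def JcostN {n : ℕ} (α x : Vec n) : ℝ :=
  Jcost (aggregate α x)

@[simp] theorem JcostN_eq_Jcost_aggregate {n : ℕ} (α x : Vec n) :
    JcostN α x = Jcost (aggregate α x) := rfl
```

With this definitional layout, `rfl` succeeds because both sides unfold to the same term; if `JcostN` were instead defined via `JlogN`, the equality would hold by unfolding those intermediate definitions.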
5. What This Declaration Does Not Prove
- Why the scalar `Jcost` is unique: This theorem assumes the scalar `Jcost` function. The proof that $J(x) = (x + x^{-1})/2 - 1$ is the unique reciprocal-symmetric cost is itself a theorem (`washburn_uniqueness_aczel` or T5), which the primer notes is established in `Cost.FunctionalEquation` (not in this specific slice).
- The physical meaning of $\alpha$: It does not assign a physical interpretation (such as mass or scaling dimensions) to the weight vector $\alpha$; it merely provides the mathematical architecture for multi-dimensional aggregation.
- Multi-dimensional reciprocity: While it establishes the identity bridge, properties like reciprocity under inversion and non-negativity are proved by downstream theorems in the same module (e.g., `JcostN_reciprocal`).
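For orientation only, a downstream reciprocity lemma might take a shape like the following. The signature, positivity hypothesis, and componentwise inversion are hypothetical guesses on my part; the actual statement lives in the source module:

```lean
-- Hypothetical shape of the downstream lemma: inverting every
-- coordinate of x leaves the multi-dimensional cost unchanged.
-- The real hypotheses and proof are in the source module, not here.
theorem JcostN_reciprocal {n : ℕ} (α x : Vec n) (hx : ∀ i, 0 < x i) :
    JcostN α (fun i => (x i)⁻¹) = JcostN α x := by
  sorry
```

Such a lemma would follow from $\ln x^{-1} = -\ln x$ pushing a sign through the aggregate, combined with the scalar symmetry $J(y^{-1}) = J(y)$.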