pith. machine review for the scientific record.
theorem · proved · tactic-mode proof · high confidence

actionJ_minimum_unique_value


Two admissible paths that each minimize the J-action among all paths sharing their endpoints must share the same action value. Researchers deriving the unconditional principle of least action cite this uniqueness step to close the comparison before invoking convexity. The argument is a direct symmetric application of the two minimality hypotheses, which forces equality by linear arithmetic.

claim

Let $a ≤ b$ and let $γ_1, γ_2$ be admissible paths on $[a,b]$ that share endpoints. If $S[γ_1] ≤ S[γ]$ for every admissible $γ$ sharing endpoints with $γ_1$, and likewise $S[γ_2] ≤ S[γ]$ for every admissible $γ$ sharing endpoints with $γ_2$, then $S[γ_1] = S[γ_2]$, where $S[γ] = ∫_a^b J(γ(t))\,dt$.

background

The J-action functional is the integral $S[γ] = ∫_a^b Jcost(γ(t))\,dt$ over an admissible path, where admissible paths are continuous, strictly positive real-valued functions on the closed interval $[a,b]$. The fixedEndpoints predicate states that two paths agree at the endpoints $a$ and $b$. The interpolation operator produces the convex combination $(1-s)γ_1 + s γ_2$, which remains admissible for $s ∈ [0,1]$. This module proves that the action inherits convexity on such interpolations from the pointwise convexity of Jcost, which itself follows from d'Alembert uniqueness. The upstream definitions of actionJ and the fixedEndpoints predicate supply the objects used here.
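The convexity transfer described above can be sketched in Lean. This is illustrative only: the lemma name `actionJ_convex_on_interp`, the binder shapes, and the `interp` signature are assumptions inferred from the prose, not the module's verbatim API.

```lean
/-- Hedged sketch: pointwise convexity of `Jcost` lifts to the action on
    the interpolation segment, by integrating the pointwise bound
    `Jcost ((1-s)*x + s*y) ≤ (1-s) * Jcost x + s * Jcost y`.
    Names and signatures here are illustrative, not the module's own. -/
theorem actionJ_convex_on_interp (γ₁ γ₂ : AdmissiblePath a b)
    (s : ℝ) (hs : s ∈ Set.Icc (0 : ℝ) 1) :
    actionJ (interp γ₁ γ₂ s hs) ≤
      (1 - s) * actionJ γ₁ + s * actionJ γ₂ := by
  sorry -- integrate the pointwise convexity bound over [a, b]
```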

proof idea

One-line wrapper. Apply the first minimality hypothesis to γ₂ (using the shared-endpoints assumption) to obtain actionJ γ₁ ≤ actionJ γ₂. Apply the second hypothesis to γ₁, after using symmetry of fixedEndpoints, to obtain actionJ γ₂ ≤ actionJ γ₁. Linear arithmetic then closes the goal with equality.
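The same two inequalities can be packaged in term mode via `le_antisymm`, replacing the final `linarith` call. This is a sketch assuming the hypothesis names from the formal statement below:

```lean
-- Term-mode packaging of the same argument (sketch; `h₁`, `h₂`,
-- `h_endpoints`, and `fixedEndpoints_symm` as in the statement below).
example : actionJ γ₁ = actionJ γ₂ :=
  le_antisymm (h₁ γ₂ h_endpoints)
              (h₂ γ₁ (fixedEndpoints_symm h_endpoints))
```

`le_antisymm` takes the two opposite inequalities directly, so no arithmetic automation is needed.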

why it matters in Recognition Science

This uniqueness result closes the comparison step inside the headline unconditional principle of least action (geodesic_minimizes_unconditional) and the broader principle_of_least_action sibling. It removes the need for an extra h_min witness once convexity of actionJ on interpolations is available, which traces directly to the J-functional equation and d'Alembert factorization. The module doc notes that the deep content is the convexity statement itself; this lemma is the elementary convex-calculus consequence that propagates local segment minimization to global uniqueness of the action value.

scope and limits

formal statement (Lean)

theorem actionJ_minimum_unique_value (_hab : a ≤ b)
    (γ₁ γ₂ : AdmissiblePath a b)
    (h_endpoints : fixedEndpoints γ₁ γ₂)
    (h₁ : ∀ γ : AdmissiblePath a b, fixedEndpoints γ₁ γ → actionJ γ₁ ≤ actionJ γ)
    (h₂ : ∀ γ : AdmissiblePath a b, fixedEndpoints γ₂ γ → actionJ γ₂ ≤ actionJ γ) :
    actionJ γ₁ = actionJ γ₂ := by

proof body

Tactic-mode proof (a `by` block: two `have` steps followed by `linarith`).

  have h12 := h₁ γ₂ h_endpoints
  have h21 := h₂ γ₁ (fixedEndpoints_symm h_endpoints)
  linarith

/-! ## Truly unconditional headline: convexity discharges the witness -/

/-- **Truly unconditional principle of least action.**

    If `γ_geo` is a local minimum of the action functional in the segment
    sense (no decrease toward `γ_other` along the convex interpolation
    segment for any `s` near `0`), then by convexity it is a *global*
    minimum vs `γ_other`.

    The conclusion uses only the convexity of `actionJ` on the interpolation
    segment, which is itself a theorem of pointwise convexity of `Jcost`,
    which is a theorem of d'Alembert uniqueness.

    **No extra hypothesis is required beyond `Jcost`'s convexity.**

    The hypothesis `h_local_min` is a *weakest possible* form of "critical
    point": just `actionJ γ_geo ≤ actionJ (interp γ_geo γ_other s hs)` for
    one specific `s ∈ (0,1]` is enough. The convexity of the action ensures
    that minimization on the segment propagates to the endpoint. -/
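The propagation step the docstring describes can be written out as a short inequality chain. This is a sketch under the assumption that a segment-convexity bound of the shape used above is available; it is not the module's verbatim proof.

```lean
-- Sketch of "local min on the segment ⇒ global min vs γ_other":
-- assume hs : 0 < s, s ≤ 1, and
--   h_conv : actionJ (interp γ_geo γ_other s hs')
--              ≤ (1 - s) * actionJ γ_geo + s * actionJ γ_other
--   h_local_min : actionJ γ_geo ≤ actionJ (interp γ_geo γ_other s hs')
-- Chaining the two gives
--   actionJ γ_geo ≤ (1 - s) * actionJ γ_geo + s * actionJ γ_other,
-- i.e. s * actionJ γ_geo ≤ s * actionJ γ_other, and since 0 < s,
--   actionJ γ_geo ≤ actionJ γ_other.
example (x y s : ℝ) (hs : 0 < s)
    (h : x ≤ (1 - s) * x + s * y) : x ≤ y := by
  nlinarith
```

The `example` isolates the arithmetic core: dividing out the positive factor `s` is exactly what turns segment-local minimality into the global endpoint comparison.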

depends on (22)

Lean names referenced from this declaration's body.