pith. machine review for the scientific record.
theorem · proved · tactic proof · high

dAlembert_contDiff_nat


Continuous solutions $H$ of the d'Alembert equation $H(t+u)+H(t-u)=2H(t)H(u)$ with $H(0)=1$ are $C^n$ for every finite $n$. Analysts studying functional equations or regularity bootstrapping would cite this step. The argument extracts a nonzero integration width $δ$, invokes the representation formula expressing $H$ via its antiderivative Phi, and proceeds by induction on $n$, lifting differentiability from $H$ to Phi and back.

claim

Let $H:ℝ→ℝ$ be continuous with $H(0)=1$ and satisfying the d'Alembert equation $H(t+u)+H(t-u)=2H(t)H(u)$ for all real $t,u$. Then $H$ belongs to $C^n$ for every natural number $n$.

background

The d'Alembert equation is the image of the Recognition Composition Law under the substitution $H(x)=J(x)+1$, where $J$ is the cost function from CostAlgebra. Phi is the antiderivative defined by $\mathrm{Phi}(H)(t) := \int_0^t H(s)\,ds$. The lemma exists_integral_ne_zero guarantees a $δ>0$ with $\mathrm{Phi}(H)(δ)\neq 0$. The representation_formula then states $H(t)=\frac{\mathrm{Phi}(H)(t+δ)-\mathrm{Phi}(H)(t-δ)}{2\,\mathrm{Phi}(H)(δ)}$. The module proves that every continuous solution of the d'Alembert equation with $H(0)=1$ is $C^\infty$, completing the Aczél classification into the constant, cosh, and cos cases.
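The representation formula is the standard integration trick, sketched here by hand (writing $\Phi_H$ for Lean's `Phi H`; this is an informal derivation, not the Lean proof term). Integrate the d'Alembert equation in $u$ over $[0,δ]$:

```latex
\int_0^{\delta} H(t+u)\,du + \int_0^{\delta} H(t-u)\,du
  = 2H(t)\int_0^{\delta} H(u)\,du
% substituting s = t+u in the first integral and s = t-u in the second:
\bigl(\Phi_H(t+\delta)-\Phi_H(t)\bigr) + \bigl(\Phi_H(t)-\Phi_H(t-\delta)\bigr)
  = 2H(t)\,\Phi_H(\delta)
% the \Phi_H(t) terms cancel; divide by 2\Phi_H(\delta) \neq 0:
H(t) = \frac{\Phi_H(t+\delta)-\Phi_H(t-\delta)}{2\,\Phi_H(\delta)}
```

The right-hand side is one degree smoother than $H$, which is exactly what the induction exploits.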

proof idea

The proof obtains $δ$ from exists_integral_ne_zero and the representation formula from representation_formula, then performs induction on $n$. The base case $n=0$ is immediate from continuity. In the successor step, phi_contDiff_succ lifts the induction hypothesis ($H$ is $C^n$) to $C^{n+1}$ regularity of Phi; composition with the translations $t \mapsto t \pm δ$ yields $C^{n+1}$ regularity of $t \mapsto \mathrm{Phi}(H)(t \pm δ)$; subtraction followed by division by the nonzero constant $2\,\mathrm{Phi}(H)(δ)$ recovers $C^{n+1}$ regularity of $H$.
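The successor step can be compressed into a single regularity chain (an informal summary of the Lean steps below, writing $\Phi_H$ for `Phi H` and relying on the standard facts that integration raises the ContDiff order by one and affine precomposition preserves it):

```latex
H \in C^{n}
  \;\Longrightarrow\; \Phi_H \in C^{n+1}                          % FTC: \Phi_H' = H
  \;\Longrightarrow\; t \mapsto \Phi_H(t \pm \delta) \in C^{n+1}  % affine precomposition
  \;\Longrightarrow\; t \mapsto \tfrac{\Phi_H(t+\delta)-\Phi_H(t-\delta)}{2\,\Phi_H(\delta)} \in C^{n+1}
  \;\Longrightarrow\; H \in C^{n+1}                               % representation formula
```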

why it matters in Recognition Science

This supplies the finite-order bootstrap that feeds dAlembert_contDiff_smooth, which asserts full C^∞ regularity and enables the subsequent ODE derivation H''=cH together with the classification into cosh, cos, or constant solutions. It removes the former H_AczelClassification hypothesis that was the last foundation axiom. In the Recognition Science setting the result confirms that J-cost functions arising from the RCL are smooth, consistent with the phi-ladder and the eight-tick octave structure.
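For orientation, the downstream ODE step referenced here is the classical one (a sketch assuming the $C^\infty$ regularity this theorem supplies; it is not part of this declaration). Differentiating $H(t+u)+H(t-u)=2H(t)H(u)$ twice in $u$ and evaluating at $u=0$ gives:

```latex
2H''(t) = 2H(t)\,H''(0), \qquad\text{i.e.}\qquad H'' = cH \text{ with } c := H''(0).
% Taking t = 0 in the functional equation gives H(-u) = H(u), so H is even and H'(0) = 0.
% With H(0) = 1 and H'(0) = 0, the solutions split by the sign of c:
H(t) = \begin{cases}
  \cosh(\sqrt{c}\,t) & c > 0 \\
  1                  & c = 0 \\
  \cos(\sqrt{-c}\,t) & c < 0
\end{cases}
```

This is the cosh/constant/cos trichotomy of the Aczél classification mentioned above.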

scope and limits

formal statement (Lean)

private theorem dAlembert_contDiff_nat (H : ℝ → ℝ) (h_one : H 0 = 1) (h_cont : Continuous H)
    (h_dAl : ∀ t u, H (t + u) + H (t - u) = 2 * H t * H u) :
    ∀ n : ℕ, ContDiff ℝ (n : ℕ∞) H := by

proof body

Tactic-mode proof.

  obtain ⟨δ, _, hδ_ne⟩ := exists_integral_ne_zero H h_one h_cont
  have h_rep := representation_formula H h_cont h_dAl hδ_ne
  intro n; induction n with
  | zero => exact contDiff_zero.mpr h_cont
  | succ n ih =>
    have h_phi := phi_contDiff_succ H h_cont ih
    have h1 : ContDiff ℝ ((n + 1 : ℕ) : ℕ∞) (fun t => Phi H (t + δ)) :=
      h_phi.comp (contDiff_id.add contDiff_const)
    have h2 : ContDiff ℝ ((n + 1 : ℕ) : ℕ∞) (fun t => Phi H (t - δ)) :=
      h_phi.comp (contDiff_id.sub contDiff_const)
    have h4 : ContDiff ℝ ((n + 1 : ℕ) : ℕ∞)
        (fun t => (Phi H (t + δ) - Phi H (t - δ)) / (2 * Phi H δ)) :=
      (h1.sub h2).div_const _
    exact (funext h_rep) ▸ h4

used by (3)

From the project-wide theorem graph. These declarations reference this one in their body.

depends on (19)

Lean names referenced from this declaration's body.