pith. machine review for the scientific record.

Explain the Lean theorem `weight_polynomial_decay_summable` in module `IndisputableMonolith.NumberTheory.CostOperatorRegularity`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

confidence: high · in recognition · cached

Explanation of `weight_polynomial_decay_summable`

(1) In plain English, the theorem states that if prime weights λ_p satisfy a polynomial decay bound |λ_p| ≤ C / p^(1+ε) for some constant C > 0 and exponent ε ≥ 0, then the infinite sum of λ_p² over all primes is finite (i.e., the sequence is square-summable).
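
The convergence follows from a standard comparison argument. The calculation below is our sketch of why the claim is plausible, not a quotation of the Lean proof:

```latex
% Comparison bound (our sketch, not the module's proof):
% square the decay bound, then discard the extra decay from \varepsilon \ge 0
|\lambda_p| \le \frac{C}{p^{1+\varepsilon}}
\;\Longrightarrow\;
\lambda_p^{2} \le \frac{C^{2}}{p^{2+2\varepsilon}} \le \frac{C^{2}}{p^{2}},
\qquad
\sum_{p\ \mathrm{prime}} \frac{1}{p^{2}} \le \sum_{n \ge 2} \frac{1}{n^{2}} < \infty.
```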

(2) In Recognition Science this matters because the cost operator T_J is built from these weights in the NumberTheory layer; square-summability is a structural precondition for the operator to be well-defined as a spectral object on the recognition Hilbert space, feeding into the regularity hypotheses that support discrete spectrum and trace-class heat kernels.

(3) The formal statement reads: for a function `lamP : Nat.Primes → ℝ` and a real exponent `ε`, given the hypotheses `(hε : 0 ≤ ε)` and `(h : WeightDecayPolynomial lamP ε)`, the conclusion is `WeightSquareSummable lamP`. Here `WeightDecayPolynomial` asserts the existence of a constant `C > 0` such that `∀ p, |lamP p| ≤ C / (p.val : ℝ) ^ (1 + ε)`, and `WeightSquareSummable` asserts `Summable (fun p => (lamP p) ^ 2)`.
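
To make the shape concrete, here is a minimal Lean 4 reconstruction of what those declarations would look like, assuming Mathlib's `Nat.Primes` and real-exponent power. The names mirror the supplied source, but the bodies are our sketch, and the actual proof is elided with `sorry`:

```lean
import Mathlib

-- Sketch of the visible shapes; names follow the supplied source,
-- but the definitions and the signature here are our reconstruction.
def WeightDecayPolynomial (lamP : Nat.Primes → ℝ) (ε : ℝ) : Prop :=
  ∃ C : ℝ, 0 < C ∧ ∀ p : Nat.Primes, |lamP p| ≤ C / (p.val : ℝ) ^ (1 + ε)

def WeightSquareSummable (lamP : Nat.Primes → ℝ) : Prop :=
  Summable fun p : Nat.Primes => (lamP p) ^ 2

theorem weight_polynomial_decay_summable
    (lamP : Nat.Primes → ℝ) (ε : ℝ)
    (hε : 0 ≤ ε) (h : WeightDecayPolynomial lamP ε) :
    WeightSquareSummable lamP := by
  sorry -- the supplied source proves this fully; the proof is omitted here
```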

(4) Visible dependencies in the supplied source are the definitions `WeightDecayPolynomial` and `WeightSquareSummable` in the same module, plus the bundling theorem `cost_operator_regularity_certificate`, which lists this implication among the module's structural facts. The proof is complete (no `sorry`).
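
Assuming the reconstructed signature above, a downstream file could discharge a square-summability goal like this (hypothetical usage with ε = 1, not taken from the source):

```lean
-- Hypothetical consumer of the theorem, assuming the sketch above:
example (lamP : Nat.Primes → ℝ) (h : WeightDecayPolynomial lamP 1) :
    WeightSquareSummable lamP :=
  weight_polynomial_decay_summable lamP 1 (by norm_num) h
```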

(5) The declaration does not prove the three regularity sub-conjectures themselves: `EssentialSelfAdjointness`, `CompactResolvent`, and `TraceClassHeatKernel` remain hypothesis structures whose analytic content is the placeholder `True`. It likewise does not establish any specific RS weight values or downstream physical predictions.
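
For intuition, a placeholder of that kind could look like the following in Lean; this is an illustrative guess at the shape, not the module's actual definition:

```lean
-- Illustrative placeholder shape (our guess, not the source's definition):
-- the structure exists so downstream statements can name the hypothesis,
-- but its only field carries no analytic content yet.
structure EssentialSelfAdjointness : Prop where
  placeholder : True
```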

cited recognition theorems

  • weight_polynomial_decay_summable (the explained declaration)
  • WeightDecayPolynomial and WeightSquareSummable (the definitions it connects)
  • cost_operator_regularity_certificate (bundles the implication among the module's structural facts)

outside recognition

Aspects Recognition does not yet address:

  • Any concrete numerical values or RS-derived formulas for the weights λ_p
  • The analytic discharge of the self-adjointness, compact-resolvent, or trace-class claims (still placeholders)
  • Connections to modules outside NumberTheory.CostOperatorRegularity such as HilbertPolyaCandidate or LanglandsFromRecognitionCost

recognition modules consulted

  • IndisputableMonolith.NumberTheory.CostOperatorRegularity

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.