pith. machine review for the scientific record

Explain the Lean theorem `kepler_selection_principle` in module `IndisputableMonolith.Papers.DraftV1`. Write for an educated reader who knows science and programming but may not know Lean. Cover: (1) what the declaration says in plain English, (2) why it matters in Recognition Science, (3) how to read the formal statement, (4) visible dependencies or certificates in the supplied source, and (5) what this declaration does not prove. Cite only declarations present in the supplied Recognition source.

Grok 4.3 reads the canon and writes a Lean-grounded derivation.
confidence: high · recognition cached

Explanation of kepler_selection_principle

(1) In plain English, the declaration states that the apsidal angle function Δθ(D) — defined as 2π divided by the square root of (4 − D) — equals exactly 2π if and only if the natural number D equals 3. The proof proceeds by assuming the equality, showing that the denominator must be 1 (via cancellation and non-negativity of the radicand), solving for D, and verifying the converse by direct substitution.
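Using the closed form quoted above, the forward direction reduces to elementary algebra. A sketch of the chain of equivalences (assuming the expression as stated in this explanation):

```latex
\Delta\theta(D) = \frac{2\pi}{\sqrt{4 - D}} = 2\pi
\;\Longleftrightarrow\; \sqrt{4 - D} = 1
\;\Longleftrightarrow\; 4 - D = 1
\;\Longleftrightarrow\; D = 3
```

The first step cancels 2π (valid since π > 0), and the second uses non-negativity of the radicand so that squaring both sides is reversible.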

(2) It matters in Recognition Science because it encodes the (K) Kepler non-precession constraint of Draft_v1.tex. Combined with the (S) synchronization-selection principle and the (T) linking-selection principle, it contributes to the dimensional-rigidity argument that forces D = 3 as the unique dimension satisfying all three constraints simultaneously.

(3) The formal statement is `theorem kepler_selection_principle (D : ℕ) : apsidalAngle D = 2 * Real.pi ↔ D = 3`. It is a biconditional. The forward direction uses algebraic cancellation on the real numbers, positivity of π, and the fact that √(4 − D) = 1 implies D = 3 after casting from ℝ back to ℕ. The reverse direction substitutes D = 3 to obtain √1 = 1 and recovers equality.
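For readers new to Lean, the declaration and proof described above might look roughly like the following. This is a hypothetical sketch, not the actual source: the definition body, proof structure, and tactic choices are assumptions based on the description in (1) and (3).

```lean
import Mathlib

-- Hypothetical sketch of the definition: apsidal angle as a function
-- of the (natural-number) dimension D, cast into the reals.
noncomputable def apsidalAngle (D : ℕ) : ℝ :=
  2 * Real.pi / Real.sqrt (4 - (D : ℝ))

-- Sketch of the theorem's shape; the real proof's tactics may differ.
theorem kepler_selection_principle (D : ℕ) :
    apsidalAngle D = 2 * Real.pi ↔ D = 3 := by
  constructor
  · intro h
    -- Forward direction: cancel 2π using π > 0, deduce
    -- √(4 − D) = 1, hence 4 − D = 1 over ℝ, then cast back to ℕ.
    sorry
  · intro h
    -- Reverse direction: substitute D = 3 so the radicand is 1
    -- and √1 = 1, giving 2π / 1 = 2π.
    subst h
    simp [apsidalAngle]
```

The `sorry` marks the omitted forward argument; per (4), the actual module reportedly closes both directions with Mathlib real arithmetic and contains no `sorry`.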

(4) Visible dependencies in the supplied source: it is defined in the same module as `apsidalAngle` and is invoked by `dimensional_rigidity_main` and `no_higher_dimensional_alternative`. The module imports `IndisputableMonolith.Foundation.AlexanderDuality` (supplying `alexander_duality_circle_linking` for the (T) constraint) and re-exports `synchronization_selection_principle` for the (S) constraint. The proof relies only on Mathlib real arithmetic (no external hypotheses or `sorry`).

(5) It does not prove the origin of the closed-form apsidal-angle expression from the Green-kernel power law, nor the full physical derivation of the (K) constraint from recognition geometry. It also leaves the placeholder hypotheses (`RGConditionsForDualityHypothesis`, `CentralPotentialDerivationHypothesis`, `RobustnessHypothesis`) unformalized and does not establish the complete (T/K/S) conjunction without supplying the linking hypothesis.

cited recognition theorems

outside recognition

Aspects Recognition does not yet address:

  • Derivation of the apsidal-angle closed form from the Green-kernel power law (stated in Draft_v1.tex but not formalized in Lean)
  • Full physical motivation and recognition-geometry grounding of the Kepler constraint
  • Formalization of the placeholder hypotheses `RGConditionsForDualityHypothesis`, `CentralPotentialDerivationHypothesis`, and `RobustnessHypothesis`

recognition modules consulted

The Recognition library is at github.com/jonwashburn/shape-of-logic. The model is restricted to the supplied Lean source and instructed not to invent theorem names. Treat output as a starting point, not a verified proof.