theorem nothing_cannot_exist (proved)
Lean source: IndisputableMonolith.Foundation.LawOfExistence on GitHub, at line 130.
Formal source
/-- **Nothing Cannot Exist**: For any cost bound C, the defect exceeds C near zero.

This is the **sharp** statement that "Nothing costs infinity." -/
theorem nothing_cannot_exist (C : ℝ) :
    ∃ ε > 0, ∀ x, 0 < x → x < ε → C < defect x := by
  -- Choose ε = 1/(2(|C|+2)) so that 1/x > 2(|C|+2) when x < ε
  use 1 / (2 * (|C| + 2))
  constructor
  · positivity
  · intro x hxpos hxlt
    -- x < 1/(2(|C|+2)) implies 1/x > 2(|C|+2)
    have hbound : 0 < 2 * (|C| + 2) := by positivity
    have hinv : 2 * (|C| + 2) < x⁻¹ := by
      rw [inv_eq_one_div, lt_one_div hbound hxpos]
      exact hxlt
    simp only [defect, J]
    have hxinv_pos : 0 < x⁻¹ := inv_pos.mpr hxpos
    -- (x + 1/x)/2 - 1 ≥ (1/x)/2 - 1 > 2(|C|+2)/2 - 1 = |C| + 1 ≥ C + 1 > C
    have h1 : x⁻¹ / 2 > |C| + 1 := by linarith
    have h2 : (x + x⁻¹) / 2 ≥ x⁻¹ / 2 := by
      have : x ≥ 0 := le_of_lt hxpos
      linarith
    have h3 : |C| ≥ C := le_abs_self C
    linarith
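The chain of inequalities the proof assembles with `linarith` can be written out as one derivation. This is a sketch assuming defect(x) = (x + x⁻¹)/2 − 1, which is what the comments and the `simp only [defect, J]` step suggest; for 0 < x < ε = 1/(2(|C|+2)) we have x⁻¹ > 2(|C|+2), hence:

\[
\operatorname{defect}(x) \;=\; \frac{x + x^{-1}}{2} - 1
\;\ge\; \frac{x^{-1}}{2} - 1
\;>\; \frac{2(|C|+2)}{2} - 1
\;=\; |C| + 1
\;\ge\; C + 1
\;>\; C.
\]

The first inequality uses x > 0 (hypothesis h2), the second uses the choice of ε (hypotheses hinv and h1), and the last uses |C| ≥ C (hypothesis h3).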

/-! ## Structured Set -/

/-- The structured set S = {x ∈ ℝ₊ : defect(x) = 0} = {1}. -/
def StructuredSet : Set ℝ := {x : ℝ | 0 < x ∧ defect x = 0}

theorem structured_set_singleton : StructuredSet = {1} := by
  ext x
  simp only [StructuredSet, Set.mem_setOf_eq, Set.mem_singleton_iff]
  constructor
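The reason the structured set collapses to {1} is a one-line algebraic identity. Again assuming defect(x) = (x + x⁻¹)/2 − 1 (not spelled out in this excerpt), for x > 0:

\[
\operatorname{defect}(x) \;=\; \frac{x + x^{-1}}{2} - 1
\;=\; \frac{x^2 - 2x + 1}{2x}
\;=\; \frac{(x-1)^2}{2x} \;\ge\; 0,
\]

with equality exactly when (x − 1)² = 0, i.e. x = 1. So the defect is nonnegative on ℝ₊ and vanishes only at the balanced point x = 1, matching the reciprocity J(x) = J(x⁻¹) quoted below.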
Papers checked against this theorem (showing 3 of 3):

- Reward extrapolation in distillation lets students surpass teachers: "performing reward correction by choosing the reference model as the teacher's base model before RL yields a more accurate reward signal"
- Unique reciprocal cost on ratios forces balanced discrete ledgers: "This cost exhibits reciprocity J(x)=J(x⁻¹), vanishes only at x=1, and diverges at boundary regimes x→0⁺ and x→∞"
- Self-distillation turns feedback into denser RL signals: "SDPO treats the current model conditioned on feedback as a self-teacher and distills its feedback-informed next-token predictions back into the policy... leverages the model's ability to retrospectively identify its own mistakes in-context."