theorem linking_requires_D3 (proved)
Source: IndisputableMonolith.Foundation.DimensionForcing on GitHub, line 292 (a module of Recognition).

formal source
/-- **T8 PRIMARY THEOREM**: Linking requires D = 3.
    Proof: Alexander duality — no reference to 8-tick or gap-45. -/
theorem linking_requires_D3 (D : Dimension) :
    SupportsNontrivialLinking D → D = 3 :=
  (alexander_duality_circle_linking D).mp

/-- D = 1 does not support linking (collinear — curves cannot be disjoint). -/
theorem D1_no_linking : ¬SupportsNontrivialLinking 1 :=
  fun h => absurd (linking_requires_D3 1 h) (by norm_num)

/-- D = 2 does not support linking (Jordan curve theorem — curves separate
    the plane, linking group H̃^0(S¹) = 0). -/
theorem D2_no_linking : ¬SupportsNontrivialLinking 2 :=
  fun h => absurd (linking_requires_D3 2 h) (by norm_num)

/-- D = 4 does not support linking (codimension ≥ 2 — curves unlink by
    general position, linking group H̃^2(S¹) = 0). -/
theorem D4_no_linking : ¬SupportsNontrivialLinking 4 :=
  fun h => absurd (linking_requires_D3 4 h) (by norm_num)

/-- D ≥ 4 does not support linking (Alexander duality: the linking group is
    trivial for D ≥ 4 since H̃^{D-2}(S¹) = 0 when D - 2 ≥ 2). -/
theorem high_D_no_linking (D : Dimension) (hD : D ≥ 4) :
    ¬SupportsNontrivialLinking D := by
  intro h
  have heq := linking_requires_D3 D h
  subst heq
  norm_num at hD

instance : DecidablePred SupportsNontrivialLinking := fun D =>
  if h : D = 3 then isTrue (by rw [h]; exact D3_has_linking)
  else isFalse (fun hlink => h (linking_requires_D3 D hlink))
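As a sketch of how these lemmas compose in practice — assuming `Dimension` unfolds to `ℕ` and that `D3_has_linking : SupportsNontrivialLinking 3` is in scope, as the instance above suggests — concrete dimensions can be discharged directly:

```lean
-- Illustrative usage only; not part of the module above.
-- Assumes Dimension is (definitionally) ℕ and D3_has_linking is available.

-- Any concrete dimension ≥ 4 is ruled out by high_D_no_linking,
-- with norm_num discharging the numeric side condition.
example : ¬SupportsNontrivialLinking 7 := high_D_no_linking 7 (by norm_num)

-- The positive case is the assumed witness for D = 3.
example : SupportsNontrivialLinking 3 := D3_has_linking

-- The DecidablePred instance makes the predicate decidable for any D,
-- so downstream code can branch on it via instance resolution.
example (D : Dimension) : Decidable (SupportsNontrivialLinking D) :=
  inferInstance
```

The `DecidablePred` instance is what lets callers case-split on `SupportsNontrivialLinking D` without re-proving the classification: the single equality test `D = 3` is exhaustive precisely because `linking_requires_D3` collapses every positive case to 3.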