theorem dimension_forced (proved)
Source: IndisputableMonolith.Foundation.DimensionForcing on GitHub, at line 383.
formal source
380
381      There is no free parameter; D is determined.
382      The 8-tick and gap-45 are now consequences, not premises. -/
383  theorem dimension_forced : ∃! D : Dimension, RSCompatibleDimension D := by
384    use 3
385    constructor
386    · exact D3_compatible
387    · intro D hD
388      exact dimension_unique D hD
389
390  /-! ## Physical Interpretation -/
391
392  /-- The spatial dimension of the physical world. -/
393  def D_physical : Dimension := 3
394
395  /-- D_physical is RS-compatible. -/
396  theorem D_physical_compatible : RSCompatibleDimension D_physical := D3_compatible
397
398  /-- The eight-tick cycle in D = 3 dimensions. -/
399  theorem physical_eight_tick : EightTickFromDimension D_physical = 8 := rfl
400
401  /-- **WHY D = 3**
402
403      The dimension is not a free parameter. It is forced by:
404
405      1. **Alexander duality (PRIMARY, proved from axiom)**:
406         `SphereAdmitsCircleLinking D ↔ D = 3`, proved from the reduced
407         cohomology of S¹ axiom in `AlexanderDuality.lean`. Independent of T7.
408         H̃₁(Sᴰ \ S¹) ≅ H̃^{D-2}(S¹), nontrivial iff D = 3.
409
410      2. **Clifford algebra (CHARACTERIZATION)**: Cl₃ ≅ M₂(ℂ) gives
411         2-component complex spinors — the unique structure for spin-½.
412         (See `CliffordBridge.cl3_iso_m2c`)
413
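The uniqueness half of `dimension_forced` can be consumed directly. A minimal usage sketch, assuming the excerpted declarations (`Dimension`, `RSCompatibleDimension`, `dimension_unique`, `D_physical`, `D_physical_compatible`) are in scope; `ExistsUnique.unique` is the standard Mathlib accessor for `∃!`:

-- Sketch, not part of the source file: any RS-compatible dimension
-- collapses to the physical one.
example (D : Dimension) (hD : RSCompatibleDimension D) : D = D_physical :=
  dimension_unique D hD

-- The same fact, extracted from the ∃! statement itself.
example (D : Dimension) (hD : RSCompatibleDimension D) : D = D_physical :=
  dimension_forced.unique hD D_physical_compatible

The first form uses the lemma the proof of `dimension_forced` already relies on; the second shows the theorem statement alone suffices, since `D_physical` unfolds definitionally to `3`.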
papers checked against this theorem (showing 30 of 81)

- Supernovae data alone require acceleration at 5 sigma
  "We classify the DES SNe using a machine learning algorithm applied to their light curves in four photometric bands."
- Carbon-fiber support for trimmers uses 4.5-15 density-to-strength ratio
  "support member comprises a carbon fiber material"
- Separate visual encoders unify understanding and generation
  "Experiments show that Janus surpasses previous unified model and matches or exceeds the performance of task-specific models"
- BAO data rule out dark-energy-free models at 8 sigma
  "BAO data alone are able to rule out dark-energy-free models at more than eight standard deviations in an extension to the flat ΛCDM model that allows for curvature"
- Covariance adaptation guides search in non-convex landscapes
  "Covariance matrix adaptation... rank-μ-update... rank-one-update... cumulation... step-size control"
- RL rewards lift video grounding and tracking by over 30 points
  "we explore integrating spatio-temporal specific rewards into Multimodal Large Language Models (MLLMs) to address the unique challenges of video understanding, such as long-range temporal associations"
- Shifted windows let vision Transformers scale linearly
  "we propose a hierarchical Transformer whose representation is computed with Shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. This hierarchical architecture has the flexibility to model at various scales and has linear computational complexity with respect to image size."
- One model handles images and text together in zero-shot settings
  "In addition, we introduce a dataset of Raven IQ test, which diagnoses the nonverbal reasoning capability of MLLMs."
- ARC-AGI-2 adds tasks to test higher AI reasoning levels
  "all tasks require only elementary Core Knowledge ... no specialized world knowledge"
- Loudest black hole merger tests Kerr nature to few percent
  "By fitting a parameterized waveform that incorporates the full inspiral-merger-ringdown sequence, we constrain the fundamental (ℓ=m=4) mode to tens of percent and bound the quadrupolar frequency to within a few percent of the GR prediction."
- Discrete text levels train LMMs to score visuals like humans
  "With the syllabus, we further unify the three tasks into one model, termed the OneAlign."
- Reasoning models cut back effort as puzzles get harder
  "three performance regimes: (1) low-complexity tasks where standard models outperform LRMs, (2) medium-complexity tasks where LRMs demonstrates advantage, and (3) high-complexity tasks where both models face complete collapse"
- BEVDet reaches 39.3% mAP in bird-eye-view 3D detection
  "BEVDet performs 3D object detection in Bird-Eye-View (BEV), where most target values are defined and route planning can be handily performed."
- Transformer turns single photo into 3D model in five seconds
  "We train our model in an end-to-end manner on massive multi-view data containing around 1 million objects, including both synthetic renderings from Objaverse and real captures from MVImgNet."
- Aria device records egocentric multi-modal data for AR AI
  "Our team at Meta Reality Labs Research built the Project Aria device, an egocentric, multi-modal data recording and streaming device with the goal to foster and accelerate research in this area. In this paper, we describe the Project Aria device hardware including its sensor configuration and the corresponding software tools"
- Cognitive VLM plus diffusion planner tops driving benchmarks
  "Extensive experiments on the NAVSIM and Bench2Drive benchmarks demonstrate that ReCogDrive achieves state-of-the-art performance."
- MRKL systems pair language models with external knowledge modules
  "Jurassic-X... some of the technical challenges in implementing it"
- Shortcut models cut diffusion sampling to one step
  "Across a wide range of sampling step budgets, shortcut models consistently produce higher quality samples than previous approaches, such as consistency models and reflow."
- Open benchmark makes jailbreak attacks comparable across LLMs
  "We have carefully considered the potential ethical implications of releasing this benchmark, and believe that it will be a net positive for the community."
- GPT-4V and Gemini reach only 56-59% on expert multimodal benchmark
  "We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines..."
- Qwen2.5-1M models reach 1 million token contexts
  "Key techniques such as long data synthesis, progressive pre-training, and multi-stage supervised fine-tuning are employed to effectively enhance long-context performance while reducing training costs."
- LiveBench updates monthly with fresh questions to block LLM test contamination
  "LiveBench contains questions that are based on recently-released math competitions, arXiv papers, news articles, and datasets, and it contains harder, contamination-limited versions of tasks from previous benchmarks such as Big-Bench Hard, AMPS, and IFEval."
- Open audio model tops 20+ benchmarks with public data only
  "AF3 achieves new SOTA results on over 20+ (long) audio understanding and reasoning benchmarks"
- Motion diffusion model predicts clean samples for SOTA results
  "MDM is a generic approach, enabling different modes of conditioning, and different generation tasks. We show that our model is trained with lightweight resources and yet achieves state-of-the-art results on leading benchmarks for text-to-motion and action-to-motion."
- Thyme lets AI write code to edit images on the fly
  "comprehensive evaluations on nearly 20 benchmarks show that Thyme yields significant and consistent performance gains, particularly in challenging high-resolution perception and complex reasoning tasks"
- Cosmology tensions may need new physics or better systematics
  "These discordances primarily rest on the divergence in the measurement of core cosmological parameters with varying levels of statistical confidence. These possible statistical tensions may be partially accounted for by systematics in various measurements or cosmological probes but there is also a growing indication of potential new physics beyond the standard model."
- Spotlighting cuts indirect prompt injection success to under 2%
  "We introduce spotlighting, a family of prompt engineering techniques"
- 1M precise text-video pairs raise T2V quality
  "Our OpenVid-1M has several characteristics: 1) Superior in quantity... 2) Superior in visual quality... 3) Expressive in caption"
- H0 tension at 5 sigma signals need for new cosmology
  "their persistence after several years of accurate analysis strongly hints at cracks in the standard cosmological scenario and the necessity for new physics or generalisations beyond the standard model"
- 1.3B model matches 5x larger ones on math and coding
  "Our training data for phi-1.5 is a combination of phi-1’s training data (7B tokens) and newly created synthetic, “textbook-like” data (roughly 20B tokens) for the purpose of teaching common sense reasoning and general knowledge of the world"