theorem embed_add (proved)
Formal source: IndisputableMonolith.Foundation.ArithmeticFromLogic on GitHub, line 576 (module: Recognition).
/-- **Multiplicative homomorphism**: addition in `LogicNat` corresponds
to multiplication of orbit elements. This is the structural fact that
`LogicNat` *is* the orbit, parameterized by iteration count. -/
theorem embed_add (γ : Generator) (a b : LogicNat) :
    embed γ (a + b) = embed γ a * embed γ b := by
  induction b with
  | identity =>
    show embed γ (a + LogicNat.zero) = embed γ a * embed γ LogicNat.zero
    rw [LogicNat.add_zero, embed_zero, mul_one]
  | step b ih =>
    show embed γ (a + LogicNat.succ b) = embed γ a * embed γ (LogicNat.succ b)
    rw [LogicNat.add_succ, embed_succ, embed_succ, ih]
    ring
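As a quick usage sketch in the context of this module: `embed_add` turns a `LogicNat` sum into a product in the orbit, so doubling an iteration count squares the embedded element.

-- Usage sketch: a direct instance of `embed_add` in this module's
-- context; doubling in `LogicNat` squares the orbit element.
example (γ : Generator) (a : LogicNat) :
    embed γ (a + a) = embed γ a * embed γ a :=
  embed_add γ a a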
/-- The embedding is the natural-number power of `γ.value`. -/
theorem embed_eq_pow (γ : Generator) (n : LogicNat) :
    embed γ n = γ.value ^ (LogicNat.toNat n) := by
  induction n with
  | identity => simp [embed, LogicNat.toNat]
  | step n ih =>
    show γ.value * embed γ n = γ.value ^ (Nat.succ (LogicNat.toNat n))
    rw [ih, pow_succ]
    ring
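One immediate consequence, sketched here under the assumption that `γ.value` is a positive real (the injectivity discussion below applies `Real.log` to it, so positivity is the natural hypothesis; the module's `Generator` may already carry it): the embedding never vanishes.

-- Sketch, assuming `0 < γ.value` as an explicit hypothesis:
-- positivity of the embedding follows from `embed_eq_pow` and
-- Mathlib's `pow_pos`.
example (γ : Generator) (n : LogicNat) (hpos : 0 < γ.value) :
    0 < embed γ n := by
  rw [embed_eq_pow]
  exact pow_pos hpos _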
/-! ## Embedding Injectivity (J-positivity off identity)

The key fact: a non-trivial generator `γ ≠ 1` cannot have `γⁿ = γᵐ`
for `n ≠ m`. Structurally, this is forced by J-positivity off
identity: `J(γᵏ) > 0` for any `k ≥ 1` (since `γᵏ ≠ 1` whenever
`γ ≠ 1` and `k ≥ 1`), and `J(x) = 0 ↔ x = 1`. Analytically, it
follows from `Real.log γ.value ≠ 0` and the rule `log(γⁿ) = n · log γ`
on positive reals. The latter route is shorter and is what we use
here. -/
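A minimal sketch of the analytic route at the level of exponents, assuming Mathlib's `Real.log_pow` and `Real.log_eq_zero` (exact lemma names may vary across Mathlib versions); the statement for `embed` itself would then follow by rewriting with `embed_eq_pow`.

-- Sketch: for a positive real base `x ≠ 1`, equal natural powers force
-- equal exponents, via `log (x ^ n) = n * log x`. Lemma names are from
-- Mathlib and may differ by version.
example (x : ℝ) (hx : 0 < x) (hx1 : x ≠ 1) {n m : ℕ}
    (h : x ^ n = x ^ m) : n = m := by
  have hlog : Real.log x ≠ 0 := by
    simp only [ne_eq, Real.log_eq_zero]  -- log x = 0 ↔ x = 0 ∨ x = 1 ∨ x = -1
    push_neg
    exact ⟨ne_of_gt hx, hx1, ne_of_gt (by linarith : (-1 : ℝ) < x)⟩
  have hnm := congrArg Real.log h
  rw [Real.log_pow, Real.log_pow] at hnm
  exact_mod_cast mul_right_cancel₀ hlog hnm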
papers checked against this theorem (showing 21 of 21)
- Learned user embeddings personalize text-to-image outputs
  "When user data are scarce, new users are represented as linear combinations of existing preference embeddings learned during training"

- Matrix descriptors let RL reuse local transition geometry across tasks
  "M(τ) captures exactly the chosen low-order additive information induced by the fixed lift."

- Density method decomposes full East-West wage gap
  "f_i = ⊕_{j=1}^{J} h_j(x_i) … clr(f_i) = … (b(x_i) ⊗ b̃^⊤)^⊤ θ"

- Additive composition structures latent actions for robot learning
  "Proposition 3.2 (Identity). It holds that z_ii = 0. ... Proposition 3.3 (Inverse consistency). ... Proposition 3.4 (Cycle consistency)"

- ViBA cuts visual odometry errors 12-18% with implicit bundle adjustment
  "multi-frame trajectory consistency ... L_mrp = (1/M) Σ dist_th(⇝p_m, #»p_m)"

- Second attention pass cuts language model perplexity by 6%
  "the mechanism requires the additive residual structure of Pre-LN transformers"

- One latent scene maps to full HDR exposure stack in single pass
  "A pretrained diffusion backbone produces a single coherent scene representation, while a lightweight conditional latent-to-latent head deterministically maps it to exposure-specific representations... exposure variation... corresponds to a structured, monotonic transformation of scene radiance... deterministic transformation of a shared scene latent"

- Summing PEFT module outputs boosts multi-attribute text control
  "output summing... preserves learned module structure... linear combination naturally integrates these learned behaviours"

- Multiplicative model identifies ATT with imperfect instruments
  "Identification is achieved via a modified Wald ratio estimand, which corrects the bias due to the exclusion restriction violation"

- Context gating exponentially improves retrieval in memory models
  "the resulting retrieval state is driven by both a direct contextual bias and a second-order retrieval-gate feedback loop"

- Three method classes for MD transport coefficients each carry error bounds
  "Green–Kubo formula α = ∫ E[R(x_t) S(x_0)] dt"

- r_k(n) summation formula now covers Whittaker functions
  "functional equation π^{-s} Γ(s) ζ_k(s) = ... and theta transformation (1.7)"

- Algebraic latent transitions lift VLA success from 48% to 85% on MetaWorld
  "consecutive transitions are encouraged to agree with the longer transition they span... reversed transition is encouraged to cancel its forward counterpart"

- CORF framework links domain shifts to incremental class learning
  "Hierarchical Kernel-Based Distillation (HKD)... relational kernel map P_lt... symmetric divergence-based alignment... hierarchical aggregation"

- Encoding turns any 2D shape into invertible compact code without training
  "Linearity: Zernike encoding on composite shape equals linear combination of individual encodings (Sec 6.2); harmonic pose field projection A = p · C with radially orthonormal windows guarantees invertibility (Eq 10, Sec 6.6)"

- Learned Lagrangian governs visual world model rollouts
  "a latent variational integrator: ... the DEL condition matches the incoming and outgoing discrete momenta at q_k, giving the update its standard variational-integrator interpretation"

- 0.1% of PyPI packages carry 80% of maintenance impact
  "M_eff^eco = ∑_{p∈P} (∑_{q∈P} o_{p,q}) · m_p (linear additive model over transitive closure)"

- TTD homomorphism yields low-memory split-beam dictionary
  "Theorem 1. Ω_3 is a homomorphism between ⟨V_1, +⟩ and ⟨P_T, ⋆⟩: ∀ Φ_1, Φ_2 ∈ V_1, Ω_3(Φ_1 + Φ_2) = Ω_3(Φ_1) ⋆ Ω_3(Φ_2)"

- Robust learning gives weighted ODE models for heterogeneous systems
  "Theorem 1... F^*(X^*(t), t) = Σ_k ω^*_k F^{(k)}(X^{(k)}(t), t) = D_t(X^*(t))"

- Learning dynamics in low-rank RNNs reduce to low-dimensional ODEs
  "K = zz^⊤ + vv^⊤ − mm^⊤ − uu^⊤ ∈ R^{N×N} is conserved throughout learning ... gradient flow admits exact invariants of the learning dynamics."

- Non-Markovian master equation gains exact memory integral
  "K_{n+1}[pϱ_n, ..., pϱ_0] = p K_{n+1}[ϱ_n, ..., ϱ_0] ... convex-linear maps on the base of a convex cone have a unique extension to the whole cone which is homogeneous of degree one"