theorem branch_selection (proved)
Lean source: IndisputableMonolith.Foundation.BranchSelection on GitHub at line 173.
Browse module: all declarations in this module, on Recognition.
formal source
addressed by separate generator-calibration / higher-derivative /
action-functional conditions, none of which are part of the
operator-level Aristotelian content. -/
theorem branch_selection (c : ℝ)
    (hCoupling : IsCouplingCombiner (RCLCombiner c)) :
    c ≠ 0 :=
  (RCLCombiner_isCoupling_iff c).mp hCoupling

/-- The contrapositive: if `c = 0`, the RCL combiner is not coupling. The
additive branch fails the strengthened (L4*). -/
theorem additive_branch_not_coupling :
    ¬ IsCouplingCombiner (RCLCombiner 0) := by
  intro h
  exact branch_selection 0 h rfl

/-! ## Headline Certificate -/

/-- **Branch Selection Certificate.**

The structural strengthening of (L4) — coupling, that is, non-additivity —
forces the bilinear branch within the polynomial RCL family produced by
the translation theorem of `Logic_Functional_Equation.tex`. -/
structure BranchSelectionCert where
  separately_additive_iff_defect_zero :
    ∀ P : ℝ → ℝ → ℝ,
      SeparatelyAdditive P ↔ ∀ u v : ℝ, interactionDefect P u v = 0
  coupling_iff_defect_nonzero :
    ∀ P : ℝ → ℝ → ℝ,
      IsCouplingCombiner P ↔ ∃ u v : ℝ, interactionDefect P u v ≠ 0
  rcl_coupling_iff :
    ∀ c : ℝ, IsCouplingCombiner (RCLCombiner c) ↔ c ≠ 0
  bilinear_branch_forced :
    ∀ c : ℝ, IsCouplingCombiner (RCLCombiner c) → c ≠ 0
  additive_branch_excluded :
    ¬ IsCouplingCombiner (RCLCombiner 0)
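The arithmetic behind `branch_selection` can be sketched in a few lines. The module's actual definitions of `RCLCombiner` and `interactionDefect` are not shown on this page, so the primed names below are hypothetical reconstructions: the sketch assumes the polynomial RCL combiner is `u + v + c * u * v` and that the interaction defect subtracts the two marginal slices. Under those assumptions the defect is exactly the bilinear term `c * u * v`, which vanishes identically iff `c = 0`.

import Mathlib

-- Hypothetical stand-ins (primed): the real definitions live in
-- IndisputableMonolith.Foundation.BranchSelection and may differ.
def RCLCombiner' (c u v : ℝ) : ℝ := u + v + c * u * v

def interactionDefect' (P : ℝ → ℝ → ℝ) (u v : ℝ) : ℝ :=
  P u v - P u 0 - P 0 v

-- Under the assumed definitions the defect is exactly the bilinear term,
-- so it is identically zero iff c = 0, mirroring `branch_selection` and
-- `additive_branch_not_coupling`.
example (c u v : ℝ) :
    interactionDefect' (RCLCombiner' c) u v = c * u * v := by
  unfold interactionDefect' RCLCombiner'
  ring

This is a sketch under stated assumptions, not the module's implementation; it only illustrates why non-vanishing defect (coupling) forces `c ≠ 0` in this family.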
Papers checked against this theorem (showing 30 of 385):

- Adaptive per-dimension priors let VAEs recover independent sources: "heterogeneous per-dimension priors reduce latent permutation symmetry relative to homogeneous shared priors... KL regularization induced by the adaptive GMM prior creates source-specific attraction behavior"
- Minimax version of GCE bounds classification error: "MGCE margin loss is classification-calibrated... arg max f(x,μ*)y = arg max p*(y|x)"
- Weight-only score prunes MoE experts without calibration: "both AIMER and the Hoyer metric are functions of the same underlying ℓ1/ℓ2 ratio"
- New index matches supergravity below black-hole threshold in D1-D5 CFT: "By examining the action of the supercharge deformed by an exactly marginal operator on the relevant symmetry algebra, we propose a superselection rule governing the lifting process of BPS states, and use it to construct the REG by summing only over those symmetry sectors that can mix according to this rule."
- Logistic tuning cuts GNSS positioning errors by 28 percent: "η_TLQLC(s) and η_TLAH(s) efficiency curves... ARE bounded 0.95–1.01"
- Neural model compiles any single-qubit gate into NMR pulses: "comparative risk-aware redesign based on right-tail Conditional Value-at-Risk (RU-CVaR)"
- Bandit-guided distributed design cuts regret in black-box experiments: "Kriging Believer variance deflation σ^(j)²(x) = σ^(0)²(x) − Σ k(x,x^(l))² / (σ²(x^(l)) + σ_n²)"
- Architecture search, not scale, yields stable EEG microstates: "four-dimensional grid search over cluster count (K from 3 to 20), latent dimensionality, network depth, and channel width"
- Wrapper makes any network normalization-equivariant: "WNE is parameter-free; it exactly parameterizes the NE class without constraining internal layers."
- Exact critical drift pins when random labels allow infinite increasing paths: "Proposition 1.8 and Theorem 1.6: first-moment + branching-number lower bound θ_c ≥ 1/(e br_T)"
- FLEX scheduler adapts TDD ratios to enforce QoS in industrial 5G: "The priority function: w_i,f(t) = 1/5QI × r_i,f(t)/R_i,f(t)"
- Text-only supervision cannot enforce model honesty: "Tensor interface exports per-token entropy H_t = −∑ p_v^{(t)} log p_v^{(t)} that is structurally coupled to correctness under standard training."
- EMA cuts model adaptation costs 15-42% in shifting environments: "prioritizes labeling high-utility data while balancing the tradeoff between training and labeling cost"
- Action history prior straightens robot flow paths: "We trace this gain to markedly straighter probability paths, echoing the effect of optimal-transport couplings in Rectified Flow"
- Quadratic cost correction lifts VLA success 28.8% in dynamic scenes: "From a single quadratic cost, joint minimization yields a unified solution that decomposes orthogonally into two distinct channels. The pace channel compresses execution along the planned direction, while the path channel applies an orthogonal spatial offset"
- Real-valued provability in linear logic converges to MALL at infinite hardness: "softales ... enriched poset that generalises ... residuated lattices ... featuring a family of operations internalizing ⊕_{-p} and ⊕_p via an enriched universal property"
- One model detects and locates fakes on 48 benchmarks: "data scaling laws, cross-domain artifacts transfer-interference patterns"
- Network-aware tokenization stabilizes brain connectivity learning: "we propose a bilinear network-aware tokenization... W_{l,m}=U_l ⊙ U_m ... replaces quadratic growth in patch-specific parameters with a linear scaling in the number of networks"
- Bilinear fusion lifts multimodal survival prediction: "We construct three independent low-rank bilinear modules: (h×r),(h×c),(r×c). Each module learns multiplicative relationships using a factorized bilinear mapping: f(x, y) = W((U x) ⊙ (V y))"
- RL policy selects sources to cut molecular OOD errors 11%: "POMA formulates knowledge transfer as retrieve–compose–adapt with GRPO policy optimization over candidate pool C, dual-scale alignment LDA using squared Frobenius norms of covariance matrices at mol and sub scales."
- Interaction ramp exposes hidden CDW correlations in Hubbard atoms: "We observe an enhanced visibility of charge-density-wave correlations... after a rapid boost of the interaction strength"
- Adaptive sampling boosts worst-task performance in multi-task RL: "q*_i = exp(η g_i(θ)) / sum exp(η g_j(θ))"
- V486 Car binary has 2.1 and 0.4 solar masses plus third companion: "WD+MC fits to TESS LCs … hot spot model … fill-out factor 0.27"
- RoSHAP gives stable rankings by summarizing SHAP distributions: "RoSHAP_j := (1−P0j) m_j² / s_j ... rewards features that are active, strong, and stable."
- Nonlinear regularizer restores strong duality in dual decomposition: "vGDD := max_{g_i(·)} ∑ min (f_i(x_i)+g_i(x_i)) s.t. ∑ g_i(x)=0 ∀x∈X"
- Two embeddings per hit resolve chain conflicts in track graphs: "contrastive hinge loss... |p_i - p_j|^2 for true pairs, max(0, m - |p_i - p_j|^2) otherwise"
- Quadratic splines achieve minimal curvature for monotone Hermite data: "Define ϕ^-_M(t) := max{0, a-M t, b-M(1-t)} ... Φ^-(M) ..."
- Presentations yield random model for unramified Galois groups: "random group model μ_Γ on admissible Γ-groups with H-moments |H^Γ| ∏_{v∈T} |H^{Γ_v}|"
- Focused estimator improves PU learning on imbalanced data: "Empirical evaluations demonstrate state-of-the-art performance on imbalanced datasets under two labeling mechanisms - selecting positives completely at random (SCAR) and selecting at random (SAR)."
- Optimal control reformulation gives language models fast parallel sampling at high quality: "Flow Matching regression LCFM = E ∥vθ(zt,t) − (z1 − z0)∥² as Lagrangian surrogate for HJB"