theorem branch_selection (proved)
IndisputableMonolith.Foundation.BranchSelection, on GitHub at line 173.
addressed by separate generator-calibration / higher-derivative /
action-functional conditions, none of which are part of the
operator-level Aristotelian content. -/
theorem branch_selection (c : ℝ)
    (hCoupling : IsCouplingCombiner (RCLCombiner c)) :
    c ≠ 0 :=
  (RCLCombiner_isCoupling_iff c).mp hCoupling

/-- The contrapositive: if `c = 0`, the RCL combiner is not coupling. The
additive branch fails the strengthened (L4*). -/
theorem additive_branch_not_coupling :
    ¬ IsCouplingCombiner (RCLCombiner 0) := by
  intro h
  exact branch_selection 0 h rfl

/-! ## Headline Certificate -/

/-- **Branch Selection Certificate.**

The structural strengthening of (L4) — coupling, that is, non-additivity —
forces the bilinear branch within the polynomial RCL family produced by
the translation theorem of `Logic_Functional_Equation.tex`. -/
structure BranchSelectionCert where
  separately_additive_iff_defect_zero :
    ∀ P : ℝ → ℝ → ℝ,
      SeparatelyAdditive P ↔ ∀ u v : ℝ, interactionDefect P u v = 0
  coupling_iff_defect_nonzero :
    ∀ P : ℝ → ℝ → ℝ,
      IsCouplingCombiner P ↔ ∃ u v : ℝ, interactionDefect P u v ≠ 0
  rcl_coupling_iff :
    ∀ c : ℝ, IsCouplingCombiner (RCLCombiner c) ↔ c ≠ 0
  bilinear_branch_forced :
    ∀ c : ℝ, IsCouplingCombiner (RCLCombiner c) → c ≠ 0
  additive_branch_excluded :
    ¬ IsCouplingCombiner (RCLCombiner 0)
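For intuition, here is a minimal numeric sketch of why the interaction defect singles out `c ≠ 0`. It assumes the polynomial form `RCLCombiner c = fun u v => u + v + c * u * v` and the second-difference definition `interactionDefect P u v = P u v - P u 0 - P 0 v + P 0 0`; neither definition appears in this excerpt, so both are illustrative guesses chosen to be consistent with the certificate's fields.

```python
# Hypothetical model of the RCL combiner family and its interaction defect.
# Both definitions are assumptions, not taken from the excerpt above.

def rcl_combiner(c):
    """Assumed polynomial RCL combiner: P(u, v) = u + v + c*u*v."""
    return lambda u, v: u + v + c * u * v

def interaction_defect(P, u, v):
    """Second difference of P at (u, v): deviation from separate additivity."""
    return P(u, v) - P(u, 0.0) - P(0.0, v) + P(0.0, 0.0)

# Additive branch (c = 0): the defect vanishes at every sampled point.
P0 = rcl_combiner(0.0)
print(interaction_defect(P0, 2.0, 3.0))  # 0.0

# Bilinear branch (c != 0): the defect equals c*u*v, witnessing coupling.
P1 = rcl_combiner(0.5)
print(interaction_defect(P1, 2.0, 3.0))  # 3.0
```

Under these assumptions the defect is exactly `c * u * v`, so it is identically zero iff `c = 0`, mirroring the `rcl_coupling_iff` field of the certificate.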