11 Pith papers cite this work.
Citing papers explorer
-
Local LMO: Constrained Gradient Optimization via a Local Linear Minimization Oracle
Local LMO is a new projection-free method that achieves the convergence rates of projected gradient descent for constrained optimization by using local linear minimization oracles over small balls.
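A minimal sketch of the local-LMO idea, not the paper's algorithm: it assumes a box feasible set, where linear minimization over the intersection of the box and a small L-infinity ball around the iterate has a closed form, and uses a shrinking ball radius so the iterates settle; all names and parameters are illustrative.

```python
def local_lmo_box(x, grad, r, lo, hi):
    # Closed-form linear minimization over the box [lo, hi]^n intersected
    # with the L-infinity ball of radius r around x (separable per coordinate).
    return [min(max(xi - r if gi > 0 else xi + r, lo), hi)
            for xi, gi in zip(x, grad)]

def minimize_local_lmo(grad_f, x0, r0=0.1, lo=0.0, hi=1.0, iters=500):
    # Projection-free descent: each iterate is the minimizer of the local
    # linear model over the small ball; the radius shrinks over iterations
    # so the oscillation around the constrained optimum dies out.
    x = list(x0)
    for k in range(iters):
        x = local_lmo_box(x, grad_f(x), r0 / (k + 1), lo, hi)
    return x

# Toy problem: min ||x - b||^2 over the unit box; the solution is clip(b, 0, 1).
b = [1.5, -0.3, 0.4]
x_star = minimize_local_lmo(lambda x: [2 * (xi - bi) for xi, bi in zip(x, b)],
                            [0.5, 0.5, 0.5])
```

Note that no projection onto the feasible set is ever computed; only linear minimization over small ball-box intersections is needed.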
-
Optimal Representations for Generalized Contrastive Learning with Imbalanced Datasets
In generalized contrastive learning with imbalanced classes, optimal representations collapse to class means whose angular geometry is determined by class proportions via convex optimization, and extreme imbalance causes all minority classes to collapse to one vector.
-
Annotation-Free Indoor Radio Mapping via Physics-Informed Trajectory Inference
A physics-informed Bayesian model recovers user trajectories and radio maps from CSI measurements by using multipath feature distances as proxies for spatial displacements under known access point geometry.
-
Adaptive Liquidity in Prediction Markets via Online Learning
A prediction market design uses online learning to adaptively mix liquidity regimes induced by different cost functions, achieving switching-regret bounds against the best sequence of regimes in hindsight.
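For background, a sketch of the standard cost-function market maker such designs build on, here the logarithmic market scoring rule (LMSR) with liquidity parameter `b`; the paper's adaptive mixing of regimes is not reproduced, and the code is illustrative only.

```python
import math

def lmsr_cost(q, b):
    # LMSR cost function C(q) = b * log(sum_i exp(q_i / b)),
    # computed with log-sum-exp stabilization.
    m = max(qi / b for qi in q)
    return b * (m + math.log(sum(math.exp(qi / b - m) for qi in q)))

def lmsr_prices(q, b):
    # Instantaneous prices are the gradient of the cost function;
    # they form a probability distribution over outcomes.
    m = max(qi / b for qi in q)
    w = [math.exp(qi / b - m) for qi in q]
    z = sum(w)
    return [wi / z for wi in w]

def trade_cost(q, delta, b):
    # A trader buying the share bundle delta pays C(q + delta) - C(q).
    q_new = [qi + di for qi, di in zip(q, delta)]
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)
```

A larger `b` means deeper liquidity: the same trade moves prices less, which is exactly the trade-off an adaptive scheme can tune online.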
-
Select-then-differentiate: Solving Bilevel Optimization with Manifold Lower-level Solution Sets
Optimistic bilevel optimization with manifold lower-level minimizers is differentiable if the optimistic selection is unique, yielding a pseudoinverse hyper-gradient and a convergent HG-MS algorithm whose rate depends on intrinsic manifold dimension.
-
Bayesian Sensitivity of Causal Inference Estimators under Evidence-Based Priors
Introduces the Bayesian Sensitivity Value (BSV), a sensitivity-analysis measure for causal inference built on evidence-derived priors and Monte Carlo estimation, with an application to diabetes treatment effects.
-
On the Complexity of Discounted Robust MDPs with $L_p$ Uncertainty Sets
Policy iteration for discounted robust MDPs is strongly polynomial for $L_1$ and $L_\infty$ uncertainty sets but hard for other $L_p$ sets.
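A sketch of the kind of inner step that makes $L_1$ sets tractable, assuming an $s,a$-rectangular uncertainty model: the worst-case transition law within an $L_1$ ball of radius beta shifts up to beta/2 of probability mass from the highest-value states onto the lowest-value state. This is illustrative code, not the paper's algorithm.

```python
def worst_case_value(p_hat, v, beta):
    # min of p . v over distributions p with ||p - p_hat||_1 <= beta:
    # move up to beta/2 mass from high-value states to the lowest-value state.
    p = list(p_hat)
    lo = min(range(len(v)), key=lambda i: v[i])
    budget = beta / 2.0
    for i in sorted(range(len(v)), key=lambda i: -v[i]):
        if i == lo or budget <= 0:
            continue
        move = min(p[i], budget)
        p[i] -= move
        p[lo] += move
        budget -= move
    return sum(pi * vi for pi, vi in zip(p, v))

def robust_bellman_backup(r, P_hat, v, gamma, beta):
    # One robust backup: max over actions of reward plus the discounted
    # worst-case expected next value under the nominal kernel P_hat.
    return [max(r[s][a] + gamma * worst_case_value(P_hat[s][a], v, beta)
                for a in range(len(r[s])))
            for s in range(len(r))]
```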
-
Diffusion Posterior Sampling for General Noisy Inverse Problems
Diffusion models solve noisy linear and nonlinear inverse problems via approximate posterior sampling that blends diffusion steps with likelihood gradients on the data manifold, without a strict consistency projection.
-
Weighted Rules under the Stable Model Semantics
Weighted rules extend stable model semantics to support probabilistic reasoning, model ranking, and statistical inference in answer set programs.
-
On the Stationary Duality of Structural Composite Cardinality Optimization
Stationary duality reduces composite cardinality optimization to simple cardinality, yielding dual problems with equivalent local solutions and global solutions under appropriate parameter selection.
-
Robust Learning Meets Quasar-Convex Optimization: Inexact High-Order Proximal-Point Methods
Robust learning problems are formulated as quasar-convex optimization, and HiPPA is proposed as an inexact high-order proximal method with global and superlinear convergence guarantees.
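A generic sketch of an inexact high-order proximal-point iteration (here order $p = 2$, so a cubically regularized subproblem), not the authors' HiPPA: the outer loop repeatedly minimizes the objective plus $\frac{\sigma}{p+1}\lVert u - x_k\rVert^{p+1}$, and the inner solve is deliberately approximate (plain gradient descent). The toy quasar-convex objective and all solver parameters are assumptions.

```python
def prox_step(f_grad, x, p=2, sigma=1.0, inner_iters=200, lr=0.1):
    # Approximately solve u = argmin f(u) + (sigma/(p+1)) * |u - x|^(p+1)
    # by gradient descent -- the "inexact" inner solve.
    u = x
    for _ in range(inner_iters):
        g = f_grad(u) + sigma * abs(u - x) ** p * (1 if u >= x else -1)
        u -= lr * g
    return u

def high_order_prox_point(f_grad, x0, outer_iters=30):
    # Outer proximal-point loop: each iterate is the (approximate)
    # minimizer of the regularized subproblem around the previous one.
    x = x0
    for _ in range(outer_iters):
        x = prox_step(f_grad, x)
    return x

# Quasar-convex (star-convex) 1-D toy: f(x) = sqrt(1 + x^2) - 1, minimizer 0.
f_grad = lambda x: x / (1 + x * x) ** 0.5
x_star = high_order_prox_point(f_grad, 5.0)
```

On this toy the distance to the minimizer shrinks roughly quadratically once the iterates are close, which illustrates the kind of superlinear local behavior the summary refers to.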