Local LMO is a new projection-free method that achieves the convergence rates of projected gradient descent for constrained optimization by using local linear minimization oracles over small balls.
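A minimal sketch of the idea, assuming a Frank-Wolfe-style outer loop and taking the feasible set C = [0, 1]^n so the local oracle has a closed form (the function names, radius, and step-size schedule below are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def local_lmo_box(g, x, r):
    """Local linear minimization oracle for C = [0, 1]^n:
    argmin <g, s> over s in C with ||s - x||_inf <= r.
    Each coordinate moves distance r against the gradient sign,
    clipped back into the box."""
    return np.clip(x - r * np.sign(g), 0.0, 1.0)

def local_fw(grad_fn, x0, r=0.2, steps=200):
    """Projection-free descent that only ever calls the *local* oracle
    over a small ball around the current iterate (illustrative sketch)."""
    x = x0.copy()
    for t in range(steps):
        s = local_lmo_box(grad_fn(x), x, r)  # best feasible point near x
        gamma = 2.0 / (t + 2)                # classic Frank-Wolfe step size
        x = x + gamma * (s - x)              # convex combination stays in C
    return x

# Toy check: minimize ||x - c||^2 over the unit box for c outside the box;
# the minimizer is the projection of c onto [0, 1]^3, i.e. [1.0, 0.0, 0.3].
c = np.array([1.5, -0.5, 0.3])
x = local_fw(lambda x: 2.0 * (x - c), x0=np.full(3, 0.5))
```

Because every step moves to a convex combination of feasible points, no projection is ever needed; the small radius keeps each oracle call a local subproblem rather than a global one.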
11 Pith papers cite this work.

representative citing papers
In generalized contrastive learning with imbalanced classes, optimal representations collapse to class means whose angular geometry is determined by class proportions via convex optimization, and extreme imbalance causes all minority classes to collapse to one vector.
A physics-informed Bayesian model recovers user trajectories and radio maps from CSI measurements by using multipath feature distances as proxies for spatial displacements under known access point geometry.
A prediction market design that uses online learning to adaptively mix liquidity regimes from cost functions, achieving switching-regret bounds against the best hindsight sequence.
Optimistic bilevel optimization with manifold lower-level minimizers is differentiable if the optimistic selection is unique, yielding a pseudoinverse hyper-gradient and a convergent HG-MS algorithm whose rate depends on intrinsic manifold dimension.
Introduces Bayesian Sensitivity Value (BSV) for causal inference sensitivity analysis based on evidence-derived priors and Monte Carlo estimation, applied to diabetes treatment effects.
Policy iteration for discounted robust MDPs is strongly polynomial for L1 and L∞ uncertainty sets but hard for other Lp sets (see the worst-case backup sketch after this list).
Diffusion Posterior Sampling for General Noisy Inverse Problems: diffusion models solve noisy (non)linear inverse problems via approximated posterior sampling that blends diffusion steps with manifold gradients without strict consistency projection (see the sampler sketch after this list).
Weighted rules extend stable model semantics to support probabilistic reasoning, model ranking, and statistical inference in answer set programs.
Stationary duality reduces composite cardinality optimization to simple cardinality, yielding dual problems with equivalent local solutions and global solutions under appropriate parameter selection.
Robust learning problems are formulated as quasar-convex optimization, and HiPPA is proposed as an inexact high-order proximal method with global and superlinear convergence guarantees.
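On the robust-MDP entry above: the tractability claim turns on the inner worst-case step. As a concrete illustration (not the paper's algorithm), the adversary's minimization over an L∞ ball around a nominal transition vector is a tiny linear program that a greedy mass transfer solves exactly, which is what keeps each policy-iteration backup cheap for this geometry. `worst_case_expectation` and the toy numbers are placeholders:

```python
import numpy as np

def worst_case_expectation(p_hat, v, r):
    """min p.v over {p in simplex : |p_i - p_hat_i| <= r for all i}.
    Greedy transfer: move probability mass from high-value states to
    low-value states until per-coordinate budgets are exhausted."""
    p = p_hat.astype(float).copy()
    order = np.argsort(v)                      # receive mass at low v first
    lo, hi = 0, len(v) - 1
    while lo < hi:
        i, j = order[lo], order[hi]
        give = min(p_hat[i] + r, 1.0) - p[i]   # room left to raise p_i
        take = p[j] - max(p_hat[j] - r, 0.0)   # room left to cut p_j
        m = min(give, take)
        p[i] += m
        p[j] -= m
        if m == give:
            lo += 1
        if m == take:
            hi -= 1
    return float(p @ v)

# Robust Bellman backup for one state-action pair:
# Q(s, a) = reward + gamma * worst-case expected value over the L-inf ball.
p_hat = np.array([0.7, 0.2, 0.1])
v = np.array([1.0, 0.0, 2.0])
q_sa = 0.5 + 0.9 * worst_case_expectation(p_hat, v, r=0.1)
```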
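On the diffusion posterior sampling entry: here is a minimal sketch of the per-step update, assuming a DDPM-style noise predictor. The sampler forms a posterior-mean estimate of the clean signal via Tweedie's formula, then nudges the iterate along the gradient of the measurement misfit instead of projecting onto a consistency set. `eps_net`, `A`, and `zeta` are placeholder names, and the deterministic DDIM-style transition is one common choice rather than the paper's exact configuration:

```python
import torch

def dps_step(x_t, t, y, eps_net, A, alphas_cumprod, zeta=1.0):
    """One reverse step of diffusion posterior sampling (illustrative
    sketch; `eps_net` is the noise predictor, `A` the forward operator,
    `zeta` the guidance scale -- all placeholders)."""
    x_t = x_t.detach().requires_grad_(True)
    a_bar = alphas_cumprod[t]
    eps = eps_net(x_t, t)
    # Tweedie's formula: posterior-mean estimate of the clean signal x0.
    x0_hat = (x_t - torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a_bar)
    # Measurement misfit, differentiated through the network: a soft
    # gradient correction instead of a strict data-consistency projection.
    misfit = torch.linalg.norm(y - A(x0_hat))
    grad = torch.autograd.grad(misfit, x_t)[0]
    # Deterministic move toward step t - 1, then the likelihood nudge.
    a_prev = alphas_cumprod[t - 1] if t > 0 else torch.ones_like(a_bar)
    x_prev = torch.sqrt(a_prev) * x0_hat + torch.sqrt(1.0 - a_prev) * eps
    return (x_prev - zeta * grad).detach()
```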