Title resolution pending
2 Pith papers cite this work. Polarity classification is still indexing.
Fields: math.OC (2)
Years: 2026 (2)
Verdicts: UNVERDICTED (2)

Representative citing papers
A proximal limited-memory quasi-Newton scheme is developed for nonsmooth nonconvex optimization, with global convergence proven under mild assumptions and rates under the Kurdyka-Lojasiewicz property.
- Continuous-Time Dynamics of the Difference-of-Convex Algorithm
  DCA corresponds to an Euler discretization of a Bregman gradient flow; a damped variant provides monotone descent, global linear rates under a metric DC-PL condition, and local exponential convergence near nondegenerate minima.
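The DCA iteration behind this abstract splits the objective as f = g - h with g, h convex, linearizes h at the current point, and minimizes the resulting convex surrogate. A minimal sketch, using the toy decomposition f(x) = x^4 - x^2 with g(x) = x^4 and h(x) = x^2 (an illustrative assumption, not an example from the paper):

```python
import numpy as np

def dca_step(x, grad_h, solve_g_sub):
    """One DCA iteration: linearize h at x, then solve
    argmin_y g(y) - <grad_h(x), y> via the supplied subproblem solver."""
    return solve_g_sub(grad_h(x))

# Toy DC decomposition (illustrative assumption):
# f(x) = x^4 - x^2, with g(x) = x^4 and h(x) = x^2, both convex.
grad_h = lambda x: 2.0 * x
# Subproblem argmin_y y^4 - s*y has the closed form y = sign(s)*(|s|/4)**(1/3),
# from the stationarity condition 4*y^3 = s.
solve_g_sub = lambda s: np.sign(s) * (abs(s) / 4.0) ** (1.0 / 3.0)

x = 1.5
for _ in range(100):
    x = dca_step(x, grad_h, solve_g_sub)
# x approaches a critical point of f, here 1/sqrt(2) ≈ 0.7071
```

On this toy problem the iteration contracts linearly toward the nearby minimizer, which is the kind of local behavior the abstract's exponential-convergence result describes for the continuous-time flow.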
- Proximal Limited-Memory Quasi-Newton Methods for Nonsmooth Nonconvex Optimization
  A proximal limited-memory quasi-Newton scheme is developed for nonsmooth nonconvex optimization, with global convergence proven under mild assumptions and convergence rates established under the Kurdyka-Łojasiewicz property.
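The paper's scheme builds on the proximal-gradient (forward-backward) iteration, replacing the scalar step with a limited-memory quasi-Newton metric. The sketch below shows only the basic scalar-step building block on a small LASSO instance; the problem data, function names, and parameters are illustrative assumptions, not the paper's method:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(grad_f, prox_g, x0, step, iters=1000):
    """Scalar-step proximal gradient: x+ = prox_{step*g}(x - step*grad_f(x)).
    The cited paper replaces the scalar step with a limited-memory
    quasi-Newton metric; this sketch keeps the simple scalar version."""
    x = x0
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy LASSO instance (illustrative assumption):
# minimize 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + 0.01 * rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, t: soft_threshold(v, lam * t)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad_f
x_hat = prox_gradient(grad_f, prox_g, np.zeros(5), 1.0 / L)
```

At convergence x_hat is (approximately) a fixed point of the prox-gradient map, which characterizes stationarity for this composite problem; quasi-Newton variants aim to reach such points in fewer iterations by rescaling the forward step.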