Fast learning rates for plug-in classifiers under the margin condition
Abstract
It has been recently shown that, under the margin (or low noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, i.e., rates faster than $n^{-1/2}$. The works on this subject suggested the following two conjectures: (i) the best achievable fast rate is of the order $n^{-1}$, and (ii) plug-in classifiers generally converge more slowly than classifiers based on empirical risk minimization. We show that neither conjecture is correct. In particular, we construct plug-in classifiers that can achieve not only fast but also super-fast rates, i.e., rates faster than $n^{-1}$. We establish minimax lower bounds showing that the obtained rates cannot be improved.
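For orientation, a minimal sketch of what "plug-in" means here, under Tsybakov's margin assumption $P_X(0 < |\eta(X) - 1/2| \le t) \le C t^\alpha$: estimate the regression function $\eta(x) = P(Y = 1 \mid X = x)$ nonparametrically, then plug the estimate into the Bayes rule $\mathbf{1}\{\eta(x) \ge 1/2\}$. The paper's constructions use local polynomial estimators of $\eta$; the Python sketch below substitutes a Nadaraya-Watson (kernel) estimate for brevity, and the function name, bandwidth, and toy data are illustrative assumptions, not taken from the paper.

import numpy as np

def plug_in_classifier(X_train, y_train, X_test, bandwidth=0.1):
    # Plug-in rule: estimate eta(x) = P(Y=1 | X=x), then apply the Bayes
    # rule 1{eta_hat(x) >= 1/2}. Nadaraya-Watson stands in here for the
    # local polynomial estimators analyzed in the paper (a simplification,
    # not the paper's exact construction).
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
    eta_hat = (w * y_train).sum(axis=1) / np.maximum(w.sum(axis=1), 1e-12)
    return (eta_hat >= 0.5).astype(int)

# Toy usage: labels drawn with eta(x) = x_1, so the Bayes rule is 1{x_1 >= 1/2}.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = (rng.uniform(size=500) < X[:, 0]).astype(int)
print(plug_in_classifier(X, y, X_test=rng.uniform(size=(5, 2))))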
Forward citations
Cited by 2 Pith papers
- Empirical Bernstein Confidence Intervals for Kernel Smoothers: A Safe and Sharp Way to Exhaust Assumed Smoothness
  Empirical Bernstein confidence intervals for univariate kernel smoothers attain at least nominal coverage uniformly over local Taylor-remainder smoothness classes up to a vanishing error while achieving the minimax wi...
- Empirical Bernstein Confidence Intervals for Kernel Smoothers: A Safe and Sharp Way to Exhaust Assumed Smoothness
  Empirical Bernstein confidence intervals for kernel smoothers attain nominal coverage up to a remainder of order $n^{-2S/(2S+1)}$ while achieving minimax-optimal widths under $S$-th order local smoothness.