pith. machine review for the scientific record.

arxiv: 1010.0072 · v2 · submitted 2010-10-01 · 🧮 math.ST · stat.TH

Recognition: unknown

Linear regression through PAC-Bayesian truncation

Authors on Pith: no claims yet
classification 🧮 math.ST · stat.TH
keywords: exponential, linear, pac-bayesian, results, risk, assumptions, combination, deviations
original abstract

We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression, under L^\infty constraints on the linear combination. When the input distribution is known, there already exists an algorithm with an expected excess risk of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log(n) factor, complicated constants involving the conditioning of the Gram matrix of the covariates, kurtosis coefficients, or some geometric quantity characterizing the relation between L^2- and L^\infty-balls, and require additional assumptions such as exponential moments of the output. This work provides a PAC-Bayesian shrinkage procedure with a simple excess risk bound of order d/n holding in expectation and in deviations, under various assumptions. The common surprising feature of these results is their simplicity and the absence of an exponential moment condition on the output distribution while still achieving exponential deviations. The risk bounds are obtained through a PAC-Bayesian analysis on truncated differences of losses. We also show that these results can be generalized to other strongly convex loss functions.
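The truncation device the abstract refers to can be illustrated numerically. The toy sketch below is not the authors' estimator; it only shows the basic idea of truncating per-sample differences of squared losses between two candidate linear combinations, so that a heavy-tailed output (here Student-t noise, which has no exponential moments) cannot dominate the empirical comparison. All names (`lam`, `theta_alt`, etc.) and numbers are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, heavy-tailed (Student-t) output noise,
# so the output has no exponential moments.
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)
Y = X @ theta_star + rng.standard_t(df=2.5, size=n)

# Two candidate linear combinations: the least squares fit and a
# perturbed competitor.
theta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
theta_alt = theta_ols + 0.3 * rng.normal(size=d)

# Per-sample difference of squared losses between the two candidates.
diff = (Y - X @ theta_alt) ** 2 - (Y - X @ theta_ols) ** 2

# Truncation at level lam caps the influence of any single
# heavy-tailed observation on the empirical comparison.
lam = 10.0
truncated = np.clip(diff, -lam, lam)

print("raw mean loss difference      :", diff.mean())
print("truncated mean loss difference:", truncated.mean())
print("largest |raw difference|      :", np.abs(diff).max())
```

The point of the sketch is only the mechanism: after clipping, every summand is bounded by lam in absolute value, which is what makes exponential deviation bounds possible without moment conditions on the output.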

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Statistical Inference via T-Posterior Randomised Estimators

    math.ST · 2026-05 · unverdicted · novelty 5.0

    Introduces T-posterior randomised estimators that deliver non-asymptotic performance bounds and robustness to misspecification for distribution estimation, illustrated on Poisson process intensity.