pith. machine review for the scientific record.

arxiv: 1807.02582 · v1 · submitted 2018-07-06 · 📊 stat.ML · cs.LG

Recognition: unknown

Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: kernel · gaussian · approaches · learning · methods · processes · regression · results
read the original abstract

This paper is an attempt to bridge the conceptual gaps between researchers working on the two widely used approaches based on positive definite kernels: Bayesian learning or inference using Gaussian processes on the one side, and frequentist kernel methods based on reproducing kernel Hilbert spaces on the other. It is widely known in machine learning that these two formalisms are closely related; for instance, the estimator of kernel ridge regression is identical to the posterior mean of Gaussian process regression. However, they have been studied and developed almost independently by two essentially separate communities, and this makes it difficult to seamlessly transfer results between them. Our aim is to overcome this potential difficulty. To this end, we review several old and new results and concepts from either side, and juxtapose algorithmic quantities from each framework to highlight close similarities. We also provide discussions on subtle philosophical and theoretical differences between the two approaches.
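The equivalence the abstract cites — that the kernel ridge regression estimator coincides with the Gaussian process posterior mean — can be checked numerically. A minimal sketch (not from the paper; the RBF kernel, data, and variable names are illustrative): with kernel matrix K, GP noise variance σ², and KRR regularization λ = σ²/n, both predictors reduce to k(x*,X)(K + σ²I)⁻¹y.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    # Squared-exponential kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
n = 20
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
Xs = np.linspace(-3, 3, 50)[:, None]   # test inputs

sigma2 = 0.01                          # GP observation-noise variance
K = rbf(X, X)
ks = rbf(Xs, X)

# GP regression posterior mean: k(x*,X) (K + sigma^2 I)^{-1} y
gp_mean = ks @ np.linalg.solve(K + sigma2 * np.eye(n), y)

# Kernel ridge regression, f_hat(x*) = k(x*,X) (K + n*lambda I)^{-1} y,
# with lambda chosen as sigma^2 / n:
lam = sigma2 / n
krr = ks @ np.linalg.solve(K + n * lam * np.eye(n), y)

assert np.allclose(gp_mean, krr)       # identical estimators
```

The match is exact up to floating point because the two linear systems are literally the same; the frameworks diverge only in how they interpret and quantify the surrounding uncertainty.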

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 8 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Robust and Data-Adaptive Integration of Nonconcurrent Data in Platform Trials via Gaussian Processes

    stat.ME 2026-05 unverdicted novelty 7.0

    A Gaussian process model enables data-adaptive integration of nonconcurrent controls in platform trials, reducing posterior variance with a non-increasing bias bound.

  2. To discretize continually: Mean shift interacting particle systems for Bayesian inference

    stat.ML 2026-05 unverdicted novelty 7.0

    Mean shift interacting particle systems generate weighted samples approximating expectations under unnormalized densities by minimizing MMD through normalizing-constant-invariant dynamics.

  3. Kernel-based guarantees for nonlinear parametric models in Bayesian optimization

    stat.ML 2026-05 unverdicted novelty 7.0

    A kernel framework over parameter space yields confidence bounds for regularized nonlinear models on adaptive data, supporting convergence analysis in Bayesian optimization.

  4. Pressure reconstruction from error-embedded gradient measurements: a Gaussian-process generalization of Green's function integration

    physics.flu-dyn 2026-05 conditional novelty 7.0

    Gaussian process regression reconstructs pressure from error-embedded gradients by treating the field as a random process with a fitted correlation kernel, generalizing Green's function integration as its zero-noise l...

  5. Nearly-Optimal Algorithm for Adversarial Kernelized Bandits

    cs.LG 2026-05 unverdicted novelty 7.0

    Exponential-weight algorithm achieves Õ(√(T γ_T)) adversarial regret for kernelized bandits and is optimal up to logs for squared exponential and Matérn kernels, with an efficient Nyström variant.

  6. Fast and Provably Accurate Sequential Designs using Hilbert Space Gaussian Processes

    math.ST 2026-04 unverdicted novelty 7.0

    Hilbert space Gaussian process approximations enable closed-form IMSE acquisition functions with non-asymptotic error bounds for faster and more accurate sequential designs.

  7. Online Sharp-Calibrated Bayesian Optimization

    cs.LG 2026-05 unverdicted novelty 6.0

    OSCBO adaptively balances Gaussian process sharpness and calibration in Bayesian optimization by casting hyperparameter selection as constrained online learning, while preserving sublinear regret bounds.

  8. Adversarial Robustness of NTK Neural Networks

    stat.ML 2026-04 unverdicted novelty 6.0

    NTK networks achieve minimax optimal adversarial regression rates in Sobolev spaces with early stopping, but minimum-norm interpolants are vulnerable.