pith. machine review for the scientific record

arxiv: 1710.07324 · v2 · submitted 2017-10-19 · 💻 cs.LG · stat.ML

Recognition: unknown

Scalable Gaussian Processes with Billions of Inducing Inputs via Tensor Train Decomposition

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: train, inducing, inputs, neural, allows, approach, billions, decomposition
0 comments
Read the original abstract

We propose a method (TT-GP) for approximate inference in Gaussian Process (GP) models. We build on previous scalable GP research including stochastic variational inference based on inducing inputs, kernel interpolation, and structure exploiting algebra. The key idea of our method is to use Tensor Train decomposition for variational parameters, which allows us to train GPs with billions of inducing inputs and achieve state-of-the-art results on several benchmarks. Further, our approach allows for training kernels based on deep neural networks without any modifications to the underlying GP model. A neural network learns a multidimensional embedding for the data, which is used by the GP to make the final prediction. We train GP and neural network parameters end-to-end without pretraining, through maximization of GP marginal likelihood. We show the efficiency of the proposed approach on several regression and classification benchmark datasets including MNIST, CIFAR-10, and Airline.
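The core idea in the abstract is that the variational parameters over a multidimensional grid of inducing inputs are stored in Tensor Train (TT) format, so the parameter count grows linearly rather than exponentially with the number of grid dimensions. A minimal sketch of that storage trade-off, written in plain NumPy (this is an illustrative reconstruction, not the authors' code; the grid size `m`, dimensionality `d`, and TT-rank `r` are assumed values):

```python
# Hypothetical sketch: the variational mean mu over a d-dimensional grid of
# inducing inputs, stored as a Tensor Train instead of a dense m**d tensor.
import numpy as np

rng = np.random.default_rng(0)

d = 10   # grid dimensions (assumed)
m = 10   # inducing inputs per dimension -> m**d inducing inputs in total
r = 5    # TT-rank (assumed small, as in the paper's low-rank setting)

# TT cores: cores[k] has shape (r_{k-1}, m, r_k), with boundary ranks of 1.
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], m, ranks[k + 1])) for k in range(d)]

def tt_entry(cores, idx):
    """Evaluate one entry mu[i1, ..., id] as a product of core slices."""
    v = np.ones((1, 1))
    for core, i in zip(cores, idx):
        v = v @ core[:, i, :]   # (1, r_{k-1}) @ (r_{k-1}, r_k)
    return v[0, 0]

tt_params = sum(c.size for c in cores)   # linear in d
full_params = m ** d                     # exponential in d
print("TT parameters:  ", tt_params)
print("Dense tensor:   ", full_params)
```

With these assumed sizes the TT format stores a few thousand numbers in place of the ten billion entries of the dense tensor, which is what makes "billions of inducing inputs" tractable; the structured (Kronecker) kernel algebra mentioned in the abstract plays the analogous role on the covariance side.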

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Neural Operator: Graph Kernel Network for Partial Differential Equations

    cs.LG · 2020-03 · unverdicted · novelty 7.0

    Graph Kernel Networks learn PDE solution operators that generalize across discretization methods and grid resolutions using graph-based kernel integration.