Exact GPs with Scalable (GPU) Inference¶
In GPyTorch, exact GP inference remains our preferred approach to large regression datasets. By coupling GPU acceleration with BlackBox Matrix-Matrix Inference and Lanczos Variance Estimates (LOVE), GPyTorch can perform inference on datasets with over 1,000,000 data points while making very few approximations.
How GPyTorch Scales Exact GPs¶
GPyTorch relies on two key techniques to scale exact GPs to millions of data points using GPU acceleration.
- BlackBox Matrix-Matrix Inference (introduced by Gardner et al., 2018) computes the GP marginal log likelihood using only matrix multiplication. It is stochastic, but can scale exact GPs to millions of data points.
- GP Regression (CUDA) with Fast Variances (LOVE) demonstrates Lanczos Variance Estimates (LOVE), a technique that rapidly speeds up predictive variance computations. Check out this notebook to see how to use LOVE in GPyTorch and how it compares to standard variance computations.
Exact GPs with GPU Acceleration¶
Here are examples of Exact GPs using GPU acceleration.
- For datasets with up to 10,000 data points, see our single GPU regression example.
- For datasets with up to 1,000,000 data points, see our multi GPU regression example.
- GPyTorch also integrates with KeOps for extremely fast and memory-efficient kernel computations. See the KeOps integration notebook.
Scalable Kernel Approximations¶
While exact computations are our preferred approach, GPyTorch offers approximate kernels that reduce the asymptotic complexity of inference.
- Sparse Gaussian Process Regression (SGPR) (proposed by Titsias, 2009) which approximates kernels using a set of inducing points. This is a general purpose approximation.
- Structured Kernel Interpolation (SKI/KISS-GP) (proposed by Wilson and Nickisch, 2015) which places inducing points on a regularly spaced grid and interpolates the kernel between them. This is designed for low-dimensional data and stationary kernels.
- Structured Kernel Interpolation for Products (SKIP) (proposed by Gardner et al., 2018) which extends SKI to higher dimensions.