Variational and Approximate GPs

Variational and approximate Gaussian processes are used in a variety of cases:

  • When the GP likelihood is non-Gaussian (e.g. for classification).
  • To scale up GP regression (by using stochastic optimization).
  • To use GPs as part of larger probabilistic models.

With GPyTorch it is possible to implement various types of approximate GP models. All approximate models consist of the following three composable objects (combined in the sketch after this list):

  • A VariationalDistribution, which defines the form of the approximate inducing-value posterior \(q(\mathbf u)\).
  • A VariationalStrategy, which defines how to compute \(q(\mathbf f(\mathbf X))\) from \(q(\mathbf u)\).
  • An _ApproximateMarginalLogLikelihood, which defines the objective function used to learn the approximate posterior (e.g. the variational ELBO).
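
Concretely, a minimal sketch of how these three objects fit together might look like the following (the RBF kernel, the 16 inducing points, and num_data=100 are illustrative choices, not defaults):

```python
import torch
import gpytorch

class ApproximateGPModel(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        # q(u): a full-covariance Gaussian over the inducing values,
        # parameterized by a Cholesky factor
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        # Defines how q(f(X)) is computed from q(u); the inducing
        # locations are learned jointly with the other parameters
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

# 16 inducing points in 1D; for classification, swap in a non-Gaussian
# likelihood (e.g. gpytorch.likelihoods.BernoulliLikelihood)
model = ApproximateGPModel(inducing_points=torch.randn(16, 1))
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=100)
```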

The variational documentation has more information on how to use these objects. The examples below highlight some common use cases:
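
For instance, scaling up GP regression with stochastic optimization (the second use case above) amounts to minimizing the negative ELBO over minibatches. Continuing the sketch above (the toy data, batch size, learning rate, and epoch count are all placeholders):

```python
from torch.utils.data import DataLoader, TensorDataset

# Toy data standing in for a real dataset; num_data passed to the
# VariationalELBO above should match train_y.size(0)
train_x = torch.linspace(0, 1, 100).unsqueeze(-1)
train_y = torch.sin(6.0 * train_x).squeeze(-1)
loader = DataLoader(TensorDataset(train_x, train_y), batch_size=32, shuffle=True)

model.train()
likelihood.train()
optimizer = torch.optim.Adam([*model.parameters(), *likelihood.parameters()], lr=0.01)

for epoch in range(50):
    for x_batch, y_batch in loader:
        optimizer.zero_grad()
        loss = -mll(model(x_batch), y_batch)  # negative ELBO on the minibatch
        loss.backward()
        optimizer.step()
```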