gpytorch.models

Models for Exact GP Inference

ExactGP

class gpytorch.models.ExactGP(train_inputs, train_targets, likelihood)[source]
get_fantasy_model(inputs, targets, **kwargs)[source]

Returns a new GP model that incorporates the specified inputs and targets as new training data.

Using this method is more efficient than updating with set_train_data when the number of inputs is relatively small, because any computed test-time caches will be updated in linear time rather than computed from scratch.

Note

If targets is a batch (e.g. b x m), then the GP returned from this method will be a batch mode GP. If inputs is of the same (or lesser) dimension as targets, then it is assumed that the fantasy points are the same for each target batch.

Args:
  • inputs (Tensor b1 x … x bk x m x d or f x b1 x … x bk x m x d): Locations of fantasy
    observations.
  • targets (Tensor b1 x … x bk x m or f x b1 x … x bk x m): Labels of fantasy observations.
Returns:
  • ExactGP
    An ExactGP model with n + m training examples, where the m fantasy examples have been added and all test-time caches have been updated.
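Example (a minimal sketch; MyExactGP stands in for any user-defined ExactGP subclass, and the tensor shapes are illustrative):
>>> model = MyExactGP(train_x, train_y, likelihood)  # hypothetical ExactGP subclass
>>> model.eval()
>>> with torch.no_grad():
>>>     model(test_x)  # populate test-time caches
>>>
>>> # Condition on 5 new (fantasy) observations without re-fitting
>>> fant_x = torch.randn(5, train_x.size(-1))
>>> fant_y = torch.randn(5)
>>> fantasy_model = model.get_fantasy_model(fant_x, fant_y)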
set_train_data(inputs=None, targets=None, strict=True)[source]

Set training data (does not re-fit model hyper-parameters).

Args:
  • inputs (Tensor): the new training inputs.
  • targets (Tensor): the new training targets.
  • strict (bool):
    if True, the new inputs and targets must have the same shape, dtype, and device as the current inputs and targets. Otherwise, any shape/dtype/device is allowed.
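Example (a sketch; model is assumed to be an existing ExactGP instance, and the tensor names are illustrative):
>>> # Swap in same-shaped training data (strict=True is the default)
>>> model.set_train_data(new_train_x, new_train_y)
>>>
>>> # Swap in a differently sized training set; relax the shape check
>>> model.set_train_data(bigger_train_x, bigger_train_y, strict=False)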

Models for Variational GP Inference

ApproximateGP

class gpytorch.models.ApproximateGP(variational_strategy)[source]
forward(x)[source]

As in the exact GP setting, the user-defined forward method should return the GP prior mean and covariance evaluated at input locations x.
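For example, a typical SVGP-style subclass might look like the following sketch (the inducing-point count, mean, and kernel choices are illustrative):
>>> class MyApproximateGP(gpytorch.models.ApproximateGP):
>>>     def __init__(self, inducing_points):
>>>         variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
>>>             inducing_points.size(-2)
>>>         )
>>>         variational_strategy = gpytorch.variational.VariationalStrategy(
>>>             self, inducing_points, variational_distribution, learn_inducing_locations=True
>>>         )
>>>         super().__init__(variational_strategy)
>>>         self.mean_module = gpytorch.means.ConstantMean()
>>>         self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
>>>
>>>     def forward(self, x):
>>>         mean_x = self.mean_module(x)
>>>         covar_x = self.covar_module(x)
>>>         return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)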

pyro_guide(input, beta=1.0, name_prefix='')[source]

(For Pyro integration only). The component of a pyro.guide that corresponds to drawing samples from the latent GP function.

Args:
input (torch.Tensor)
The inputs \(\mathbf X\).
beta (float, default=1.)
How much to scale the \(\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]\) term by.
name_prefix (str, default='')
A name prefix to prepend to pyro sample sites.
pyro_model(input, beta=1.0, name_prefix='')[source]

(For Pyro integration only). The component of a pyro.model that corresponds to drawing samples from the latent GP function.

Args:
input (torch.Tensor)
The inputs \(\mathbf X\).
beta (float, default=1.)
How much to scale the \(\text{KL} [ q(\mathbf f) \Vert p(\mathbf f) ]\) term by.
name_prefix (str, default='')
A name prefix to prepend to pyro sample sites.

Returns: torch.Tensor samples from \(q(\mathbf f)\)
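A sketch of how these two hooks are typically paired inside a user-defined model/guide (the Gaussian observation model and the self.noise attribute are assumptions for illustration; see the Pyro integration examples for complete versions):
>>> class MyPyroGP(gpytorch.models.ApproximateGP):
>>>     # ... __init__ and forward as in any ApproximateGP ...
>>>
>>>     def guide(self, x, y):
>>>         function_dist = self.pyro_guide(x)  # samples from q(f)
>>>         with pyro.plate("data_plate", dim=-1):
>>>             pyro.sample("f(x)", function_dist)
>>>
>>>     def model(self, x, y):
>>>         pyro.module("gp", self)
>>>         function_dist = self.pyro_model(x)  # samples from the prior over f
>>>         with pyro.plate("data_plate", dim=-1):
>>>             f = pyro.sample("f(x)", function_dist)
>>>             # Assumed Gaussian observation model with illustrative noise scale
>>>             return pyro.sample("y", pyro.distributions.Normal(f, self.noise), obs=y)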

Models for integrating with Pyro

PyroGP

class gpytorch.models.PyroGP(variational_strategy, likelihood, num_data, name_prefix='', beta=1.0)[source]

An ApproximateGP designed to work with Pyro.

This module makes it possible to include GP models within more complex probabilistic models, or to use likelihood functions with additional variational/approximate distributions.

The parameters of these models are learned using Pyro’s inference tools, unlike other models, which are optimized with respect to a MarginalLogLikelihood. See the Pyro examples for details.

Args:
variational_strategy (VariationalStrategy):
The variational strategy that defines the variational distribution and the marginalization strategy.
likelihood (Likelihood):
The likelihood for the model
num_data (int):
The total number of training data points (necessary for SGD)
name_prefix (str, optional):
A prefix to put in front of pyro sample/plate sites
beta (float, default=1.):
A multiplicative factor for the KL divergence term. Setting it to 1 (default) recovers true variational inference (as derived in Scalable Variational Gaussian Process Classification). Setting it to anything less than 1 reduces the regularization effect of the model (similarly to what was proposed in the beta-VAE paper).
Example:
>>> class MyVariationalGP(gpytorch.models.PyroGP):
>>>     # implementation
>>>
>>> # variational_strategy = ...
>>> likelihood = gpytorch.likelihoods.GaussianLikelihood()
>>> model = MyVariationalGP(variational_strategy, likelihood, train_y.size(0))
>>>
>>> optimizer = pyro.optim.Adam({"lr": 0.01})
>>> elbo = pyro.infer.Trace_ELBO(num_particles=64, vectorize_particles=True)
>>> svi = pyro.infer.SVI(model.model, model.guide, optimizer, elbo)
>>>
>>> # Optimize variational parameters
>>> for _ in range(n_iter):
>>>     loss = svi.step(train_x, train_y)
guide(input, target, *args, **kwargs)[source]

Guide function for Pyro inference. Includes the guide for the GP’s likelihood function as well.

Args:
input (torch.Tensor):
\(\mathbf X\) The input values
target (torch.Tensor):
\(\mathbf y\) The target values
*args, **kwargs:
Additional arguments passed to the likelihood’s forward function.
model(input, target, *args, **kwargs)[source]

Model function for Pyro inference. Includes the model for the GP’s likelihood function as well.

Args:
input (torch.Tensor):
\(\mathbf X\) The input values
target (torch.Tensor):
\(\mathbf y\) The target values
*args, **kwargs:
Additional arguments passed to the likelihood’s forward function.