gpytorch.likelihoods

Likelihood

class gpytorch.likelihoods.Likelihood[source]

A Likelihood in GPyTorch specifies the mapping from latent function values f to observed labels y.

For example, in the case of regression this might be a Gaussian distribution, as y(x) is equal to f(x) plus Gaussian noise:

\[\begin{equation*} y(x) = f(x) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2_n \mathbf{I}) \end{equation*}\]

In the case of classification, this might be a Bernoulli distribution, where the probability that y=1 is given by the latent function passed through some sigmoid or probit function:

\[\begin{equation*} y(x) = \begin{cases} 1 & \text{with probability } \sigma(f(x)) \\ -1 & \text{with probability } 1 - \sigma(f(x)) \end{cases} \end{equation*}\]

In either case, to implement a (non-Gaussian) likelihood function, GPyTorch requires that two methods be implemented:

  1. A forward method that computes predictions p(y*|x*) given a distribution over the latent function p(f*|x*). Typically, this solves or approximates the integral:

    \[\begin{equation*} p(y^* \mid x^*) = \int p(y^* \mid f^*) \, p(f^* \mid x^*) \, df^* \end{equation*}\]

  2. A variational_log_probability method that computes the log probability log p(y|f) from a set of samples of f. This is only used for variational inference (see the sketch below).
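As an illustration, here is a minimal sketch of a custom Bernoulli-style likelihood implementing both methods. The class name, the sigmoid link, the fixed sample count, and the Monte Carlo approximation in forward are illustrative choices, not part of the GPyTorch API:

    import torch
    import gpytorch

    class SigmoidBernoulliLikelihood(gpytorch.likelihoods.Likelihood):
        # Hypothetical likelihood: p(y=1|f) = sigmoid(f), with labels in {0, 1}.

        def forward(self, latent_dist):
            # latent_dist is a MultivariateNormal over f*; approximate
            # p(y*|x*) = int p(y*|f*) p(f*|x*) df* by Monte Carlo.
            f_samples = latent_dist.rsample(torch.Size([64]))   # 64 x n samples of f*
            probs = torch.sigmoid(f_samples).mean(dim=0)        # average over samples
            return torch.distributions.Bernoulli(probs=probs)

        def variational_log_probability(self, f, y):
            # f: n x s matrix of samples from q(f|D); y: n-by-1 labels in {0, 1}.
            log_probs = torch.distributions.Bernoulli(
                probs=torch.sigmoid(f)
            ).log_prob(y.view(-1, 1))
            return log_probs.mean(dim=1).sum()                  # average over samples, sum over data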

forward(*inputs, **kwargs)[source]

Computes a predictive distribution p(y*|x*) given either a posterior distribution p(f|D,x) or a prior distribution p(f|x) as input.

With both exact inference and variational inference, the form of p(f|D,x) or p(f|x) should usually be Gaussian. As a result, input should usually be a MultivariateNormal specified by the mean and (co)variance of p(f|…).
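For example, assuming the MultivariateNormal class is exposed as gpytorch.distributions.MultivariateNormal, calling a likelihood on such a distribution produces the predictive distribution (the mean and covariance values here are purely illustrative):

    import torch
    import gpytorch

    # A toy latent distribution p(f*|x*) over four test points.
    latent = gpytorch.distributions.MultivariateNormal(torch.zeros(4), torch.eye(4))

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    predictive = likelihood(latent)   # p(y*|x*): latent covariance plus observation noise
    print(predictive.mean, predictive.covariance_matrix)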

variational_log_probability(f, y)[source]

Compute the log likelihood log p(y|f) of y, averaged over a set of latent function samples.

For the purposes of our variational inference implementation, y is an n-by-1 label vector, and f is an n-by-s matrix of s samples from the variational posterior, q(f|D).

Standard Likelihoods

GaussianLikelihood

class gpytorch.likelihoods.GaussianLikelihood(noise_prior=None, batch_size=1, param_transform=softplus, inv_param_transform=None, **kwargs)[source]
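This is the standard likelihood for regression with homoskedastic Gaussian observation noise, as in the regression model above. Below is a sketch of the usual exact GP workflow with this likelihood, assuming gpytorch.models.ExactGP, gpytorch.mlls.ExactMarginalLogLikelihood, and the usual mean/kernel modules are available; the model class and training data are illustrative:

    import torch
    import gpytorch

    class ExactGPModel(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super().__init__(train_x, train_y, likelihood)
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

        def forward(self, x):
            # Prior p(f|x) defined by the mean and kernel.
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(x), self.covar_module(x)
            )

    train_x = torch.linspace(0, 1, 20)
    train_y = torch.sin(6.28 * train_x) + 0.1 * torch.randn(20)

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = ExactGPModel(train_x, train_y, likelihood)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

    model.train()
    likelihood.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    for _ in range(50):
        optimizer.zero_grad()
        loss = -mll(model(train_x), train_y)
        loss.backward()
        optimizer.step()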

BernoulliLikelihood

class gpytorch.likelihoods.BernoulliLikelihood[source]

Implements the Bernoulli likelihood used for GP classification, using Probit regression (i.e., the latent function is warped to be in [0,1] using the standard Normal CDF Phi(x)). Given the identity Phi(-x) = 1-Phi(x), we can write the likelihood compactly as:

\[\begin{equation*} p(Y=y|f)=\Phi(yf) \end{equation*}\]
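For instance, passing a latent MultivariateNormal through this likelihood yields a Bernoulli predictive distribution whose probabilities are the probit-warped latent values. The latent mean and variance below are made up for illustration, and the module path gpytorch.distributions.MultivariateNormal is assumed:

    import torch
    import gpytorch

    likelihood = gpytorch.likelihoods.BernoulliLikelihood()

    # Toy latent distribution p(f*|x*) over three test points.
    latent = gpytorch.distributions.MultivariateNormal(
        torch.tensor([-2.0, 0.0, 2.0]), 0.1 * torch.eye(3)
    )

    pred = likelihood(latent)
    print(pred.probs)   # roughly [0.03, 0.50, 0.97]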

Specialty Likelihoods

MultitaskGaussianLikelihood

class gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks, rank=0, task_correlation_prior=None, batch_size=1, noise_prior=None, param_transform=softplus, inv_param_transform=None, **kwargs)[source]

A convenient extension of gpytorch.likelihoods.GaussianLikelihood to the multitask setting that allows for a full cross-task covariance structure for the noise. The fitted noise covariance matrix has rank rank. If a strictly diagonal task noise covariance matrix is desired, set rank=0. (This option still allows a different log_noise parameter for each task.) This likelihood assumes homoskedastic noise.

Like the Gaussian likelihood, this object can be used with exact inference.

Note: This likelihood does not yet support batched training and evaluation. If you need batch support, use MultitaskGaussianLikelihoodKronecker for the time being.
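As a usage sketch for a two-task problem, following the signature above:

    import gpytorch

    # Rank-1 cross-task noise covariance; rank=0 would instead give independent
    # (diagonal) task noises with a separate noise parameter per task.
    likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=2, rank=1)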

SoftmaxLikelihood

class gpytorch.likelihoods.SoftmaxLikelihood(num_features, n_classes, mixing_weights_prior=None)[source]

Implements the Softmax (multiclass) likelihood used for GP classification.
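As a usage sketch matching the signature above (the feature and class counts are illustrative):

    import gpytorch

    # Ten latent functions (features) mixed into four class logits.
    likelihood = gpytorch.likelihoods.SoftmaxLikelihood(num_features=10, n_classes=4)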