# gpytorch.likelihoods¶

## Likelihood¶

class gpytorch.likelihoods.Likelihood(max_plate_nesting=1)[source]

A Likelihood in GPyTorch specifies the mapping from latent function values $$f(\mathbf X)$$ to observed labels $$y$$.

For example, in the case of regression this might be a Gaussian distribution, as $$y(\mathbf x)$$ is equal to $$f(\mathbf x)$$ plus Gaussian noise:

$y(\mathbf x) = f(\mathbf x) + \epsilon, \:\:\:\: \epsilon \sim \mathcal N(0, \sigma^{2}_{n} \mathbf I)$

In the case of classification, this might be a Bernoulli distribution, where the probability that $$y=1$$ is given by the latent function passed through some sigmoid or probit function:

$\begin{split}y(\mathbf x) = \begin{cases} 1 & \text{w/ probability} \:\: \sigma(f(\mathbf x)) \\ 0 & \text{w/ probability} \:\: 1-\sigma(f(\mathbf x)) \end{cases}\end{split}$

In either case, to implement a likelihood function, GPyTorch only requires a forward method that computes the conditional distribution $$p(y \mid f(\mathbf x))$$.
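
For example, a minimal custom likelihood might look like the following sketch (the class name and observation model are illustrative, not part of the library):

```python
import torch
import gpytorch

# A minimal sketch of a custom likelihood: only `forward` is required.
# Here we assume a Bernoulli observation model whose success probability
# is the sigmoid of the latent function (illustrative choice).
class SigmoidBernoulliLikelihood(gpytorch.likelihoods.Likelihood):
    def forward(self, function_samples, **kwargs):
        # p(y | f(x)): Bernoulli with logits given by the latent samples
        return torch.distributions.Bernoulli(logits=function_samples)
```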

Calling this object does one of two things:

• If likelihood is called with a torch.Tensor object, then it is assumed that the input is samples from $$f(\mathbf x)$$. This returns the conditional distribution $$p(y \mid f(\mathbf x))$$.
• If likelihood is called with a MultivariateNormal object, then it is assumed that the input is the distribution $$f(\mathbf x)$$. This returns the marginal distribution $$p(y \mid \mathbf x)$$ (see the sketch below).
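
A short sketch of both calling conventions (the tensors here are illustrative):

```python
import torch
import gpytorch

likelihood = gpytorch.likelihoods.GaussianLikelihood()

# Called on a Tensor of samples from f(x): returns the conditional p(y | f(x))
f_samples = torch.randn(10, 5)       # hypothetical latent samples
conditional = likelihood(f_samples)  # a torch.distributions.Normal

# Called on a MultivariateNormal f(x): returns the marginal p(y | x)
f_dist = gpytorch.distributions.MultivariateNormal(torch.zeros(5), torch.eye(5))
marginal = likelihood(f_dist)        # a MultivariateNormal with noise added
```
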
Args:
max_plate_nesting (int, default=1)
(For Pyro integration only). How many batch dimensions are in the function. This should be modified if the likelihood uses plated random variables.
expected_log_prob(observations, function_dist, *args, **kwargs)[source]

(Used by VariationalELBO for variational inference.)

Computes the expected log likelihood, where the expectation is over the GP variational distribution.

$\sum_{\mathbf x, y} \mathbb{E}_{q\left( f(\mathbf x) \right)} \left[ \log p \left( y \mid f(\mathbf x) \right) \right]$
Args:
observations (torch.Tensor)
Values of $$y$$.
function_dist (MultivariateNormal)
Distribution for $$f(\mathbf x)$$.
args, kwargs
Passed to the forward function
Returns
torch.Tensor (log probability)
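
A sketch of calling this method directly; in practice it is invoked for you by gpytorch.mlls.VariationalELBO. The observations and variational distribution below are illustrative stand-ins:

```python
import torch
import gpytorch

likelihood = gpytorch.likelihoods.GaussianLikelihood()
y = torch.randn(5)  # illustrative observations
f_dist = gpytorch.distributions.MultivariateNormal(torch.zeros(5), torch.eye(5))
ell = likelihood.expected_log_prob(y, f_dist)  # one term per observation
```
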
forward(function_samples, *args, data={}, **kwargs)[source]

Computes the conditional distribution $$p(\mathbf y \mid \mathbf f, \ldots)$$ that defines the likelihood.

Args:
function_samples (torch.Tensor)
Samples from the function ($$\mathbf f$$).
data (dict)
(Optional, Pyro integration only.) Additional variables ($$\ldots$$) that the likelihood needs to condition on. The keys of the dictionary will correspond to Pyro sample sites in the likelihood’s model/guide.
args, kwargs
Additional args and kwargs
Returns
Distribution object (with same shape as function_samples)
get_fantasy_likelihood(**kwargs)[source]
log_marginal(observations, function_dist, *args, **kwargs)[source]

(Used by PredictiveLogLikelihood for approximate inference.)

Computes the log marginal likelihood of the approximate predictive distribution

$\sum_{\mathbf x, y} \log \mathbb{E}_{q\left( f(\mathbf x) \right)} \left[ p \left( y \mid f(\mathbf x) \right) \right]$

Note that this differs from expected_log_prob() because the $$\log$$ is on the outside of the expectation.

Args:
observations (torch.Tensor)
Values of $$y$$.
function_dist (MultivariateNormal)
Distribution for $$f(\mathbf x)$$.
args, kwargs
Passed to the forward function
Returns
torch.Tensor (log probability)
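
The following sketch contrasts the two quantities on the same illustrative inputs; log_marginal is what gpytorch.mlls.PredictiveLogLikelihood uses:

```python
import torch
import gpytorch

likelihood = gpytorch.likelihoods.GaussianLikelihood()
y = torch.randn(5)  # illustrative observations
f_dist = gpytorch.distributions.MultivariateNormal(torch.zeros(5), torch.eye(5))
elbo_term = likelihood.expected_log_prob(y, f_dist)  # E_q[ log p(y|f) ]
pll_term = likelihood.log_marginal(y, f_dist)        # log E_q[ p(y|f) ]
```
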
marginal(function_dist, *args, **kwargs)[source]

Computes a predictive distribution $$p(y^* | \mathbf x^*)$$ given either a posterior distribution $$p(\mathbf f | \mathcal D, \mathbf x)$$ or a prior distribution $$p(\mathbf f|\mathbf x)$$ as input.

With both exact inference and variational inference, the form of $$p(\mathbf f|\mathcal D, \mathbf x)$$ or $$p(\mathbf f| \mathbf x)$$ should usually be Gaussian. As a result, function_dist should usually be a MultivariateNormal specified by the mean and (co)variance of $$p(\mathbf f|...)$$.

Args:
function_dist (MultivariateNormal)
Distribution for $$f(\mathbf x)$$.
args, kwargs
Passed to the forward function
Returns:
Distribution object (the marginal distribution, or samples from it)
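
In a typical workflow this method is invoked implicitly by calling the likelihood on the model's output. A sketch, assuming a standard GPyTorch model, likelihood, and test inputs test_x already exist:

```python
import torch

# `model`, `likelihood`, and `test_x` are assumed from a standard GPyTorch setup.
model.eval()
likelihood.eval()
with torch.no_grad():
    f_post = model(test_x)       # p(f | D, x*): a MultivariateNormal
    y_pred = likelihood(f_post)  # p(y* | x*): dispatches to marginal()
```
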
pyro_guide(function_dist, target, *args, **kwargs)[source]

(For Pyro integration only).

Part of the guide function for the likelihood. This should be re-defined if the likelihood contains any latent variables that need to be inferred.

Args:
function_dist (MultivariateNormal)
Distribution of the latent function, $$q(\mathbf f)$$.
target (torch.Tensor)
Observed $$\mathbf y$$.
args, kwargs
Additional args and kwargs (passed to forward())
pyro_model(function_dist, target, *args, **kwargs)[source]

(For Pyro integration only).

Part of the model function for the likelihood. This should be re-defined if the likelihood contains any latent variables that need to be inferred.

Args:
function_dist (MultivariateNormal)
Distribution of the latent function, $$p(\mathbf f)$$.
target (torch.Tensor)
Observed $$\mathbf y$$.
args, kwargs
Additional args and kwargs (passed to forward())

## One-Dimensional Likelihoods¶

Likelihoods for GPs that are distributions of scalar functions. (I.e. for a specific $$\mathbf x$$ we expect that $$f(\mathbf x) \in \mathbb{R}$$.)

One-dimensional likelihoods should extend gpytorch.likelihoods._OneDimensionalLikelihood to reduce the variance when computing approximate GP objective functions. (Variance reduction is accomplished by using 1D Gauss-Hermite quadrature rather than MC integration.)
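
For example, a scalar likelihood with non-Gaussian noise might be sketched as follows (the observation model and its parameters are illustrative):

```python
import torch
import gpytorch

# A sketch of a scalar-output likelihood. Extending _OneDimensionalLikelihood
# means expected_log_prob and log_marginal use Gauss-Hermite quadrature.
# The heavy-tailed Student-T noise model here is illustrative.
class HeavyTailedLikelihood(gpytorch.likelihoods._OneDimensionalLikelihood):
    def __init__(self, df=4.0, scale=0.1):
        super().__init__()
        self.df = df
        self.scale = scale

    def forward(self, function_samples, **kwargs):
        # p(y | f(x)): Student-T noise centered at the latent function
        return torch.distributions.StudentT(self.df, loc=function_samples, scale=self.scale)
```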

### GaussianLikelihood¶

class gpytorch.likelihoods.GaussianLikelihood(noise_prior=None, noise_constraint=None, batch_shape=torch.Size(), **kwargs)[source]

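A minimal construction sketch (the noise constraint shown is optional and illustrative):

```python
import gpytorch

# The standard homoskedastic regression likelihood.
likelihood = gpytorch.likelihoods.GaussianLikelihood(
    noise_constraint=gpytorch.constraints.GreaterThan(1e-4),  # illustrative lower bound
)
print(likelihood.noise)  # the learnable noise variance parameter
```
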
### BernoulliLikelihood¶

class gpytorch.likelihoods.BernoulliLikelihood(*args, **kwargs)[source]

Implements the Bernoulli likelihood used for GP classification, using Probit regression (i.e., the latent function is warped to be in [0,1] using the standard Normal CDF $$\Phi(x)$$). Given the identity $$\Phi(-x) = 1-\Phi(x)$$, we can write the likelihood compactly as:

$\begin{equation*} p(Y=y|f)=\Phi(yf) \end{equation*}$
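
A short usage sketch with illustrative inputs:

```python
import torch
import gpytorch

# Classification sketch: latent samples are squashed through the standard
# normal CDF. `f_samples` is an illustrative stand-in for samples of f.
likelihood = gpytorch.likelihoods.BernoulliLikelihood()
f_samples = torch.randn(100, 10)
y_dist = likelihood(f_samples)  # Bernoulli with probs = Phi(f_samples)
```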

## Multi-Dimensional Likelihoods¶

Likelihoods for GPs that are distributions of vector-valued functions. (I.e. for a specific $$\mathbf x$$ we expect that $$f(\mathbf x) \in \mathbb{R}^t$$, where $$t$$ is the number of output dimensions.)

class gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks, rank=0, task_correlation_prior=None, batch_shape=torch.Size(), noise_prior=None, noise_constraint=None)[source]
A convenient extension of the gpytorch.likelihoods.GaussianLikelihood to the multitask setting that allows for a full cross-task covariance structure for the noise. The fitted covariance matrix has rank rank. If a strictly diagonal task noise covariance matrix is desired, set rank=0. (This option still allows for a different log_noise parameter for each task.) This likelihood assumes homoskedastic noise.
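
A construction sketch with illustrative values:

```python
import gpytorch

# rank=0 gives independent per-task noise; rank>0 adds a low-rank
# cross-task noise covariance.
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=4, rank=1)
```
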
class gpytorch.likelihoods.SoftmaxLikelihood(num_features=None, num_classes=None, mixing_weights=True, mixing_weights_prior=None, **kwargs)[source]
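
A construction sketch (the feature and class counts are illustrative):

```python
import gpytorch

# Multiclass sketch: `num_features` latent functions are mixed into
# `num_classes` logits via learned mixing weights.
likelihood = gpytorch.likelihoods.SoftmaxLikelihood(num_features=8, num_classes=3)
```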