gpytorch.likelihoods

Likelihood

class gpytorch.likelihoods.Likelihood(*args, **kwargs)[source]
guide(*param, **kwargs)[source]

Guide function for the likelihood. This should be defined if the likelihood contains any random variables that need to be inferred.

If the forward call to the likelihood function contains any pyro.sample calls, then the guide call should contain the same sample calls.

max_plate_nesting

How many batch dimensions are plated (default = 1). This should be modified if the likelihood uses other plated random variables.

pyro_sample_output(observations, function_dist, *params, **kwargs)[source]

Returns observed pyro samples \(p(y)\) from the likelihood distribution, given the function distribution \(f\).

\[\mathbb{E}_{f(x)} \left[ \log p \left( y \mid f(x) \right) \right]\]
Args:
    observations (torch.Tensor): Values of \(y\).
    function_dist (pyro.distributions): Distribution for \(f(x)\).
    params
    kwargs
Returns:
    pyro.sample

Standard Likelihoods

GaussianLikelihood

class gpytorch.likelihoods.GaussianLikelihood(noise_prior=None, noise_constraint=None, batch_shape=torch.Size([]), **kwargs)[source]

BernoulliLikelihood

class gpytorch.likelihoods.BernoulliLikelihood(*args, **kwargs)[source]

Implements the Bernoulli likelihood used for GP classification, using Probit regression (i.e., the latent function is warped to lie in [0, 1] using the standard normal CDF \(\Phi(x)\)). Given the identity \(\Phi(-x) = 1 - \Phi(x)\), we can write the likelihood compactly as:

\[\begin{equation*} p(Y=y|f)=\Phi(yf) \end{equation*}\]

Specialty Likelihoods

MultitaskGaussianLikelihood

class gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks, rank=0, task_correlation_prior=None, batch_shape=torch.Size([]), noise_prior=None, noise_constraint=None, **kwargs)[source]

A convenient extension of gpytorch.likelihoods.GaussianLikelihood to the multitask setting that allows for a full cross-task covariance structure for the noise. The fitted covariance matrix has rank rank. If a strictly diagonal task noise covariance matrix is desired, set rank=0; this option still allows for a different log_noise parameter for each task. This likelihood assumes homoskedastic noise.

Like the Gaussian likelihood, this object can be used with exact inference.

SoftmaxLikelihood

class gpytorch.likelihoods.SoftmaxLikelihood(num_features=None, num_classes=None, mixing_weights=True, mixing_weights_prior=None, **kwargs)[source]

Implements the Softmax (multiclass) likelihood used for GP classification.