gpytorch.utils

Utilities

gpytorch.utils.cached(method=None, name=None)[source]

A decorator for caching a method's output. Specifying a name for the cache allows it to be inspected or modified elsewhere.
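
Example (a minimal sketch; the class and cache name below are hypothetical, and the decorator is assumed to store the result on the instance under the given name):

    from gpytorch.utils import cached

    class ExpensiveObject(object):
        @cached(name="value_cache")  # hypothetical cache name
        def expensive_value(self):
            print("computing...")    # printed only on the first call
            return 42

    obj = ExpensiveObject()
    obj.expensive_value()  # computes the value and stores it under "value_cache"
    obj.expensive_value()  # subsequent calls return the cached value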

gpytorch.utils.linear_cg(matmul_closure, rhs, n_tridiag=0, tolerance=1e-06, eps=1e-10, max_iter=None, max_tridiag_iter=None, initial_guess=None, preconditioner=None)[source]

Implements the linear conjugate gradients method for (approximately) solving systems of the form

lhs result = rhs

for positive definite and symmetric matrices.

Args:
  • matmul_closure - a function which performs a left matrix multiplication with lhs_mat
  • rhs - the right-hand side of the equation
  • n_tridiag - returns a tridiagonalization of the first n_tridiag columns of rhs
  • tolerance - stop the solve when the max residual is less than this
  • eps - noise to add to prevent division by zero
  • max_iter - the maximum number of CG iterations
  • max_tridiag_iter - the maximum size of the tridiagonalization matrix
  • initial_guess - an initial guess at the solution result
  • preconditioner - a function which applies a left preconditioner to a supplied vector
Returns:
  • result - a solution to the system (if n_tridiag is 0)
  • result, tridiags - a solution to the system, and the corresponding tridiagonal matrices (if n_tridiag > 0)
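
Example (a minimal sketch of calling linear_cg on a small, explicitly constructed SPD system; the matrix, tolerance, and iteration count below are illustrative only):

    import torch
    from gpytorch.utils import linear_cg

    # Build a small symmetric positive definite system A x = b
    torch.manual_seed(0)
    A = torch.randn(50, 50)
    A = A @ A.t() + 50 * torch.eye(50)
    b = torch.randn(50, 1)

    # matmul_closure performs the left multiply v -> A v
    x = linear_cg(lambda v: A.matmul(v), b, tolerance=1e-6, max_iter=100)
    residual = (A.matmul(x) - b).norm()  # should be small
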
class gpytorch.utils.StochasticLQ(max_iter=15, num_random_probes=10)[source]

Implements an approximate log determinant calculation for symmetric positive definite matrices using stochastic Lanczos quadrature. For efficient calculation of derivatives, we additionally compute the trace of the inverse using the same probe vectors used for the log determinant. For more details, see Dong et al. 2017 (in submission).

evaluate(matrix_shape, eigenvalues, eigenvectors, funcs)[source]

Computes tr(f(A)) for an arbitrary list of functions, where f(A) is equivalent to applying the function elementwise to the eigenvalues of A, i.e., if \(A = V \Lambda V^{T}\), then \(f(A) = V f(\Lambda) V^{T}\), where \(f(\Lambda)\) is applied elementwise. Note that calling this function with a list of functions to apply is significantly more efficient than calling it multiple times with one function: each additional function after the first requires negligible additional computation.

Args:
  • matrix_shape (torch.Size()) - size of underlying matrix (not including batch dimensions)
  • eigenvalues (Tensor n_probes x …batch_shape x k) - batches of eigenvalues from Lanczos tridiag mats
  • eigenvectors (Tensor n_probes x …batch_shape x k x k) - batches of eigenvectors from Lanczos tridiag mats
  • funcs (list of closures) - A list of functions [f_1,…,f_k]. tr(f_i(A)) is computed for each function.
    Each function in the list should expect a torch vector of eigenvalues as input and apply the function elementwise. For example, to compute logdet(A) = tr(log(A)), [lambda x: x.log()] would be a reasonable value of funcs.
Returns:
  • results (list of scalars) - The trace of each supplied function applied to the matrix, e.g.,
    [tr(f_1(A)),tr(f_2(A)),…,tr(f_k(A))].
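
Example (a sketch of the funcs convention only; it recovers tr(f(A)) exactly from a dense eigendecomposition, whereas evaluate estimates the same quantities from Lanczos eigenvalues/eigenvectors and probe vectors):

    import torch

    # f_1 recovers logdet(A) = tr(log(A)); f_2 recovers tr(A^{-1})
    funcs = [lambda x: x.log(), lambda x: x.reciprocal()]

    A = torch.randn(20, 20)
    A = A @ A.t() + 20 * torch.eye(20)   # SPD test matrix
    evals = torch.linalg.eigvalsh(A)     # exact eigenvalues

    # tr(f(A)) = sum_i f(lambda_i) when f acts elementwise on the eigenvalues
    logdet_A, trace_inv_A = [f(evals).sum().item() for f in funcs]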

Lanczos Utilities

gpytorch.utils.lanczos.lanczos_tridiag(matmul_closure, max_iter, dtype, device, matrix_shape, batch_shape=torch.Size(), init_vecs=None, num_init_vecs=1, tol=1e-05)[source]
gpytorch.utils.lanczos.lanczos_tridiag_to_diag(t_mat)[source]

Given a num_init_vecs x num_batch x k x k tridiagonal matrix t_mat, returns a num_init_vecs x num_batch x k set of eigenvalues and a num_init_vecs x num_batch x k x k set of eigenvectors.

TODO: perform the eigenvalue computations in batch mode.
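
Example (a sketch under the assumption that lanczos_tridiag returns the Lanczos basis Q and the tridiagonal matrix T for the supplied matmul closure; the test matrix and iteration count are illustrative only):

    import torch
    from gpytorch.utils.lanczos import lanczos_tridiag, lanczos_tridiag_to_diag

    A = torch.randn(100, 100)
    A = A @ A.t() + 100 * torch.eye(100)   # SPD test matrix

    # Run (at most) 30 Lanczos iterations against A
    q_mat, t_mat = lanczos_tridiag(
        lambda v: A.matmul(v),
        max_iter=30,
        dtype=A.dtype,
        device=A.device,
        matrix_shape=A.shape,
    )

    # Eigendecompose the small tridiagonal matrix (or matrices, in the batch case)
    eigenvalues, eigenvectors = lanczos_tridiag_to_diag(t_mat)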

Pivoted Cholesky Utilities

gpytorch.utils.pivoted_cholesky.woodbury_factor(low_rank_mat, shift)[source]

Given a low-rank (k x n) matrix V and a shift, returns the matrix R such that

\[\begin{equation*} R = (I_k + 1/shift VV')^{-1}V \end{equation*}\]

to be used in solves with (V'V + shift I) via the Woodbury formula.

gpytorch.utils.pivoted_cholesky.woodbury_solve(vector, low_rank_mat, woodbury_factor, shift)[source]

Solves the system of equations \((\sigma I + V'V)x = b\) using the Woodbury formula.

Input:
  • vector (size n) - the right-hand side vector b to solve against
  • low_rank_mat (k x n) - the low-rank matrix V
  • woodbury_factor (k x n) - the result of calling woodbury_factor on V and the shift, sigma
  • shift (vector) - the shift value sigma
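
Example (a sketch with an explicit low-rank matrix and a scalar shift; the dense solve at the end is only a sanity check and assumes the system being solved is (V'V + shift I) x = b, as described above):

    import torch
    from gpytorch.utils.pivoted_cholesky import woodbury_factor, woodbury_solve

    n, k = 200, 10
    V = torch.randn(k, n)        # low-rank matrix (k x n)
    shift = torch.tensor(1.0)    # sigma
    b = torch.randn(n)

    R = woodbury_factor(V, shift)       # R = (I_k + 1/shift V V')^{-1} V
    x = woodbury_solve(b, V, R, shift)  # solves (V'V + shift I) x = b

    # Dense reference solve for comparison
    x_dense = torch.linalg.solve(V.t() @ V + shift * torch.eye(n), b)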

Sparse Utilities

gpytorch.utils.sparse.bdsmm(sparse, dense)[source]

Batch dense-sparse matrix multiply
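
Example (a sketch assuming bdsmm also accepts a single non-batched sparse matrix; to_sparse from this module is used here to build the sparse operand):

    import torch
    from gpytorch.utils.sparse import bdsmm, to_sparse

    A = torch.eye(4)
    A[0, 3] = 2.0
    A_sparse = to_sparse(A)       # sparse operand

    B = torch.randn(4, 3)         # dense operand
    result = bdsmm(A_sparse, B)   # equivalent to A @ B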

gpytorch.utils.sparse.make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows)[source]

This produces a sparse tensor with a fixed number of non-zero entries in each column.

Args:
  • interp_indices - Tensor (batch_size) x num_cols x n_nonzero_entries
    A tensor containing the indices of the nonzero entries for each column
  • interp_values - Tensor (batch_size) x num_cols x n_nonzero_entries
    The corresponding values
  • num_rows - the number of rows in the result matrix
Returns:
  • SparseTensor - (batch_size) x num_cols x num_rows
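
Example (a sketch with a hypothetical interpolation pattern in which each of the 4 columns has 2 nonzero entries):

    import torch
    from gpytorch.utils.sparse import make_sparse_from_indices_and_values

    # Indices and values of the nonzero entries for each column (num_cols x n_nonzero_entries)
    interp_indices = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4]])
    interp_values = torch.tensor([[0.7, 0.3], [0.6, 0.4], [0.5, 0.5], [0.4, 0.6]])

    sparse = make_sparse_from_indices_and_values(interp_indices, interp_values, num_rows=5)
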
gpytorch.utils.sparse.sparse_eye(size)[source]

Returns the identity matrix as a sparse matrix

gpytorch.utils.sparse.sparse_getitem(sparse, idxs)[source]
gpytorch.utils.sparse.sparse_repeat(sparse, *repeat_sizes)[source]
gpytorch.utils.sparse.to_sparse(dense)[source]