Available Extensions

First-order extensions.

First-order extensions make it easy to extract additional information from the gradients that are already being backpropagated through the computational graph. They do not backpropagate extra quantities themselves and add little overhead. The implemented extensions are

  • BatchGrad The individual gradients, rather than the sum over the samples
  • SumGradSquared The second moment of the individual gradients
  • Variance The variance of the individual gradients
  • BatchL2Grad The L2 norm of the individual gradients
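
For orientation, here is a minimal usage sketch with BatchGrad; the toy model, loss, and data below are placeholders, but the pattern of extend(...) followed by a backward pass inside the backpack(...) context is the same for all first-order extensions:

    import torch
    from backpack import backpack, extend
    from backpack.extensions import BatchGrad

    N, D_in, D_out = 8, 10, 3  # toy dimensions for illustration
    model = extend(torch.nn.Linear(D_in, D_out))
    lossfunc = extend(torch.nn.CrossEntropyLoss())

    X, y = torch.randn(N, D_in), torch.randint(0, D_out, (N,))

    loss = lossfunc(model(X), y)
    with backpack(BatchGrad()):
        loss.backward()

    for param in model.parameters():
        # param.grad is the usual gradient; param.grad_batch is [N x ...].
        print(param.grad.shape, param.grad_batch.shape)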

backpack.extensions.BatchGrad()

Individual gradients for each sample in a minibatch.

Stores the output in grad_batch as an [N x ...] tensor, where N is the batch size and ... is the shape of the gradient.

Note: beware of the scaling issue

The individual gradients depend on the scaling of the overall function. Let fᵢ be the loss of the i-th sample, with gradient gᵢ. BatchGrad will return

  • [g₁, …, gₙ] if the loss is a sum, ∑ᵢ₌₁ⁿ fᵢ,
  • [¹/ₙ g₁, …, ¹/ₙ gₙ] if the loss is a mean, ¹/ₙ ∑ᵢ₌₁ⁿ fᵢ.

The concept of individual gradients is only meaningful if the objective is a sum of independent, per-sample functions (which rules out, e.g., batch normalization).
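
As a sanity check of the scaling behaviour, the following sketch compares the individual gradients of the same toy linear model under a sum and a mean loss (model, data, and tolerances are illustrative):

    import copy
    import torch
    from backpack import backpack, extend
    from backpack.extensions import BatchGrad

    N, D = 4, 5
    X, y = torch.randn(N, D), torch.randn(N, 1)
    reference = torch.nn.Linear(D, 1)

    def individual_gradients(reduction):
        # Fresh copy of the same parameters for each reduction.
        model = extend(copy.deepcopy(reference))
        lossfunc = extend(torch.nn.MSELoss(reduction=reduction))
        with backpack(BatchGrad()):
            lossfunc(model(X), y).backward()
        return [p.grad_batch for p in model.parameters()]

    for g_sum, g_mean in zip(individual_gradients("sum"), individual_gradients("mean")):
        # The mean-loss gradients are the sum-loss gradients scaled by 1/N.
        print(torch.allclose(g_sum, N * g_mean, atol=1e-6))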

backpack.extensions.BatchL2Grad()

The squared L2 norm of individual gradients in the minibatch.

Stores the output in batch_l2 as a tensor of size [N], where N is the batch size.

Note: beware of the scaling issue

The individual L2 norms depend on the scaling of the overall function. Let fᵢ be the loss of the i-th sample, with gradient gᵢ. BatchL2Grad will return the squared L2 norms of

  • [g₁, …, gₙ] if the loss is a sum, ∑ᵢ₌₁ⁿ fᵢ,
  • [¹/ₙ g₁, …, ¹/ₙ gₙ] if the loss is a mean, ¹/ₙ ∑ᵢ₌₁ⁿ fᵢ.
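
As an illustrative cross-check (the toy model and data below are placeholders), batch_l2 should match the squared norms computed from the individual gradients of BatchGrad; both extensions can be requested in the same backward pass:

    import torch
    from backpack import backpack, extend
    from backpack.extensions import BatchGrad, BatchL2Grad

    N, D, C = 6, 4, 2
    model = extend(torch.nn.Linear(D, C))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(BatchGrad(), BatchL2Grad()):
        lossfunc(model(X), y).backward()

    for p in model.parameters():
        # batch_l2[i] is the squared L2 norm of the i-th individual gradient.
        squared_norms = (p.grad_batch ** 2).flatten(start_dim=1).sum(1)
        print(torch.allclose(p.batch_l2, squared_norms, atol=1e-6))
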
backpack.extensions.SumGradSquared()

The sum of squared individual gradients, i.e., the second moment of the gradient.

Stores the output in sum_grad_squared, with the same dimensions as the gradient.

Note: beware of the scaling issue

The second moment depends on the scaling of the overall function. Let fᵢ be the loss of the i-th sample, with gradient gᵢ. SumGradSquared will return the sum of the elementwise squares of

  • [g₁, …, gₙ] if the loss is a sum, ∑ᵢ₌₁ⁿ fᵢ,
  • [¹/ₙ g₁, …, ¹/ₙ gₙ] if the loss is a mean, ¹/ₙ ∑ᵢ₌₁ⁿ fᵢ.
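
A sketch of the corresponding cross-check against BatchGrad (the toy model and data are placeholders); elementwise, sum_grad_squared should equal the sum over the batch of the squared individual gradients:

    import torch
    from backpack import backpack, extend
    from backpack.extensions import BatchGrad, SumGradSquared

    N, D, C = 6, 4, 2
    model = extend(torch.nn.Linear(D, C))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(BatchGrad(), SumGradSquared()):
        lossfunc(model(X), y).backward()

    for p in model.parameters():
        # Elementwise sum of squared individual gradients over the batch.
        print(torch.allclose(p.sum_grad_squared, (p.grad_batch ** 2).sum(0), atol=1e-6))
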
backpack.extensions.Variance()

Estimates the variance of the gradient using the samples in the minibatch.

Stores the output in variance, with the same dimensions as the gradient.

Note: beware of the scaling issue

The variance depends on the scaling of the overall function. Let fᵢ be the loss of the i-th sample, with gradient gᵢ. Variance will return the variance of the vectors

  • [g₁, …, gₙ] if the loss is a sum, ∑ᵢ₌₁ⁿ fᵢ,
  • [¹/ₙ g₁, …, ¹/ₙ gₙ] if the loss is a mean, ¹/ₙ ∑ᵢ₌₁ⁿ fᵢ.
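
A sketch of a cross-check against BatchGrad (the toy model and data are placeholders; it assumes the biased, divide-by-N convention, i.e. the population variance of the individual gradients along the batch dimension):

    import torch
    from backpack import backpack, extend
    from backpack.extensions import BatchGrad, Variance

    N, D, C = 6, 4, 2
    model = extend(torch.nn.Linear(D, C))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(BatchGrad(), Variance()):
        lossfunc(model(X), y).backward()

    for p in model.parameters():
        # Population (divide-by-N) variance of the individual gradients.
        print(torch.allclose(p.variance, p.grad_batch.var(0, unbiased=False), atol=1e-6))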

Second-order extensions.

Second-order extensions propagate additional information through the graph to extract structural or local approximations to second-order information. They are more expensive to run than a standard gradient backpropagation. The implemented extensions are

  • The diagonal of the Generalized Gauss-Newton (GGN)/Fisher information, using exact computation (DiagGGNExact) or Monte-Carlo approximation (DiagGGNMC).
  • Kronecker block-diagonal approximations of the GGN/Fisher: KFAC, KFRA, KFLR.
  • The diagonal of the Hessian (DiagHessian).
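
As with the first-order extensions, usage follows the extend(...) plus backpack(...) pattern. The sketch below (toy model and data are placeholders) requests the exact and the Monte-Carlo GGN diagonal in the same backward pass; both have the same shape as the gradient, and the MC estimate fluctuates around the exact value:

    import torch
    from backpack import backpack, extend
    from backpack.extensions import DiagGGNExact, DiagGGNMC

    N, D, C = 8, 5, 3
    model = extend(torch.nn.Sequential(
        torch.nn.Linear(D, 4), torch.nn.ReLU(), torch.nn.Linear(4, C)
    ))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(DiagGGNExact(), DiagGGNMC(mc_samples=100)):
        lossfunc(model(X), y).backward()

    for p in model.parameters():
        # Both diagonals have the same shape as the gradient.
        print(p.grad.shape, p.diag_ggn_exact.shape, p.diag_ggn_mc.shape)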

backpack.extensions.DiagGGNMC(mc_samples=1)

Diagonal of the Generalized Gauss-Newton/Fisher. Uses a Monte-Carlo approximation of the Hessian of the loss w.r.t. the model output.

Stores the output in diag_ggn_mc, which has the same dimensions as the gradient.

For a more precise but slower alternative, see backpack.extensions.DiagGGNExact().

backpack.extensions.DiagGGNExact()

Diagonal of the Generalized Gauss-Newton/Fisher. Uses the exact Hessian of the loss w.r.t. the model output.

Stores the output in diag_ggn_exact, which has the same dimensions as the gradient.

For a faster but less precise alternative, see backpack.extensions.DiagGGNMC().

backpack.extensions.KFAC(mc_samples=1)

Approximate Kronecker factorization of the Generalized Gauss-Newton/Fisher using Monte-Carlo sampling.

Stores the output in kfac as a list of Kronecker factors.

  • If there is only one element, the item represents the GGN/Fisher approximation itself.
  • If there are multiple elements, they are arranged in the order such that their Kronecker product represents the Generalized Gauss-Newton/Fisher approximation.
  • The dimension of the factors depends on the layer, but the product of all row dimensions (or column dimensions) yields the dimension of the layer parameter.
Note

The literature uses column-stacking as its vectorization convention. This is in contrast to the default row-major storage scheme of tensors in torch. Therefore, the order of factors may differ from the presentation in the literature.

Implements the procedures described by Martens and Grosse, 2015: Optimizing Neural Networks with Kronecker-factored Approximate Curvature.
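
A sketch illustrating the factor structure on a toy linear layer (model and data are placeholders): each entry of kfac is a square matrix, and the product of the factors' row dimensions equals the number of elements of the parameter:

    import math
    import torch
    from backpack import backpack, extend
    from backpack.extensions import KFAC

    N, D, C = 8, 5, 3
    model = extend(torch.nn.Linear(D, C))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(KFAC(mc_samples=1)):
        lossfunc(model(X), y).backward()

    for p in model.parameters():
        shapes = [tuple(factor.shape) for factor in p.kfac]
        # Product of the factors' row dimensions matches the parameter size.
        print(shapes, math.prod(s[0] for s in shapes) == p.numel())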

backpack.extensions.KFLR()

Approximate Kronecker factorization of the Generalized Gauss-Newton/Fisher using the full Hessian of the loss function w.r.t. the model output.

Stores the output in kflr as a list of Kronecker factors.

  • If there is only one element, the item represents the GGN/Fisher approximation itself.
  • If there are multiple elements, they are arranged in the order such that their Kronecker product represents the Generalized Gauss-Newton/Fisher approximation.
  • The dimension of the factors depends on the layer, but the product of all row dimensions (or column dimensions) yields the dimension of the layer parameter.

Note

The literature uses column-stacking as its vectorization convention. This is in contrast to the default row-major storage scheme of tensors in torch. Therefore, the order of factors may differ from the presentation in the literature.

Implements the procedures described by Botev et al., 2017: Practical Gauss-Newton Optimisation for Deep Learning.

Extended for convolutions following Grosse and Martens, 2016: A Kronecker-factored approximate Fisher matrix for convolution layers.
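
To illustrate how the factors combine, the sketch below (toy layer and data are placeholders) multiplies them together with torch.kron, yielding a dense [numel x numel] block that approximates the GGN/Fisher block of the parameter:

    import torch
    from backpack import backpack, extend
    from backpack.extensions import KFLR

    N, D, C = 8, 5, 3
    model = extend(torch.nn.Linear(D, C))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(KFLR()):
        lossfunc(model(X), y).backward()

    weight = model.weight
    # Kronecker product of all factors gives the dense approximation block.
    block = weight.kflr[0]
    for factor in weight.kflr[1:]:
        block = torch.kron(block, factor)
    print(block.shape)  # (weight.numel(), weight.numel())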

backpack.extensions.KFRA()

Approximate Kronecker factorization of the Generalized Gauss-Newton/Fisher using the full Hessian of the loss function w.r.t. the model output and averaging after every backpropagation step.

Stores the output in kfra as a list of Kronecker factors.

  • If there is only one element, the item represents the GGN/Fisher approximation itself.
  • If there are multiple elements, they are arranged in the order such that their Kronecker product represents the Generalized Gauss-Newton/Fisher approximation.
  • The dimension of the factors depends on the layer, but the product of all row dimensions (or column dimensions) yields the dimension of the layer parameter.

Note

The literature uses column-stacking as its vectorization convention. This is in contrast to the default row-major storage scheme of tensors in torch. Therefore, the order of factors may differ from the presentation in the literature.

Extended for convolutions following Grosse and Martens, 2016: A Kronecker-factored approximate Fisher matrix for convolution layers.

backpack.extensions.DiagHessian()

Diagonal of the Hessian.

Stores the output in diag_h, which has the same dimensions as the gradient.

Warning

Very expensive on networks with non-piecewise-linear activations.
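
A usage sketch (toy model and data are placeholders) on a network with piecewise-linear activations, where the extension stays cheap; for such a network the Hessian coincides with the GGN, so diag_h is expected to agree with diag_ggn_exact:

    import torch
    from backpack import backpack, extend
    from backpack.extensions import DiagGGNExact, DiagHessian

    N, D, C = 8, 5, 3
    model = extend(torch.nn.Sequential(
        torch.nn.Linear(D, 4), torch.nn.ReLU(), torch.nn.Linear(4, C)
    ))
    lossfunc = extend(torch.nn.CrossEntropyLoss())
    X, y = torch.randn(N, D), torch.randint(0, C, (N,))

    with backpack(DiagHessian(), DiagGGNExact()):
        lossfunc(model(X), y).backward()

    for p in model.parameters():
        # For a piecewise-linear network the Hessian equals the GGN.
        print(torch.allclose(p.diag_h, p.diag_ggn_exact, atol=1e-6))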