New Study Revisits Laplace’s Approximation, Validating It as an ‘Effortless’ Method for Bayesian Deep Learning

Bayesian neural networks (BNNs) have been shown to offer practical advantages over standard NNs, such as better quantification of predictive uncertainty and principled model selection. But practical deployment of BNNs has been limited, as they are generally considered difficult to implement, hard to tune, expensive to train, and difficult to scale to today’s large models and datasets.

This view is challenged in the new paper Laplace Redux — Effortless Bayesian Deep Learning, in which a research team from the University of Cambridge, University of Tübingen, ETH Zurich and DeepMind presents extensive experiments demonstrating that the Laplace approximation (LA) can be a simple and cost-efficient yet competitive approximation method for inference in Bayesian deep learning. The team also presents Laplace, a PyTorch-based library for scalable LAs of deep neural networks (NNs).

The Laplace approximation (LA) is a classical and simple method for approximate posterior inference in deep NNs. Machine learning researchers, however, have tended to favor alternative approaches such as variational Bayes or deep ensembles, assuming that the LA is too expensive because of the Hessian computation it requires, or that it yields inferior results. The paper argues that these views are misconceptions.
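
In its textbook form (a generic sketch, not the paper’s exact notation), the LA fits a Gaussian to the posterior around a maximum a posteriori (MAP) estimate of the weights, with covariance given by the inverse Hessian of the regularized training loss at that point:

    p(\theta \mid \mathcal{D}) \approx \mathcal{N}\big(\theta;\ \theta_{\mathrm{MAP}},\ \Sigma\big),
    \qquad
    \Sigma = \Big( \nabla^2_{\theta}\, \mathcal{L}(\theta;\mathcal{D}) \,\Big|_{\theta_{\mathrm{MAP}}} \Big)^{-1}

The same second-order expansion also yields a closed-form approximation to the log model evidence (marginal likelihood), which is what enables the model selection and hyperparameter tuning discussed below.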

The researchers summarize the main contributions of their study as follows:

  1. We first review recent advances and present the key components of scalable and practical Laplace approximations in deep learning.
  2. We then present Laplace, an easy-to-use PyTorch-based library for “turning an NN into a BNN” via the LA. Laplace implements a wide range of different LA variants.
  3. Finally, using Laplace, we show in an extensive empirical study that LA is competitive with alternative approaches, especially given its simplicity and low cost.

The LA can benefit deep learning models by approximating the posterior distribution over the model’s weights to enable probabilistic predictions, and by approximating the model evidence to enable model selection. The paper identifies four key components of scalable and practical Laplace approximations in deep learning: 1) inference over all weights or only a subset of the weights, 2) Hessian approximations and their factorizations, 3) hyperparameter tuning, and 4) the approximate predictive distribution. These four choices are annotated in the code sketch below.

The researchers first select the part of the model over which to perform inference with the LA, and then decide how to approximate the Hessian. With these choices made, they can perform model selection using the evidence: starting from an untrained model, they jointly train the model and use the evidence to tune its hyperparameters online; starting from a pretrained model, they use the evidence to tune the hyperparameters post-hoc. Finally, the predictive distribution is computed or approximated to make predictions for new inputs, as sketched below.
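
As an illustration of the pretrained (post-hoc) route, here is a minimal sketch using the team’s Laplace library. The names `model`, `train_loader` and `x_test` are placeholders for a pretrained PyTorch classifier, its training DataLoader and a batch of new inputs; the keyword options shown follow the library’s documented interface but should be verified against the project’s GitHub.

    # Minimal post-hoc sketch assuming the team's Laplace library is installed;
    # `model`, `train_loader` and `x_test` are placeholder objects.
    from laplace import Laplace

    # Components 1 & 2: which weights to treat probabilistically,
    # and how to factorize the Hessian of the loss.
    la = Laplace(model, 'classification',
                 subset_of_weights='last_layer',   # or 'all' / 'subnetwork'
                 hessian_structure='kron')         # or 'full' / 'diag'

    # Fit the Gaussian posterior around the pretrained (MAP) weights.
    la.fit(train_loader)

    # Component 3: tune the prior precision post-hoc by maximizing the evidence.
    la.optimize_prior_precision(method='marglik')

    # Component 4: approximate the posterior predictive for new inputs.
    probs = la(x_test, link_approx='probit')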

The proposed Laplace toolbox is designed to make deep Laplace approximations user-friendly. Laplace is a simple, easy-to-use and extensible library for scalable LAs of deep NNs in PyTorch that allows all possible combinations of the four key components above and includes efficient implementations of the key LA quantities: 1) the posterior (i.e. Hessian computation and storage), 2) the marginal likelihood and 3) the posterior predictive.
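
The marginal likelihood is exposed as a differentiable quantity, which supports the online route mentioned above, where hyperparameters such as the prior precision are tuned by gradient-based maximization of the evidence. The following is again only a sketch under the same assumptions; `log_prior_prec` and `hyper_optimizer` are illustrative names, and exact signatures should be checked against the library’s documentation.

    # Sketch of evidence-based hyperparameter tuning via a differentiable
    # marginal likelihood; `model` and `train_loader` are placeholders as before.
    import torch
    from laplace import Laplace

    la = Laplace(model, 'classification',
                 subset_of_weights='all', hessian_structure='diag')
    la.fit(train_loader)

    log_prior_prec = torch.zeros(1, requires_grad=True)           # log prior precision
    hyper_optimizer = torch.optim.Adam([log_prior_prec], lr=1e-1)

    for _ in range(100):
        hyper_optimizer.zero_grad()
        # Maximize the Laplace evidence with respect to the prior precision.
        neg_marglik = -la.log_marginal_likelihood(log_prior_prec.exp())
        neg_marglik.backward()
        hyper_optimizer.step()

    # In the fully online variant, these steps are interleaved with
    # ordinary training of the network itself.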

The researchers compared various LAs implemented via Laplace against strong Bayesian baselines, with the results showing that the LA is competitive in in-distribution, dataset-shift and out-of-distribution (OOD) settings.

Overall, this work demonstrates that the Laplace approximation can compete with more popular alternatives in terms of performance while maintaining a low computational cost. The team hopes their work can catalyze wider adoption of the LA in practical deep learning.

The Laplace code is available on the project’s GitHub. The paper Laplace Redux — Effortless Bayesian Deep Learning is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
