pith. machine review for the scientific record.

arxiv: 1906.09686 · v1 · submitted 2019-06-24 · 💻 cs.LG · stat.ML

Recognition: unknown

Quality of Uncertainty Quantification for Bayesian Neural Network Inference

Authors on Pith: no claims yet
classification: cs.LG, stat.ML
keywords: inference, BNNs, neural, quality, Bayesian, experiments, methods, network
abstract

Bayesian Neural Networks (BNNs) place priors over the parameters in a neural network. Inference in BNNs, however, is difficult; all inference methods for BNNs are approximate. In this work, we empirically compare the quality of predictive uncertainty estimates for 10 common inference methods on both regression and classification tasks. Our experiments demonstrate that commonly used metrics (e.g. test log-likelihood) can be misleading. Our experiments also indicate that inference innovations designed to capture structure in the posterior do not necessarily produce high quality posterior approximations.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. SeBA: Semi-supervised few-shot learning via Separated-at-Birth Alignment for tabular data

    cs.LG · 2026-05 · unverdicted · novelty 7.0

    SeBA is a joint-embedding framework that separates tabular data into two complementary views and aligns one view's representations to the nearest-neighbor structure of the other, improving feature-label relationships ...

  2. LLMs Uncertainty Quantification via Adaptive Conformal Semantic Entropy

    cs.LG · 2026-05 · unverdicted · novelty 5.0

    ACSE estimates LLM prompt uncertainty via adaptive clustering of semantic entropy across multiple responses and uses conformal prediction to bound error rates on accepted answers with distribution-free guarantees.
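The semantic-entropy ingredient this blurb describes can be sketched simply: sample several responses to a prompt, group the semantically equivalent ones, and take the entropy of the resulting cluster distribution. The sketch below assumes cluster labels have already been assigned (the adaptive clustering and conformal calibration steps are omitted); it is an illustration of the general idea, not the ACSE implementation:

```python
from collections import Counter
import math

def semantic_entropy(cluster_ids):
    """Entropy of the distribution over semantic clusters.

    cluster_ids: one label per sampled LLM response; responses judged
    semantically equivalent share a label. How labels are assigned
    (e.g. by an entailment model) is outside this sketch.
    """
    n = len(cluster_ids)
    counts = Counter(cluster_ids)
    # H = -sum_k p_k log p_k over cluster frequencies p_k
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

Entropy 0 means all sampled responses landed in one semantic cluster (low uncertainty); it grows as responses spread over more clusters.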