pith. machine review for the scientific record.

arxiv: 2605.00600 · v1 · submitted 2026-05-01 · 💻 cs.LG · cs.AI · cs.CV


Possibilistic Predictive Uncertainty for Deep Learning

Jeremie Houssineau, Piotr Koniusz, Yao Ni, Yew Soon Ong


Pith reviewed 2026-05-09 19:11 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · cs.CV

keywords epistemic uncertainty · possibility theory · Dirichlet approximation · deep neural networks · uncertainty quantification · evidential deep learning · posterior projection

The pith

Deep neural networks can quantify epistemic uncertainty by projecting possibilistic posteriors over parameters onto predictions via supremum operators and approximating them with learnable Dirichlet possibility functions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Deep neural networks often produce overconfident predictions on unfamiliar inputs, creating a need for reliable epistemic uncertainty estimates. Bayesian approaches offer principled uncertainty but are computationally heavy, while faster alternatives lack strong theoretical grounding. This paper introduces Dirichlet-approximated possibilistic posterior predictions, or DAPPr, which defines a possibilistic posterior over model parameters. It projects that posterior into prediction space using supremum operators and approximates the result with learnable Dirichlet possibility functions. The method produces a simple training objective with closed-form solutions and performs competitively or better than existing evidential deep learning techniques on standard benchmarks.
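The projection-and-approximation pipeline can be caricatured in a few lines of NumPy. This is a toy sketch under assumed names, not the paper's implementation: a discrete possibility function over a handful of parameter settings is combined with each setting's predictive scores via a min operation and projected with a supremum, in the spirit of Zadeh's extension principle. DAPPr's exact operator and Dirichlet parameterisation are defined in the paper and not reproduced here.

```python
import numpy as np

# Toy sketch only: names, shapes, and the min-combination rule are
# illustrative assumptions, not the paper's DAPPr construction.
rng = np.random.default_rng(0)

# Possibility function over 5 candidate parameter settings,
# normalised so that its supremum equals 1.
pi_theta = rng.random(5)
pi_theta /= pi_theta.max()

# Each parameter setting induces a predictive score over 3 classes.
scores = rng.dirichlet(np.ones(3), size=5)  # shape (5, 3)

# Supremum projection (extension-principle style): the possibility of a
# class is the best compatibility achieved by any parameter setting.
pi_y = np.max(np.minimum(pi_theta[:, None], scores), axis=0)
```

The supremum replaces Bayesian integration over parameters, which is what makes the projection cheap: no sampling, just a max over candidate settings.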

Core claim

We introduce Dirichlet-approximated possibilistic posterior predictions (DAPPr), a principled framework leveraging possibility theory. We define a possibilistic posterior over parameters, project this posterior to the prediction space via supremum operators, and approximate the projected posterior using learnable Dirichlet possibility functions. This projection-and-approximation strategy yields a simple training objective with closed-form solutions. Extensive experiments across diverse benchmarks demonstrate that our approach achieves competitive or superior uncertainty quantification performance compared to state-of-the-art evidential deep learning methods while maintaining both principled derivation and computational efficiency.

What carries the argument

The supremum projection of a possibilistic posterior over parameters onto prediction space, followed by approximation with learnable Dirichlet possibility functions.

If this is right

  • Yields a simple training objective with closed-form solutions.
  • Achieves competitive or superior uncertainty quantification performance compared to state-of-the-art evidential deep learning methods.
  • Maintains both principled derivation from possibility theory and computational efficiency.
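Figure 1 of the paper shows the DAPPr loss as roughly ten lines of PyTorch pseudocode that drop in for cross-entropy. The paper's closed-form objective is not reproduced in this summary; as a hedged stand-in with the same drop-in shape (logits in, scalar loss out), the sketch below implements the standard Dirichlet log-expected-likelihood loss familiar from evidential deep learning, in NumPy.

```python
import numpy as np

def dirichlet_nll(logits, labels):
    """Drop-in replacement shape for cross-entropy: (N, K) logits and
    (N,) integer labels in, scalar loss out. This is the standard
    evidential-deep-learning form -log E[p_y] under Dirichlet(alpha),
    NOT the DAPPr objective itself."""
    evidence = np.logaddexp(0.0, logits)  # softplus keeps evidence >= 0
    alpha = evidence + 1.0                # Dirichlet concentrations
    alpha0 = alpha.sum(axis=1)            # total evidence per sample
    p_true = alpha[np.arange(len(labels)), labels] / alpha0
    return -np.log(p_true).mean()

loss = dirichlet_nll(np.array([[4.0, -2.0, 0.0]]), np.array([0]))
```

The point of the drop-in shape is the integration cost claimed in the bullets above: swapping one scalar loss for another leaves the rest of the training loop untouched.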

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The closed-form solutions could make uncertainty modeling easier to integrate into very large models without added sampling costs.
  • Possibility theory might offer better handling of uncertainty in settings where strict probabilistic assumptions do not apply, such as adversarial inputs.
  • The projection step could be adapted to other output types beyond classification to quantify uncertainty in regression or structured prediction tasks.

Load-bearing premise

The supremum-based projection of the possibilistic posterior onto prediction space, when approximated by Dirichlet functions, rigorously quantifies epistemic uncertainty.

What would settle it

An experiment in which the method assigns low uncertainty to out-of-distribution inputs on which the model errs, or in which its uncertainty estimates fail to improve upon those from standard evidential deep learning on the same benchmarks.
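A minimal version of that test can be run offline: score each input by its total evidence α0 (low α0 meaning high epistemic uncertainty) and check whether OOD inputs are ranked above ID ones. The sketch below uses a hand-rolled average precision as an AUPR stand-in on invented α0 values; the numbers are purely illustrative.

```python
import numpy as np

def average_precision(scores, is_ood):
    """AUPR stand-in: average precision, with high score = flagged OOD."""
    order = np.argsort(-scores)            # rank by descending score
    hits = is_ood[order].astype(float)
    precision = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return (precision * hits).sum() / hits.sum()

# Invented alpha0 values: ID inputs accumulate evidence, OOD inputs do not.
alpha0 = np.array([52.0, 41.0, 63.0, 3.0, 5.0, 4.0])
is_ood = np.array([0, 0, 0, 1, 1, 1], dtype=bool)

aupr = average_precision(-alpha0, is_ood)  # low alpha0 -> high OOD score
```

If the separation Figure 4 claims holds, this ranking test is easy; a method that fails it on errors the model actually makes would count against the claim.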

Figures

Figures reproduced from arXiv: 2605.00600 by Jeremie Houssineau, Piotr Koniusz, Yao Ni, Yew Soon Ong.

Figure 1. PyTorch-style pseudocode for the DAPPr loss (∼10 lines of code), used by simply replacing the cross-entropy loss.
Figure 2. Test accuracy and OOD AUPR (↑) for varying λ on CIFAR-100 and Stanford Dogs; OOD AUPR is averaged over the corresponding OOD datasets. Panels: (a) epistemic uncertainty and (b) accuracy, each against the number of data points (×1000), comparing EDL and DAPPr.
Figure 4. Distribution of normalised α0 for ID and OOD samples. EDL produces heavily overlapping distributions, making OOD detection difficult; DAPPr clearly separates ID (high α0) from OOD (low α0), enabling reliable detection of distribution shift. Panels compare ID (CIFAR-10) against OOD (SVHN and CIFAR-100).
Figure 5. Perturbed-label fine-tuning (x-axis: L_true; y-axis: S_x): fine-tune θ0 identically, but replace the label of x with a randomly sampled soft label p ∈ Δ^(K−1), yielding model θ_p and loss L_p = L(θ_p; D \ {(x, y)}). For each sample, the maximum loss deviation S_x = max_p |L_p − L_true| is computed over multiple random perturbations.
read the original abstract

Deep neural networks achieve impressive results across diverse applications, yet their overconfidence on unseen inputs necessitates reliable epistemic uncertainty modelling. Existing methods for uncertainty modelling face a fundamental dilemma: Bayesian approaches provide principled estimates but remain computationally prohibitive, while efficient second-order predictors lack rigorous derivations connecting their specific objectives to epistemic uncertainty quantification. To resolve this dilemma, we introduce Dirichlet-approximated possibilistic posterior predictions (DAPPr), a principled framework leveraging possibility theory. We define a possibilistic posterior over parameters, projects this posterior to the prediction space via supremum operators, and approximates the projected posterior using learnable Dirichlet possibility functions. This projection-and-approximation strategy yields a simple training objective with closed-form solutions. Extensive experiments across diverse benchmarks demonstrate that our approach achieves competitive or superior uncertainty quantification performance compared to state-of-the-art evidential deep learning methods while maintaining both principled derivation and computational efficiency. Code will be available at https://github.com/MaxwellYaoNi/DAPPr.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces Dirichlet-approximated possibilistic posterior predictions (DAPPr), a framework that defines a possibilistic posterior over neural network parameters, projects it onto the prediction space via supremum operators, and approximates the result using learnable Dirichlet possibility functions. This yields a simple training objective with closed-form solutions for epistemic uncertainty quantification. Experiments across benchmarks show competitive or superior performance relative to evidential deep learning baselines while claiming both principled derivation and computational efficiency.

Significance. If the supremum-based projection and Dirichlet approximation are shown to rigorously preserve epistemic semantics from possibility theory without collapsing to a data-dependent heuristic, the work could meaningfully address the trade-off between principled uncertainty estimates and efficiency. The closed-form objective and reproducible code (promised) would strengthen its contribution as a falsifiable alternative to Bayesian or evidential methods.

major comments (2)
  1. [§3] §3 (Possibilistic Posterior and Projection): The central claim that the supremum projection of the possibilistic posterior onto prediction space rigorously quantifies epistemic uncertainty (rather than producing a convenient optimizable objective) requires explicit verification. The abstract asserts this follows from possibility-theoretic axioms, but the derivation steps showing that the supremum correctly marginalizes parameter uncertainty without introducing uncontrolled bias or reducing to a fit of the Dirichlet parameters must be provided and checked against the axioms; absent this, the epistemic semantics remain unestablished.
  2. [§4] §4 (Dirichlet Approximation and Training Objective): The claim of a 'principled' closed-form objective depends on the Dirichlet possibility functions being an approximation that does not alter the epistemic character of the projected posterior. If the parameters of these functions are optimized directly against the training loss (as implied by the learnable setup), this risks circularity where the uncertainty estimate is defined by the fit rather than derived independently; the manuscript must demonstrate that the approximation error is bounded in a way that preserves epistemic quantification.
minor comments (2)
  1. [Abstract] The abstract states 'projects this posterior' (subject-verb agreement error); correct to 'project' for grammatical consistency.
  2. [§2] Notation for the supremum operator and Dirichlet parameters should be introduced with explicit definitions and contrasted with standard Bayesian marginalization to aid readability.
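The contrast minor comment 2 asks for can be stated in one line each. The possibilistic form below follows Zadeh's extension principle as a plausible reading; the paper's exact notation is not reproduced in this summary, so treat it as an assumed formulation.

```latex
% Bayesian marginalisation: integrate parameter uncertainty out.
p(y \mid x, \mathcal{D})
  = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta
% Possibilistic projection (extension-principle form, assumed here):
% the integral becomes a supremum and the product a minimum, with the
% normalisation condition characteristic of possibility measures.
\pi(y \mid x, \mathcal{D})
  = \sup_{\theta} \min\bigl(\pi(y \mid x, \theta),\, \pi(\theta \mid \mathcal{D})\bigr),
\qquad \sup_{y} \pi(y \mid x, \mathcal{D}) = 1 .
```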

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed review. The comments raise important points about the rigor of the theoretical derivations, which we address below by clarifying the foundations and committing to expansions in the revised manuscript.

read point-by-point responses
  1. Referee: [§3] §3 (Possibilistic Posterior and Projection): The central claim that the supremum projection of the possibilistic posterior onto prediction space rigorously quantifies epistemic uncertainty (rather than producing a convenient optimizable objective) requires explicit verification. The abstract asserts this follows from possibility-theoretic axioms, but the derivation steps showing that the supremum correctly marginalizes parameter uncertainty without introducing uncontrolled bias or reducing to a fit of the Dirichlet parameters must be provided and checked against the axioms; absent this, the epistemic semantics remain unestablished.

    Authors: We agree that Section 3 would benefit from additional formal detail. In the revised manuscript we will insert a new proposition with proof establishing that the supremum projection is the standard marginalization operator under possibility theory (Zadeh's extension principle), which by construction yields a possibility measure on the prediction space whose value at each output represents the highest compatibility with any parameter setting. This step is independent of the subsequent Dirichlet approximation and does not introduce bias beyond the semantics of possibility measures; it satisfies maxitivity and normalization by definition. We will explicitly cross-reference the relevant axioms and show that the projection step alone already encodes epistemic uncertainty before any approximation is applied. revision: yes
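The rebuttal's "maxitivity and normalization by definition" claim is easy to sanity-check numerically on a toy possibility function, independently of the paper's construction. In the sketch below, Pi(A) = sup over y in A of pi(y) is the induced possibility measure; all values are invented for illustration.

```python
import numpy as np

# Toy check: a set function Pi(A) = sup_{y in A} pi(y), built from a
# normalised possibility function pi, is maxitive and normalised.
pi = np.array([0.2, 1.0, 0.6, 0.3])  # normalised: sup pi = 1

def Pi(A):
    """Possibility measure of a set A of outcome indices."""
    return float(pi[sorted(A)].max()) if A else 0.0

A, B = {0, 2}, {1, 3}
maxitive = np.isclose(Pi(A | B), max(Pi(A), Pi(B)))  # Pi(A u B) = max
normalised = Pi({0, 1, 2, 3}) == 1.0                 # sup over all outcomes
```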

  2. Referee: [§4] §4 (Dirichlet Approximation and Training Objective): The claim of a 'principled' closed-form objective depends on the Dirichlet possibility functions being an approximation that does not alter the epistemic character of the projected posterior. If the parameters of these functions are optimized directly against the training loss (as implied by the learnable setup), this risks circularity where the uncertainty estimate is defined by the fit rather than derived independently; the manuscript must demonstrate that the approximation error is bounded in a way that preserves epistemic quantification.

    Authors: We accept that the manuscript should more clearly separate the projection step from the approximation step and bound the error. In the revision we will add an analysis showing that the Dirichlet family can represent possibility distributions with controlled approximation error (via the fact that any continuous possibility function on the simplex can be approximated arbitrarily closely by a Dirichlet possibility function under the sup-norm). The training objective minimizes a discrepancy measure between the projected posterior and this parametric family; once the parameters are obtained, epistemic uncertainty is read off directly from the resulting possibility function using closed-form expressions that inherit the semantics of the projection. We will include a proposition bounding the total variation between the true projected possibility and its Dirichlet approximation, thereby ensuring that the epistemic character is preserved up to a quantifiable error that vanishes with better approximation. This removes any appearance of circularity, as the uncertainty measure is defined by the approximated possibility, not by the loss value itself. revision: yes

Circularity Check

0 steps flagged

No significant circularity detected in derivation chain

full rationale

The paper's core construction defines a possibilistic posterior over parameters, applies supremum-based projection to prediction space, and approximates the result with Dirichlet possibility functions to derive a training objective with closed-form solutions. This sequence is presented as following from possibility theory without the projection or approximation reducing to a tautological fit of the target uncertainty quantity itself. No equations or steps are shown to equate the epistemic uncertainty output directly to the fitted parameters by construction, and the framework maintains independent content through its claimed axiomatic grounding and empirical validation on benchmarks. The derivation remains self-contained against external possibility-theoretic principles rather than relying on self-referential definitions or load-bearing self-citations.

Axiom & Free-Parameter Ledger

1 free parameters · 2 axioms · 2 invented entities

The central claim rests on possibility theory as an alternative to probability, the validity of supremum projection for transferring uncertainty from parameters to predictions, and the adequacy of Dirichlet functions as approximators; these are introduced without independent empirical or formal support in the abstract.

free parameters (1)
  • Dirichlet possibility function parameters
    Learnable parameters that define the approximating distribution and are optimized as part of the training objective.
axioms (2)
  • domain assumption Possibility theory provides a valid representation of epistemic uncertainty over neural-network parameters
    Invoked when defining the possibilistic posterior.
  • domain assumption Supremum operator correctly projects the parameter-level possibilistic posterior onto the prediction space
    Central step in the projection-and-approximation strategy.
invented entities (2)
  • possibilistic posterior over parameters no independent evidence
    purpose: To encode epistemic uncertainty in a non-probabilistic manner
    New construct introduced to replace Bayesian posterior.
  • Dirichlet possibility functions no independent evidence
    purpose: To approximate the projected possibilistic posterior with closed-form tractability
    Learnable approximation introduced for computational efficiency.

pith-pipeline@v0.9.0 · 5476 in / 1556 out tokens · 40101 ms · 2026-05-09T19:11:00.236027+00:00 · methodology

