pith. machine review for the scientific record.

arxiv: 2605.14413 · v1 · submitted 2026-05-14 · 💻 cs.LG · cs.AI

Recognition: 2 theorem links


MahaVar: OOD Detection via Class-wise Mahalanobis Distance Variance under Neural Collapse


Pith reviewed 2026-05-15 02:14 UTC · model grok-4.3

classification 💻 cs.LG cs.AI
keywords OOD detection · Mahalanobis distance · Neural Collapse · out-of-distribution · distance variance · post-hoc method · image classification

The pith

Class-wise Mahalanobis distance variance distinguishes in-distribution from out-of-distribution samples under Neural Collapse geometry.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that in-distribution samples produce a sharp minimum in class-wise Mahalanobis distances, with small distance to the nearest class and large distances to others, yielding high variance across classes. Out-of-distribution samples lack this structure and show lower variance. This pattern is derived from relaxed Neural Collapse assumptions on within-class compactness and inter-class separation, providing a geometric basis for variance as an OOD indicator. The authors introduce MahaVar, which augments standard Mahalanobis scoring with the variance term, and report improved AUROC and FPR@95 on CIFAR-100 and ImageNet under the OpenOOD protocol.

Core claim

Under relaxed Neural Collapse assumptions on within-class compactness and inter-class separation, in-distribution samples structurally exhibit high class-wise Mahalanobis distance variance, driven by a sharp minimum in the class-wise distance profile, whereas out-of-distribution samples exhibit lower variance. This difference supplies a theoretical basis for the class-wise variance term, which MahaVar adds to the Mahalanobis distance to form an OOD score that achieves state-of-the-art results on standard image benchmarks.

What carries the argument

The class-wise Mahalanobis distance variance term, which measures the sharp minimum structure across distances to different class means.
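The mechanism can be sketched in a few lines. This is an illustrative reconstruction, not the paper's published formulation: the combination weight `lam`, the sign convention, and the function names are assumptions.

```python
import numpy as np

def classwise_maha(x, means, precision):
    """Squared Mahalanobis distance from feature vector x (D,) to each of
    the C class means (C, D), under a shared precision matrix (D, D)."""
    diffs = means - x
    return np.einsum("cd,de,ce->c", diffs, precision, diffs)

def mahavar_score(x, means, precision, lam=1.0):
    """Illustrative MahaVar-style score: reward a small nearest-class
    distance and a high variance across the class-wise distances. The
    weight `lam` and the exact combination are assumptions, not the
    paper's published form."""
    d = classwise_maha(x, means, precision)
    return -d.min() + lam * d.var()  # higher score = more ID-like
```

An ID-like feature close to one class mean gets both a small `d.min()` and a large `d.var()`; a feature roughly equidistant from all class means loses on both terms.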

If this is right

  • MahaVar yields consistent gains in both AUROC and FPR@95 over prior Mahalanobis-based detectors across all tested benchmarks.
  • The method remains a simple post-hoc addition that follows the OpenOOD v1.5 evaluation protocol.
  • The variance signal is grounded in Neural Collapse geometry rather than dataset-specific tuning.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same variance signal could be tested as an add-on to other prototype-based OOD scores that compute distances to class centers.
  • If Neural Collapse geometry weakens on non-image data, the variance advantage may shrink and require separate validation.
  • A controlled ablation that varies the degree of collapse could map the exact compactness threshold where the method loses effectiveness.

Load-bearing premise

In-distribution samples must satisfy relaxed Neural Collapse conditions of within-class compactness and inter-class separation so that high variance appears.

What would settle it

Direct computation on CIFAR-100 or ImageNet: if in-distribution samples do not produce reliably higher class-wise Mahalanobis variance than out-of-distribution samples, the core claim fails.
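A toy version of this check can be run on synthetic features rather than CIFAR/ImageNet embeddings; everything here (dimensions, cluster scales, identity covariance) is an assumption for illustration, not a reproduction of the paper's setup.

```python
import numpy as np

# Synthetic stand-in for the proposed check: tight ID clusters around
# well-separated class means vs. diffuse OOD points far from any mean.
rng = np.random.default_rng(0)
C, D = 10, 32
means = 10.0 * rng.normal(size=(C, D))    # well-separated class means

def classwise_var(x):
    # Squared Mahalanobis distances under an identity covariance,
    # then the variance of those distances across the C classes.
    d = ((means - x) ** 2).sum(axis=1)
    return d.var()

id_var = np.mean([classwise_var(means[i % C] + 0.1 * rng.normal(size=D))
                  for i in range(500)])
ood_var = np.mean([classwise_var(3.0 * rng.normal(size=D))
                   for _ in range(500)])
print(id_var > ood_var)   # the paper's claim predicts True in this toy setup
```

If the same comparison on real penultimate-layer features came out the other way, the geometric story would be refuted.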

Figures

Figures reproduced from arXiv: 2605.14413 by Donghwan Kim, Hyunsoo Yoon.

Figure 1. Sorted class-wise dMaha,c(x) of Mahalanobis++ [22] on CIFAR-10 and CIFAR-100 as ID datasets, evaluated on OOD datasets following the OpenOOD v1.5 benchmark protocol [38] with ResNet-18 as the backbone. The x-axis represents the Mahalanobis distance rank, where rank 0 corresponds to the nearest class mean. Shaded regions indicate ±0.5σ across samples at each rank.

Figure 2. Distribution of Varc[dMaha,c(x)] for ID and OOD samples on CIFAR-10 and CIFAR-100, evaluated on OOD datasets following the OpenOOD v1.5 benchmark protocol with ResNet-18 as the backbone. The dashed blue line indicates the mean variance of ID samples. Black bars indicate the median of each distribution.

Figure 3. Sorted class-wise dMaha,c(x) (without L2 normalization) on CIFAR-10 and CIFAR-100 as ID datasets, evaluated on OOD datasets following the OpenOOD v1.5 benchmark protocol with ResNet-18 as the backbone. The x-axis represents the Mahalanobis distance rank, where rank 0 corresponds to the nearest class mean. Shaded regions indicate ±0.5σ across samples at each rank.

Figure 4. Sorted class-wise dMaha,c(x) of Mahalanobis++ on ImageNet as the ID dataset, evaluated on five OOD datasets following the OpenOOD v1.5 benchmark protocol across three backbone architectures (ResNet-50, Swin-B, and ViT-B). The x-axis represents the Mahalanobis distance rank, where rank 0 corresponds to the nearest class mean. Shaded regions indicate ±0.5σ across samples at each rank.

Figure 5. Sorted class-wise dMaha,c(x) (without L2 normalization) on ImageNet as the ID dataset, evaluated on five OOD datasets following the OpenOOD v1.5 benchmark protocol across three backbone architectures (ResNet-50, Swin-B, and ViT-B). The x-axis represents the Mahalanobis distance rank, where rank 0 corresponds to the nearest class mean. Shaded regions indicate ±0.5σ across samples at each rank.

Figure 6. Distribution of Varc[dMaha,c(x)] for ID and OOD samples on CIFAR-10 and CIFAR-100 without L2 normalization, evaluated on OOD datasets following the OpenOOD v1.5 benchmark protocol with ResNet-18 as the backbone. The dashed blue line indicates the mean variance of ID samples. Black bars indicate the median of each distribution.

Figure 7. Distribution of Varc[dMaha,c(x)] for ID and OOD samples on ImageNet with L2 normalization following Mahalanobis++, evaluated on five OOD datasets following the OpenOOD v1.5 benchmark protocol across three backbone architectures (ResNet-50, Swin-B, and ViT-B). The dashed blue line indicates the mean variance of ID samples. Black bars indicate the median of each distribution.

Figure 8. Distribution of Varc[dMaha,c(x)] for ID and OOD samples on ImageNet without L2 normalization, evaluated on five OOD datasets following the OpenOOD v1.5 benchmark protocol across three backbone architectures (ResNet-50, Swin-B, and ViT-B). The dashed blue line indicates the mean variance of ID samples. Black bars indicate the median of each distribution.
Original abstract

Out-of-distribution (OOD) detection is a critical component for ensuring the reliability of deep neural networks in safety-critical applications. In this work, we present a key empirical observation: for in-distribution (ID) samples, class-wise Mahalanobis distances exhibit a pronounced sharp minimum structure, where the distance to the nearest class is small while distances to all other classes remain large, resulting in high variance across classes. In contrast, OOD samples tend to exhibit a less pronounced sharp minimum structure, producing comparatively lower variance across classes. We further provide a theoretical analysis grounding this observation in Neural Collapse geometry: under relaxed Neural Collapse assumptions on within-class compactness and inter-class separation, ID samples are shown to structurally exhibit high class-wise distance variance, offering a theoretical basis for its use as an OOD score. Motivated by this observation and its theoretical backing, we propose MahaVar, a simple and effective post-hoc OOD detector that augments the Mahalanobis distance with a class-wise distance variance term. Following the OpenOOD v1.5 benchmark protocol, MahaVar achieves state-of-the-art performance on CIFAR-100 and ImageNet, with consistent improvements in both AUROC and FPR@95 over existing Mahalanobis-based methods across all benchmarks.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper introduces MahaVar, a post-hoc OOD detector that augments the class-conditional Mahalanobis distance with a variance term computed over the vector of class-wise distances. It reports the empirical observation that ID samples produce a sharp minimum structure (small distance to nearest class, large to others) yielding high variance, while OOD samples produce flatter distance vectors and lower variance. This observation is theoretically motivated by relaxed Neural Collapse assumptions of within-class compactness and inter-class separation. Experiments on OpenOOD v1.5 benchmarks claim state-of-the-art AUROC and FPR@95 on CIFAR-100 and ImageNet, with consistent gains over prior Mahalanobis-based baselines.

Significance. If the reported gains are reproducible and the variance term can be shown to be a direct geometric consequence of Neural Collapse rather than an empirical tweak, the method supplies a lightweight, training-free improvement to an established OOD baseline. The absence of additional fitted parameters and the use of existing class means and covariances are practical strengths. The work would be more significant if the theoretical section verified that the feature representations of the evaluated ResNets actually satisfy the compactness and separation conditions invoked.

major comments (1)
  1. [Theoretical analysis] Theoretical analysis section: the claim that relaxed Neural Collapse (within-class compactness plus inter-class separation) implies high class-wise Mahalanobis distance variance for ID points is not accompanied by any verification that the actual ResNet-18/50 feature layers on CIFAR-100 and ImageNet satisfy these conditions. Without measuring within-class covariance relative to mean separation at the layer used for Mahalanobis, the geometric argument remains unanchored to the experimental models and does not establish that the variance term follows from NC geometry rather than post-hoc tuning.
minor comments (1)
  1. [Method] The exact algebraic form of the MahaVar score (how the variance term is normalized and combined with the Mahalanobis distance) should be stated explicitly in the main text rather than deferred to the appendix.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address the single major comment below.

Point-by-point responses
  1. Referee: [Theoretical analysis] Theoretical analysis section: the claim that relaxed Neural Collapse (within-class compactness plus inter-class separation) implies high class-wise Mahalanobis distance variance for ID points is not accompanied by any verification that the actual ResNet-18/50 feature layers on CIFAR-100 and ImageNet satisfy these conditions. Without measuring within-class covariance relative to mean separation at the layer used for Mahalanobis, the geometric argument remains unanchored to the experimental models and does not establish that the variance term follows from NC geometry rather than post-hoc tuning.

    Authors: We agree that direct verification of the relaxed Neural Collapse conditions on the specific feature representations would strengthen the link between theory and experiments. In the revision we will add quantitative measurements of within-class compactness (trace of class-conditional covariance) and inter-class separation (distances between class means) at the penultimate layer for the ResNet-18/50 models on both CIFAR-100 and ImageNet. These measurements will be reported alongside the existing results to confirm that the observed high variance for ID samples is consistent with the geometric assumptions rather than an empirical adjustment. revision: yes
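The two measurements the rebuttal promises can be sketched directly; this is a hedged illustration of what such a diagnostic might look like (the function name `nc_diagnostics` and the specific summary statistics are assumptions, not the authors' protocol).

```python
import numpy as np

def nc_diagnostics(features, labels):
    """Quantities the rebuttal proposes to report: within-class
    compactness (mean trace of the class-conditional covariance) and
    inter-class separation (minimum pairwise distance between class
    means). `features` is (N, D); each class needs >= 2 samples."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    compactness = float(np.mean([np.trace(np.cov(features[labels == c].T))
                                 for c in classes]))
    pair = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
    separation = float(pair[~np.eye(len(classes), dtype=bool)].min())
    return compactness, separation
```

Relaxed Neural Collapse, as invoked in the referee's point, roughly requires the compactness number to be small relative to the separation; running this on the penultimate-layer features of the evaluated ResNets would anchor the geometric argument.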

Circularity Check

0 steps flagged

No circularity: derivation rests on external NC assumptions and direct computation

Full rationale

The paper grounds its key claim—that ID samples exhibit high class-wise Mahalanobis distance variance under relaxed Neural Collapse assumptions on within-class compactness and inter-class separation—directly in prior NC literature rather than in self-referential definitions or fits. The MahaVar score is formed by augmenting the standard Mahalanobis distance with the variance of the same class-wise distances, without any parameter estimation that would reduce the output to the input by construction. No load-bearing self-citations, uniqueness theorems imported from the authors, or ansatz smuggling appear in the derivation chain. The empirical observation and theoretical analysis are independent of the paper's own fitted values, leaving the result checkable against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on standard Mahalanobis parameter estimation from ID data and on relaxed Neural Collapse assumptions; no new entities are postulated.

free parameters (1)
  • Class means and covariance matrix
    Estimated from in-distribution training data exactly as in the baseline Mahalanobis detector.
axioms (1)
  • domain assumption Relaxed Neural Collapse assumptions on within-class compactness and inter-class separation
    Invoked to prove that ID samples must exhibit high class-wise Mahalanobis distance variance.
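The ledger's single free-parameter entry can be made concrete. Below is a sketch of the tied-covariance estimation used by the baseline Mahalanobis detector following Lee et al. [15]; the function name and the ridge term `eps` are illustrative, not taken from the paper.

```python
import numpy as np

def fit_mahalanobis_params(features, labels, eps=1e-6):
    """Estimate the ledger's free parameter from ID training features,
    as the baseline Mahalanobis detector does: per-class means plus one
    covariance tied (shared) across classes. The ridge term `eps` is an
    illustrative numerical safeguard. Returns means (C, D) and the
    precision (inverse covariance) matrix (D, D)."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Subtract each sample's own class mean, then pool across classes.
    centered = features - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(features)
    precision = np.linalg.inv(cov + eps * np.eye(cov.shape[1]))
    return means, precision
```

Nothing here is fitted to OOD data, which is the property the ledger is recording.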

pith-pipeline@v0.9.0 · 5526 in / 1332 out tokens · 67635 ms · 2026-05-15T02:14:03.966574+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.


Reference graph

Works this paper leans on

40 extracted references · 40 canonical work pages

  1. [1]

    NECO: NEural collapse based out-of-distribution detection

    Mouïn Ben Ammar, Nacim Belkhir, Sebastian Popescu, Antoine Manzanera, and Gianni Franchi. NECO: NEural collapse based out-of-distribution detection. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=9ROuKblmi7

  2. [2]

    In or out? Fixing ImageNet out-of-distribution detection evaluation

    Julian Bitterwolf, Maximilian Müller, and Matthias Hein. In or out? Fixing ImageNet out-of-distribution detection evaluation. In International Conference on Machine Learning, pages 2471–2506. PMLR, 2023

  3. [3]

    Describing textures in the wild

    Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3606–3613, 2014

  4. [4]

    ImageNet: A large-scale hierarchical image database

    Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009

  5. [5]

    The MNIST database of handwritten digit images for machine learning research

    Li Deng. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6):141–142, 2012

  6. [6]

    Extremely simple activation shaping for out-of-distribution detection

    Andrija Djurisic, Nebojsa Bozanic, Arjun Ashok, and Rosanne Liu. Extremely simple activation shaping for out-of-distribution detection. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ndYXTEL6cZz

  7. [7]

    An image is worth 16x16 words: Transformers for image recognition at scale

    Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. URL https:...

  8. [8]

    VOS: Learning what you don't know by virtual outlier synthesis

    Xuefeng Du, Zhaoning Wang, Mu Cai, and Sharon Li. VOS: Learning what you don't know by virtual outlier synthesis. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=TW7d65uYu5M

  9. [9]

    Out-of-distribution detection-assisted trustworthy machinery fault diagnosis approach with uncertainty-aware deep ensembles

    Te Han and Yan-Fu Li. Out-of-distribution detection-assisted trustworthy machinery fault diagnosis approach with uncertainty-aware deep ensembles. Reliability Engineering & System Safety, 226:108648, 2022

  10. [10]

    Deep residual learning for image recognition

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016

  11. [11]

    A baseline for detecting misclassified and out-of-distribution examples in neural networks

    Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Hkg4TI9xl

  12. [12]

    Scaling out-of-distribution detection for real-world settings

    Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joseph Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. Scaling out-of-distribution detection for real-world settings. In International Conference on Machine Learning, pages 8759–8773. PMLR, 2022

  13. [13]

    Out-of-distribution detection in medical image analysis: A survey

    Zesheng Hong, Yubiao Yue, Yubin Chen, Lele Cong, Huanjie Lin, Yuanmei Luo, Mini Han Wang, Weidong Wang, Jialong Xu, Xiaoqi Yang, et al. Out-of-distribution detection in medical image analysis: A survey. arXiv preprint arXiv:2404.18279, 2024

  14. [14]

    Learning multiple layers of features from tiny images

    Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009

  15. [15]

    A simple unified framework for detecting out-of-distribution samples and adversarial attacks

    Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. Advances in Neural Information Processing Systems, 31, 2018

  16. [16]

    Fast decision boundary based out-of-distribution detector

    Litian Liu and Yao Qin. Fast decision boundary based out-of-distribution detector. In International Conference on Machine Learning, pages 31728–31746. PMLR, 2024

  17. [17]

    Detecting out-of-distribution through the lens of neural collapse

    Litian Liu and Yao Qin. Detecting out-of-distribution through the lens of neural collapse. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 15424–15433, 2025

  18. [18]

    Energy-based out-of-distribution detection

    Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. Advances in Neural Information Processing Systems, 33:21464–21475, 2020

  19. [19]

    Gen: Pushing the limits of softmax-based out-of-distribution detection

    Xixi Liu, Yaroslava Lochman, and Christopher Zach. Gen: Pushing the limits of softmax-based out-of-distribution detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 23946–23955, June 2023

  20. [20]

    Swin transformer: Hierarchical vision transformer using shifted windows

    Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021

  21. [21]

    Torchvision: PyTorch's computer vision library

    TorchVision maintainers and contributors. Torchvision: PyTorch's computer vision library. https://github.com/pytorch/vision, 2016

  22. [22]

    Mahalanobis++: Improving OOD detection via feature normalization

    Maximilian Müller and Matthias Hein. Mahalanobis++: Improving OOD detection via feature normalization. In International Conference on Machine Learning, pages 45151–45184. PMLR, 2025

  23. [23]

    Reading digits in natural images with unsupervised feature learning

    Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Baolin Wu, Andrew Y Ng, et al. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, page 4. Granada, 2011

  24. [24]

    Prevalence of neural collapse during the terminal phase of deep learning training

    Vardan Papyan, XY Han, and David L Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117(40):24652–24663, 2020

  25. [25]

    Nearest neighbor guidance for out-of-distribution detection

    Jaewoo Park, Yoon Gyo Jung, and Andrew Beng Jin Teoh. Nearest neighbor guidance for out-of-distribution detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1686–1695, 2023

  26. [26]

    PyTorch: An imperative style, high-performance deep learning library

    Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019

  27. [27]

    A simple fix to Mahalanobis distance for improving near-OOD detection

    Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji Lakshminarayanan. A simple fix to Mahalanobis distance for improving near-OOD detection. arXiv preprint arXiv:2106.09022, 2021

  28. [28]

    Out-of-distribution segmentation in autonomous driving: Problems and state of the art

    Youssef Shoeb, Azarm Nowzad, and Hanno Gottschalk. Out-of-distribution segmentation in autonomous driving: Problems and state of the art. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 4310–4320, 2025

  29. [29]

    ReAct: Out-of-distribution detection with rectified activations

    Yiyou Sun, Chuan Guo, and Yixuan Li. ReAct: Out-of-distribution detection with rectified activations. Advances in Neural Information Processing Systems, 34:144–157, 2021

  30. [30]

    Out-of-distribution detection with deep nearest neighbors

    Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, pages 20827–20840. PMLR, 2022

  31. [31]

    The iNaturalist species classification and detection dataset

    Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The iNaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8769–8778, 2018

  32. [32]

    Open-set recognition: A good closed-set classifier is all you need

    Sagar Vaze, Kai Han, Andrea Vedaldi, and Andrew Zisserman. Open-set recognition: A good closed-set classifier is all you need. In International Conference on Learning Representations, 2021

  33. [33]

    ViM: Out-of-distribution with virtual-logit matching

    Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang. ViM: Out-of-distribution with virtual-logit matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4921–4930, 2022

  34. [34]

    Scaling for training time and post-hoc out-of-distribution detection enhancement

    Kai Xu, Rongyu Chen, Gianni Franchi, and Angela Yao. Scaling for training time and post-hoc out-of-distribution detection enhancement. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=RDSTjtnqCg

  35. [35]

    VRA: Variational rectified activation for out-of-distribution detection

    Mingyu Xu, Zheng Lian, Bin Liu, and Jianhua Tao. VRA: Variational rectified activation for out-of-distribution detection. Advances in Neural Information Processing Systems, 36:28941–28959, 2023

  36. [36]

    Generalized out-of-distribution detection: A survey

    Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. Generalized out-of-distribution detection: A survey. International Journal of Computer Vision, 132(12):5635–5662, 2024

  37. [37]

    Tiny ImageNet visual recognition challenge

    Xuan Yang et al. Tiny ImageNet visual recognition challenge

  38. [38]

    OpenOOD v1.5: Enhanced benchmark for out-of-distribution detection

    Jingyang Zhang, Jingkang Yang, Pengyun Wang, Haoqi Wang, Yueqian Lin, Haoran Zhang, Yiyou Sun, Xuefeng Du, Yixuan Li, Ziwei Liu, et al. OpenOOD v1.5: Enhanced benchmark for out-of-distribution detection. Journal of Data-centric Machine Learning Research, 2024

  39. [39]

    Places: A 10 million image database for scene recognition

    Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452–1464, 2017

  40. [40]

    Diversified outlier exposure for out-of-distribution detection via informative extrapolation

    Jianing Zhu, Yu Geng, Jiangchao Yao, Tongliang Liu, Gang Niu, Masashi Sugiyama, and Bo Han. Diversified outlier exposure for out-of-distribution detection via informative extrapolation. Advances in Neural Information Processing Systems, 36:22702–22734, 2023