pith. machine review for the scientific record.

arxiv: 2602.23013 · v3 · submitted 2026-02-26 · 💻 cs.CV · cs.LG

Recognition: no theorem link

SubspaceAD: Training-Free Few-Shot Anomaly Detection via Subspace Modeling


Pith reviewed 2026-05-15 18:45 UTC · model grok-4.3

classification 💻 cs.CV cs.LG
keywords anomaly detection · few-shot learning · subspace modeling · PCA · DINOv2 · training-free · industrial inspection · reconstruction residual

The pith

SubspaceAD detects anomalies in few-shot settings by fitting a PCA subspace to DINOv2 features of normal images and scoring via reconstruction residuals, reaching state-of-the-art results without training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that modeling normal variations as a low-dimensional linear subspace in foundation-model feature space is sufficient for high-performance few-shot anomaly detection. It extracts patch features from a small set of normal images using a frozen DINOv2 backbone, fits PCA to capture typical variations, and flags anomalies by their reconstruction error under this model. This approach avoids training, prompt tuning, and memory banks while delivering image-level and pixel-level AUROC scores of 97.1% and 97.5% on MVTec-AD and 93.2% and 98.2% on VisA in the one-shot case. A sympathetic reader would care because it indicates that much of the added complexity in recent methods may be unnecessary once strong pretrained representations are available.

Core claim

SubspaceAD operates in two stages: extracting patch-level features from a small set of normal images by a frozen DINOv2 backbone, then fitting a Principal Component Analysis model to estimate the low-dimensional subspace of normal variations. At inference, anomalies are detected via the reconstruction residual with respect to this subspace, producing interpretable and statistically grounded anomaly scores. Despite its simplicity, the method achieves state-of-the-art performance across one-shot and few-shot settings without training, prompt tuning, or memory banks.
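The two-stage recipe is simple enough to sketch directly. Below is a minimal, hypothetical Python version using scikit-learn's PCA, with random vectors standing in for DINOv2 patch features; the function names are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA


def fit_normal_subspace(normal_feats: np.ndarray, n_components: int) -> PCA:
    """Stage 1: fit PCA to patch features from the few normal images."""
    return PCA(n_components=n_components).fit(normal_feats)


def residual_scores(pca: PCA, feats: np.ndarray) -> np.ndarray:
    """Stage 2: anomaly score = norm of the reconstruction residual
    after projecting onto the normal subspace and back."""
    recon = pca.inverse_transform(pca.transform(feats))
    return np.linalg.norm(feats - recon, axis=1)


# Toy stand-in for DINOv2 patch features (N patches x D dims).
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64)) * 0.1
pca = fit_normal_subspace(normal, n_components=10)

test = rng.normal(size=(5, 64))
scores = residual_scores(pca, test)  # higher = more anomalous
```

With a real backbone, `normal` and `test` would be the frozen DINOv2 patch embeddings, and the per-patch scores would be reshaped into the anomaly segmentation map.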

What carries the argument

The low-dimensional linear subspace estimated by PCA on DINOv2 patch features, which captures normal variations so that anomalies produce high reconstruction residuals.

If this is right

  • Few-shot anomaly detection can reach leading accuracy without any model training or fine-tuning on the target data.
  • Memory banks of normal samples and auxiliary datasets are not required for competitive results on standard industrial benchmarks.
  • Linear reconstruction residuals yield both image-level and pixel-level anomaly scores that are statistically grounded and interpretable.
  • The same two-stage procedure works across different datasets such as MVTec-AD and VisA without category-specific adjustments.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The results imply that DINOv2 embeddings of normal industrial objects already lie near a linear manifold, which may not extend to scenes with greater appearance diversity.
  • Similar subspace fitting could be tested on few-shot segmentation or classification tasks to check whether reconstruction residuals generalize beyond anomaly detection.
  • Varying the number of retained principal components offers a direct knob to trade off normal-variation coverage against anomaly sensitivity on new data.
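The coverage side of that trade-off can be read off directly from cumulative explained variance as components are added (the sensitivity side requires anomalous data). A small sketch on synthetic features:

```python
# Illustrative sweep over the number of retained components: more components
# cover more normal variation (higher explained variance) but can also absorb
# anomalous directions, lowering residual contrast. Data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
feats = rng.normal(size=(300, 32))  # stand-in for normal patch features

pca = PCA().fit(feats)
cumvar = np.cumsum(pca.explained_variance_ratio_)
for k in (4, 8, 16, 32):
    print(f"k={k:2d}  cumulative variance explained={cumvar[k-1]:.3f}")
```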

Load-bearing premise

The variations among normal images are well captured by a low-dimensional linear subspace in the DINOv2 feature space, so that anomalies produce reliably high reconstruction residuals.

What would settle it

On a test set of normal images with highly nonlinear variations, if the reconstruction residuals for anomalies overlap substantially with those of held-out normal samples and AUROC drops below 90% at pixel level, the subspace model would fail to separate them reliably.
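The proposed falsification test reduces to an AUROC computation over residual scores. A minimal sketch with synthetic, deliberately overlapping score distributions (not the paper's data):

```python
# Compare residual scores of held-out normal pixels vs anomalous pixels via
# AUROC. In the paper's setting these would be per-pixel reconstruction
# residuals; here the distributions are synthetic and overlap heavily.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
normal_res = rng.normal(loc=1.0, scale=0.3, size=1000)  # held-out normals
anom_res = rng.normal(loc=1.3, scale=0.3, size=1000)    # overlapping anomalies

y = np.concatenate([np.zeros(1000), np.ones(1000)])
s = np.concatenate([normal_res, anom_res])
auroc = roc_auc_score(y, s)  # overlapping scores pull AUROC well below 1.0
```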

Figures

Figures reproduced from arXiv: 2602.23013 by Camile Lendering, Egor Bondarev, Erkut Akdag.

Figure 1
Figure 1. One-shot segmentation results of SubspaceAD on the MVTec-AD dataset [4], where SubspaceAD uses only one normal image per category. Each example shows a test sample with its predicted anomaly mask (overlaid in dark blue), across all 15 categories of the MVTec-AD dataset. view at source ↗
Figure 2
Figure 2. Overview of SubspaceAD. (Fitting): Aggregated patch features are collected from k normal samples using a frozen DINOv2-G model, and a PCA model is fitted to capture the subspace of normal variation. (Inference): Features of a test image are extracted, projected onto the normal subspace, and the reconstruction error is computed, providing the anomaly segmentation map directly. PCA figure from [38]. view at source ↗
Figure 3
Figure 3. Qualitative comparison on VisA and MVTec-AD (1-shot). SubspaceAD produces sharper and more precise anomaly maps than … view at source ↗
Figure 4
Figure 4. Effect of image resolution on performance across both … view at source ↗
Figure 5
Figure 5. Impact of backbone scale on SubspaceAD performance. view at source ↗
Figure 7
Figure 7. Additional qualitative results on MVTec-AD. Examples from six categories (Bottle, Cable, Metal Nut, Pill, Zipper, Grid). Rows show the input image, ground-truth mask, and our prediction. SubspaceAD effectively localizes structural, positional, and surface anomalies across varied MVTec categories. view at source ↗
Figure 8
Figure 8. Qualitative failure cases. Each row shows the image, … view at source ↗
read the original abstract

Detecting visual anomalies in industrial inspection often requires training with only a few normal images per category. Recent few-shot methods achieve strong results employing foundation-model features, but typically rely on memory banks, auxiliary datasets, or multi-modal tuning of vision-language models. We therefore question whether such complexity is necessary given the feature representations of vision foundation models. To answer this question, we introduce SubspaceAD, a training-free method, that operates in two simple stages. First, patch-level features are extracted from a small set of normal images by a frozen DINOv2 backbone. Second, a Principal Component Analysis (PCA) model is fit to these features to estimate the low-dimensional subspace of normal variations. At inference, anomalies are detected via the reconstruction residual with respect to this subspace, producing interpretable and statistically grounded anomaly scores. Despite its simplicity, SubspaceAD achieves state-of-the-art performance across one-shot and few-shot settings without training, prompt tuning, or memory banks. In the one-shot anomaly detection setting, SubspaceAD achieves image-level and pixel-level AUROC of 97.1% and 97.5% on the MVTec-AD dataset, and 93.2% and 98.2% on the VisA dataset, respectively, surpassing prior state-of-the-art results. Code and demo are available at https://github.com/CLendering/SubspaceAD.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces SubspaceAD, a training-free few-shot anomaly detection method. Patch-level features are extracted from a small set of normal images using a frozen DINOv2 backbone; PCA is then fit to these features to model the low-dimensional subspace of normal variations. At inference, anomalies are scored by their reconstruction residual with respect to this subspace. The method reports state-of-the-art image-level and pixel-level AUROC of 97.1%/97.5% on MVTec-AD and 93.2%/98.2% on VisA in the one-shot setting, outperforming prior approaches that rely on memory banks, auxiliary data, or prompt tuning.

Significance. If the empirical results hold under full implementation details, the work demonstrates that a parameter-light linear subspace model on frozen foundation-model features can achieve competitive or superior performance to more complex few-shot anomaly detection pipelines. This would reduce the need for training, memory banks, or multi-modal tuning in industrial inspection tasks and highlight the representational power of DINOv2 features for capturing normal variations.

major comments (2)
  1. [§3.2] §3.2 (PCA fitting): the number of retained principal components is listed as the sole free parameter, yet no selection criterion, cross-validation procedure, or default value is provided; because the residual norm depends directly on this choice, the reported AUROCs cannot be reproduced without it.
  2. [§4.1] §4.1 (evaluation protocol): the aggregation of patch-level residuals into image-level scores is not specified (e.g., max, mean, or percentile), which is load-bearing for the claimed 97.1% image-level AUROC on MVTec-AD.
minor comments (2)
  1. [§3.1] The abstract and §3.1 state that residuals are 'statistically grounded,' but no derivation or reference to a probabilistic model (e.g., chi-squared distribution of residuals) is supplied.
  2. [Table 1] Table 1 and Table 2 compare against prior methods; ensure all baselines use the same DINOv2 backbone and feature extraction settings for fair comparison.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the positive assessment and the recommendation for minor revision. The comments highlight important details for reproducibility, which we address below by clarifying the manuscript.

read point-by-point responses
  1. Referee: [§3.2] §3.2 (PCA fitting): the number of retained principal components is listed as the sole free parameter, yet no selection criterion, cross-validation procedure, or default value is provided; because the residual norm depends directly on this choice, the reported AUROCs cannot be reproduced without it.

    Authors: We agree that an explicit selection rule is required for reproducibility. In the revised manuscript we state that we retain the minimal number of components that explain at least 95% of the variance in the normal feature matrix (a standard, deterministic criterion that requires no cross-validation). We also report the resulting k values per dataset and category so that the exact residual computation can be replicated. revision: yes

  2. Referee: [§4.1] §4.1 (evaluation protocol): the aggregation of patch-level residuals into image-level scores is not specified (e.g., max, mean, or percentile), which is load-bearing for the claimed 97.1% image-level AUROC on MVTec-AD.

    Authors: We thank the referee for noting this omission. The image-level score is defined as the maximum patch-level reconstruction residual within the image; this choice is now explicitly stated in §4.1 together with the corresponding pixel-level map (the residual map itself). The revised text also includes a short justification that the max operator is consistent with the goal of detecting the strongest local deviation from the normal subspace. revision: yes
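Both clarifications are mechanical to implement. A hedged sketch, with synthetic features standing in for DINOv2 patches and assuming scikit-learn's PCA:

```python
# (a) Retain the minimal k explaining >= 95% of variance (rebuttal point 1).
# (b) Image-level score = max patch-level residual (rebuttal point 2).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
normal_patches = rng.normal(size=(400, 48))  # stand-in for DINOv2 patches

# (a) deterministic component selection at the 95% variance threshold
full = PCA().fit(normal_patches)
k = int(np.searchsorted(np.cumsum(full.explained_variance_ratio_), 0.95) + 1)
pca = PCA(n_components=k).fit(normal_patches)

# (b) aggregate patch residuals of one test image into an image-level score
test_patches = rng.normal(size=(196, 48))  # e.g. a 14x14 patch grid
recon = pca.inverse_transform(pca.transform(test_patches))
residuals = np.linalg.norm(test_patches - recon, axis=1)
image_score = residuals.max()  # strongest local deviation from the subspace
```

Note that scikit-learn also accepts a fractional `n_components=0.95` directly, which applies the same 95%-variance rule in a single fit.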

Circularity Check

0 steps flagged

No significant circularity; standard PCA on external features

full rationale

The derivation consists of extracting patch features from a frozen DINOv2 backbone on a few normal images, fitting PCA to model the normal subspace, and scoring anomalies by reconstruction residual. This is a direct, parameter-light application of established linear algebra to off-the-shelf features with no self-definitional loops, no fitted inputs renamed as predictions, and no load-bearing self-citations. Performance numbers are external benchmark results, not internal reductions of the method to its own inputs.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The approach rests on the standard assumption that normal variations form a low-dimensional linear subspace in feature space; no new entities are introduced and the only free parameter is the number of retained PCA components.

free parameters (1)
  • number of principal components
    Determines the dimensionality of the normal subspace; exact selection rule is not stated in the abstract.
axioms (1)
  • domain assumption Normal image variations lie in a low-dimensional linear subspace of the DINOv2 feature space.
    Invoked to justify using PCA reconstruction residual as the anomaly score.

pith-pipeline@v0.9.0 · 5556 in / 1287 out tokens · 23844 ms · 2026-05-15T18:45:31.515004+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith reviews without signing in.

Reference graph

Works this paper leans on

43 extracted references · 43 canonical work pages · 4 internal anchors

  1. [1]

    Ganomaly: Semi-supervised anomaly detection via adversarial training

    Samet Akcay, Amir Atapour-Abarghouei, and Toby P Breckon. Ganomaly: Semi-supervised anomaly detection via adversarial training. InAsian conference on computer vision, pages 622–637. Springer, 2018. 2

  2. [2]

    Deep nearest neighbor anomaly detection.arXiv preprint arXiv:2002.10445, 2020

    Liron Bergman, Niv Cohen, and Yedid Hoshen. Deep nearest neighbor anomaly detection.arXiv preprint arXiv:2002.10445, 2020. 3

  3. [3]

    Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders

    Paul Bergmann, Sindy L ¨owe, Michael Fauser, David Sattlegger, and Carsten Steger. Improving unsupervised defect segmentation by applying structural similarity to autoencoders.arXiv preprint arXiv:1807.02011, 2018. 1, 2

  4. [4]

    Mvtec ad–a comprehensive real-world dataset for unsupervised anomaly detection

    Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger. Mvtec ad–a comprehensive real-world dataset for unsupervised anomaly detection. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9592–9600, 2019. 1, 2, 5

  5. [5]

    Adaclip: Adapting clip with hybrid learnable prompts for zero-shot anomaly detection

    Yunkang Cao, Jiangning Zhang, Luca Frittoli, Yuqi Cheng, Weiming Shen, and Giacomo Boracchi. Adaclip: Adapting clip with hybrid learnable prompts for zero-shot anomaly detection. InEuropean Conference on Computer Vision, pages 55–72. Springer, 2024. 3

  6. [6]

    Emerging properties in self-supervised vision transformers

    Mathilde Caron, Hugo Touvron, Ishan Misra, Herv ´e J´egou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. InProceedings of the IEEE/CVF international conference on computer vision, pages 9650–9660, 2021. 2

  7. [7]

    Anomaly detection: A survey.ACM computing surveys (CSUR), 41(3):1–58, 2009

    Varun Chandola, Arindam Banerjee, and Vipin Kumar. Anomaly detection: A survey.ACM computing surveys (CSUR), 41(3):1–58, 2009. 1

  8. [8]

    A simple framework for contrastive learning of visual representations

    Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. InInternational conference on machine learning, pages 1597–1607. PmLR,

  9. [9]

    Xuhai Chen, Yue Han, and Jiangning Zhang. A zero-/few-shot anomaly classification and segmentation method for cvpr 2023 (vand) workshop challenge tracks 1 &2.1st Place on Zero-shot AD and 4th Place on Few-shot AD, 2305:17382, 2023. 3

  10. [10]

    Sub-image anomaly detection with deep pyramid correspondences.arXiv preprint arXiv:2005.02357, 2020

    Niv Cohen and Yedid Hoshen. Sub-image anomaly detection with deep pyramid correspondences.arXiv preprint arXiv:2005.02357, 2020. 1, 2, 3, 5

  11. [11]

    Anomalydino: Boosting patch-based few-shot anomaly detection with dinov2

    Simon Damm, Mike Laszkiewicz, Johannes Lederer, and Asja Fischer. Anomalydino: Boosting patch-based few-shot anomaly detection with dinov2. In2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 1319–1329. IEEE, 2025. 1, 2, 5, 6, 7, 4

  12. [12]

    Padim: a patch distribution modeling framework for anomaly detection and localization

    Thomas Defard, Aleksandr Setkov, Angelique Loesch, and Romaric Audigier. Padim: a patch distribution modeling framework for anomaly detection and localization. In International conference on pattern recognition, pages 475–489. Springer, 2021. 2, 3

  13. [13]

    Anomaly detection via reverse distillation from one-class embedding

    Hanqiu Deng and Xingyu Li. Anomaly detection via reverse distillation from one-class embedding. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9737–9746, 2022. 2

  14. [14]

    Anovl: Adapting vision-language models for unified zero-shot anomaly localization.arXiv preprint arXiv:2308.15939, 2(5), 2023

    Hanqiu Deng, Zhaoxiang Zhang, Jinan Bao, and Xingyu Li. Anovl: Adapting vision-language models for unified zero-shot anomaly localization.arXiv preprint arXiv:2308.15939, 2(5), 2023. 3

  15. [15]

    Fastrecon: Few-shot industrial anomaly detection via fast feature reconstruction

    Zheng Fang, Xiaoyang Wang, Haocheng Li, Jiejie Liu, Qiugui Hu, and Jimin Xiao. Fastrecon: Few-shot industrial anomaly detection via fast feature reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 17481–17490, 2023. 2, 5

  16. [16]

    Transfusion–a transparency-based diffusion model for anomaly detection

    Matic Fu ˇcka, Vitjan Zavrtanik, and Danijel Skoˇcaj. Transfusion–a transparency-based diffusion model for anomaly detection. InEuropean conference on computer vision, pages 91–108. Springer, 2024. 1, 2

  17. [17]

    Masked autoencoders are scalable vision learners

    Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll´ar, and Ross Girshick. Masked autoencoders are scalable vision learners. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16000–16009, 2022. 2

  18. [18]

    Winclip: Zero-/few-shot anomaly classification and segmentation

    Jongheon Jeong, Yang Zou, Taewan Kim, Dongqing Zhang, Avinash Ravichandran, and Onkar Dabeer. Winclip: Zero-/few-shot anomaly classification and segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19606–19616, 2023. 1, 2, 3, 5, 7, 4

  19. [19]

    Few-shot anomaly detection via personalization.IEEE Access, 12:11035–11051, 2024

    Sangkyung Kwak, Jongheon Jeong, Hankook Lee, Woohyuck Kim, Dongho Seo, Woojin Yun, Wonjin Lee, and Jinwoo Shin. Few-shot anomaly detection via personalization.IEEE Access, 12:11035–11051, 2024. 3

  20. [20]

    Zero-shot anomaly detection via batch normalization.Advances in Neural Information Processing Systems, 36:40963–40993, 2023

    Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Maja Rudolph, and Stephan Mandt. Zero-shot anomaly detection via batch normalization.Advances in Neural Information Processing Systems, 36:40963–40993, 2023. 3, 7

  21. [21]

    Multimodal foundation models: From specialists to general-purpose assistants.Foundations and Trends® in Computer Graphics and Vision, 16(1-2):1–214, 2024

    Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, Jianfeng Gao, et al. Multimodal foundation models: From specialists to general-purpose assistants.Foundations and Trends® in Computer Graphics and Vision, 16(1-2):1–214, 2024. 2

  22. [22]

    Musc: Zero-shot industrial anomaly classification and segmentation with mutual scoring of the unlabeled images

    Xurui Li, Ziming Huang, Feng Xue, and Yu Zhou. Musc: Zero-shot industrial anomaly classification and segmentation with mutual scoring of the unlabeled images. InThe Twelfth International Conference on Learning Representations, 2024. 3, 6, 7

  23. [23]

    Promptad: Learning prompts with only normal samples for few-shot anomaly detection

    Xiaofan Li, Zhizhong Zhang, Xin Tan, Chengwei Chen, Yanyun Qu, Yuan Xie, and Lizhuang Ma. Promptad: Learning prompts with only normal samples for few-shot anomaly detection. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16838–16848, 2024. 2, 3, 5, 7

  24. [24]

    Grounding dino: Marrying dino with grounded pre-training for open-set object detection

    Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European conference on computer vision, pages 38–55. Springer, 2024. 2 9

  25. [25]

    One-for-all few-shot anomaly detection via instance-induced prompt learning

    Wenxi Lv, Qinliang Su, and Wenchao Xu. One-for-all few-shot anomaly detection via instance-induced prompt learning. InThe Thirteenth International Conference on Learning Representations, 2025. 2, 3, 5

  26. [26]

    Principal components analysis (pca).Computers & Geosciences, 19 (3):303–342, 1993

    Andrzej Ma ´ckiewicz and Waldemar Ratajczak. Principal components analysis (pca).Computers & Geosciences, 19 (3):303–342, 1993. 2

  27. [27]

    DINOv2: Learning Robust Visual Features without Supervision

    Maxime Oquab, Timoth ´ee Darcet, Th´eo Moutakanni, Huy V o, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. 2, 3, 5

  28. [28]

    Deep learning for anomaly detection: A review.ACM computing surveys (CSUR), 54(2):1–38, 2021

    Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly detection: A review.ACM computing surveys (CSUR), 54(2):1–38, 2021. 1

  29. [29]

    Karl Pearson. Liii. on lines and planes of closest fit to systems of points in space.The London, Edinburgh, and Dublin philosophical magazine and journal of science, 2(11): 559–572, 1901. 2

  30. [30]

    Learning transferable visual models from natural language supervision

    Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. InInternational conference on machine learning, pages 8748–8763. PmLR, 2021. 1, 2, 3

  31. [31]

    Towards total recall in industrial anomaly detection

    Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Sch¨olkopf, Thomas Brox, and Peter Gehler. Towards total recall in industrial anomaly detection. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14318–14328, 2022. 1, 2, 5

  32. [32]

    Optimizing patchcore for few/many-shot anomaly detection.arXiv preprint arXiv:2307.10792, 2023

    Jo ˜ao Santos, Triet Tran, and Oliver Rippel. Optimizing patchcore for few/many-shot anomaly detection.arXiv preprint arXiv:2307.10792, 2023. 3

  33. [33]

    Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery

    Thomas Schlegl, Philipp Seeb ¨ock, Sebastian M. Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery.CoRR, abs/1703.05921, 2017. 2

  34. [34]

    f-anogan: Fast unsupervised anomaly detection with generative adversarial networks.Medical image analysis, 54:30–44, 2019

    Thomas Schlegl, Philipp Seeb ¨ock, Sebastian M Waldstein, Georg Langs, and Ursula Schmidt-Erfurth. f-anogan: Fast unsupervised anomaly detection with generative adversarial networks.Medical image analysis, 54:30–44, 2019. 1, 2

  35. [35]

    A novel anomaly detection scheme based on principal component classifier

    Mei-Ling Shyu, Shu-Ching Chen, Kanoksri Sarinnapakorn, and Liwu Chang. A novel anomaly detection scheme based on principal component classifier. InProceedings of International Conference on Data Mining, 2003. 2

  36. [36]

    DINOv3

    Oriane Sim ´eoni, Huy V V o, Maximilian Seitzer, Federico Baldassarre, Maxime Oquab, Cijo Jose, Vasil Khalidov, Marc Szafraniec, Seungeun Yi, Micha¨el Ramamonjisoa, et al. Dinov3.arXiv preprint arXiv:2508.10104, 2025. 2, 1

  37. [37]

    Probabilistic principal component analysis.Journal of the Royal Statistical Society Series B: Statistical Methodology, 61(3): 611–622, 1999

    Michael E Tipping and Christopher M Bishop. Probabilistic principal component analysis.Journal of the Royal Statistical Society Series B: Statistical Methodology, 61(3): 611–622, 1999. 4

  38. [38]

    Principal component analysis — Wikipedia, the free encyclopedia, 2026

    Wikipedia contributors. Principal component analysis — Wikipedia, the free encyclopedia, 2026. [Online; accessed 3-March-2026]. 4

  39. [39]

    Pushing the limits of fewshot anomaly detection in industry vision: Graphcore.arXiv preprint arXiv:2301.12082, 2023

    Guoyang Xie, Jinbao Wang, Jiaqi Liu, Feng Zheng, and Yaochu Jin. Pushing the limits of fewshot anomaly detection in industry vision: Graphcore.arXiv preprint arXiv:2301.12082, 2023. 3

  40. [40]

    Customizing visual-language foundation models for multi-modal anomaly detection and reasoning

    Xiaohao Xu, Yunkang Cao, Huaxin Zhang, Nong Sang, and Xiaonan Huang. Customizing visual-language foundation models for multi-modal anomaly detection and reasoning. arXiv preprint arXiv:2403.11083, 2024. 1, 3

  41. [41]

    Fastflow: Unsupervised anomaly detection and localization via 2d normalizing flows.arXiv preprint arXiv:2111.07677, 2021

    Jiawei Yu, Ye Zheng, Xiang Wang, Wei Li, Yushuang Wu, Rui Zhao, and Liwei Wu. Fastflow: Unsupervised anomaly detection and localization via 2d normalizing flows.arXiv preprint arXiv:2111.07677, 2021. 2

  42. [42]

    Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection.arXiv preprint arXiv:2310.18961, 2023

    Qihang Zhou, Guansong Pang, Yu Tian, Shibo He, and Jiming Chen. Anomalyclip: Object-agnostic prompt learning for zero-shot anomaly detection.arXiv preprint arXiv:2310.18961, 2023. 1, 3, 7

  43. [43]

    Spot-the-difference self-supervised pre-training for anomaly detection and segmentation

    Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, and Onkar Dabeer. Spot-the-difference self-supervised pre-training for anomaly detection and segmentation. In European conference on computer vision, pages 392–408. Springer, 2022. 2, 5, 1 10 SubspaceAD: Training-Free Few-Shot Anomaly Detection via Subspace Modeling Supplementary Material A. Per-Cate...