Recognition: unknown
ShapeY: A Principled Framework for Measuring Shape Recognition Capacity via Nearest-Neighbor Matching
Pith reviewed 2026-05-08 04:09 UTC · model grok-4.3
The pith
ShapeY shows that deep networks struggle to recognize objects by 3D shape across viewpoint and appearance changes.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
ShapeY comprises 68,200 grayscale images of 200 3D objects rendered from multiple viewpoints, with optional non-shape appearance changes. Using a nearest-neighbor matching task, it probes whether an OR system's embedding space clusters object views by 3D shape similarity. Testing of 321 diverse pre-trained networks shows that state-of-the-art models struggle to generalize consistently across 3D viewpoint and appearance changes and are prone to infrequent but egregious matches of objects with completely different shapes. ShapeY thus establishes a principled framework for advancing toward human-like shape recognition capabilities.
What carries the argument
Nearest-neighbor matching distances in the embedding space to evaluate clustering of object views by 3D shape similarity across variations.
If this is right
- Adoption of ShapeY would allow systematic comparison and improvement of shape invariance in new model architectures.
- Well-performing models on ShapeY would exhibit better generalization in tasks requiring shape-based understanding despite visual changes.
- The detailed readouts like viewpoint tuning curves can identify specific weaknesses in how models encode shape.
- The framework emphasizes the development of encodings that are disentangled from non-shape cues like texture.
- It provides a tool to track progress toward more robust and human-like object recognition systems.
Where Pith is reading between the lines
- This finding implies that training data and objectives in current models may be insufficient for learning true 3D shape representations, suggesting a need for synthetic 3D data augmentation.
- ShapeY could be used to evaluate emerging vision transformers or other architectures for their shape capabilities in ways standard datasets do not.
- Poor shape recognition might underlie failures in applications like autonomous driving where viewpoint-invariant shape understanding is critical.
- Extending ShapeY to include more complex scenes or real-world images could bridge the gap to practical deployment.
Load-bearing premise
That nearest-neighbor matching distances in a model's embedding space directly measure its capacity for shape-based recognition across viewpoint and appearance variation.
What would settle it
If a model achieves low error rates on ShapeY match tasks yet human observers rate the same objects as having dissimilar 3D shapes, or if the model succeeds on ShapeY but fails independent shape-based recognition tests outside the benchmark images.
Figures
read the original abstract
Object recognition (OR) in humans relies heavily on shape cues and the ability to recognize objects across varying 3D viewpoints. Unlike humans, deep networks often rely on non-shape cues such as texture and background, leading to vulnerabilities in generalization and robustness. To address this gap, we introduce ShapeY, a novel and principled benchmarking framework designed to evaluate shape-based recognition capability in OR systems. ShapeY comprises 68,200 grayscale images of 200 3D objects rendered from multiple viewpoints and optionally subjected to non-shape ``appearance'' changes. Using a nearest-neighbor matching task, ShapeY specifically probes the fine-grained structure of an OR system's embedding space by evaluating whether object views are clustered by 3D shape similarity across varying 3D viewpoints and other non-shape changes. ShapeY provides a suite of quantitative and qualitative performance readouts, including error rate graphs, viewpoint tuning curves, histograms of positive and negative matching scores, and grids showing ordered best matches, which together offer a comprehensive evaluation of an OR system's shape understanding capability. Testing of 321 pre-trained networks with diverse architectures reveals significant challenges in achieving robust shape-based recognition: even state-of-the-art models struggle to generalize consistently across 3D viewpoint and appearance changes, and are prone to infrequent but egregious matches of objects of obviously completely different shape. ShapeY establishes a principled framework for advancing artificial vision systems toward human-like shape recognition capabilities, emphasizing the importance of disentangled and invariant object encodings.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces ShapeY, a benchmarking framework with 68,200 grayscale images of 200 3D objects rendered from multiple viewpoints with optional non-shape appearance changes. It evaluates shape-based recognition capacity in 321 pre-trained networks via a nearest-neighbor matching task that probes whether embeddings cluster object views by 3D shape similarity, reporting that even state-of-the-art models struggle to generalize consistently across viewpoint and appearance variation and exhibit infrequent but egregious mismatches of dissimilar shapes.
Significance. If the nearest-neighbor procedure validly isolates shape capacity, the work supplies a scalable diagnostic tool with quantitative readouts (error graphs, viewpoint curves, score histograms) and qualitative match grids, plus large-scale testing across architectures; this could help steer development of more viewpoint- and appearance-invariant representations.
major comments (2)
- [Evaluation Protocol] The central claim that poor NN performance demonstrates limited shape recognition rests on the unvalidated premise that embedding distances primarily encode 3D shape across viewpoints. No positive control is reported using an explicit shape descriptor (e.g., silhouette or mesh-invariant matching) that should succeed on the same stimuli, nor a negative control with shape-scrambled images; without these, failures could arise from generic embedding statistics rather than absence of shape understanding (see evaluation protocol and results sections).
- [Methods] Quantitative tables and error-rate graphs are referenced but the manuscript provides insufficient detail on data-generation parameters (exact viewpoint sampling, appearance-change distributions, object selection) and on how the 321 networks were chosen or fine-tuned, making it impossible to assess whether the reported generalization failures are robust or protocol-dependent.
minor comments (2)
- [Results] The abstract and results text use the term 'egregious matches' without a precise operational definition or frequency statistic; add a quantitative threshold or histogram bin for such cases.
- Figure captions for the match grids should explicitly state the ordering criterion (e.g., cosine distance) and whether appearance changes were applied in the displayed examples.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed comments on our manuscript introducing the ShapeY framework. We address each major comment below with the strongest honest defense possible, proposing targeted revisions to improve validation and transparency without misrepresenting the original work.
read point-by-point responses
-
Referee: [Evaluation Protocol] The central claim that poor NN performance demonstrates limited shape recognition rests on the unvalidated premise that embedding distances primarily encode 3D shape across viewpoints. No positive control is reported using an explicit shape descriptor (e.g., silhouette or mesh-invariant matching) that should succeed on the same stimuli, nor a negative control with shape-scrambled images; without these, failures could arise from generic embedding statistics rather than absence of shape understanding (see evaluation protocol and results sections).
Authors: We agree that explicit positive and negative controls would provide stronger evidence that the nearest-neighbor failures specifically reflect limitations in shape encoding rather than generic embedding properties. The current protocol defines positives as different views of the same object and negatives as different objects, with qualitative match grids already showing egregious shape mismatches in some models. However, to directly address the concern, the revised manuscript will add a positive control using silhouette-based nearest-neighbor matching on the same grayscale stimuli (expected to succeed if shape cues dominate) and a negative control on shape-scrambled versions of the images (e.g., via randomized part textures or permutations) to show performance collapse when shape is disrupted. These will be reported in an expanded evaluation protocol section. revision: yes
-
Referee: [Methods] Quantitative tables and error-rate graphs are referenced but the manuscript provides insufficient detail on data-generation parameters (exact viewpoint sampling, appearance-change distributions, object selection) and on how the 321 networks were chosen or fine-tuned, making it impossible to assess whether the reported generalization failures are robust or protocol-dependent.
Authors: We acknowledge that greater detail on data generation and network selection is necessary for full reproducibility and robustness assessment. The original manuscript prioritized the framework description and aggregate results across 321 networks, but we agree this leaves gaps. In the revised version, the Methods section will be expanded to specify: exact viewpoint sampling (uniform azimuth 0-360° in 15° steps, elevation -30° to 30° in 10° steps), appearance-change distributions (parameters for texture randomization, lighting intensity, and background variation), object selection criteria (200 models chosen for shape diversity from ShapeNet and similar repositories), and network curation (321 models drawn from standard pre-trained repositories such as PyTorch Hub and TensorFlow Hub with no fine-tuning applied to preserve zero-shot evaluation). These additions will allow readers to evaluate protocol dependence. revision: yes
Circularity Check
No significant circularity; empirical benchmark
full rationale
The paper introduces ShapeY as a new dataset of 68,200 rendered images and applies a nearest-neighbor matching procedure directly to the embedding spaces of 321 pre-trained networks. No mathematical derivations, predictions, or first-principles results are claimed. Central claims rest on empirical error rates, viewpoint curves, and match grids obtained from the benchmark itself, without any reduction to fitted parameters, self-definitional equations, or load-bearing self-citations. The framework is self-contained against external benchmarks and does not invoke uniqueness theorems or ansatzes from prior author work.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Nearest-neighbor matching in a model's embedding space measures shape similarity across viewpoint and appearance changes
invented entities (1)
-
ShapeY benchmark framework
no independent evidence
Reference graph
Works this paper leans on
-
[1]
The lateral occipital complex and its role in object recognition,
K. Grill-Spector, Z. Kourtzi, and N. Kanwisher, “The lateral occipital complex and its role in object recognition,”Vision Research, vol. 41, no. 10-11, pp. 1409–1422, May 2001
2001
-
[2]
Recognition-by-components: A theory of human image understanding
I. Biederman, “Recognition-by-components: A theory of human image understanding.”Psychological Review, vol. 94, no. 2, pp. 115–147, Apr. 1987
1987
-
[3]
Surface versus edge-based determinants of visual recognition,
I. Biederman and G. Ju, “Surface versus edge-based determinants of visual recognition,”Cognitive Psychology, vol. 20, no. 1, pp. 38–64, Jan. 1988
1988
-
[4]
Visual intelligence: How we create what we see
D. D. Hoffman, “Visual intelligence: How we create what we see.” Visual intelligence: How we create what we see., pp. xv, 294–xv, 294, 1998
1998
-
[5]
Representation of Perceived Object Shape by the Human Lateral Occipital Complex,
Z. Kourtzi, “Representation of Perceived Object Shape by the Human Lateral Occipital Complex,”Science, vol. 293, no. 5534, pp. 1506–1509, Aug. 2001
2001
-
[6]
ImageNet: A large-scale hierarchical image database,
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in2009 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2009, pp. 248–255
2009
-
[7]
R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wich- mann, and W. Brendel, “ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness,” arXiv:1811.12231 [cs, q-bio, stat], Jan. 2019
-
[8]
Approximating CNNs with Bag- of-local-Features models works surprisingly well on ImageNet,
W. Brendel and M. Bethge, “Approximating CNNs with Bag- of-local-Features models works surprisingly well on ImageNet,” arXiv:1904.00760 [cs, stat], Mar. 2019
-
[9]
Deep convolutional networks do not classify based on global object shape,
N. Baker, H. Lu, G. Erlikhman, and P. J. Kellman, “Deep convolutional networks do not classify based on global object shape,”PLOS Compu- tational Biology, vol. 14, no. 12, p. e1006613, Dec. 2018
2018
-
[10]
Shape- Biased Learning by Thinking Inside the Box,
N. M ¨uller, C. G. M. Snoek, I. I. A. Groen, and H. S. Scholte, “Shape- Biased Learning by Thinking Inside the Box,” p. 2024.05.30.595526, Jun. 2024
2024
-
[11]
Are Vision Language Models Texture or Shape Biased and Can We Steer Them?
P. Gavrikov, J. Lukasik, S. Jung, R. Geirhos, B. Lamm, M. J. Mirza, M. Keuper, and J. Keuper, “Are Vision Language Models Texture or Shape Biased and Can We Steer Them?” Mar. 2024
2024
-
[12]
Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity
T. Li, Y . Li, Z. Wen, and T. S. Lee, “Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity.”
-
[13]
Assessing Shape Bias Property of Convolutional Neural Networks,
H. Hosseini, B. Xiao, M. Jaiswal, and R. Poovendran, “Assessing Shape Bias Property of Convolutional Neural Networks,” in2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jun. 2018, pp. 2004–20 048
2018
-
[14]
Shape or Texture: Understanding Discriminative Features in CNNs,
M. A. Islam, M. Kowal, P. Esser, S. Jia, B. Ommer, K. G. Derpanis, and N. Bruce, “Shape or Texture: Understanding Discriminative Features in CNNs,” Jan. 2021
2021
-
[15]
Teaching deep networks to see shape: Lessons from a simplified visual world,
C. Jarvers and H. Neumann, “Teaching deep networks to see shape: Lessons from a simplified visual world,” 2024
2024
-
[16]
Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models
A. Hemmat, A. Davies, T. A. Lamb, J. Yuan, P. Torr, A. Khakzar, and F. Pinto, “Hidden in Plain Sight: Evaluating Abstract Shape Recognition in Vision-Language Models.”
-
[17]
Does enhanced shape bias improve neural network robustness to common corruptions?
C. K. Mummadi, R. Subramaniam, R. Hutmacher, J. Vitay, V . Fischer, and J. H. Metzen, “Does enhanced shape bias improve neural network robustness to common corruptions?” Apr. 2021
2021
-
[18]
Can Biases in ImageNet Models Explain Generalization?
P. Gavrikov and J. Keuper, “Can Biases in ImageNet Models Explain Generalization?”
-
[19]
Columbia Object Image Library (COIL-100)
S. A. Nene, S. K. Nayar, and H. Murase, “Columbia Object Image Library (COIL-100).”
-
[20]
Learning methods for generic object recognition with invariance to pose and lighting,
Y . LeCun, Fu Jie Huang, and L. Bottou, “Learning methods for generic object recognition with invariance to pose and lighting,” inProceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 2. Washington, DC, USA: IEEE, 2004, pp. 97–104
2004
-
[21]
A large-scale hierarchical multi- view RGB-D object dataset,
K. Lai, L. Bo, X. Ren, and D. Fox, “A large-scale hierarchical multi- view RGB-D object dataset,” in2011 IEEE International Conference on Robotics and Automation, 2011, pp. 1817–1824
2011
-
[22]
BigBIRD: A large-scale 3D database of object instances,
A. Singh, J. Sha, K. S. Narayan, T. Achim, and P. Abbeel, “BigBIRD: A large-scale 3D database of object instances,” in2014 IEEE International Conference on Robotics and Automation (ICRA). Hong Kong, China: IEEE, May 2014, pp. 509–516
2014
-
[23]
iLab-20M: A Large-Scale Controlled Object Dataset to Investigate Deep Learning,
A. Borji, S. Izadi, and L. Itti, “iLab-20M: A Large-Scale Controlled Object Dataset to Investigate Deep Learning,” in2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV , USA: IEEE, Jun. 2016, pp. 2221–2230
2016
-
[24]
CORe50: A New Dataset and Benchmark for Continuous Object Recognition,
V . Lomonaco and D. Maltoni, “CORe50: A New Dataset and Benchmark for Continuous Object Recognition,” inProceedings of the 1st Annual Conference on Robot Learning. PMLR, Oct. 2017, pp. 17–26
2017
-
[25]
Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations,
A. Ahmadyan, L. Zhang, J. Wei, A. Ablavatski, and M. Grundmann, “Objectron: A Large Scale Dataset of Object-Centric Videos in the Wild with Pose Annotations,” Dec. 2020
2020
-
[26]
Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction,
J. Reizenstein, R. Shapovalov, P. Henzler, L. Sbordone, P. Labatut, and D. Novotny, “Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction,” Sep. 2021
2021
-
[27]
PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning,
F. Bordes, S. Shekhar, M. Ibrahim, D. Bouchacourt, P. Vincent, and A. S. Morcos, “PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning,” Dec. 2023
2023
-
[28]
Benchmarking Neural Network Ro- bustness to Common Corruptions and Perturbations,
D. Hendrycks and T. Dietterich, “Benchmarking Neural Network Ro- bustness to Common Corruptions and Perturbations,” Mar. 2019
2019
-
[29]
Learning Robust Global Representations by Penalizing Local Predictive Power,
H. Wang, S. Ge, E. P. Xing, and Z. C. Lipton, “Learning Robust Global Representations by Penalizing Local Predictive Power,” Nov. 2019
2019
-
[30]
One-shot viewpoint invariance in matching novel objects,
I. Biederman and M. Bar, “One-shot viewpoint invariance in matching novel objects,”Vision Research, vol. 39, no. 17, pp. 2885–2899, Aug. 1999
1999
-
[31]
On the Semantics of a Glance at a Scene,
I. Biederman, “On the Semantics of a Glance at a Scene,” inPerceptual Organization. Routledge, 1981
1981
-
[32]
ShapeNet: An Information-Rich 3D Model Repository
A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, J. Xiao, L. Yi, and F. Yu, “ShapeNet: An Information-Rich 3D Model Repository,” arXiv:1512.03012 [cs], Dec. 2015
work page internal anchor Pith review arXiv 2015
-
[33]
Untangling invariant object recognition,
J. J. DiCarlo and D. D. Cox, “Untangling invariant object recognition,” Trends in Cognitive Sciences, vol. 11, no. 8, pp. 333–341, Aug. 2007
2007
-
[34]
Deep Residual Learning for Image Recognition
K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,”arXiv:1512.03385 [cs], Dec. 2015
work page internal anchor Pith review arXiv 2015
-
[35]
PyTorch Image Models,
R. Wightman, “PyTorch Image Models,” 2019
2019
-
[36]
A Simple Framework for Contrastive Learning of Visual Representations
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A Simple Framework for Contrastive Learning of Visual Representations,”arXiv:2002.05709 [cs, stat], Jun. 2020
work page internal anchor Pith review arXiv 2002
-
[37]
DINOv2: Learning Robust Visual Fea- tures without Supervision,
M. Oquab, T. Darcet, T. Moutakanni, H. V o, M. Szafraniec, V . Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, M. Assran, N. Ballas, W. Galuba, R. Howes, P.-Y . Huang, S.-W. Li, I. Misra, M. Rabbat, V . Sharma, G. Synnaeve, H. Xu, H. Jegou, J. Mairal, P. Labatut, A. Joulin, and P. Bojanowski, “DINOv2: Learning Robust Visual Fea- tures without Su...
2024
-
[38]
Decoupled Contrastive Learning,
C.-H. Yeh, C.-Y . Hong, Y .-C. Hsu, T.-L. Liu, Y . Chen, and Y . LeCun, “Decoupled Contrastive Learning,” Jul. 2022
2022
-
[39]
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments,
M. Caron, I. Misra, J. Mairal, P. Goyal, P. Bojanowski, and A. Joulin, “Unsupervised Learning of Visual Features by Contrasting Cluster Assignments,” Jan. 2021
2021
-
[40]
VICReg: Variance-Invariance- Covariance Regularization for Self-Supervised Learning,
A. Bardes, J. Ponce, and Y . LeCun, “VICReg: Variance-Invariance- Covariance Regularization for Self-Supervised Learning,” Jan. 2022
2022
-
[41]
Momentum Contrast for Unsupervised Visual Representation Learning,
K. He, H. Fan, Y . Wu, S. Xie, and R. Girshick, “Momentum Contrast for Unsupervised Visual Representation Learning,” Mar. 2020
2020
-
[42]
Bootstrap your own latent: A new approach to self-supervised Learning,
J.-B. Grill, F. Strub, F. Altch ´e, C. Tallec, P. H. Richemond, E. Buchatskaya, C. Doersch, B. A. Pires, Z. D. Guo, M. G. Azar, B. Piot, K. Kavukcuoglu, R. Munos, and M. Valko, “Bootstrap your own latent: A new approach to self-supervised Learning,” Sep. 2020
2020
-
[43]
Barlow Twins: Self-Supervised Learning via Redundancy Reduction,
J. Zbontar, L. Jing, I. Misra, Y . LeCun, and S. Deny, “Barlow Twins: Self-Supervised Learning via Redundancy Reduction,” Jun. 2021. 16
2021
-
[44]
Emerging Properties in Self-Supervised Vision Transform- ers,
M. Caron, H. Touvron, I. Misra, H. J ´egou, J. Mairal, P. Bojanowski, and A. Joulin, “Emerging Properties in Self-Supervised Vision Transform- ers,” May 2021
2021
-
[45]
L. Jing, P. Vincent, Y . LeCun, and Y . Tian, “Understanding Dimensional Collapse in Contrastive Self-supervised Learning,”arXiv:2110.09348 [cs], Apr. 2022
-
[46]
On the surprising similarities between supervised and self-supervised models,
R. Geirhos, K. Narayanappa, B. Mitzkus, M. Bethge, F. A. Wichmann, and W. Brendel, “On the surprising similarities between supervised and self-supervised models,”arXiv:2010.08377 [cs, q-bio], Oct. 2020
-
[47]
ResNet strikes back: An improved training procedure in timm,
R. Wightman, H. Touvron, and H. J ´egou, “ResNet strikes back: An improved training procedure in timm,” Oct. 2021
2021
-
[48]
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,”arXiv:2010.11929 [cs], Jun. 2021
work page internal anchor Pith review arXiv 2010
-
[49]
A ConvNet for the 2020s,
Z. Liu, H. Mao, C.-Y . Wu, C. Feichtenhofer, T. Darrell, and S. Xie, “A ConvNet for the 2020s,” Mar. 2022
2022
-
[50]
ConvFormer: Combining CNN and Transformer for Medical Image Segmentation,
P. Gu, Y . Zhang, C. Wang, and D. Z. Chen, “ConvFormer: Combining CNN and Transformer for Medical Image Segmentation,” Nov. 2022
2022
-
[51]
A. El-Nouby, H. Touvron, M. Caron, P. Bojanowski, M. Douze, A. Joulin, I. Laptev, N. Neverova, G. Synnaeve, J. Verbeek, and H. Jegou, “XCiT: Cross-Covariance Image Transformers,”arXiv:2106.09681 [cs], Jun. 2021
-
[52]
Going deeper with Image Transformers,
H. Touvron, M. Cord, A. Sablayrolles, G. Synnaeve, and H. J ´egou, “Going deeper with Image Transformers,” Apr. 2021
2021
-
[53]
Masked Autoencoders Are Scalable Vision Learners,
K. He, X. Chen, S. Xie, Y . Li, P. Doll ´ar, and R. Girshick, “Masked Autoencoders Are Scalable Vision Learners,” Dec. 2021
2021
-
[54]
Shortcut learning in deep neural networks,
R. Geirhos, J.-H. Jacobsen, C. Michaelis, R. Zemel, W. Brendel, M. Bethge, and F. A. Wichmann, “Shortcut learning in deep neural networks,”Nature Machine Intelligence, vol. 2, no. 11, pp. 665–673, Nov. 2020
2020
-
[55]
ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms,
W. Yang, B. Zhang, and O. Russakovsky, “ImageNet-OOD: Deciphering Modern Out-of-Distribution Detection Algorithms,” Mar. 2024. Jong Woo NamJong Woo Nam is a Ph.D. candi- date in the Neuroscience Graduate Program at the University of Southern California. He earned his B.Eng. in Applied Mathematics from Franklin W. Olin College of Engineering. His research ...
2024
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.