Spatial-frequency biases in neurally aligned DCNNs emerge from human-like representations but do not primarily drive their adversarial robustness advantages.
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
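The shape-bias measure behind this claim is computed from cue-conflict images (e.g. a cat shape rendered with elephant texture): among decisions that follow either cue, what fraction follow shape? A minimal sketch of that metric, with hypothetical trial data:

```python
# Sketch of the shape-bias metric on cue-conflict trials (hypothetical data).
# Each trial records the model's predicted class plus the image's shape and
# texture labels; predictions matching neither cue are ignored.
def shape_bias(trials):
    """Fraction of cue-following decisions that follow shape, not texture."""
    shape_hits = sum(1 for pred, shape, texture in trials if pred == shape)
    texture_hits = sum(1 for pred, shape, texture in trials if pred == texture)
    total = shape_hits + texture_hits
    return shape_hits / total if total else float("nan")

trials = [
    ("cat", "cat", "elephant"),       # followed shape
    ("elephant", "cat", "elephant"),  # followed texture
    ("cat", "cat", "clock"),          # followed shape
]
print(shape_bias(trials))  # 2 of 3 cue-following decisions were shape-based
```

A texture-biased ImageNet CNN scores low on this metric; the paper's claim is that pushing it higher also improves accuracy and robustness.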
5 Pith papers cite this work. Polarity classification is still indexing.

[Citation timeline: citing papers by year, through 2026]

5 representative citing papers
Alignment of vision-language models with human V1-V3 early visual cortex negatively predicts resistance to sycophantic gaslighting attacks.
Optimized 3x3 adversarial image filters based on edge detection generate transferable untargeted attacks on neural networks with 30-80% success using only one pass and far fewer parameters than prior methods.
ShapeY is a benchmark dataset and nearest-neighbor protocol that measures shape-based recognition in vision models, revealing that even state-of-the-art networks fail to generalize consistently across 3D viewpoints and non-shape appearance changes.
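A hedged sketch of a ShapeY-style nearest-neighbor check: a query image "wins" if its nearest neighbor, after excluding near-identical views of the same object, still shares its object identity. The `nn_match` helper, the view-exclusion rule, and the toy gallery are illustrative assumptions, not the benchmark's actual API:

```python
import numpy as np

# Gallery entries are (object_id, view_id, embedding). Views of the query's
# own object within `exclusion` of the query view are removed, forcing the
# match to generalize across viewpoint rather than exploit near-duplicates.
def nn_match(query_obj, query_view, query_vec, gallery, exclusion=1):
    best_obj, best_dist = None, np.inf
    for obj, view, vec in gallery:
        if obj == query_obj and abs(view - query_view) <= exclusion:
            continue  # exclude the query itself and neighboring views
        d = np.linalg.norm(query_vec - vec)
        if d < best_dist:
            best_obj, best_dist = obj, d
    return best_obj == query_obj

gallery = [
    ("mug", 0, np.array([1.0, 0.0])),
    ("mug", 2, np.array([0.9, 0.1])),   # distant view of the same object
    ("bowl", 0, np.array([0.0, 1.0])),
]
print(nn_match("mug", 0, np.array([1.0, 0.0]), gallery))
```

Growing the exclusion radius makes the test progressively harder, which is how a protocol like this can expose models that fail to generalize across 3D viewpoints.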
Sparse-to-dense 3D segmentation from 2D slices shows divergent regularization needs: 2D benefits from strong augmentation and soft labels while 3D does not, and human-centric preprocessing harms performance.
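The "soft labels" regularization this summary credits for 2D (but not 3D) training can be illustrated with standard label smoothing of per-pixel targets; `smooth_one_hot` and the epsilon value are illustrative, not the paper's code:

```python
import numpy as np

# Smoothed per-pixel targets: each one-hot label vector gives up `eps` of its
# mass, spread uniformly over all classes, softening the training signal.
def smooth_one_hot(labels, num_classes, eps=0.1):
    """Turn integer pixel labels (H, W) into smoothed one-hot (H, W, C)."""
    one_hot = np.eye(num_classes)[labels]
    return one_hot * (1.0 - eps) + eps / num_classes

labels = np.array([[0, 1], [1, 2]])
soft = smooth_one_hot(labels, num_classes=3)
```

Each pixel's class distribution still sums to 1; the true class keeps 1 - eps + eps/C of the mass and every other class gets eps/C.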
Citing papers explorer

Almost for Free: Crafting Adversarial Examples with Convolutional Image Filters
Optimized 3x3 adversarial image filters based on edge detection generate transferable untargeted attacks on neural networks with 30-80% success using only one pass and far fewer parameters than prior methods.
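A minimal sketch of a one-pass 3x3 filter perturbation, assuming a hand-picked identity-plus-Laplacian (edge-enhancing) kernel rather than the paper's optimized filters; the point is that the whole attack is a single convolution with only 9 parameters:

```python
import numpy as np

def apply_filter_attack(img, strength=0.5):
    """Perturb a grayscale image in [0, 1] with one 3x3 convolution."""
    laplacian = np.array([[0, 1, 0],
                          [1, -4, 1],
                          [0, 1, 0]], dtype=float)
    identity = np.zeros((3, 3))
    identity[1, 1] = 1.0
    kernel = identity + strength * laplacian  # edge-enhancing filter
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return np.clip(out, 0.0, 1.0)  # keep a valid image

img = np.full((8, 8), 0.5)
img[2:6, 2:6] = 0.9  # bright square on gray background
adv = apply_filter_attack(img)
```

Because the Laplacian sums to zero, flat regions pass through unchanged while edges are exaggerated, concentrating the perturbation where edge-sensitive features respond; this mirrors the edge-detection intuition in the summary above, while the actual attack optimizes the kernel weights.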