pith. machine review for the scientific record.

arxiv: 2410.13891 · v3 · submitted 2024-10-13 · 💻 cs.CR · cs.AI


S⁴ST: A Strong, Self-transferable, faSt, and Simple Scale Transformation for Transferable Targeted Attack

keywords: transformations, black-box, data, scale, scaling, simple, strong, targeted

Transferable Targeted Attacks (TTAs) face significant challenges due to severe overfitting to surrogate models. Recent breakthroughs rely heavily on large-scale training data of victim models, while data-free solutions, i.e., image-transformation-involved gradient optimization, often depend on black-box feedback for method design and tuning. These dependencies violate black-box transfer settings and compromise the fairness of threat evaluation. In this paper, we propose two blind estimation measures, self-alignment and self-transferability, to analyze per-transformation effectiveness and cross-transformation correlations under strict black-box constraints. Our findings challenge conventional assumptions: (1) attacking simple scaling transformations uniquely enhances targeted transferability, outperforming other basic transformations and rivaling leading complex methods; (2) geometric and color transformations exhibit high internal redundancy despite weak inter-category correlations. These insights drive the design and tuning of S⁴ST (Strong, Self-transferable, faSt, Simple Scale Transformation), which integrates dimensionally consistent scaling, complementary low-redundancy transformations, and block-wise operations. Extensive evaluations across diverse architectures, training distributions, and tasks show that S⁴ST achieves a state-of-the-art effectiveness-efficiency balance without data dependency. We reveal that scaling's effectiveness stems from the multi-scale nature of visual data and the ubiquity of scale augmentation during training, rendering such augmentation a double-edged sword. Further validations on medical imaging and face verification confirm the framework's strong generalization.
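The "dimensionally consistent scaling" the abstract mentions can be illustrated with a minimal sketch; this is an assumption about the general technique, not the paper's actual implementation. The helper `random_scale` (a hypothetical name) rescales an image by a random factor with nearest-neighbour resampling, then zero-pads or center-crops back to the original size, so gradients computed on the transformed input keep the same shape as the adversarial example being optimized:

```python
import numpy as np

def random_scale(x, lo=0.75, hi=1.25, rng=None):
    """Randomly rescale an (H, W, C) image by a factor in [lo, hi],
    then zero-pad or crop back to (H, W) so downstream gradient
    steps stay dimensionally consistent with the input."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = x.shape[:2]
    s = rng.uniform(lo, hi)
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    # nearest-neighbour resize via integer index maps
    rows = (np.arange(nh) * h // nh).astype(int)
    cols = (np.arange(nw) * w // nw).astype(int)
    y = x[rows][:, cols]
    # embed the rescaled image back into an (H, W) canvas:
    # crop if it grew, zero-pad if it shrank
    out = np.zeros_like(x)
    ch, cw = min(h, nh), min(w, nw)
    out[:ch, :cw] = y[:ch, :cw]
    return out
```

In a transformation-based attack loop, a fresh `random_scale(x_adv)` would be drawn at each iteration before the surrogate forward/backward pass, averaging gradients over scales rather than overfitting to one resolution.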


discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Light-ResKAN: A Parameter-Sharing Lightweight KAN with Gram Polynomials for Efficient SAR Image Recognition

    cs.CV · 2026-04 · unverdicted · novelty 6.0

    Light-ResKAN reaches 99.09% accuracy on MSTAR SAR images with 82.9× fewer FLOPs and 163.78× fewer parameters than VGG16 by combining KAN convolutions, Gram polynomials, and channel-wise parameter sharing.