Feature-level analysis and adversarial transfer in rotationally equivariant quantum machine learning
Pith reviewed 2026-05-10 10:27 UTC · model grok-4.3
The pith
Equivariant quantum models do not automatically gain adversarial robustness, as they can still depend on vulnerable rotation-invariant statistics such as ring-averaged intensities.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Group-equivariant quantum models with an invariant readout base their predictions solely on the group-twirled version of the input. This isolates the symmetry-invariant information the model can access, split into distinct sectors. For rotationally equivariant models, these sectors correspond to different rotation-invariant image statistics. The work uses targeted input changes to find that models often rely on brittle statistics, especially ring-averaged intensities, which remain open to classical transfer attacks. Removing the symmetry sector linked to these weak points substantially improves the model's robustness.
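The twirl reduction behind this claim can be illustrated classically. Below is a minimal NumPy sketch (function names are mine, and the flat amplitude encoding of a ring-sampled image is a simplification of the paper's actual circuit): all discrete rotations of an image yield the same group-twirled density matrix, so an invariant readout necessarily assigns them the same prediction.

```python
import numpy as np

def amplitude_encode(x):
    """Flatten a ring-sampled image (N_r x N_phi) into a normalized state vector."""
    psi = x.flatten().astype(float)
    return psi / np.linalg.norm(psi)

def rotate(x, shift):
    """Discrete rotation: cyclic shift of every ring along the angular axis."""
    return np.roll(x, shift, axis=1)

def twirl(x):
    """Group-twirled density matrix: average rho over all cyclic rotations."""
    n_phi = x.shape[1]
    rho = np.zeros((x.size, x.size))
    for s in range(n_phi):
        psi = amplitude_encode(rotate(x, s))
        rho += np.outer(psi, psi)
    return rho / n_phi

rng = np.random.default_rng(0)
x = rng.random((4, 8))  # 4 rings, N_phi = 8 angular samples
# Rotated copies of an image have identical twirls, so any invariant
# readout gives them the same prediction.
assert np.allclose(twirl(x), twirl(rotate(x, 3)))
```

Only the invariant content of `x` survives the averaging, which is why the twirled input fully determines what such a model can use.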
What carries the argument
Targeted input transformations that isolate which rotation-invariant statistics in different symmetry sectors the model actually uses for classification.
If this is right
- Equivariance restricts models to invariant features but does not prevent reliance on attackable ones.
- Suppressing the symmetry sector for ring-averaged intensities enhances transfer robustness.
- Future quantum models can use symmetry-dependent feature selection to improve security.
- Classical attacks can still succeed by targeting specific invariant components even in equivariant quantum setups.
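The sector suppression mentioned above is applied in the paper at the readout, via a projector on Fourier modes. As a rough pixel-space analogue (a sketch under the assumed layout of rings along rows and angular samples along columns, not the paper's exact mechanism), zeroing the m = 0 angular Fourier mode removes each ring's mean intensity while leaving every other mode intact:

```python
import numpy as np

def suppress_ring_means(x):
    """Zero the m = 0 angular Fourier mode of each ring, i.e. subtract
    every ring's mean intensity -- the statistic identified as brittle."""
    X = np.fft.fft(x, axis=1)
    X[:, 0] = 0.0  # the m = 0 (ring-average) sector
    return np.fft.ifft(X, axis=1).real

rng = np.random.default_rng(1)
x = rng.random((4, 8))
y = suppress_ring_means(x)
# Each ring's mean is now zero...
assert np.allclose(y.mean(axis=1), 0.0)
# ...while all higher angular Fourier modes are untouched.
assert np.allclose(np.fft.fft(y, axis=1)[:, 1:], np.fft.fft(x, axis=1)[:, 1:])
```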
Where Pith is reading between the lines
- This approach could be extended to other group symmetries beyond rotations to identify vulnerable features in different quantum models.
- Designers of equivariant models might systematically test and prune sectors based on their brittleness to attacks.
- Similar feature-level analysis might reveal why some classical equivariant models also fail at robustness despite symmetry constraints.
Load-bearing premise
Targeted input transformations can isolate precisely which invariant statistics the model depends on for its classifications.
What would settle it
If a rotationally equivariant model maintains high accuracy even after input transformations that alter ring-averaged intensities while preserving other invariants, or if suppressing the associated sector fails to improve robustness against transfer attacks.
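The falsification test above needs a transformation that moves only the ring averages. One candidate (a sketch, not necessarily the paper's construction) is a constant per-ring offset: by linearity of the DFT it changes only the m = 0 angular Fourier mode, leaving every other mode, and hence every invariant built from them, unchanged.

```python
import numpy as np

def shift_ring_means(x, offsets):
    """Add a constant offset to each ring: alters only the m = 0
    (ring-average) mode of the angular Fourier transform."""
    return x + np.asarray(offsets)[:, None]

rng = np.random.default_rng(2)
x = rng.random((4, 8))
y = shift_ring_means(x, [0.5, -0.2, 0.1, 0.3])
# Ring means change by exactly the offsets...
assert np.allclose(y.mean(axis=1) - x.mean(axis=1), [0.5, -0.2, 0.1, 0.3])
# ...while the m != 0 modes are preserved.
assert np.allclose(np.fft.fft(y, axis=1)[:, 1:], np.fft.fft(x, axis=1)[:, 1:])
```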
Original abstract
Group-equivariant quantum models are designed to exploit symmetry and can improve trainability, but it remains unclear how symmetry constraints shape their adversarial robustness. We study this question through a feature-level analysis of equivariant quantum models in a transfer-attack setting. Under equivariance with an invariant readout, predictions depend only on the group-twirled input, which identifies the symmetry-invariant information accessible to the model together with a complementary uninformative subspace. Specializing this framework to a rotationally equivariant quantum model, we derive an explicit characterization of the accessible information in terms of rotation-invariant image statistics distributed across distinct symmetry sectors. Using targeted input transformations, we determine which of these statistics are actually relied upon for classification across several datasets. We find that equivariance alone does not guarantee transfer robustness: even within the restricted invariant feature space, the model can rely on brittle statistics, particularly ring-averaged intensities in the rotationally equivariant model, that remain vulnerable to classical transfer attacks. Guided by this analysis, we show that suppressing the symmetry sector associated with the brittle feature substantially improves robustness. These results establish a systematic mechanism to exploit symmetry-dependent features for adversarial robustness in future quantum machine learning models.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript develops a feature-level framework for analyzing rotationally equivariant quantum machine learning models under transfer attacks. With an invariant readout, model predictions depend only on the group-twirled input, which isolates symmetry-invariant statistics distributed across distinct sectors while rendering the orthogonal complement uninformative. Specializing to rotational equivariance, the authors derive an explicit decomposition of these invariant features (including ring-averaged intensities) and employ targeted input transformations to identify which statistics the trained models actually rely upon across multiple datasets. They conclude that equivariance alone does not ensure robustness, as models can exploit brittle sector-specific statistics vulnerable to classical attacks, and demonstrate that suppressing the brittle sector yields substantial robustness gains.
Significance. If the experimental attribution and improvement results hold, the work supplies a concrete, symmetry-aware diagnostic for adversarial brittleness in equivariant QML that goes beyond generic robustness metrics. The group-theoretic characterization of accessible invariant information is a clear strength, as it yields falsifiable, dataset-independent predictions about which features can be exploited. The demonstration that sector suppression improves transfer robustness offers a practical design principle for future models, directly addressing the gap between symmetry exploitation and adversarial considerations in quantum machine learning.
Major comments (2)
- [§4.2] §4.2 (Targeted input transformations and sector identification): The central claim that ring-averaged intensities constitute the brittle statistic (and that suppressing their sector improves robustness) rests on the assumption that each targeted transformation affects only the intended invariant statistic without unintended side effects on other sectors or the group-twirled input. The manuscript should supply explicit verification—e.g., the change in the full set of sector statistics before versus after each transformation, or a quantitative orthogonality measure—to confirm that confounding across complementary invariant subspaces is negligible.
- [§5] §5 (Experimental results and robustness gains): The reported improvements in transfer robustness after sector suppression are presented without error bars, statistical significance tests, or details on data exclusion criteria and hyperparameter sensitivity. This makes it difficult to evaluate whether the gains are reproducible and load-bearing for the claim that equivariance does not guarantee robustness, particularly given the low-confidence assessment of the experimental component.
Minor comments (2)
- Notation for symmetry sectors and twirled inputs is introduced without a compact summary table or diagram; a single figure or table collecting the sector decomposition would improve readability.
- The abstract states that predictions 'depend only on the group-twirled input'; this is repeated in the main text but would benefit from an explicit pointer to the relevant equation or proposition establishing the reduction.
Simulated Author's Rebuttal
We thank the referee for their careful reading and constructive feedback, which has identified opportunities to strengthen the rigor of our analysis and experimental reporting. We address each major comment below and outline the revisions we will make.
Point-by-point responses
Referee: [§4.2] §4.2 (Targeted input transformations and sector identification): The central claim that ring-averaged intensities constitute the brittle statistic (and that suppressing their sector improves robustness) rests on the assumption that each targeted transformation affects only the intended invariant statistic without unintended side effects on other sectors or the group-twirled input. The manuscript should supply explicit verification—e.g., the change in the full set of sector statistics before versus after each transformation, or a quantitative orthogonality measure—to confirm that confounding across complementary invariant subspaces is negligible.
Authors: We agree that explicit verification is necessary to fully substantiate the specificity of the targeted transformations. While the group-theoretic decomposition ensures that the symmetry sectors are orthogonal by construction (as the invariant features are projections onto distinct irreducible representations), we acknowledge that the manuscript does not currently provide a direct empirical check of cross-sector leakage under the chosen transformations. In the revised manuscript, we will add to §4.2 a quantitative analysis: for each transformation, we will report the pre- and post-transformation values of the complete set of sector statistics (including both the ring-averaged intensities and the orthogonal complement sectors), together with a simple orthogonality metric (e.g., the normalized inner product between the change vectors in different sectors). This will confirm that unintended effects remain negligible, as predicted by the theory. revision: yes
Referee: [§5] §5 (Experimental results and robustness gains): The reported improvements in transfer robustness after sector suppression are presented without error bars, statistical significance tests, or details on data exclusion criteria and hyperparameter sensitivity. This makes it difficult to evaluate whether the gains are reproducible and load-bearing for the claim that equivariance does not guarantee robustness, particularly given the low-confidence assessment of the experimental component.
Authors: We accept that the current experimental presentation lacks the statistical detail needed for full reproducibility assessment. In the revised version of §5, we will augment all robustness plots with error bars computed over multiple independent training runs (with fixed seeds reported), include statistical significance tests (paired t-tests or Wilcoxon signed-rank tests with p-values) comparing the baseline and sector-suppressed models, and add an appendix detailing data exclusion criteria (if any) together with a hyperparameter sensitivity study across learning rates and circuit depths. These additions will directly address concerns about reproducibility while preserving the core finding that sector suppression yields substantial robustness gains. revision: yes
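The orthogonality check the authors commit to in their first response could take roughly this form (a sketch with hypothetical names; the paper may define its sector statistics differently): decompose the transformation-induced change into angular Fourier sectors and measure the fraction of change energy that leaks outside the targeted sector.

```python
import numpy as np

def sector_energies(x):
    """Energy in each angular Fourier mode m, summed over rings -- a proxy
    for the statistics carried by each symmetry sector."""
    X = np.fft.fft(x, axis=1)
    return (np.abs(X) ** 2).sum(axis=0)

def leakage(x, x_t, target_mode=0):
    """Fraction of the transformation-induced change in sector energies
    falling outside the targeted sector (0 = perfectly targeted)."""
    delta = np.abs(sector_energies(x_t) - sector_energies(x))
    off_target = delta.sum() - delta[target_mode]
    return off_target / delta.sum()

rng = np.random.default_rng(3)
x = rng.random((4, 8))
x_t = x + 0.5  # a per-ring offset should touch only the m = 0 sector
assert leakage(x, x_t, target_mode=0) < 1e-10
```

Reporting this scalar per transformation, alongside the full pre/post sector statistics, would directly address the confounding concern.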
Circularity Check
No significant circularity; claims follow from standard equivariance definitions
Full rationale
The paper begins from the standard property that equivariant models with invariant readouts depend only on the group-twirled input (a direct algebraic consequence of the symmetry constraint, not a fitted or self-defined quantity). The subsequent explicit characterization of rotation-invariant statistics across symmetry sectors is obtained by specializing this property to the rotation group using standard representation theory, without reducing any prediction to the inputs by construction. The use of targeted input transformations to identify relied-upon features is an empirical diagnostic method whose outcomes are not presupposed by the framework equations. No self-citations, ansatzes, or uniqueness theorems are invoked as load-bearing steps in the derivation chain. The central finding that equivariance does not guarantee robustness is therefore an observed result rather than a tautology.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: predictions of equivariant quantum models with an invariant readout depend only on the group-twirled input.
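This assumption is a short algebraic consequence of equivariance plus readout invariance. A derivation sketch in standard notation (f the model output, O the readout observable, U_g the symmetry representation — symbols assumed, not taken verbatim from the paper):

```latex
% Equivariant encoding and invariant readout:
%   \rho(g \cdot x) = U_g \rho(x) U_g^\dagger, \qquad
%   U_g^\dagger O U_g = O \quad \forall g \in G.
\begin{aligned}
f(x) &= \operatorname{Tr}\!\left[O\,\rho(x)\right]
      = \frac{1}{|G|}\sum_{g \in G}\operatorname{Tr}\!\left[U_g^\dagger O\,U_g\,\rho(x)\right] \\
     &= \operatorname{Tr}\!\Big[O \cdot \frac{1}{|G|}\sum_{g \in G} U_g\,\rho(x)\,U_g^\dagger\Big]
      = \operatorname{Tr}\!\left[O\,\mathcal{T}_G\!\big(\rho(x)\big)\right],
\end{aligned}
```

so the prediction is a function of the twirled state T_G(ρ(x)) alone, by cyclicity of the trace.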
Reference graph
Works this paper leans on
- [1] The rotationally equivariant quantum model. The authors of Ref. [5] construct a rotationally equivariant quantum model by partitioning an n-qubit register into a radial register of n_rad qubits and an orbital register of n_orb qubits. Based on this decomposition, the encoding scheme samples the pixel values of an image x at vertices o...
- [2] Specializing to the rotationally equivariant model. For the rotationally equivariant model of Ref. [5], described in Sec. II A, the group G = Z_{N_φ} with N_φ = 2^{n_orb} acts as discrete rotations (cyclic shifts) on the orbital register. The goal of this section is to connect the Fourier-space description, in which the symmetry is naturally expressed, to the pixel-space descr...
- [3] Input transformations. The characterization above yields a concrete prediction: a Z_{N_φ}-equivariant model should be invariant under any input transformation that leaves the twirled representation T_{Z_{N_φ}}(ρ) unchanged. Transformation 1 (T1) is designed to directly test this prediction. Further, to probe which rotation-invariant statistics are effectively used...
- [4] Readout modification. Instead of modifying the training data, the model's readout can be adapted by composing the readout with a projector that suppresses selected Fourier modes. Here, we consider the specific intervention of suppressing the m = 0 Fourier mode, i.e., the component associated with ring-average intensity. In the Fourier basis, m = 0 corresponds to the computa...
- [5] M. Larocca, F. Sauvage, F. M. Sbahi, G. Verdon, P. J. Coles, and M. Cerezo, Group-invariant quantum machine learning, PRX Quantum 3, 030341 (2022).
- [6] J. J. Meyer, M. Mularski, E. Gil-Fuster, A. A. Mele, F. Arzani, A. Wilms, and J. Eisert, Exploiting symmetry in variational quantum machine learning, PRX Quantum 4, 010328 (2023).
- [7] Q. T. Nguyen, L. Schatzki, P. Braccia, M. Ragone, P. J. Coles, F. Sauvage, M. Larocca, and M. Cerezo, Theory for equivariant quantum neural networks, PRX Quantum 5, 020328 (2024).
- [8] L. Schatzki, M. Larocca, Q. T. Nguyen, F. Sauvage, and M. Cerezo, Theoretical guarantees for permutation-equivariant quantum neural networks, npj Quantum Information 10, 10.1038/s41534-024-00804-1 (2024).
- [9] M. T. West, J. Heredge, M. Sevior, and M. Usman, Provably trainable rotationally equivariant quantum machine learning, PRX Quantum 5, 030320 (2024).
- [10] M. T. West, M. Sevior, and M. Usman, Reflection equivariant quantum neural networks for enhanced image classification, Machine Learning: Science and Technology 4, 035027 (2023).
- [12] O. Ibitoye, R. Abou-Khamis, M. el Shehaby, A. Matrawy, and M. O. Shafiq, The threat of adversarial attacks on machine learning in network security – a survey (2023), arXiv:1911.02621 [cs.CR].
- [13] I. J. Goodfellow, J. Shlens, and C. Szegedy, Explaining and harnessing adversarial examples (2015), arXiv:1412.6572 [stat.ML].
- [14] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry, Robustness may be at odds with accuracy, in International Conference on Learning Representations (2019).
- [16] A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli, Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks (2019), arXiv:1809.02861 [cs.LG].
- [17] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, Towards deep learning models resistant to adversarial attacks, in International Conference on Learning Representations (2018).
- [18] M. Usman, Quantum Robustness in Artificial Intelligence: Principles and Applications, Quantum Science and Technology (Springer Nature Switzerland, 2026).
- [19] S. Lu, L.-M. Duan, and D.-L. Deng, Quantum adversarial machine learning, Physical Review Research 2, 10.1103/physrevresearch.2.033212 (2020).
- [20] W. Gong and D.-L. Deng, Universal adversarial examples and perturbations for quantum classifiers, National Science Review, 10.1093/nsr/nwab130 (2021).
- [21] M. T. West, S. M. Erfani, C. Leckie, M. Sevior, L. C. L. Hollenberg, and M. Usman, Benchmarking adversarially robust quantum machine learning at scale, Physical Review Research 5, 10.1103/physrevresearch.5.023186 (2023).
- [22] M. T. West, S.-L. Tsang, J. S. Low, C. D. Hill, C. Leckie, L. C. L. Hollenberg, S. M. Erfani, and M. Usman, Towards quantum enhanced adversarial robustness in machine learning, Nature Machine Intelligence 5, 581–589 (2023).
- [23] N. Dowling, M. T. West, A. Southwell, A. C. Nakhl, M. Sevior, M. Usman, and K. Modi, Adversarial robustness guarantees for quantum classifiers, npj Quantum Information 12, 10.1038/s41534-025-01129-3 (2026).
- [24] M. Ragone, P. Braccia, Q. T. Nguyen, L. Schatzki, P. J. Coles, F. Sauvage, M. Larocca, and M. Cerezo, Representation theory for geometric quantum machine learning (2023), arXiv:2210.07980 [quant-ph].
- [25] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86, 2278 (1998).
- [26] H. Xiao, K. Rasul, and R. Vollgraf, Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms (2017), arXiv:1708.07747 [cs.LG].
- [27] A. Krizhevsky, Learning multiple layers of features from tiny images (2009).
- [28] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.
- [29] M. Schuld, Supervised quantum machine learning models are kernel methods (2021), arXiv:2101.11020 [quant-ph].
- [30] M. Schuld and N. Killoran, Quantum machine learning in feature Hilbert spaces, Physical Review Letters 122, 10.1103/physrevlett.122.040504 (2019).
- [33] A. Raghunathan, S. M. Xie, F. Yang, J. Duchi, and P. Liang, Understanding and mitigating the tradeoff between robustness and accuracy (2020), arXiv:2002.10716 [cs.LG].
- [35] L. Wang, I. I. Uddin, K. Santosh, C. Zhang, X. Qin, and Y. Zhou, Bridging symmetry and robustness: On the role of equivariance in enhancing adversarial robustness, in The Thirty-ninth Annual Conference on Neural Information Processing Systems (2025).
- [36] M. Usman, Y. Z. Wong, C. D. Hill, et al., Framework for atomic-level characterisation of quantum computer arrays by machine learning, npj Computational Materials 6, 19 (2020).
- [37] M. T. West and M. Usman, Framework for donor-qubit spatial metrology in silicon with depths approaching the bulk limit, Phys. Rev. Appl. 17, 024070 (2022).
- [38] M. Usman, B. Voisin, J. Salfi, S. Rogge, and L. C. L. Hollenberg, Towards visualisation of central-cell-effects in scanning tunnelling microscope images of subsurface dopant qubits in silicon, Nanoscale 9, 17013–17019 (2017).
- [39] K. Chen and L. Liu, Privacy preserving data classification with rotation perturbation, in Proceedings of the Fifth IEEE International Conference on Data Mining, ICDM '05 (IEEE Computer Society, USA, 2005), pp. 589–592.
- [40] K. Chen and L. Liu, Geometric data perturbation for privacy preserving outsourced data mining, Knowl. Inf. Syst. 29, 657–695 (2011).
- [41] K. Liu, H. Kargupta, and J. Ryan, Random projection-based multiplicative data perturbation for privacy preserving distributed data mining, IEEE Trans. on Knowl. and Data Eng. 18, 92–106 (2006).
- [42] C. Dwork, F. McSherry, K. Nissim, and A. Smith, Calibrating noise to sensitivity in private data analysis (Springer-Verlag, Berlin, Heidelberg, 2006), pp. 265–284. Appendix A: Rotationally equivariant quantum model. For completeness, Figure 6 provides a schematic overview of the rotationally equivariant quantum model architecture introduced in Ref. [5]. FIG. 6. Architectur...
- [43] Sample a random diagonal phase vector λ ∈ C^{N_φ} with |λ_m| = 1, conjugate symmetry λ_{N_φ−m} = λ_m*, λ_{N_φ/2} ∈ {±1}, and λ_0 = 1, so that the resulting transform is real-valued.
- [44] Define the corresponding orthogonal circulant matrix O(λ) := F† diag(λ) F ∈ R^{N_φ × N_φ} (C1), where F denotes the unitary discrete Fourier transform on C^{N_φ}.
- [45] Apply the same O(λ) to all rings, x_r^(1) := O(λ) x_r for all r ∈ {0, …, N_r − 1} (C2). The transformed image is then given by x^(1) := {x_r^(1)}_{r=0}^{N_r−1}. By construction, this transformation preserves the rotation-invariant correlation features C_{r,r'}(Δφ). Let S_{Δφ} denote the cyclic shift on R^{N_φ} defined by (S_{Δφ} x_r)_α := x_{r, α−Δφ (mod N_φ)}. Then C_{r,r'}(Δφ) = Σ_{α=0}^{N_φ−1} x_{r, ...
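The construction excerpted in entries [43]–[45] can be reproduced directly. The NumPy sketch below (helper names are mine) builds the random phase vector λ, forms O(λ) = F† diag(λ) F, applies it ring-wise, and checks both that O(λ) is real orthogonal and that the rotation-invariant correlations C_{r,r'}(Δφ) are preserved:

```python
import numpy as np

def random_phase_transform(n_phi, rng):
    """Sample lambda in C^{N_phi} with |lambda_m| = 1, lambda_0 = 1,
    lambda_{N-m} = conj(lambda_m), lambda_{N/2} in {+1, -1}, and return
    the real orthogonal circulant O(lambda) = F^dag diag(lambda) F."""
    lam = np.ones(n_phi, dtype=complex)
    for m in range(1, n_phi // 2):
        phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
        lam[m] = phase
        lam[n_phi - m] = np.conj(phase)  # conjugate symmetry -> real O
    lam[n_phi // 2] = rng.choice([-1.0, 1.0])
    F = np.fft.fft(np.eye(n_phi)) / np.sqrt(n_phi)  # unitary DFT matrix
    O = F.conj().T @ np.diag(lam) @ F
    return O.real  # imaginary part vanishes by the symmetry of lambda

def correlations(x):
    """Rotation-invariant cross-correlations C_{r,r'}(dphi) for all lags,
    computed via the angular FFT of each ring."""
    X = np.fft.fft(x, axis=1)
    return np.fft.ifft(X[:, None, :] * np.conj(X[None, :, :]), axis=2).real

rng = np.random.default_rng(4)
n_r, n_phi = 3, 8
x = rng.random((n_r, n_phi))
O = random_phase_transform(n_phi, rng)
x1 = x @ O.T  # apply the same O(lambda) to every ring (Eq. C2)
assert np.allclose(O @ O.T, np.eye(n_phi))             # orthogonal
assert np.allclose(correlations(x1), correlations(x))  # C preserved
```

Since |λ_m| = 1, the transform scrambles the angular Fourier phases of every ring while leaving the products X_r[m] X_{r'}[m]* untouched, which is exactly why C_{r,r'}(Δφ) survives.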