pith. machine review for the scientific record.

arxiv: 2604.14643 · v1 · submitted 2026-04-16 · 💻 cs.CV · cs.LG

Recognition: unknown

Physically-Induced Atmospheric Adversarial Perturbations: Enhancing Transferability and Robustness in Remote Sensing Image Classification

Weiwei Zhuang, Wangze Xie, Qi Zhang, Xia Du, Zihan Lin, Zheng Lin, Hanlin Cai, Jizhe Zhou, Zihan Fang, Chi-Man Pun, Wei Ni, Jun Luo

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 12:17 UTC · model grok-4.3

classification 💻 cs.CV cs.LG
keywords adversarial attacks · remote sensing · image classification · fog perturbations · Perlin noise · transferability · robustness · atmospheric effects

The pith

FogFool uses Perlin noise to generate fog perturbations that create highly transferable adversarial examples for remote sensing image classification.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a new way to attack deep learning models that classify remote sensing images: add perturbations that look like natural fog. Instead of perturbing individual pixels directly, it optimizes Perlin-noise-based patterns that simulate atmospheric fog consistent with the scene. The resulting attacks transfer better to different models and survive common image processing steps such as JPEG compression. If the claims hold, mimicking real-world atmospheric effects makes attacks more practical and harder to defend against in satellite and aerial imagery analysis.

Core claim

FogFool generates adversarial examples by iteratively optimizing Perlin noise patterns to model fog formations. These perturbations are visually consistent with authentic remote sensing scenes and embed adversarial information into structural features shared across diverse model architectures. Experiments show superior white-box performance, 83.74% targeted attack success rate in black-box transfer, and robustness to JPEG compression and filtering.

What carries the argument

FogFool, an adversarial framework that models fog formations using Perlin noise and optimizes the patterns iteratively to produce physically plausible perturbations.
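The two moving parts named here, Perlin-style noise and physically plausible fog blending, can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: the multi-octave value-noise approximation of Perlin noise, the smoothstep fade, and the scattering constants `airlight` and `beta` are placeholders, and the iterative adversarial optimization of the pattern is not shown.

```python
import numpy as np

def perlin2d(shape, scale, octaves, seed=0):
    """Multi-octave value-noise approximation of 2-D Perlin noise.

    Each octave bilinearly interpolates random lattice values; successive
    octaves double the lattice frequency and halve the amplitude.
    Returns values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    h, w = shape
    noise = np.zeros(shape)
    amp, freq, norm = 1.0, 1.0 / scale, 0.0
    for _ in range(octaves):
        gy, gx = int(h * freq) + 2, int(w * freq) + 2
        lattice = rng.random((gy, gx))
        ys = np.linspace(0, gy - 2, h)
        xs = np.linspace(0, gx - 2, w)
        y0, x0 = ys.astype(int), xs.astype(int)
        ty, tx = ys - y0, xs - x0
        # smoothstep fade, as in Perlin's formulation
        ty = ty * ty * (3 - 2 * ty)
        tx = tx * tx * (3 - 2 * tx)
        top = lattice[y0][:, x0] * (1 - tx) + lattice[y0][:, x0 + 1] * tx
        bot = lattice[y0 + 1][:, x0] * (1 - tx) + lattice[y0 + 1][:, x0 + 1] * tx
        noise += amp * (top * (1 - ty[:, None]) + bot * ty[:, None])
        norm += amp
        amp *= 0.5
        freq *= 2.0
    return noise / norm

def apply_fog(image, density_map, airlight=0.9, beta=1.5):
    """Blend fog via the atmospheric scattering model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * density)."""
    t = np.exp(-beta * density_map)[..., None]
    return image * t + airlight * (1 - t)
```

A density map from `perlin2d` fed into `apply_fog` yields a smooth, spatially coherent haze; in the paper's framework the noise parameters are then tuned adversarially rather than fixed.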

If this is right

  • Adversarial perturbations survive common preprocessing defenses like JPEG compression and filtering.
  • The perturbations induce a universal shift in model attention as shown by CAM visualizations.
  • FogFool provides a practical and stealthy threat benchmark for evaluating RS classification system reliability.
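The first bullet is directly checkable: re-encode each adversarial example through JPEG and measure how often the attack still lands. A minimal sketch, assuming Pillow is available and treating `model` as any callable from a uint8 image to a predicted label (both are placeholders, not the paper's evaluation harness):

```python
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img_uint8, quality=75):
    """Re-encode an RGB uint8 array through JPEG at the given quality,
    emulating a preprocessing defense the perturbation must survive."""
    buf = io.BytesIO()
    Image.fromarray(img_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))

def survival_rate(model, adv_batch, target_labels, quality=75):
    """Fraction of adversarial examples that still produce the target
    label after JPEG compression."""
    hits = 0
    for adv, tgt in zip(adv_batch, target_labels):
        if model(jpeg_roundtrip(adv, quality)) == tgt:
            hits += 1
    return hits / len(adv_batch)
```

The same harness extends to other preprocessing defenses (median filtering, bit-depth reduction) by swapping the round-trip function.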

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Similar atmospheric modeling could improve attack transferability in other domains like medical imaging or autonomous vehicle vision where natural degradations occur.
  • Defenses might need to incorporate atmospheric simulation during training to counter such persistent perturbations.

Load-bearing premise

That the structural features shared across models are effectively targeted by mid-to-low frequency fog patterns modeled this way.
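Whether a given perturbation is in fact mid-to-low frequency is easy to audit with a 2-D FFT. A minimal sketch (the 0.25 cutoff is an arbitrary illustration, not a value from the paper):

```python
import numpy as np

def lowfreq_energy_fraction(pattern, cutoff=0.25):
    """Fraction of a 2-D pattern's spectral energy inside `cutoff`
    (expressed as a fraction of the Nyquist radius), after removing
    the DC component."""
    F = np.fft.fftshift(np.fft.fft2(pattern - pattern.mean()))
    power = np.abs(F) ** 2
    h, w = pattern.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # radial frequency, normalized so the Nyquist radius is 1.0
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return power[r <= cutoff].sum() / power.sum()
```

A fog-like pattern should score near 1 here, while pixel-wise noise scores near the area fraction of the cutoff disk; the premise is that the former band is where cross-architecture structural features live.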

What would settle it

Apply FogFool-generated examples to a previously unseen set of remote sensing models: a black-box transfer success rate below 50% would indicate that the claimed transferability does not generalize.
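That settling test reduces to a few lines of scoring code. A sketch under assumptions: `heldout_models` stands for a pool of previously unseen classifiers, each a callable from an example to a predicted label, and the 50% threshold is the falsification criterion stated above.

```python
def blackbox_transfer_rate(surrogate_advs, target_labels, heldout_models):
    """Targeted black-box transfer success rate over a pool of held-out
    models: the fraction of (example, model) pairs where the model
    outputs the attacker's chosen target label."""
    hits, total = 0, 0
    for model in heldout_models:
        for adv, tgt in zip(surrogate_advs, target_labels):
            hits += int(model(adv) == tgt)
            total += 1
    return hits / total

def transferability_holds(rate, threshold=0.5):
    """The criterion from the text: below the threshold on genuinely
    new models, the transferability claim does not hold generally."""
    return rate >= threshold
```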

Figures

Figures reproduced from arXiv: 2604.14643 by Chi-Man Pun, Hanlin Cai, Jizhe Zhou, Jun Luo, Qi Zhang, Wangze Xie, Wei Ni, Weiwei Zhuang, Xia Du, Zheng Lin, Zihan Fang, Zihan Lin.

Figure 1. An overview of the proposed fog-based adversarial attack scenario.
Figure 2. Overview of the proposed method for fog adversarial example generation, with (a) Procedural Fog Simulation Module, (b) Gradient-Guided Fog …
Figure 3. Process of 2-D Perlin noise generation. (a) The original remote sensing …
Figure 4. Illustration of fog-based adversarial examples under different fog …
Figure 5. Classification accuracy of all evaluated models as a function of fog …
Figure 6. Attack success rate (ASR) versus the number of optimization iterations …
Figure 7. Adversarial examples generated by the proposed method on the UCM dataset. Each group of images, from left to right: the original images, the Perlin …
Figure 8. Adversarial examples generated by the proposed method on the NWPU dataset. Each group of images, from left to right: the original images, the …
Figure 9. Confusion matrices of fog adversarial examples on the UCM dataset for different target models. Rows represent ground-truth labels, and columns …
Figure 10. Radar charts comparing the attack success rates (ASRs) of AutoAttack, PGD, and the proposed FogFool method across eight models. (a) Raw ASR …
Figure 11. Grad-CAM visualizations of ResNet50 on the UCM dataset. From …
Original abstract

Adversarial attacks pose a severe threat to the reliability of deep learning models in remote sensing (RS) image classification. Most existing methods rely on direct pixel-wise perturbations, failing to exploit the inherent atmospheric characteristics of RS imagery or survive real-world image degradations. In this paper, we propose FogFool, a physically plausible adversarial framework that generates fog-based perturbations by iteratively optimizing atmospheric patterns based on Perlin noise. By modeling fog formations with natural, irregular structures, FogFool generates adversarial examples that are not only visually consistent with authentic RS scenes but also deceptive. By leveraging the spatial coherence and mid-to-low-frequency nature of atmospheric phenomena, FogFool embeds adversarial information into structural features shared across diverse architectures. Extensive experiments on two benchmark RS datasets demonstrate that FogFool achieves superior performance: not only does it exceed in white-box settings, but also exhibits exceptional black-box transferability (reaching 83.74% TASR) and robustness against common preprocessing-based defenses such as JPEG compression and filtering. Detailed analyses, including confusion matrices and Class Activation Map (CAM) visualizations, reveal that our atmospheric-driven perturbations induce a universal shift in model attention. These results indicate that FogFool represents a practical, stealthy, and highly persistent threat to RS classification systems, providing a robust benchmark for evaluating model reliability in complex environments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper proposes FogFool, a framework for generating physically plausible adversarial perturbations in remote sensing images by iteratively optimizing Perlin noise to simulate fog formations. It claims that these perturbations achieve superior white-box attack performance, exceptional black-box transferability (83.74% TASR), and robustness against preprocessing defenses like JPEG compression and filtering on two benchmark RS datasets. The method leverages the spatial coherence and mid-to-low-frequency nature of atmospheric phenomena to induce shared feature shifts across models, as evidenced by CAM visualizations and confusion matrices.

Significance. If the empirical results hold, this work is significant for advancing the understanding of transferable and robust adversarial attacks in remote sensing by incorporating physically-induced atmospheric effects. It provides a practical benchmark for model reliability in complex environments and highlights how natural scene degradations can be exploited for attacks. The use of Perlin noise for natural-looking perturbations and the analysis of frequency components and attention shifts are strengths that could influence future research in physical-world adversarial examples for RS applications.

major comments (1)
  1. [Experimental Results] Experimental Results section: The reported 83.74% TASR in black-box settings is a key claim, but the manuscript does not provide details on the exact optimization procedure for the Perlin noise parameters (scale and octaves), the specific baseline attack methods compared against, the data splits used for the two benchmark datasets, or statistical significance tests supporting the superiority over existing methods. This lack of detail undermines the ability to verify the central claims of transferability and robustness.
minor comments (1)
  1. [Abstract] The abstract mentions 'two benchmark RS datasets' but does not name them; including the dataset names would improve clarity for readers.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the detailed and constructive review of our manuscript. The major comment on the Experimental Results section highlights important gaps in reproducibility details, which we acknowledge and will address through revisions.

Point-by-point responses
  1. Referee: [Experimental Results] Experimental Results section: The reported 83.74% TASR in black-box settings is a key claim, but the manuscript does not provide details on the exact optimization procedure for the Perlin noise parameters (scale and octaves), the specific baseline attack methods compared against, the data splits used for the two benchmark datasets, or statistical significance tests supporting the superiority over existing methods. This lack of detail undermines the ability to verify the central claims of transferability and robustness.

    Authors: We agree that the Experimental Results section requires additional implementation details to support verification of the reported performance. In the revised manuscript, we will expand this section to include: (1) the precise optimization procedure for Perlin noise, specifying the iterative algorithm, parameter ranges or fixed values for scale and octaves, and any stopping criteria; (2) the full list of baseline attack methods with their exact configurations and references; (3) explicit descriptions of the data splits (e.g., train/validation/test ratios) for both benchmark RS datasets; and (4) statistical significance tests such as paired t-tests or Wilcoxon signed-rank tests with p-values to substantiate superiority claims. These additions will directly address the concerns about reproducibility and claim verification. revision: yes
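One plausible shape of the promised procedure, for concreteness only: a finite-difference ascent on continuous fog parameters with an explicit iteration cap and improvement tolerance, the kind of stopping criterion the rebuttal commits to documenting. Everything here (`loss_fn`, the learning rate, the gradient estimator) is a placeholder, not the authors' actual algorithm.

```python
import numpy as np

def optimize_fog_params(loss_fn, init_params, lr=0.1, max_iters=50, tol=1e-4):
    """Ascend an attack loss over continuous fog parameters (e.g. density,
    scale) via central finite differences; stop at `max_iters` or when
    the per-step improvement falls below `tol`."""
    params = np.asarray(init_params, dtype=float)
    prev = loss_fn(params)
    eps = 1e-3
    for _ in range(max_iters):
        grad = np.zeros_like(params)
        for i in range(len(params)):
            step = np.zeros_like(params)
            step[i] = eps
            grad[i] = (loss_fn(params + step) - loss_fn(params - step)) / (2 * eps)
        params = params + lr * grad  # ascend the attack loss
        cur = loss_fn(params)
        if abs(cur - prev) < tol:
            break
        prev = cur
    return params
```

Discrete parameters such as the octave count would instead be handled by grid search or fixed values, which is precisely the detail the referee asks to see spelled out.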

Circularity Check

0 steps flagged

No significant circularity

Full rationale

The paper proposes FogFool as an empirical optimization procedure that iteratively tunes Perlin noise parameters to produce fog-like perturbations, then evaluates the resulting adversarial examples on held-out test sets for white-box accuracy, black-box transferability (TASR), and defense robustness. All reported performance numbers are obtained from direct experimental measurement rather than from any equation or definition that presupposes the outcome; the method's formulation (atmospheric pattern modeling) does not contain self-referential loops, fitted parameters renamed as predictions, or load-bearing self-citations that close the derivation. The central claims therefore remain externally falsifiable by the reported tables and visualizations.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The approach rests on domain assumptions about atmospheric modeling rather than new mathematical axioms or fitted constants beyond standard optimization.

free parameters (1)
  • Perlin noise scale and octaves
    Parameters controlling the fog pattern generation are chosen or optimized during the iterative process.
axioms (1)
  • domain assumption: Fog formations exhibit natural, irregular structures that can be approximated by Perlin noise
    Invoked to justify the choice of perturbation generation method.

pith-pipeline@v0.9.0 · 5579 in / 1248 out tokens · 42381 ms · 2026-05-10T12:17:28.799230+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

77 extracted references · 17 canonical work pages · 4 internal anchors

  1. [1]

    Progress and challenges in intelligent remote sensing satellite systems,

    B. Zhang, Y . Wu, B. Zhao, J. Chanussot, D. Hong, J. Yao, and L. Gao, “Progress and challenges in intelligent remote sensing satellite systems,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 1814–1822, 2022

  2. [2]

    Fedsn: A federated learning framework over heterogeneous leo satellite networks,

    Z. Lin, Z. Chen, Z. Fang, X. Chen, X. Wang, and Y . Gao, “Fedsn: A federated learning framework over heterogeneous leo satellite networks,” IEEE Transactions on Mobile Computing, vol. 24, no. 3, pp. 1293–1307, 2024

  3. [3]

    SatSense: Multi-Satellite Collaborative Framework for Spectrum Sensing,

    H. Yuan, Z. Chen, Z. Lin, J. Peng, Z. Fang, Y . Zhong, Z. Song, and Y . Gao, “SatSense: Multi-Satellite Collaborative Framework for Spectrum Sensing,”IEEE Trans. Cogn. Commun. Netw., 2025

  4. [4]

    SUMS: Sniffing Unknown Multiband Signals under Low Sampling Rates,

    J. Peng, Z. Chen, Z. Lin, H. Yuan, Z. Fang, L. Bao, Z. Song, Y . Li, J. Ren, and Y . Gao, “SUMS: Sniffing Unknown Multiband Signals under Low Sampling Rates,”IEEE Trans. Mobile Comput., 2024

  5. [5]

    LEO-Split: A Semi-Supervised Split Learning Framework over LEO Satellite Networks,

    Z. Lin, Y . Zhang, Z. Chen, Z. Fang, C. Wu, X. Chen, Y . Gao, and J. Luo, “LEO-Split: A Semi-Supervised Split Learning Framework over LEO Satellite Networks,”IEEE Trans. Mobile Comput., 2025

  6. [6]

    LEO Satellite Networks Assisted Geo-Distributed Data Processing,

    Z. Zhao, Z. Chen, Z. Lin, W. Zhu, K. Qiu, C. You, and Y . Gao, “LEO Satellite Networks Assisted Geo-Distributed Data Processing,”IEEE Wireless Commun. Lett., 2024

  7. [7]

    Graph Learning for Multi-Satellite Based Spectrum Sensing,

    H. Yuan, Z. Chen, Z. Lin, J. Peng, Z. Fang, Y . Zhong, Z. Song, X. Wang, and Y . Gao, “Graph Learning for Multi-Satellite Based Spectrum Sensing,” inProc. IEEE Int. Conf. Commun. Technol. (ICCT), 2023, pp. 1112–1116

  8. [8]

    Improving global land cover fraction change mapping using temporal deep learning,

    A. Slomp, D. Masili ¯unas, and N.-E. Tsendbazar, “Improving global land cover fraction change mapping using temporal deep learning,”Interna- tional Journal of Applied Earth Observation and Geoinformation, vol. 144, p. 104927, 2025

  9. [9]

    A combined convolutional neural network for urban land-use classification with gis data,

    J. Yu, P. Zeng, Y . Yu, H. Yu, L. Huang, and D. Zhou, “A combined convolutional neural network for urban land-use classification with gis data,”Remote Sensing, vol. 14, no. 5, p. 1128, 2022

  10. [10]

    Deep learning- based super-resolution of remote sensing images for enhanced ground- water quality assessment and environmental monitoring in urban areas,

    P. Shu, R. W. Aslam, I. Naz, B. Ghaffar, D. E. Kucher, A. Quddoos, D. Raza, M. Abdullah-Al-Wadud, and R. M. Zulqarnain, “Deep learning- based super-resolution of remote sensing images for enhanced ground- water quality assessment and environmental monitoring in urban areas,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025

  11. [11]

    Landslide detection from open satellite imagery using distant domain transfer learning. remote sens 13 (17): 3383,

    S. Qin, X. Guo, J. Sun, S. Qiao, L. Zhang, J. Yao, Q. Cheng, and Y . Zhang, “Landslide detection from open satellite imagery using distant domain transfer learning. remote sens 13 (17): 3383,” 2021

  12. [12]

    Advancing horizons in remote sensing: a comprehensive survey of deep learning models and applications in image classification and beyond,

    S. Paheding, A. Saleem, M. F. H. Siddiqui, N. Rawashdeh, A. Essa, and A. A. Reyes, “Advancing horizons in remote sensing: a comprehensive survey of deep learning models and applications in image classification and beyond,”Neural Computing and Applications, vol. 36, no. 27, pp. 16 727–16 767, 2024

  13. [13]

    HSplitLoRA: A Heterogeneous Split Parameter- Efficient Fine-Tuning Framework for Large Language Models,

    Z. Lin, Y . Zhang, Z. Chen, Z. Fang, X. Chen, P. Vepakomma, W. Ni, J. Luo, and Y . Gao, “HSplitLoRA: A Heterogeneous Split Parameter- Efficient Fine-Tuning Framework for Large Language Models,”arXiv preprint arXiv:2505.02795, 2025

  14. [14]

    Dynamic uncertainty-aware multimodal fusion for outdoor health monitoring,

    Z. Fang, Z. Lin, S. Hu, Y . Tao, Y . Deng, X. Chen, and Y . Fang, “Dynamic uncertainty-aware multimodal fusion for outdoor health monitoring,” arXiv preprint arXiv:2508.09085, 2025

  15. [15]

    Rrto: A high-performance transparent offloading system for model inference in mobile edge computing,

    Z. Sun, X. Guan, Z. Lin, Y . Qing, H. Song, Z. Fang, Z. Chen, F. Liu, H. Cui, W. Niet al., “Rrto: A high-performance transparent offloading system for model inference in mobile edge computing,”arXiv preprint arXiv:2507.21739, 2025

  16. [16]

    Nsc-sl: A bandwidth-aware neural subspace compression for communication-efficient split learning,

    Z. Fang, M. Yang, Z. Lin, Z. Lin, Z. Fang, Z. Zhang, T. Duan, D. Huang, and S. Zhu, “Nsc-sl: A bandwidth-aware neural subspace compression for communication-efficient split learning,”arXiv preprint arXiv:2602.02696, 2026

  17. [17]

    Efficient Parallel Split Learning over Resource-Constrained Wireless Edge Networks,

    Z. Lin, G. Zhu, Y . Deng, X. Chen, Y . Gao, K. Huang, and Y . Fang, “Efficient Parallel Split Learning over Resource-Constrained Wireless Edge Networks,”IEEE Trans. Mobile Comput., vol. 23, no. 10, pp. 9224–9239, 2024

  18. [18]

    Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities,

    G. Cheng, X. Xie, J. Han, L. Guo, and G.-S. Xia, “Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities,”IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 3735– 3756, 2020

  19. [19]

    Deep learning for remote sensing image scene classification: A review and meta-analysis,

    A. Thapa, T. Horanont, B. Neupane, and J. Aryal, “Deep learning for remote sensing image scene classification: A review and meta-analysis,” Remote Sensing, vol. 15, no. 19, p. 4804, 2023

  20. [20]

    Ai security for geoscience and remote sensing: Challenges and future trends,

    Y . Xu, T. Bai, W. Yu, S. Chang, P. M. Atkinson, and P. Ghamisi, “Ai security for geoscience and remote sensing: Challenges and future trends,”IEEE Geoscience and Remote Sensing Magazine, vol. 11, no. 2, pp. 60–85, 2023

  21. [21]

    A comprehen- sive study on the robustness of deep learning-based image classification and object detection in remote sensing: Surveying and benchmarking,

    S. Mei, J. Lian, X. Wang, Y . Su, M. Ma, and L.-P. Chau, “A comprehen- sive study on the robustness of deep learning-based image classification and object detection in remote sensing: Surveying and benchmarking,” Journal of Remote Sensing, vol. 4, p. 0219, 2024

  22. [22]

    Adversarial examples in remote sensing,

    W. Czaja, N. Fendley, M. Pekala, C. Ratto, and I.-J. Wang, “Adversarial examples in remote sensing,” inProceedings of the 26th ACM SIGSPA- TIAL International Conference on Advances in Geographic Information Systems, 2018, pp. 408–411

  23. [23]

    Natural weather-style black-box adversarial attacks against optical aerial detec- tors,

    G. Tang, W. Yao, T. Jiang, W. Zhou, Y . Yang, and D. Wang, “Natural weather-style black-box adversarial attacks against optical aerial detec- tors,”IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–11, 2023

  24. [24]

    Defense against adversarial cloud attack on remote sensing salient object detection,

    H. Sun, L. Fu, J. Li, Q. Guo, Z. Meng, T. Zhang, Y . Lin, and H. Yu, “Defense against adversarial cloud attack on remote sensing salient object detection,” inProceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024, pp. 8345–8354

  25. [25]

    Cloud adversarial example generation for remote sensing image classification,

    F. Ma, Y . Feng, F. Zhang, and Y . Zhou, “Cloud adversarial example generation for remote sensing image classification,”IEEE Transactions on Geoscience and Remote Sensing, 2025

  26. [26]

    Intriguing properties of neural networks

    C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,”arXiv preprint arXiv:1312.6199, 2013

  27. [27]

    Towards evaluating the robustness of neural networks,

    N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” in2017 ieee symposium on security and privacy (sp). Ieee, 2017, pp. 39–57

  28. [28]

    Explaining and Harnessing Adversarial Examples

    I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,”arXiv preprint arXiv:1412.6572, 2014

  29. [29]

    Adversarial examples in the physical world,

    A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” inArtificial intelligence safety and security. Chapman and Hall/CRC, 2018, pp. 99–112

  30. [30]

    Towards Deep Learning Models Resistant to Adversarial Attacks

    A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,”arXiv preprint arXiv:1706.06083, 2017

  31. [31]

    Boosting adversarial attacks with momentum,

    Y . Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with momentum,” inProceedings of the IEEE confer- ence on computer vision and pattern recognition, 2018, pp. 9185–9193

  32. [32]

    Adversarial sticker: A stealthy attack method in the physical world,

    X. Wei, Y . Guo, and J. Yu, “Adversarial sticker: A stealthy attack method in the physical world,”IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 2711–2725, 2022

  33. [33]

    Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon,

    Y . Zhong, X. Liu, D. Zhai, J. Jiang, and X. Ji, “Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 15 345–15 354

  34. [34]

    fakeweather: Adversarial attacks for deep neural networks emulating weather condi- tions on the camera lens of autonomous systems,

    A. Marchisio, G. Caramia, M. Martina, and M. Shafique, “fakeweather: Adversarial attacks for deep neural networks emulating weather condi- tions on the camera lens of autonomous systems,” in2022 International joint conference on neural networks (IJCNN). IEEE, 2022, pp. 1–9

  35. [35]

    Adversarial camouflage: Hiding physical-world attacks with natural styles,

    R. Duan, X. Ma, Y . Wang, J. Bailey, A. K. Qin, and Y . Yang, “Adversarial camouflage: Hiding physical-world attacks with natural styles,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 1000–1008

  36. [36]

    Adversarial example in remote sensing image recognition,

    L. Chen, G. Zhu, Q. Li, and H. Li, “Adversarial example in remote sensing image recognition,”arXiv preprint arXiv:1910.13222, 2019. 14

  37. [37]

    An empirical study of adversarial examples on remote sensing image scene classification,

    L. Chen, Z. Xu, Q. Li, J. Peng, S. Wang, and H. Li, “An empirical study of adversarial examples on remote sensing image scene classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 9, pp. 7419–7433, 2021

  38. [38]

    Assessing the threat of adversarial ex- amples on deep neural networks for remote sensing scene classification: Attacks and defenses,

    Y . Xu, B. Du, and L. Zhang, “Assessing the threat of adversarial ex- amples on deep neural networks for remote sensing scene classification: Attacks and defenses,”IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 2, pp. 1604–1617, 2020

  39. [39]

    Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples

    N. Papernot, P. McDaniel, and I. Goodfellow, “Transferability in ma- chine learning: from phenomena to black-box attacks using adversarial samples,”arXiv preprint arXiv:1605.07277, 2016

  40. [40]

    Universal adversarial examples in remote sens- ing: Methodology and benchmark,

    Y . Xu and P. Ghamisi, “Universal adversarial examples in remote sens- ing: Methodology and benchmark,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–15, 2022

  41. [41]

    Targeted universal adversarial examples for remote sensing,

    T. Bai, H. Wang, and B. Wen, “Targeted universal adversarial examples for remote sensing,”Remote Sensing, vol. 14, no. 22, p. 5833, 2022

  42. [42]

    Ppca: precise perturbation and feature approximation for enhanced black-box attacks in remote sensing image classification,

    J. Wang, D. Fang, and W. Hu, “Ppca: precise perturbation and feature approximation for enhanced black-box attacks in remote sensing image classification,”Multimedia Systems, vol. 31, no. 6, p. 441, 2025

  43. [43]

    An image synthesizer,

    K. Perlin, “An image synthesizer,”ACM Siggraph Computer Graphics, vol. 19, no. 3, pp. 287–296, 1985

  44. [44]

    Improving noise,

    K. Perlin, “Improving noise,” inProceedings of the 29th annual confer- ence on Computer graphics and interactive techniques, 2002, pp. 681– 682

  45. [45]

    Bag-of-visual-words and spatial extensions for land-use classification,

    Y . Yang and S. Newsam, “Bag-of-visual-words and spatial extensions for land-use classification,” inProceedings of the 18th SIGSPATIAL in- ternational conference on advances in geographic information systems, 2010, pp. 270–279

  46. [46]

    Remote sensing image scene classifi- cation: Benchmark and state of the art,

    G. Cheng, J. Han, and X. Lu, “Remote sensing image scene classifi- cation: Benchmark and state of the art,”Proceedings of the IEEE, vol. 105, no. 10, pp. 1865–1883, 2017

  47. [47]

    Imagenet classification with deep convolutional neural networks,

    A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,”Advances in neural informa- tion processing systems, vol. 25, 2012

  48. [48]

    Very Deep Convolutional Networks for Large-Scale Image Recognition

    K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,”arXiv preprint arXiv:1409.1556, 2014

  49. [49]

    Deep residual learning for image recognition,

    K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” inProceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778

  50. [50]

    Densely connected convolutional networks,

    G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” inProceedings of the IEEE confer- ence on computer vision and pattern recognition, 2017, pp. 4700–4708

  51. [51]

    Mobilenetv2: Inverted residuals and linear bottlenecks,

    M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” inProceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 4510–4520

  52. [52]

    Efficientnet: Rethinking model scaling for con- volutional neural networks,

    M. Tan and Q. Le, “Efficientnet: Rethinking model scaling for con- volutional neural networks,” inInternational conference on machine learning. PMLR, 2019, pp. 6105–6114

  53. [53]

    Pytorch: An imperative style, high-performance deep learning library,

    A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antigaet al., “Pytorch: An imperative style, high-performance deep learning library,”Advances in neural information processing systems, vol. 32, 2019

  54. [54]

    Exploring misclassifications of robust neural networks to enhance adversarial attacks,

    L. Schwinn, R. Raab, A. Nguyen, D. Zanca, and B. Eskofier, “Exploring misclassifications of robust neural networks to enhance adversarial attacks,”Applied intelligence, vol. 53, no. 17, pp. 19 843–19 859, 2023

  55. [55]

    Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,

    F. Croce and M. Hein, “Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks,” inInternational conference on machine learning. PMLR, 2020, pp. 2206–2216

  56. [56]

    Improving transferability of adversarial examples with input diversity,

    C. Xie, Z. Zhang, Y . Zhou, S. Bai, J. Wang, Z. Ren, and A. L. Yuille, “Improving transferability of adversarial examples with input diversity,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 2730–2739

  57. [57]

    Evading defenses to transferable adversarial examples by translation-invariant attacks,

    Y . Dong, T. Pang, H. Su, and J. Zhu, “Evading defenses to transferable adversarial examples by translation-invariant attacks,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4312–4321

  58. [58]

    Nesterov accelerated gradient and scale invariance for adversarial attacks,

    J. Lin, C. Song, K. He, L. Wang, and J. E. Hopcroft, “Nesterov accelerated gradient and scale invariance for adversarial attacks,”arXiv preprint arXiv:1908.06281, 2019

  59. [59]

    Enhancing the transferability of adversarial attacks through variance tuning,

    X. Wang and K. He, “Enhancing the transferability of adversarial attacks through variance tuning,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 1924–1933

  60. [60]

    Attack selectivity of adversarial examples in remote sensing image scene classification,

    L. Chen, H. Li, G. Zhu, Q. Li, J. Zhu, H. Huang, J. Peng, and L. Zhao, “Attack selectivity of adversarial examples in remote sensing image scene classification,”IEEE Access, vol. 8, pp. 137 477–137 489, 2020

  61. [61]

    A study of the effect of JPG compression on adversarial images

    G. K. Dziugaite, Z. Ghahramani, and D. M. Roy, “A study of the effect of jpg compression on adversarial images,”arXiv preprint arXiv:1608.00853, 2016

  62. [62]

    Countering Adversarial Images using Input Transformations

    C. Guo, M. Rana, M. Cisse, and L. Van Der Maaten, “Counter- ing adversarial images using input transformations,”arXiv preprint arXiv:1711.00117, 2017

  63. [63]

    The split bregman method for l1-regularized problems,

    T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009

  64. [64]

    Grad-cam: Visual explanations from deep networks via gradient-based localization,

    R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad-CAM: Visual explanations from deep networks via gradient-based localization,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626

  65. [65]

    SplitLoRA: A split parameter-efficient fine-tuning framework for large language models,

    Z. Lin, X. Hu, Y. Zhang, Z. Chen, Z. Fang, X. Chen, A. Li, P. Vepakomma, and Y. Gao, “SplitLoRA: A Split Parameter-Efficient Fine-Tuning Framework for Large Language Models,” arXiv preprint arXiv:2407.00952, 2024

  66. [66]

    Hfedmoe: Resource-aware heterogeneous federated learning with mixture-of-experts,

    Z. Fang, Z. Lin, S. Hu, Y. Ma, Y. Tao, Y. Deng, X. Chen, and Y. Fang, “Hfedmoe: Resource-aware heterogeneous federated learning with mixture-of-experts,” arXiv preprint arXiv:2601.00583, 2026

  67. [67]

    Split Learning in 6G Edge Networks,

    Z. Lin, G. Qu, X. Chen, and K. Huang, “Split Learning in 6G Edge Networks,” IEEE Wirel. Commun., 2024

  68. [68]

    Mobile edge intelligence for large language models: A contemporary survey,

    G. Qu, Q. Chen, W. Wei, Z. Lin, X. Chen, and K. Huang, “Mobile edge intelligence for large language models: A contemporary survey,” IEEE Communications Surveys & Tutorials, 2025

  69. [69]

    Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models,

    Z. Fang, Z. Lin, Z. Chen, X. Chen, Y. Gao, and Y. Fang, “Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models,” IEEE Trans. Mobile Comput., 2025

  70. [70]

    Hierarchical Split Federated Learning: Convergence Analysis and System Optimization,

    Z. Lin, W. Wei, Z. Chen, C.-T. Lam, X. Chen, Y. Gao, and J. Luo, “Hierarchical Split Federated Learning: Convergence Analysis and System Optimization,” IEEE Trans. Mobile Comput., 2025

  71. [71]

    Satfed: A resource-efficient leo satellite-assisted heterogeneous federated learning framework,

    Y. Zhang, Z. Lin, Z. Chen, Z. Fang, W. Zhu, X. Chen, J. Zhao, and Y. Gao, “SatFed: A resource-efficient LEO satellite-assisted heterogeneous federated learning framework,” Engineering, 2024

  72. [72]

    Conflict-aware client selection for multi-server federated learning,

    M. Hong, Z. Lin, Z. Lin, L. Li, M. Yang, X. Du, Z. Fang, Z. Kang, D. Luan, and S. Zhu, “Conflict-aware client selection for multi-server federated learning,” arXiv preprint arXiv:2602.02458, 2026

  73. [73]

    HASFL: Heterogeneity-aware Split Federated Learning over Edge Computing Systems,

    Z. Lin, Z. Chen, X. Chen, W. Ni, and Y. Gao, “HASFL: Heterogeneity-aware Split Federated Learning over Edge Computing Systems,” IEEE Trans. Mobile Comput., 2026

  74. [74]

    Accelerating Federated Learning with Model Segmentation for Edge Networks,

    M. Hu, J. Zhang, X. Wang, S. Liu, and Z. Lin, “Accelerating Federated Learning with Model Segmentation for Edge Networks,” IEEE Trans. Green Commun. Netw., 2024

  75. [75]

    Aggregation alignment for federated learning with mixture-of-experts under data heterogeneity,

    Z. Fang, Q. Wang, H. An, Z. Lin, Y. Deng, X. Chen, and Y. Fang, “Aggregation alignment for federated learning with mixture-of-experts under data heterogeneity,” arXiv preprint arXiv:2603.21276, 2026

  76. [76]

    Optimal resource allocation for u-shaped parallel split learning,

    S. Lyu, Z. Lin, G. Qu, X. Chen, X. Huang, and P. Li, “Optimal resource allocation for u-shaped parallel split learning,” in 2023 IEEE Globecom Workshops (GC Wkshps), 2023, pp. 197–202

  77. [77]

    AdaptSFL: Adaptive Split Federated Learning in Resource-Constrained Edge Networks,

    Z. Lin, G. Qu, W. Wei, X. Chen, and K. K. Leung, “AdaptSFL: Adaptive Split Federated Learning in Resource-Constrained Edge Networks,” IEEE Trans. Netw., 2025