pith. machine review for the scientific record.

arxiv: 2604.06954 · v1 · submitted 2026-04-08 · 💻 cs.CV

Recognition: no theorem link

Compression as an Adversarial Amplifier Through Decision Space Reduction

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:07 UTC · model grok-4.3

classification 💻 cs.CV
keywords adversarial robustness · image compression · adversarial attacks · deep image classification · decision space reduction · robustness evaluation · compression-in-the-loop

The pith

Image compression amplifies adversarial attacks on deep classifiers by contracting class margins.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper examines a setting where adversarial perturbations are crafted directly in the compressed image domain instead of the original pixels. It finds that these compression-aware attacks succeed more often than standard pixel-space attacks when both are limited to the same size of change. The authors trace the difference to decision space reduction: the compression step discards information in a way that narrows the separation between class decisions, so smaller perturbations can flip the output. This matters for any pipeline that compresses images before feeding them to a classifier, which includes most social media uploads and many edge devices. Experiments on standard datasets and models confirm the gap in attack strength and highlight the added risk in real deployments.

Core claim

The central claim is that compression acts as an adversarial amplifier through decision space reduction. When attacks are applied in the compressed representation rather than pixel space, they become substantially more effective under identical nominal perturbation budgets. The mechanism is the non-invertible, information-losing transformation performed by compression, which contracts classification margins and thereby increases the classifier's sensitivity to small changes. Extensive tests across benchmarks and architectures support this account and point to a previously overlooked vulnerability in any system that compresses images before inference.
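As a minimal sketch of the two settings being compared, the toy pipeline below crafts one FGSM-style step either in pixel space (with compression applied afterwards, as the deployment pipeline would) or directly on the compressed representation, under the same nominal L-infinity budget. Everything here is a stand-in chosen for self-containment: the "codec" is an average-pool encoder with a bilinear decoder rather than a real format such as JPEG, and the classifier is untrained, so the sketch shows only the mechanics, not the paper's method or results.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

def encode(x):
    # Toy lossy, non-invertible compression: 4x spatial reduction.
    return F.avg_pool2d(x, 4)

def decode(z):
    # Approximate inverse: bilinear upsampling back to the input size.
    return F.interpolate(z, scale_factor=4, mode="bilinear", align_corners=False)

model = torch.nn.Sequential(          # tiny stand-in classifier, untrained
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10),
)
x = torch.rand(1, 3, 32, 32)          # one random "image"
y = torch.tensor([3])                 # an arbitrary label
eps = 8 / 255                         # identical nominal L-infinity budget

# Pixel-space attack: perturb x; the pipeline still compresses before inference.
delta_x = torch.zeros_like(x, requires_grad=True)
F.cross_entropy(model(decode(encode(x + delta_x))), y).backward()
x_adv = (x + eps * delta_x.grad.sign()).clamp(0, 1)

# Compression-aware attack: perturb the compressed representation z directly,
# under the same nominal eps; the classifier sees the decoded result.
z = encode(x)
delta_z = torch.zeros_like(z, requires_grad=True)
F.cross_entropy(model(decode(z + delta_z)), y).backward()
z_adv = z + eps * delta_z.grad.sign()

print(model(decode(encode(x_adv))).argmax(1).item(),
      model(decode(z_adv)).argmax(1).item())

The compressed-domain step acts on an 8x8 grid whose decoded effect spreads over 4x4 pixel blocks, which is precisely why the referee report below presses on the gap between nominal and effective budgets.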

What carries the argument

Decision space reduction: the contraction of classification margins produced by compression's non-invertible, information-losing transformation.
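The review does not say how margins are estimated; one cheap proxy, assumed here purely for illustration, is the gap between the top two logits on the same image before and after a round trip through the codec. A sketch, reusing the toy model, encode, and decode from the block above:

import torch

@torch.no_grad()
def logit_gap(model, x):
    # Gap between the top logit and the runner-up: a crude proxy for
    # the distance to the nearest decision boundary.
    top2 = model(x).topk(2, dim=1).values
    return top2[:, 0] - top2[:, 1]

@torch.no_grad()
def margin_contraction(model, encode, decode, x):
    # Values below 1 mean the lossy round trip shrank the proxy margin,
    # the contraction the decision-space-reduction account predicts.
    return logit_gap(model, decode(encode(x))) / logit_gap(model, x)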

If this is right

  • Systems that compress images before classification are more exposed to adversarial attacks than pixel-only evaluations suggest.
  • Robustness benchmarks that ignore compression will underestimate real-world risk in deployed pipelines.
  • Defenses designed only for pixel-space perturbations may need to be re-evaluated or applied after decompression.
  • Attack optimization performed directly in the compressed domain can achieve higher success rates for the same computational or visibility budget.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Other irreversible operations common in vision pipelines, such as quantization or aggressive downsampling, may produce similar margin contraction and attack amplification.
  • If the effect scales with compression strength, then choosing lower-quality compression settings could inadvertently increase vulnerability in some applications.
  • Models trained with explicit exposure to compressed-domain perturbations might exhibit greater resistance to the amplified attacks described here.

Load-bearing premise

The amplification effect is caused by compression shrinking classification margins through information loss, rather than by some unrelated property of the compressed domain.

What would settle it

An experiment that dissociates margin contraction from attack amplification: compressed-domain attacks that succeed more often despite no measurable contraction of classification margins, or, conversely, clear margin contraction without any increase in attack success rate. Either dissociation would break the claimed causal link.

Figures

Figures reproduced from arXiv: 2604.06954 by Harkrishan Jandu, Lewis Evans, Shreyank N Gowda, Yang Lu, Zihan Ye.

Figure 1. Local decision regions around a correctly classified [PITH_FULL_IMAGE:figures/full_fig_p002_1.png]
Figure 2. Overview of the compression-aware adversarial pipeline. An input image is first transformed by a lossy compression [PITH_FULL_IMAGE:figures/full_fig_p004_2.png]
Figure 3. Decision space reduction under compression averaged over 100 correctly classified seeds. As JPEG quality decreases [PITH_FULL_IMAGE:figures/full_fig_p006_3.png]
Figure 4. Effect of operation order on CIFAR-100 robust [PITH_FULL_IMAGE:figures/full_fig_p008_4.png]
Original abstract

Image compression is a ubiquitous component of modern visual pipelines, routinely applied by social media platforms and resource-constrained systems prior to inference. Despite its prevalence, the impact of compression on adversarial robustness remains poorly understood. We study a previously unexplored adversarial setting in which attacks are applied directly in compressed representations, and show that compression can act as an adversarial amplifier for deep image classifiers. Under identical nominal perturbation budgets, compression-aware attacks are substantially more effective than their pixel-space counterparts. We attribute this effect to decision space reduction, whereby compression induces a non-invertible, information-losing transformation that contracts classification margins and increases sensitivity to perturbations. Extensive experiments across standard benchmarks and architectures support our analysis and reveal a critical vulnerability in compression-in-the-loop deployment settings. Code will be released.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript studies adversarial attacks applied directly in compressed image representations (rather than pixel space) and claims that compression acts as an adversarial amplifier for deep classifiers. Under identical nominal perturbation budgets, compression-aware attacks are substantially more effective than pixel-space counterparts; the authors attribute this to decision-space reduction, in which the non-invertible compression map contracts classification margins and increases perturbation sensitivity. The abstract states that extensive experiments on standard benchmarks and architectures support the analysis and highlight a vulnerability in compression-in-the-loop pipelines.

Significance. If the central claim survives proper controls for effective perturbation magnitude, the result would be significant for deployed systems that compress images before inference. It would identify a concrete, previously under-studied attack surface in social-media and edge-device pipelines. The planned code release would strengthen reproducibility.

major comments (2)
  1. [Abstract] The claim that compression-aware attacks are 'substantially more effective' under 'identical nominal perturbation budgets' is load-bearing, yet it is unsupported by any explicit normalization that equates the L_p radius in the compressed domain to an equivalent pixel-domain perturbation after decompression. Because compression is non-invertible and typically non-linear, the same nominal epsilon does not guarantee equal effective perturbation size or direction; without this control, the observed difference could be an artifact of mismatched budgets rather than of decision-space contraction (a worked illustration of the mismatch follows these comments).
  2. [Abstract] The causal attribution to 'decision space reduction' (compression induces a non-invertible transformation that contracts classification margins) is presented as an interpretation without a formal derivation, quantitative model, or equation linking information loss to margin contraction. No fitted parameters, margin measurements, or information-theoretic quantities are shown to ground the explanation.
minor comments (2)
  1. The abstract asserts 'extensive experiments across standard benchmarks and architectures' but supplies no information on attack generation procedures, choice of baselines, statistical significance tests, or controls for compression hyperparameters (e.g., JPEG quality factor).
  2. Notation for the perturbation budget and the precise definition of 'nominal' versus 'effective' perturbation should be introduced early and used consistently when comparing domains.
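The nominal-versus-effective distinction in the comments above can be made concrete with a hypothetical JPEG-like transform coder (an illustration only; the codecs and budget conventions the paper actually uses may differ). If B_k are orthonormal analysis basis functions and Q_k the quantization steps, a compressed-domain perturbation delta with per-coefficient magnitude at most epsilon decodes to

\[
  \hat{x}_{\mathrm{adv}} - \hat{x} \;=\; \sum_k \delta_k\, Q_k\, B_k,
  \qquad
  \bigl\|\hat{x}_{\mathrm{adv}} - \hat{x}\bigr\|_2
  \;=\; \Bigl(\sum_k \delta_k^2 Q_k^2\Bigr)^{1/2}
  \;\le\; \epsilon \Bigl(\sum_k Q_k^2\Bigr)^{1/2},
\]

whereas a pixel-space attack under the same nominal budget satisfies ||x_adv - x||_inf <= epsilon directly. Because quantization steps in such coders are often in the tens while pixel budgets are small fractions of the dynamic range, equal nominal epsilon need not mean equal effective perturbation, which is exactly the control the report requests.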

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major concern below, clarifying our use of nominal budgets and strengthening the grounding for decision-space reduction. The revisions will improve the manuscript's rigor without altering its core claims.

Point-by-point responses
  1. Referee: [Abstract] The claim that compression-aware attacks are 'substantially more effective' under 'identical nominal perturbation budgets' is load-bearing, yet it is unsupported by any explicit normalization that equates the L_p radius in the compressed domain to an equivalent pixel-domain perturbation after decompression. Because compression is non-invertible and typically non-linear, the same nominal epsilon does not guarantee equal effective perturbation size or direction; without this control, the observed difference could be an artifact of mismatched budgets rather than of decision-space contraction.

    Authors: We agree that explicit verification of effective perturbation magnitude is necessary to exclude artifacts. Our experiments constrain attacks to identical nominal epsilon values in their native domains, which is the appropriate comparison for domain-specific attacks in a compression-in-the-loop pipeline. In the revised version we will add a dedicated analysis that computes the realized L2 and L-infinity norms of the resulting perturbations in pixel space after decompression for both attack families. These measurements will be reported across the evaluated datasets and architectures; preliminary checks indicate that compression-aware attacks retain higher success rates even when their effective pixel-space norms are equal to or smaller than those of pixel-space attacks. The abstract and experimental sections will be updated to distinguish nominal from effective budgets and to present the new controls (a minimal sketch of such a measurement follows these responses). revision: yes

  2. Referee: [Abstract] The causal attribution to 'decision space reduction' (compression induces a non-invertible transformation that contracts classification margins) is presented as an interpretation without a formal derivation, quantitative model, or equation linking information loss to margin contraction. No fitted parameters, margin measurements, or information-theoretic quantities are shown to ground the explanation.

    Authors: The manuscript motivates decision-space reduction through the non-invertibility of compression and supports it with empirical observations of increased attack success. We recognize that a more explicit quantitative link would strengthen the causal claim. The revision will introduce a short formal subsection that models the effect of a non-invertible compression operator on classification margins, deriving a simple contraction bound under standard assumptions on the decision boundary. We will also report direct quantitative measurements of average margin distances (distance to the nearest decision boundary) computed before and after compression on the same images, together with basic information-theoretic quantities such as estimated entropy reduction where feasible. These additions will ground the interpretation with concrete numbers and a lightweight analytic model. revision: yes
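Response 1 promises realized pixel-space norms for both attack families; a minimal sketch of that control, reusing the toy decode stand-in from the sketches above (the authors' actual codecs and attack implementations may differ), is:

import torch

@torch.no_grad()
def effective_pixel_norms(decode, z_clean, z_adv):
    # Realized pixel-space perturbation of a compressed-domain attack:
    # compare what the classifier sees for the adversarial input against
    # what it sees for the clean one, after decoding both.
    d = (decode(z_adv) - decode(z_clean)).flatten(1)
    return d.norm(p=2, dim=1), d.abs().amax(dim=1)

For the pixel-space family the same function applies with the encoded clean and adversarial images as arguments, so both families are compared on the perturbation the classifier actually receives.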

Circularity Check

0 steps flagged

No significant circularity; empirical observation with interpretive attribution

Full rationale

The paper's core claim rests on experimental comparison of attack success rates under nominal perturbation budgets in compressed versus pixel space, followed by an interpretive attribution to 'decision space reduction' as a non-invertible contraction of margins. No equations, fitted parameters, or self-citations are invoked to derive the effect; the attribution is presented as a post-hoc explanation rather than a quantity obtained by construction from the paper's own model or prior self-work. The reasoning chain is therefore anchored to external benchmarks (empirical results) and does not reduce any prediction to its inputs by definition or renaming.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper rests on standard assumptions of deep image classification (differentiable models, bounded perturbations, standard benchmarks) and introduces 'decision space reduction' as an explanatory construct without independent evidence or formalization.

axioms (1)
  • domain assumption: Deep image classifiers operate on a decision space whose margins can be contracted by non-invertible transformations such as compression.
    Invoked in the attribution sentence of the abstract to explain why compression amplifies attacks.

pith-pipeline@v0.9.0 · 5433 in / 1245 out tokens · 34618 ms · 2026-05-10T18:07:05.726080+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

51 extracted references · 16 canonical work pages · 6 internal anchors

  1. [1] Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774 (2023).
  2. [2] Alam, Q. M., Tarchoun, B., Alouani, I., and Abu-Ghazaleh, N. Adversarial attention deficit: Fooling deformable vision transformers with collaborative adversarial patches. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2025), IEEE, pp. 7123–7132.
  3. [3] Amini, S., Teymoorianfard, M., Ma, S., and Houmansadr, A. MeanSparse: Post-training robustness enhancement through mean-centered feature sparsification. arXiv preprint arXiv:2406.05927 (2024).
  4. [4] Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., Fan, Y., Ge, W., Han, Y., Huang, F., et al. Qwen technical report. arXiv preprint arXiv:2309.16609 (2023).
  5. [5] Bai, Y., Zhou, M., Patel, V., and Sojoudi, S. MixedNUTS: Training-free accuracy-robustness balance via nonlinearly mixed classifiers. Transactions on Machine Learning Research (2024).
  6. [6] Bell, B., Geyer, M., Glickenstein, D., Hamm, K., Scheidegger, C., Fernandez, A., and Moore, J. Persistent classification: Understanding adversarial attacks by studying decision boundary dynamics. Statistical Analysis and Data Mining: The ASA Data Science Journal 18, 1 (2025), e11716.
  7. [7] Carlini, N., and Wagner, D. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP) (2017), IEEE, pp. 39–57.
  8. [8] Chakraborty, A., Alam, M., Dey, V., Chattopadhyay, A., and Mukhopadhyay, D. A survey on adversarial attacks and defences. CAAI Transactions on Intelligence Technology 6, 1 (2021), 25–45.
  9. [9] Chen, J., Fang, Y., Khisti, A., Özgür, A., and Shlezinger, N. Information compression in the AI era: Recent advances and future challenges. IEEE Journal on Selected Areas in Communications (2025).
  10. [10] Chlubna, T., and Zemčík, P. Comparative survey of image compression methods across different pixel formats and bit depths. Signal, Image and Video Processing 19, 12 (2025), 981.
  11. [11] Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., Mittal, P., and Hein, M. RobustBench: A standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670 (2020).
  12. [12] Croce, F., and Hein, M. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In International Conference on Machine Learning (2020), PMLR, pp. 2206–2216.
  13. [13] Cui, J., Tian, Z., Zhong, Z., Qi, X., Yu, B., and Zhang, H. Decoupled Kullback-Leibler divergence loss. Advances in Neural Information Processing Systems 37 (2024), 74461–74486.
  14. [14] Das, N., Shanbhogue, M., Chen, S.-T., Hohman, F., Li, S., Chen, L., Kounavis, M. E., and Chau, D. H. Compression to the rescue: Defending from adversarial attacks across modalities. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (2018).
  15. [15] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (2009), IEEE, pp. 248–255.
  16. [16] Fawzi, A., Fawzi, O., and Frossard, P. Analysis of classifiers' robustness to adversarial perturbations. Machine Learning 107, 3 (2018), 481–508.
  17. [17] Feichtenhofer, C., Fan, H., Malik, J., and He, K. SlowFast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (2019), pp. 6202–6211.
  18. [18] Ferrari, C., Becattini, F., Galteri, L., and Bimbo, A. D. (Compress and restore)^n: A robust defense against adversarial attacks on image classification. ACM Transactions on Multimedia Computing, Communications and Applications 19, 1s (2023), 1–16.
  19. [19] Golpayegani, Z., and Bouguila, N. PatchSVD: A non-uniform SVD-based image compression algorithm. arXiv preprint arXiv:2406.05129 (2024).
  20. [20] Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
  21. [21] Gowda, S. N., Rohrbach, M., and Sevilla-Lara, L. Smart frame selection for action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence (2021), vol. 35, pp. 1451–1459.
  22. [22] Guo, C., Rana, M., Cisse, M., and van der Maaten, L. Countering adversarial images using input transformations. In International Conference on Learning Representations (2018).
  23. [23] Jia, X., Wei, X., Cao, X., and Foroosh, H. ComDefend: An efficient image compression model to defend adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019), pp. 6084–6092.
  24. [24] Khoury, M., and Hadfield-Menell, D. On the geometry of adversarial examples. arXiv preprint arXiv:1811.00525 (2018).
  25. [25] Kohne, J., Elhai, J. D., and Montag, C. A practical guide to WhatsApp data in social science research. In Digital Phenotyping and Mobile Sensing: New Developments in Psychoinformatics. Springer, 2022, pp. 171–205.
  26. [26] Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012).
  27. [27] Laghari, A. A., He, H., Shafiq, M., and Khan, A. Assessment of quality of experience (QoE) of image compression in social cloud computing. Multiagent and Grid Systems 14, 2 (2018), 125–143.
  28. [28] Li, F., Li, K., Wu, H., Tian, J., and Zhou, J. Towards robust learning via core feature-aware adversarial training. IEEE Transactions on Information Forensics and Security (2025).
  29. [29] Liu, Z., Liu, Q., Liu, T., Xu, N., Lin, X., Wang, Y., and Wen, W. Feature distillation: DNN-oriented JPEG compression against adversarial examples. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019), IEEE, pp. 860–868.
  30. [30] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017).
  31. [31] Mahmood, K., Gurevin, D., van Dijk, M., and Nguyen, P. H. Beware the black-box: On the robustness of recent defenses to adversarial examples. Entropy 23, 10 (2021), 1359.
  32. [32] Mustafa, A. B., Ye, Z., Lu, Y., Pound, M. P., and Gowda, S. N. Anyone can jailbreak: Prompt-based attacks on LLMs and T2Is. arXiv preprint arXiv:2507.21820 (2025).
  33. [33] Mustafa, A. B., Ye, Z., Lu, Y., Pound, M. P., and Gowda, S. N. Low-effort jailbreak attacks against text-to-image safety filters. arXiv preprint arXiv:2604.01888 (2026).
  34. [34] Olivier, R., and Raj, B. How many perturbations break this model? Evaluating robustness beyond adversarial accuracy. In International Conference on Machine Learning (2023), PMLR, pp. 26583–26598.
  35. [35] Pearson, K. LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2, 11 (1901), 559–572.
  36. [36] Rebuffi, S.-A., Gowal, S., Calian, D. A., Stimberg, F., Wiles, O., and Mann, T. Fixing data augmentation to improve adversarial robustness. arXiv preprint arXiv:2103.01946 (2021).
  37. [37] Shen, X., Chen, Z., Backes, M., Shen, Y., and Zhang, Y. "Do anything now": Characterizing and evaluating in-the-wild jailbreak prompts on large language models. In Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security (2024), pp. 1671–1685.
  38. [38] Song, M., Choi, J., and Han, B. A training-free defense framework for robust learned image compression. arXiv preprint arXiv:2401.11902 (2024).
  39. [39] Sui, Y., Li, Z., Ding, D., Pan, X., Xu, X., Liu, S., and Chen, Z. Transferable learned image compression-resistant adversarial perturbations. In 2024 Data Compression Conference (DCC) (2024), IEEE, pp. 582–582.
  40. [40] Team, G., Georgiev, P., Lei, V. I., Burnell, R., Bai, L., Gulati, A., Tanzer, G., Vincent, D., Pan, Z., Wang, S., et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530 (2024).
  41. [41] Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204 (2017).
  42. [42] Trillos, N. G., and Murray, R. Adversarial classification: Necessary conditions and geometric flows. Journal of Machine Learning Research 23, 187 (2022), 1–38.
  43. [43] Wallace, G. K. The JPEG still picture compression standard. Communications of the ACM 34, 4 (1991), 30–44.
  44. [44] Wang, K., He, X., Wang, W., and Wang, X. Boosting adversarial transferability by block shuffle and rotation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2024), pp. 24336–24346.
  45. [45] Wang, Z., Pang, T., Du, C., Lin, M., Liu, W., and Yan, S. Better diffusion models further improve adversarial training. In International Conference on Machine Learning (2023), PMLR, pp. 36246–36263.
  46. [46] Wei, H., Tang, H., Jia, X., Wang, Z., Yu, H., Li, Z., Satoh, S., Van Gool, L., and Wang, Z. Physical adversarial attack meets computer vision: A decade survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 46, 12 (2024), 9797–9817.
  47. [47] Wong, E., Rice, L., and Kolter, J. Z. Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994 (2020).
  48. [48] Yi, S., Liu, Y., Sun, Z., Cong, T., He, X., Song, J., Xu, K., and Li, Q. Jailbreak attacks and defenses against large language models: A survey. arXiv preprint arXiv:2407.04295 (2024).
  49. [49] Yin, D., Gontijo Lopes, R., Shlens, J., Cubuk, E. D., and Gilmer, J. A Fourier perspective on model robustness in computer vision. Advances in Neural Information Processing Systems 32 (2019).
  50. [50] Yu, Z., Liu, X., Liang, S., Cameron, Z., Xiao, C., and Zhang, N. Don't listen to me: Understanding and exploring jailbreak prompts of large language models. In 33rd USENIX Security Symposium (USENIX Security 24) (2024), pp. 4675–4692.
  51. [51] Zhao, P., Chen, P.-Y., Das, P., Ramamurthy, K. N., and Lin, X. Bridging mode connectivity in loss landscapes and adversarial robustness. In International Conference on Learning Representations (2020).