pith. machine review for the scientific record. sign in

arxiv: 2605.11392 · v1 · submitted 2026-05-12 · 💻 cs.AI

Recognition: no theorem link

Transformer Interpretability from Perspective of Attention and Gradient

Huajun Chen, Xiaohui Fan, Yongjin Cui

Authors on Pith no claims yet

Pith reviewed 2026-05-13 02:32 UTC · model grok-4.3

classification 💻 cs.AI
keywords transformer interpretabilityattention mechanismgradient guidancevision transformerfeature regionsclass rewritingperceptual differencesmodel interpretation
0
0 comments X

The pith

Guiding the gradient direction in Transformers provides comprehensive feature region interpretations and reveals imperceptible class-rewriting in Vision Transformers.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops an interpretation approach for Transformer models that works by steering the direction of gradients, which in turn directs attention patterns. This is intended to deliver a fuller view of the input regions that drive model decisions than standard gradient techniques alone. Applied to Vision Transformers, the method highlights detailed aspects of image processing. The authors further use it to modify an image's predicted class through changes too small for humans to detect, based on differing perceptions between the model and people. This outcome points to both interpretive gains and possible security issues in model deployment.

Core claim

From the perspective of attention and gradient, we conduct an in-depth study of Transformer interpretation and propose a method to achieve it by guiding the gradient direction, or more precisely, the attention direction. The method enables more comprehensive interpretation of feature regions, offers detail interpretation, and helps to better understand Transformer mechanism. Leveraging the difference in how Vision Transformer (ViT) and humans perceive images, we alter the class of an image in a way that is almost imperceptible to the human eye. This class rewriting phenomenon may potentially pose security risks in certain scenarios.

What carries the argument

Gradient direction guidance that steers attention to produce detailed maps of influential feature regions in the input.

If this is right

  • Feature regions receive more comprehensive and detailed interpretation than with unguided gradients.
  • The internal mechanisms of Transformers become clearer through the attention-directed views.
  • Image class predictions can be rewritten via changes imperceptible to humans.
  • Such rewriting introduces security risks for Vision Transformer applications.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The guidance technique might be adapted to test interpretability in non-vision Transformer models such as those used for text.
  • The observed perceptual gap could guide development of targeted robustness checks for deployed vision models.
  • Systematic comparison of guided maps against human eye-tracking data on the same images could quantify model-human attention differences.
  • The class-rewriting effect suggests a need to examine whether similar vulnerabilities appear under other small perturbation regimes.

Load-bearing premise

Actively guiding the gradient direction yields a faithful view of the model's reasoning rather than artifacts created by the guidance itself.

What would settle it

A test in which the feature regions identified by guided gradients are masked and the model's class prediction remains unchanged, or in which human observers can reliably detect the class-altered images under controlled viewing conditions.

Figures

Figures reproduced from arXiv: 2605.11392 by Huajun Chen, Xiaohui Fan, Yongjin Cui.

Figure 1
Figure 1. Figure 1: The overall idea of this work. in practice, elucidating the true causes of the cases pre￾sented in the TIS [Englebert et al., 2023] paper, refer to Figures 9 and 10. • We leveraged the differences in how humans and ViT perceive images to rewrite the image content in a manner that is imperceptible to humans. 2 Related work The interpretation methods of the Transformer model mainly utilize its unique attenti… view at source ↗
Figure 2
Figure 2. Figure 2: GAE The gradient calculation process covers all subsequent data processing including nonlinear processes in Transformer en￾coder and the MLP head which attention rollout overlooks. Gradient of certain value is its influence, or contribution on the loss. From another perspective, the gradient of the cur￾rent value is the weight of the subsequent data processing on the current value. We believe that Transfor… view at source ↗
Figure 3
Figure 3. Figure 3: Experimental results of absolute gradient correction (a) [PITH_FULL_IMAGE:figures/full_fig_p004_3.png] view at source ↗
Figure 5
Figure 5. Figure 5: Interpretation results of ViT model in multi category im [PITH_FULL_IMAGE:figures/full_fig_p004_5.png] view at source ↗
Figure 4
Figure 4. Figure 4: Absolute gradient correction and positive gradient cor [PITH_FULL_IMAGE:figures/full_fig_p004_4.png] view at source ↗
Figure 7
Figure 7. Figure 7: Single category image interpretation.(Complete gradient [PITH_FULL_IMAGE:figures/full_fig_p005_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Guide attention transfer direction to interpret ViT in single [PITH_FULL_IMAGE:figures/full_fig_p005_8.png] view at source ↗
Figure 10
Figure 10. Figure 10: Interpretations provided by other methods for the bald [PITH_FULL_IMAGE:figures/full_fig_p006_10.png] view at source ↗
Figure 9
Figure 9. Figure 9: Detail interpretation.(The last row shows typical images [PITH_FULL_IMAGE:figures/full_fig_p006_9.png] view at source ↗
Figure 11
Figure 11. Figure 11: Pixel update experiment. (We set loss = logittusker − logitbullmastiff . The top-1 output for the original image is “tusker” with a probability of 35.1% and a logit of 11.150, while the top-1 output for the updated image is “bull mastiff” with a probability of 93.8% and a logit of 12.926.) readers might expect such an experiment as a matter of con￾vention, so we have included perturbation experiments here… view at source ↗
Figure 12
Figure 12. Figure 12: Perturbation experiments.(a)ViT [PITH_FULL_IMAGE:figures/full_fig_p007_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Interpretation results of ViT Base and ViT Large in multi category image recognition task.(The categories to be interpreted, listed from top to bottom, are Zebra, Bull mastiff, Zebra, Bull mas￾tiff.) clude attention rollout and Grad-ECLIP from our comparison because attention rollout cannot differentiate between cate￾gories, and Grad-ECLIP cannot be applied to the interpreta￾tion of multi-head attention. … view at source ↗
read the original abstract

Although researchers' attention is more focused on the performance of Transformer models, the interpretation of Transformer can never be ignored. Gradient is widely utilized in Transformer interpretation. From the perspective of attention and gradient, we conduct an in-depth study of Transformer interpretation and propose a method to achieve it by guiding the gradient direction, or more precisely, the attention direction. The method enables more comprehensive interpretation of feature regions, offers detail interpretation, and helps to better understand Transformer mechanism. Leveraging the difference in how Vision Transformer (ViT) and humans perceive images, we alter the class of an image in a way that is almost imperceptible to the human eye. This class rewriting phenomenon may potentially pose security risks in certain scenarios.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes a method for interpreting Vision Transformers by guiding the direction of gradients (equivalently, attention) to produce more comprehensive and detailed interpretations of feature regions. It further claims that differences in how ViTs and humans perceive images can be exploited to alter an image's class label in a manner nearly imperceptible to humans, potentially indicating security risks for ViT deployments.

Significance. If the guidance procedure can be shown to yield faithful rather than artifactual interpretations, the work would advance mechanistic understanding of attention in ViTs and draw attention to a possible attack surface arising from perceptual mismatches. The linkage between interpretability techniques and practical security implications is a promising direction, though it requires rigorous validation to be impactful.

major comments (2)
  1. The central claim that actively guiding the gradient/attention direction produces a 'more comprehensive' and faithful view of the model's reasoning (rather than steering outputs toward desired patterns) lacks any reported controls, ablations against vanilla gradient or attention-rollout baselines, or faithfulness metrics such as insertion/deletion scores. This is load-bearing for both the interpretability contribution and the subsequent security-risk demonstration.
  2. No experimental section, tables, or quantitative results are provided to support the 'detail interpretation' claim or to demonstrate that the imperceptible class-altering examples arise from the proposed guidance method rather than standard adversarial perturbations. Without such evidence the security-risk assertion cannot be evaluated.
minor comments (1)
  1. The abstract states that the method 'guides the gradient direction, or more precisely, the attention direction,' but the precise relationship and implementation details are not clarified, making it difficult to assess reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive report. The comments highlight important gaps in quantitative validation that we will address in revision. Our core contributions are the gradient-guiding procedure for attention interpretation and the resulting observation of imperceptible class-rewriting on ViTs; we provide visual evidence in the current manuscript but agree that controlled ablations and metrics are needed to substantiate the claims.

read point-by-point responses
  1. Referee: The central claim that actively guiding the gradient/attention direction produces a 'more comprehensive' and faithful view of the model's reasoning (rather than steering outputs toward desired patterns) lacks any reported controls, ablations against vanilla gradient or attention-rollout baselines, or faithfulness metrics such as insertion/deletion scores. This is load-bearing for both the interpretability contribution and the subsequent security-risk demonstration.

    Authors: We acknowledge that the current version relies primarily on qualitative visual comparisons. The guidance procedure is formulated to follow the model's existing attention flow by modulating gradient directions along high-attention paths rather than imposing external targets; this is why the resulting maps reveal class-specific regions that standard rollout often misses. To make this rigorous, we will add (i) side-by-side ablations against vanilla Grad-CAM, attention rollout, and raw gradient maps, (ii) insertion/deletion faithfulness curves, and (iii) a controlled comparison showing that disabling the guidance step collapses performance to baseline levels. These additions will be placed in a new experimental subsection. revision: yes

  2. Referee: No experimental section, tables, or quantitative results are provided to support the 'detail interpretation' claim or to demonstrate that the imperceptible class-altering examples arise from the proposed guidance method rather than standard adversarial perturbations. Without such evidence the security-risk assertion cannot be evaluated.

    Authors: The manuscript currently presents the class-rewriting phenomenon through qualitative before/after images and attention visualizations. We agree this is insufficient for the security claim. In revision we will introduce a dedicated experimental section containing: quantitative success rates of class rewriting at imperceptible perturbation budgets (L-infinity < 0.01), comparison against standard adversarial attacks (FGSM, PGD) to isolate the role of the guidance step, and human perceptual studies confirming the perturbations remain below detection thresholds. This will directly tie the attack efficacy to the interpretability method. revision: yes

Circularity Check

0 steps flagged

No circularity detected; method is empirical proposal without load-bearing derivation

full rationale

The paper presents a methodological contribution for Transformer interpretation via guided gradients/attention directions, plus an empirical demonstration of imperceptible class alteration in ViT images. No equations, first-principles derivations, predictions, or parameter fits appear in the abstract or described content. No self-citations, uniqueness theorems, or ansatzes are invoked to justify core claims. The guidance procedure is introduced as a novel technique rather than derived from or defined in terms of the interpretations it produces, so no reduction by construction occurs. The work is self-contained as an applied method without circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review yields no explicit free parameters, axioms, or invented entities; the approach appears to rest on standard notions of attention and gradients without additional postulates stated.

pith-pipeline@v0.9.0 · 5410 in / 1211 out tokens · 61382 ms · 2026-05-13T02:32:09.342333+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

31 extracted references · 31 canonical work pages · 2 internal anchors

  1. [1]

    Ethics guidelines for trustworthy ai

    [43, 2019] European commission. Ethics guidelines for trustworthy ai

  2. [2]

    National artifi- cial intelligence research and development strategic plan

    [44, 2023] Select committee on artificial intelligence of the national science and technology council. National artifi- cial intelligence research and development strategic plan

  3. [3]

    [Abnar and Zuidema, 2020] Samira Abnar and Willem H. Zuidema. Quantifying attention flow in transformers. In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors,Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4190–4197. Associ- ation for Computationa...

  4. [4]

    Deep integrated explanations

    [Barkanet al., 2023 ] Oren Barkan, Yehonatan Elisha, Jonathan Weill, Yuval Asher, Amit Eshel, and Noam Koenigstein. Deep integrated explanations. In Ingo Frommholz, Frank Hopfgartner, Mark Lee, Michael Oakes, Mounia Lalmas, Min Zhang, and Rodrygo L. T. Santos, editors,Proceedings of the 32nd ACM Inter- national Conference on Information and Knowledge Mana...

  5. [5]

    Layer-wise relevance propagation for neural networks with local renormalization layers

    [Binderet al., 2016 ] Alexander Binder, Gr´egoire Montavon, Sebastian Lapuschkin, Klaus-Robert M ¨uller, and Woj- ciech Samek. Layer-wise relevance propagation for neural networks with local renormalization layers. In Alessandro E. P. Villa, Paolo Masulli, and Antonio Javier Pons Rivero, editors,Artificial Neural Networks and Machine Learning - ICANN 2016...

  6. [6]

    Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers

    [Cheferet al., 2021a ] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 387–396. IEEE,

  7. [7]

    Transformer interpretability beyond attention visualiza- tion

    [Cheferet al., 2021b ] Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualiza- tion. InIEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 782–791. Computer Vision Foundation / IEEE,

  8. [8]

    Developing real-time streaming transformer transducer for speech recognition on large- scale dataset

    [Chenet al., 2021 ] Xie Chen, Yu Wu, Zhenghao Wang, Shu- jie Liu, and Jinyu Li. Developing real-time streaming transformer transducer for speech recognition on large- scale dataset. InIEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, pages 5904–5908. IEEE,

  9. [9]

    Beyond intuition: Rethinking to- ken attributions inside transformers.Trans

    [Chenet al., 2023 ] Jiamin Chen, Xuhong Li, Lei Yu, Dejing Dou, and Haoyi Xiong. Beyond intuition: Rethinking to- ken attributions inside transformers.Trans. Mach. Learn. Res., 2023,

  10. [10]

    Efficient and effective text encoding for chinese llama and alpaca

    [Cuiet al., 2023 ] Yiming Cui, Ziqing Yang, and Xin Yao. Efficient and effective text encoding for chinese llama and alpaca.CoRR, abs/2304.08177,

  11. [11]

    Speech-transformer: A no-recurrence sequence-to- sequence model for speech recognition

    [Donget al., 2018 ] Linhao Dong, Shuang Xu, and Bo Xu. Speech-transformer: A no-recurrence sequence-to- sequence model for speech recognition. In2018 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, Calgary, AB, Canada, April 15-20, 2018, pages 5884–5888. IEEE,

  12. [12]

    An image is worth 16x16 words: Trans- formers for image recognition at scale

    [Dosovitskiyet al., 2021 ] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Min- derer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Trans- formers for image recognition at scale. In9th Inter- national Conference on Learn...

  13. [13]

    Explaining through transformer input sampling

    [Englebertet al., 2023 ] Alexandre Englebert, S ´edrick Stassin, G ´eraldin Nanfack, Sidi Ahmed Mahmoudi, Xavier Siebert, Olivier Cornu, and Christophe De Vleeschouwer. Explaining through transformer input sampling. InIEEE/CVF International Conference on Computer Vision, ICCV 2023 - Workshops, Paris, France, October 2-6, 2023, pages 806–815. IEEE,

  14. [14]

    Iterative answer predic- tion with pointer-augmented multimodal transformers for textvqa

    [Huet al., 2020 ] Ronghang Hu, Amanpreet Singh, Trevor Darrell, and Marcus Rohrbach. Iterative answer predic- tion with pointer-augmented multimodal transformers for textvqa. In2020 IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9989–9999. Computer Vi- sion Foundation / IEEE,

  15. [15]

    Multi-compound transformer for accurate biomedical im- age segmentation

    [Jiet al., 2021 ] Yuanfeng Ji, Ruimao Zhang, Huijie Wang, Zhen Li, Lingyun Wu, Shaoting Zhang, and Ping Luo. Multi-compound transformer for accurate biomedical im- age segmentation. In Marleen de Bruijne, Philippe C. Cattin, St ´ephane Cotin, Nicolas Padoy, Stefanie Spei- del, Yefeng Zheng, and Caroline Essert, editors,Medi- cal Image Computing and Comput...

  16. [16]

    Lay- ercam: Exploring hierarchical class activation maps for localization.IEEE Trans

    [Jianget al., 2021 ] Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei. Lay- ercam: Exploring hierarchical class activation maps for localization.IEEE Trans. Image Process., 30:5875–5888,

  17. [17]

    arXiv preprint arXiv:1908.03557 , year=

    [Liet al., 2019 ] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language.CoRR, abs/1908.03557,

  18. [18]

    UNIMO: towards unified-modal understanding and gener- ation via cross-modal contrastive learning

    [Liet al., 2021 ] Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. UNIMO: towards unified-modal understanding and gener- ation via cross-modal contrastive learning. In Chengqing Zong, Fei Xia, Wenjie Li, and Roberto Navigli, editors, Proceedings of the 59th Annual Meeting of the Associ- ation for Computational Li...

  19. [19]

    Swin transformer: Hierarchical vision transformer using shifted windows

    [Liuet al., 2021 ] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In2021 IEEE/CVF International Con- ference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021, pages 9992–10002. IEEE,

  20. [20]

    Are sixteen heads really better than one? In Hanna M

    [Michelet al., 2019 ] Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alch´e-Buc, Emily B. Fox, and Roman Gar- nett, editors,Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS ...

  21. [21]

    Bernstein, Alexander C

    [Russakovskyet al., 2015 ] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge.Int. J. Comput. Vis., 115(3):211–252,

  22. [22]

    Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra

    [Selvarajuet al., 2017 ] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 618–626. IEEE Computer Society,

  23. [23]

    LLaMA: Open and Efficient Foundation Language Models

    [Touvronet al., 2023a ] Hugo Touvron, Thibaut Lavril, Gau- tier Izacard, Xavier Martinet, Marie-Anne Lachaux, Tim- oth´ee Lacroix, Baptiste Rozi `ere, Naman Goyal, Eric Hambro, Faisal Azhar, Aur ´elien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models.CoRR, abs/2302.13971,

  24. [24]

    Llama 2: Open Foundation and Fine-Tuned Chat Models

    [Touvronet al., 2023b ] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya Chen, Guillem Cucurull, David Es- iobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, ...

  25. [25]

    Gomez, Lukasz Kaiser, and Illia Polosukhin

    [Vaswaniet al., 2017 ] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V . N. Vish- wanathan, and Roman Garnett, editors,Advances in Neu- ral Information Processing Sy...

  26. [26]

    Tfnet: Transformer fusion network for ultrasound im- age segmentation

    [Wanget al., 2021 ] Tao Wang, Zhihui Lai, and Heng Kong. Tfnet: Transformer fusion network for ultrasound im- age segmentation. In Christian Wallraven, Qingshan Liu, and Hajime Nagahara, editors,Pattern Recognition - 6th Asian Conference, ACPR 2021, Jeju Island, South Ko- rea, November 9-12, 2021, Revised Selected Papers, Part I, volume 13188 ofLecture No...

  27. [27]

    Cotr: Efficiently bridging CNN and transformer for 3d medical image segmentation

    [Xieet al., 2021 ] Yutong Xie, Jianpeng Zhang, Chunhua Shen, and Yong Xia. Cotr: Efficiently bridging CNN and transformer for 3d medical image segmentation. In Mar- leen de Bruijne, Philippe C. Cattin, St´ephane Cotin, Nico- las Padoy, Stefanie Speidel, Yefeng Zheng, and Caroline Essert, editors,Medical Image Computing and Computer Assisted Intervention -...

  28. [28]

    [Xieet al., 2023 ] Weiyan Xie, Xiao-Hui Li, Caleb Chen Cao, and Nevin L. Zhang. Vit-cx: Causal explanation of vision transformers. InProceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, pages 1569–1577. ijcai.org,

  29. [29]

    Explaining information flow inside vision transformers using markov chain

    [Yuanet al., 2021 ] Tingyi Yuan, Xuhong Li, Haoyi Xiong, Hui Cao, and Dejing Dou. Explaining information flow inside vision transformers using markov chain. IneXplain- able AI approaches for debugging and diagnosis.,

  30. [30]

    [Zhaoet al., 2024 ] Chenyang Zhao, Kun Wang, Xingyu Zeng, Rui Zhao, and Antoni B. Chan. Gradient-based vi- sual explanation for transformer-based CLIP. InForty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27,

  31. [31]

    OpenReview.net, 2024