pith. machine review for the scientific record.

arxiv: 1312.6034 · v2 · submitted 2013-12-20 · 💻 cs.CV

Recognition: 1 theorem link · Lean Theorem

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

Karen Simonyan, Andrea Vedaldi, Andrew Zisserman

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 18:45 UTC · model grok-4.3

classification 💻 cs.CV
keywords: convolutional networks · saliency maps · visualization · weakly supervised segmentation · gradient methods · image classification

The pith

A convolutional network trained only for image classification can produce saliency maps from class-score gradients that support weakly supervised object segmentation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper presents two gradient-based methods for visualizing what convolutional networks learn during image classification. The first synthesizes an image that maximizes the score for a target class to show what the network associates with it. The second computes a saliency map for any input image by propagating the class score gradient back to the pixels. These saliency maps prove effective for segmenting the object of interest in the image using only a classification-trained network and no location labels.

Core claim

The central claim is that computing the gradient of the class score with respect to the input image pixels yields both class-representative visualizations through optimization and image-specific saliency maps, which in turn enable weakly supervised object segmentation.

What carries the argument

The class score gradient with respect to input image pixels, used to generate saliency maps and class visualizations.
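The mechanics are easy to make concrete. Below is a minimal sketch (not the paper's code) that computes a saliency map as the absolute gradient of a class score with respect to input pixels, per the paper's Eq. (2). A linear score S_c(x) = w_c · x stands in for a trained ConvNet, which is an assumption made for self-containment; the gradient is taken numerically so the same routine would apply to any differentiable score.

```python
import numpy as np

# Toy stand-in for a ConvNet class score: a linear model S_c(x) = w_c . x.
# For a linear score the gradient w.r.t. the input is exactly w_c, so the
# saliency map of Eq. (2), M = |dS_c/dx|, reduces to |w_c| reshaped to the image.
rng = np.random.default_rng(0)
H = W = 4
w_c = rng.normal(size=H * W)          # class-c weights (hypothetical)
x = rng.normal(size=H * W)            # input image, flattened

def class_score(x, w):
    return float(w @ x)

def saliency_map(x, w, eps=1e-5):
    # Central-difference gradient of the class score w.r.t. each pixel,
    # then absolute value, as in the paper's Eq. (2).
    grad = np.empty_like(x)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grad[i] = (class_score(xp, w) - class_score(xm, w)) / (2 * eps)
    return np.abs(grad).reshape(H, W)

M = saliency_map(x, w_c)
# For a linear score the numerical gradient recovers the weights.
assert np.allclose(M, np.abs(w_c).reshape(H, W), atol=1e-4)
```

In practice the gradient comes from one back-propagation pass through the network rather than finite differences; the finite-difference version here only keeps the sketch framework-free.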

If this is right

  • Saliency maps from classification networks support object segmentation without supervised location data.
  • Maximizing class scores produces images that illustrate learned class concepts.
  • Gradient visualization methods connect directly to those employed in deconvolutional networks.
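The second bullet, class-score maximization, can be sketched as gradient ascent on a regularized score, x* = argmax_x S_c(x) − λ‖x‖². The linear score and the parameter values below are illustrative assumptions, chosen so the optimum has the closed form w_c / (2λ) and convergence can be checked exactly.

```python
import numpy as np

# Sketch of the paper's class-model visualization: gradient ascent on a
# regularized class score. A linear score S_c(x) = w_c . x stands in for
# the ConvNet (assumption); the regularized optimum is then w_c / (2*lam).
rng = np.random.default_rng(1)
w_c = rng.normal(size=16)
lam, eta = 0.5, 0.5                    # L2 weight and step size (hypothetical)

x = np.zeros_like(w_c)                 # start from a zero image
for _ in range(200):
    grad = w_c - 2 * lam * x           # d/dx [S_c(x) - lam * ||x||^2]
    x = x + eta * grad                 # gradient ascent step

assert np.allclose(x, w_c / (2 * lam), atol=1e-6)
```

With a real ConvNet the objective is non-convex and the result depends on initialization and regularization strength, which is why the paper's synthesized images are qualitative illustrations rather than unique class representatives.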

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same gradient technique might help identify biases or failure modes in network decisions by revealing attended regions.
  • These visualizations could be combined with other interpretability tools for deeper model analysis.

Load-bearing premise

The gradient of the class score with respect to input pixels provides a faithful measure of each pixel's importance to the classification decision.
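This premise is checkable in miniature. The probe below (an editorial construction, not from the paper) compares gradient saliency |∂S/∂x_i| with an occlusion score |S(x) − S(x with pixel i zeroed)| on a linear model. Even in this trivial setting the two disagree: the gradient reports sensitivity |w_i| regardless of the pixel's value, while occlusion reports the pixel's actual contribution |w_i x_i|.

```python
import numpy as np

# Probe of the load-bearing premise on a linear model S(x) = w . x
# (a stand-in for a ConvNet; assumption). Gradient saliency and
# occlusion-based importance need not agree.
w = np.array([1.0, 1.0, 1.0])
x = np.array([5.0, 0.1, -2.0])

grad_saliency = np.abs(w)              # |dS/dx_i| = |w_i|, same for all pixels
occlusion = np.abs(w * x)              # |S(x) - S(x_{-i})| = |w_i * x_i|

assert np.allclose(grad_saliency, [1.0, 1.0, 1.0])
assert np.allclose(occlusion, [5.0, 0.1, 2.0])
assert not np.allclose(grad_saliency, occlusion)
```

The gap between sensitivity and contribution is exactly where "faithful measure of importance" could fail, which motivates the falsification test below.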

What would settle it

Experiments showing that the resulting saliency maps do not align with object boundaries or fail to produce accurate segmentations in a weakly supervised setting would contradict the central claim.

Original abstract

This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].

Editorial analysis

A structured set of objections, weighed in public.

A referee report, a simulated author's rebuttal, a circularity audit, and an axiom ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces two gradient-based visualization techniques for ConvNets trained on image classification. The first synthesizes an input image that maximizes a target class score. The second computes a class-specific saliency map for a given image by back-propagating the class score gradient to the input pixels and taking the absolute value. The authors illustrate the use of these maps for weakly supervised object segmentation and establish a formal connection between the gradient-based visualizations and deconvolutional networks.

Significance. If the saliency maps reliably highlight object pixels, the work supplies practical tools for interpreting ConvNet decisions and enables segmentation from classification-only training data. The explicit link drawn to deconvolutional networks unifies two previously separate visualization approaches and is a clear strength of the manuscript.

major comments (2)
  1. [Abstract / segmentation experiments] Abstract and the weakly-supervised segmentation section: the claim that saliency maps 'can be employed for weakly supervised object segmentation' rests only on a handful of qualitative visual examples. No automatic thresholding rule, connected-component procedure, or post-processing step is formalized, and no quantitative metrics (IoU, pixel accuracy, or similar) are reported against ground-truth masks on any dataset.
  2. [Eq. (2)] Eq. (2): the saliency map is defined as the absolute value of the class-score gradient with respect to input pixels. The resulting maps are acknowledged to be noisy, yet the manuscript provides neither an analysis of how this noise propagates into the segmentation examples nor any error quantification that would support the central segmentation claim.
minor comments (2)
  1. [Figures 1-3] Figure captions for the synthesized class images and saliency maps should explicitly state the optimization parameters (learning rate, number of iterations, regularization) used to produce each example.
  2. [§3] The notation distinguishing the class score S_c from the network output f_c could be introduced once at the beginning of §3 and used consistently thereafter.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback and positive assessment of the work's significance and its connection to deconvolutional networks. We respond point-by-point to the major comments below.

Point-by-point responses
  1. Referee: [Abstract / segmentation experiments] Abstract and the weakly-supervised segmentation section: the claim that saliency maps 'can be employed for weakly supervised object segmentation' rests only on a handful of qualitative visual examples. No automatic thresholding rule, connected-component procedure, or post-processing step is formalized, and no quantitative metrics (IoU, pixel accuracy, or similar) are reported against ground-truth masks on any dataset.

    Authors: We agree that the segmentation results are demonstrated through qualitative examples rather than a fully formalized pipeline with quantitative evaluation. The primary focus of the manuscript is the gradient-based visualization techniques themselves; the segmentation application is presented as an illustration of how the saliency maps might be used in a weakly-supervised setting. To address the concern, the revised manuscript will include an explicit description of the simple thresholding and connected-component post-processing applied to the examples, along with quantitative metrics (e.g., pixel accuracy and IoU) evaluated against ground-truth masks on a standard dataset such as PASCAL VOC. revision: yes
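The pipeline the authors promise here is simple enough to sketch. The following is an assumed reconstruction, not the authors' exact procedure: threshold the saliency map, keep the largest 4-connected component, and score the resulting mask against ground truth with intersection-over-union. The threshold value and the toy saliency map are illustrative.

```python
import numpy as np
from collections import deque

# Hedged sketch of thresholding + connected-component post-processing + IoU
# (assumed pipeline; the paper's examples may use different choices).
def largest_component(mask):
    # Largest 4-connected True region via BFS flood fill.
    labels = np.zeros(mask.shape, dtype=int)
    best, cur = [], 0
    H, W = mask.shape
    for si in range(H):
        for sj in range(W):
            if mask[si, sj] and labels[si, sj] == 0:
                cur += 1
                comp, q = [], deque([(si, sj)])
                labels[si, sj] = cur
                while q:
                    i, j = q.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and mask[ni, nj] and labels[ni, nj] == 0:
                            labels[ni, nj] = cur
                            q.append((ni, nj))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros(mask.shape, dtype=bool)
    for i, j in best:
        out[i, j] = True
    return out

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

saliency = np.array([[0.9, 0.8, 0.0, 0.1],
                     [0.7, 0.9, 0.0, 0.6],
                     [0.0, 0.0, 0.0, 0.0],
                     [0.2, 0.0, 0.0, 0.0]])
seg = largest_component(saliency > 0.5)   # drops the isolated 0.6 pixel
gt = np.zeros((4, 4), dtype=bool)
gt[:2, :2] = True                         # hypothetical ground-truth mask
assert iou(seg, gt) == 1.0
```

The connected-component step is what turns a noisy per-pixel map into a single object mask; quantitative claims would then rest on IoU aggregated over a dataset such as PASCAL VOC.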

  2. Referee: [Eq. (2)] Eq. (2): the saliency map is defined as the absolute value of the class-score gradient with respect to input pixels. The resulting maps are acknowledged to be noisy, yet the manuscript provides neither an analysis of how this noise propagates into the segmentation examples nor any error quantification that would support the central segmentation claim.

    Authors: The manuscript does observe that the resulting saliency maps can appear noisy. The absolute-value operation is applied to produce a non-negative map that emphasizes pixels with the largest positive influence on the class score. We acknowledge the absence of a dedicated noise-propagation analysis or error quantification tied to the segmentation examples. In the revision we will add a short discussion of the noise characteristics of the raw gradients versus the absolute-value maps, supported by additional side-by-side visualizations that illustrate their effect on the downstream segmentation examples. revision: yes

Circularity Check

0 steps flagged

No significant circularity; derivations follow from standard back-propagation.

Full rationale

The paper defines its core visualization techniques directly from the gradient of the class score with respect to input pixels (via standard back-propagation) and the class-score maximization problem. These are not fitted to target outputs, nor are they defined in terms of the quantities they are later used to produce. The weakly-supervised segmentation application is presented as a qualitative demonstration rather than a formal prediction derived from fitted parameters. No load-bearing self-citations or uniqueness theorems are invoked; the cited prior work (Erhan et al., Zeiler et al.) is external. The claimed connection to deconvolutional networks is shown via explicit mathematical equivalence of the operations, not by renaming or self-reference. The derivation chain is therefore self-contained against external benchmarks and does not reduce to its inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract introduces no new free parameters, axioms, or invented entities. It relies on the standard assumption that gradients can be computed through a trained ConvNet and that these gradients carry semantic meaning for visualization.

pith-pipeline@v0.9.0 · 5422 in / 1086 out tokens · 36701 ms · 2026-05-11T18:45:55.981361+00:00 · methodology

discussion (0)


Forward citations

Cited by 35 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. From Mechanistic to Compositional Interpretability

    cs.LG 2026-05 unverdicted novelty 7.0

    Compositional interpretability defines explanations as commuting syntactic-semantic mapping pairs grounded in compositionality and minimum description length, with compressive refinement and a parsimony theorem guaran...

  2. SeBA: Semi-supervised few-shot learning via Separated-at-Birth Alignment for tabular data

    cs.LG 2026-05 unverdicted novelty 7.0

    SeBA is a joint-embedding framework that separates tabular data into two complementary views and aligns one view's representations to the nearest-neighbor structure of the other, improving feature-label relationships ...

  3. ISAAC: Auditing Causal Reasoning in Deep Models for Drug-Target Interaction

    cs.LG 2026-05 unverdicted novelty 7.0

    ISAAC auditing applied to three DTI models on the Davis benchmark finds 25% relative differences in causal reasoning scores despite nearly identical AUROC values.

  4. From Local to Global to Mechanistic: An iERF-Centered Unified Framework for Interpreting Vision Models

    cs.CV 2026-05 unverdicted novelty 7.0

    An iERF-centric framework unifies local, global, and mechanistic interpretability in vision models via SRD for saliency, CAFE for concept anchoring, and ICAT for interlayer attribution.

  5. Mapping data sensitivities in global QCD analysis with linear response and influence functions

    hep-ph 2026-04 unverdicted novelty 7.0

    A framework based on linear response and influence functions maps data sensitivities in global QCD analyses to show how experiments determine central values, uncertainties, and correlations of non-perturbative functions.

  6. Homogeneous Stellar Parameters from Heterogeneous Spectra with Deep Learning

    astro-ph.GA 2026-04 unverdicted novelty 7.0

    A single end-to-end Transformer model unifies stellar labels from heterogeneous spectroscopic surveys into a self-consistent scale without post-hoc recalibration.

  7. Benchmarking bandgap prediction in semiconductors under experimental and realistic evaluation settings

    cond-mat.mtrl-sci 2026-04 unverdicted novelty 7.0

    Introduces the RealMat-BaG benchmark showing fundamental generalization limits of ML models when predicting experimental bandgaps from DFT-trained data.

  8. Mamba-SSM with LLM Reasoning for Feature Selection: Faithfulness-Aware Biomarker Discovery

    q-bio.QM 2026-04 unverdicted novelty 7.0

    LLM chain-of-thought filtering of Mamba saliency features on TCGA-BRCA data produces a 17-gene set with AUC 0.927 that beats both the raw 50-gene saliency list and a 5000-gene baseline while using far fewer features, ...

  9. Can Cross-Layer Transcoders Replace Vision Transformer Activations? An Interpretable Perspective on Vision

    cs.CV 2026-04 unverdicted novelty 7.0

    Cross-Layer Transcoders decompose ViT activations into sparse, depth-aware layer contributions that maintain zero-shot accuracy and enable faithful attribution of the final representation.

  10. Scaling and evaluating sparse autoencoders

    cs.LG 2024-06 unverdicted novelty 7.0

    K-sparse autoencoders with dead-latent fixes produce clean scaling laws and better feature quality metrics that improve with size, shown by training a 16-million-latent model on GPT-4 activations.

  11. APEX: Audio Prototype EXplanations for Classification Tasks

    cs.SD 2026-05 unverdicted novelty 6.0

    APEX generates four types of prototype-based explanations for pre-trained audio classifiers that preserve output invariance and target acoustic properties better than gradient methods applied to spectrograms.

  12. Scaling Vision Models Does Not Consistently Improve Localisation-Based Explanation Quality

    cs.CV 2026-05 accept novelty 6.0

    Scaling vision models by depth and parameter count does not consistently improve localisation-based explanation quality across architectures, datasets, and post-hoc methods; smaller models often perform comparably or better.

  13. GRALIS: A Unified Canonical Framework for Linear Attribution Methods via Riesz Representation

    cs.LG 2026-05 unverdicted novelty 6.0

    GRALIS supplies a canonical representation (Q, w, Delta) for every additive linear continuous attribution functional on L^2 via the Riesz Representation Theorem, unifying SHAP, IG, LIME and linearized GradCAM while pr...

  14. Manifold-Aligned Guided Integrated Gradients for Reliable Feature Attribution

    cs.LG 2026-05 unverdicted novelty 6.0

    MA-GIG improves Integrated Gradients by performing path integration in the latent space of a pre-trained VAE so that decoded points remain closer to the learned data manifold and reduce off-manifold gradient noise.

  15. Evaluating the Alignment Between GeoAI Explanations and Domain Knowledge in Satellite-Based Flood Mapping

    cs.CV 2026-04 unverdicted novelty 6.0

    ADAGE uses Channel-Group SHAP to quantify alignment between GeoAI model explanations and domain knowledge references in satellite-based flood mapping.

  16. H-Sets: Hessian-Guided Discovery of Set-Level Feature Interactions in Image Classifiers

    cs.CV 2026-04 unverdicted novelty 6.0

    H-Sets detects higher-order feature interactions in image classifiers via Hessian-guided pair merging and attributes them with IDG-Vis to generate more interpretable saliency maps than existing marginal or coarse methods.

  17. On the Importance and Evaluation of Narrativity in Natural Language AI Explanations

    cs.CL 2026-04 unverdicted novelty 6.0

    XAI explanations should be narratives with continuous structure, cause-effect, fluency and diversity, and new metrics are needed to evaluate this better than standard NLP scores.

  18. Contrastive Attribution in the Wild: An Interpretability Analysis of LLM Failures on Realistic Benchmarks

    cs.AI 2026-04 conditional novelty 6.0

    Token-level contrastive attribution yields informative signals for some LLM benchmark failures but is not universally applicable across datasets and models.

  19. Potential of Gaia XP Spectra in Red Giant Star Asteroseismology: A Deep-Learning Approach

    astro-ph.SR 2026-04 unverdicted novelty 6.0

    Hybrid deep learning models recover large frequency separation, frequency of maximum power, and dipole period spacing from low-resolution Gaia XP spectra with accuracy comparable to moderate-resolution spectroscopy.

  20. Towards Reliable Testing of Machine Unlearning

    cs.LG 2026-04 unverdicted novelty 6.0

    Causal fuzzing with budgeted interventions can detect residual direct and indirect influence of unlearned data that standard attribution methods miss due to proxies, cancellations, and masking.

  21. Cross-Modal Knowledge Distillation for PET-Free Amyloid-Beta Detection from MRI

    cs.CV 2026-04 unverdicted novelty 6.0

    A PET-guided knowledge distillation approach achieves AUCs of 0.74 and 0.68 for amyloid-beta detection from MRI alone across two datasets without requiring PET or clinical covariates at test time.

  22. Learn to Rank: Visual Attribution by Learning Importance Ranking

    cs.CV 2026-04 unverdicted novelty 6.0

    A new end-to-end training scheme for visual attribution maps that optimizes deletion and insertion metrics directly via differentiable ranking relaxation instead of surrogate objectives.

  23. Hierarchical, Interpretable, Label-Free Concept Bottleneck Model

    cs.CV 2026-04 unverdicted novelty 6.0

    HIL-CBM is a hierarchical label-free concept bottleneck model that improves classification accuracy and explanation quality over prior single-level CBMs using a visual consistency loss and dual heads.

  24. Biological Plausibility and Representational Alignment of Feedback Alignment in Convolutional Networks

    cs.AI 2026-05 unverdicted novelty 5.0

    Modified feedback alignment in convolutional networks produces representations geometrically aligned with backpropagation on CIFAR-10.

  25. ZScribbleSeg: A comprehensive segmentation framework with modeling of efficient annotation and maximization of scribble supervision

    cs.CV 2026-05 unverdicted novelty 5.0

    ZScribbleSeg maximizes scribble supervision with efficient annotation forms, spatial regularization, and EM-estimated class ratios to deliver competitive performance on six medical segmentation tasks without full labels.

  26. Seeing What Shouldn't Be There: Counterfactual GANs for Medical Image Attribution

    cs.CV 2026-05 unverdicted novelty 5.0

    A cycle-consistent GAN generates counterfactual medical images to attribute classification decisions more comprehensively than standard saliency methods.

  27. Understanding the Prompt Sensitivity

    cs.CL 2026-04 unverdicted novelty 5.0

    LLMs disperse meaning-preserving prompts internally instead of clustering them, which produces an excessively high upper bound on output log-probability differences via Taylor expansion and Cauchy-Schwarz.

  28. Path-Sampled Integrated Gradients

    cs.LG 2026-04 unverdicted novelty 5.0

    Path-sampled integrated gradients generalizes integrated gradients by averaging gradients over sampled baselines on the linear path, proving equivalence to a weighted version that improves convergence rate to O(m^{-1}...

  29. Interpretable and Explainable Surrogate Modeling for Simulations: A State-of-the-Art Survey and Perspectives on Explainable AI for Decision-Making

    cs.AI 2026-04 unverdicted novelty 5.0

    This survey synthesizes XAI methods with surrogate modeling workflows for simulations and outlines a research agenda to embed explainability into simulation-driven design and decision-making.

  30. ConceptTracer: Interactive Analysis of Concept Saliency and Selectivity in Neural Representations

    cs.LG 2026-04 unverdicted novelty 5.0

    ConceptTracer supplies an interactive interface and saliency/selectivity metrics to locate concept-responsive neurons in neural representations, shown on TabPFN.

  31. PECKER: A Precisely Efficient Critical Knowledge Erasure Recipe For Machine Unlearning in Diffusion Models

    cs.AI 2026-04 unverdicted novelty 5.0

    PECKER uses a saliency mask to prioritize parameter updates in distillation-based unlearning, achieving shorter training times for class and concept forgetting on CIFAR-10 and STL-10 while matching prior methods' efficacy.

  32. Evaluating Explainability in Safety-Critical ATR Systems: Limitations of Post-Hoc Methods and Paths Toward Robust XAI

    cs.AI 2026-05 unverdicted novelty 4.0

    Post-hoc XAI methods in ATR systems produce spurious explanations, show instability under perturbations, and induce overtrust, rendering them insufficient for safety-critical deployment without causal grounding.

  33. Learning Coarse-to-Fine Osteoarthritis Representations under Noisy Hierarchical Labels

    cs.CV 2026-05 unverdicted novelty 4.0

    Dual-head training on hierarchical OA labels yields backbone-dependent gains in KL metrics, more ordered latent severity axes, and better saliency alignment with cartilage for some 3D backbones.

  34. TabSHAP

    cs.LG 2026-04 unverdicted novelty 4.0

    TabSHAP attributes feature impact in LLM tabular classifiers via sampled Shapley coalitions and JSD on output distributions, reporting higher deletion faithfulness than random or XGBoost-proxy baselines on Adult Incom...

  35. Explainable Human Activity Recognition: A Unified Review of Concepts and Mechanisms

    cs.LG 2026-04 unverdicted novelty 4.0

    The paper delivers a mechanism-centric taxonomy and unified perspective on explainable human activity recognition methods across sensing modalities.

Reference graph

Works this paper leans on

13 extracted references · 13 canonical work pages · cited by 35 Pith papers

  1. [1] D. Baehrens, T. Schroeter, S. Harmeling, M. Kawanabe, K. Hansen, and K.-R. Müller. How to explain individual classification decisions. JMLR, 11:1803–1831, 2010.

  2. [2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge (ILSVRC), 2010. URL http://www.image-net.org/challenges/LSVRC/2010/

  3. [3] Y. Boykov and M. P. Jolly. Interactive graph cuts for optimal boundary and region segmentation of objects in N-D images. In Proc. ICCV, volume 2, pages 105–112, 2001.

  4. [4] D. C. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Proc. CVPR, pages 3642–3649, 2012.

  5. [5] D. Erhan, Y. Bengio, A. Courville, and P. Vincent. Visualizing higher-layer features of a deep network. Technical Report 1341, University of Montreal, Jun 2009.

  6. [6] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Proc. CVPR, 2008.

  7. [7] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.

  8. [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.

  9. [9] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proc. ICML, 2012.

  10. [10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

  11. [11] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In Proc. ECCV, 2010.

  12. [12] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep Fisher networks and class saliency maps for object classification and localisation. In ILSVRC workshop, 2013. URL http://image-net.org/challenges/LSVRC/2013/slides/ILSVRC_az.pdf

  13. [13] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901v3, 2013.