
arxiv: 1710.04773 · v2 · submitted 2017-10-13 · 💻 cs.CV

Recognition: unknown

Residual Connections Encourage Iterative Inference

Authors on Pith: no claims yet
classification: 💻 cs.CV
keywords: resnets · iterative · residual · refinement · features · layers · learning · perform
Original abstract

Residual networks (Resnets) have become a prominent architecture in deep learning, yet a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose the properties of this view. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual connections naturally encourage the features of residual blocks to move along the negative gradient of the loss as we go from one block to the next. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers, while higher layers perform iterative refinement of features. Finally, we observe that sharing residual layers naively leads to representation explosion and, counterintuitively, overfitting, and we show that simple existing strategies can help alleviate this problem.
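The abstract's formal claim can be made concrete with a first-order expansion. Below is a minimal sketch in standard residual-network notation ($h_i$ for a block's input, $F_i$ for its residual branch, $\mathcal{L}$ for the loss); the symbols are illustrative labels, not necessarily the paper's own.

```latex
% Residual update: each block adds a correction to its input.
h_{i+1} = h_i + F_i(h_i)

% First-order Taylor expansion of the loss around h_i:
\mathcal{L}(h_{i+1}) \approx \mathcal{L}(h_i) + \nabla_{h_i}\mathcal{L}^{\top} F_i(h_i)

% The loss decreases whenever the residual branch's output is
% negatively aligned with the loss gradient:
\nabla_{h_i}\mathcal{L}^{\top} F_i(h_i) < 0
```

On this reading, "iterative refinement" means that $F_i(h_i)$ tends to point along $-\nabla_{h_i}\mathcal{L}$, so stacking blocks behaves like taking gradient steps on the loss in feature space.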

This paper has not been read by Pith yet.

Discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. Eliciting Latent Predictions from Transformers with the Tuned Lens

    cs.LG · 2023-03 · accept · novelty 7.0

    Training per-layer affine probes on frozen transformers yields more reliable latent predictions than the logit lens and enables detection of malicious inputs from prediction trajectories.

  2. Understanding intermediate layers using linear classifier probes

    stat.ML · 2016-10 · accept · novelty 7.0

    Linear probes demonstrate that feature separability for classification increases monotonically with network depth in Inception v3 and ResNet-50.
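For context on the probing methodology behind this citation, here is a minimal sketch of a linear probe trained on frozen intermediate features, in the spirit of the cited paper; the backbone (torchvision ResNet-50), tap point (layer2), and pooling choice are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen pretrained backbone; only the probe is trained.
model = models.resnet50(weights="IMAGENET1K_V1")
model.eval()

features = {}

def hook(name):
    def fn(module, inp, out):
        # Global-average-pool the spatial map into a feature vector.
        features[name] = out.mean(dim=(2, 3)).detach()
    return fn

# Tap an intermediate stage; layer2 of ResNet-50 outputs 512 channels.
model.layer2.register_forward_hook(hook("layer2"))

probe = nn.Linear(512, 1000)  # linear classifier on frozen features
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(images, labels):
    with torch.no_grad():
        model(images)  # forward pass populates features["layer2"]
    logits = probe(features["layer2"])
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()  # gradients reach only the probe's parameters
    opt.step()
    return loss.item()
```

Comparing probe accuracy at successive tap points (layer1 through layer4) is the kind of measurement the cited paper uses to show separability increasing with depth.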