pith. machine review for the scientific record.

arxiv: 1407.7906 · v3 · submitted 2014-07-29 · 💻 cs.LG

Recognition: unknown

How Auto-Encoders Could Provide Credit Assignment in Deep Networks via Target Propagation

Authors on Pith: no claims yet
classification 💻 cs.LG
keywords target · auto-encoder · propagation · assignment · credit · deep · input · discrete
Original abstract

We propose to exploit reconstruction as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation that plays a role similar to back-propagation while reducing the reliance on derivatives to perform credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of the input and the target (or of any side information) as input, then its reconstruction of the input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder's decoding path generalizes gradient propagation in a learned way that could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier for them to model, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid out in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis stating that such auto-encoder-mediated target propagation could play, in brains, the role of credit assignment through many non-linear, noisy and discrete transformations.
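To make the mechanism concrete, here is a minimal sketch of auto-encoder-mediated target propagation, assuming a stack of layers where each level pairs an encoder f_i with a decoder g_i. It is written against PyTorch, and all names (AELayer, target_prop_step), the one-gradient-step top target, and the denoising regularizer are illustrative assumptions, not the paper's reference algorithm.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AELayer(nn.Module):
        # One level of the stack: encoder f_i and decoder g_i form an auto-encoder.
        def __init__(self, n_in, n_out):
            super().__init__()
            self.f = nn.Sequential(nn.Linear(n_in, n_out), nn.Tanh())  # encoder
            self.g = nn.Sequential(nn.Linear(n_out, n_in), nn.Tanh())  # decoder

    def target_prop_step(layers, head, x, y, target_lr=0.5, local_lr=0.01):
        # Forward pass, detaching activations so no end-to-end gradient exists.
        hs = [x]
        for layer in layers:
            hs.append(layer.f(hs[-1]).detach())

        # Top target: nudge the last hidden activation toward lower task loss.
        h_top = hs[-1].clone().requires_grad_(True)
        F.cross_entropy(head(h_top), y).backward()
        t = (h_top - target_lr * h_top.grad).detach()

        # Decoders propagate targets downward: g_i maps layer i's target
        # into a target for layer i-1 (a "more likely" version of its input).
        targets = []
        for layer in reversed(layers):
            targets.append(t)
            t = layer.g(t).detach()
        targets.reverse()

        # Layer-local updates: each encoder chases its target, and each
        # decoder is trained as a denoising auto-encoder of the layer input.
        for i, layer in enumerate(layers):
            opt = torch.optim.SGD(layer.parameters(), lr=local_lr)
            opt.zero_grad()
            noisy = hs[i] + 0.1 * torch.randn_like(hs[i])
            loss = F.mse_loss(layer.f(hs[i]), targets[i]) \
                 + F.mse_loss(layer.g(layer.f(noisy)), hs[i])
            loss.backward()
            opt.step()

Because the downward path is a learned decoder rather than an application of the chain rule, targets can remain meaningful even where the encoders are strongly non-linear or discrete, which is the credit-assignment advantage the abstract conjectures; a local update for the classifier head is omitted for brevity.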

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. NICE: Non-linear Independent Components Estimation

    cs.LG 2014-10 accept novelty 8.0

NICE learns a composition of invertible neural-network layers that transform data into independent latent variables, enabling exact log-likelihood training and sampling for density estimation (a coupling-layer sketch follows this list).

  2. Covariance-Aware Goodness for Scalable Forward-Forward Learning

    cs.LG 2026-05 unverdicted novelty 6.0

    Covariance-aware goodness and auxiliary modules let Forward-Forward training scale to 16-layer networks, achieving 73.01% on ImageNet-100 and 50.30% on Tiny-ImageNet with roughly half the peak memory of backpropagation.
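As a companion to the NICE entry above, here is a minimal sketch of the additive coupling layer it is built from, assuming an even input dimension and a factorized standard-normal prior; the hidden width and the log_likelihood helper are illustrative choices, and NICE's final learned diagonal scaling is omitted. Because y1 = x1 and y2 = x2 + m(x1), the layer inverts exactly and has unit Jacobian determinant, which is what makes the log-likelihood exact.

    import torch
    import torch.nn as nn

    class AdditiveCoupling(nn.Module):
        # NICE-style additive coupling: y1 = x1, y2 = x2 + m(x1).
        # Exactly invertible and volume-preserving (log|det J| = 0).
        def __init__(self, d, swap=False, hidden=64):
            super().__init__()
            self.swap = swap  # alternate which half passes through unchanged
            self.m = nn.Sequential(
                nn.Linear(d // 2, hidden), nn.ReLU(), nn.Linear(hidden, d // 2))

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=1)
            if self.swap:
                x1, x2 = x2, x1
            y1, y2 = x1, x2 + self.m(x1)
            if self.swap:
                y1, y2 = y2, y1
            return torch.cat([y1, y2], dim=1)

        def inverse(self, y):
            y1, y2 = y.chunk(2, dim=1)
            if self.swap:
                y1, y2 = y2, y1
            x1, x2 = y1, y2 - self.m(y1)
            if self.swap:
                x1, x2 = x2, x1
            return torch.cat([x1, x2], dim=1)

    def log_likelihood(layers, x):
        # Volume preservation means log p(x) = log p_prior(f(x)) exactly.
        z = x
        for layer in layers:
            z = layer(z)
        return torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=1)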