pith. machine review for the scientific record.

arxiv: 1602.05179 · v5 · submitted 2016-02-16 · 💻 cs.LG

Recognition: unknown

Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation

Benjamin Scellier, Yoshua Bengio

Authors on Pith: no claims yet
classification: 💻 cs.LG
keywords: phase, propagation, second, equilibrium, error, function, objective, prediction
read the original abstract

We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point, or stationary distribution) towards a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged towards their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal 'back-propagated' during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not.
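
The abstract describes training in two relaxation phases: a free phase in which the network settles to a fixed point (the prediction), and a weakly clamped second phase in which the output units are nudged toward the target, after which the weights receive a contrastive, Hebbian-style update read off from the two fixed points. The sketch below illustrates that two-phase recipe on a toy network; the layer sizes, the hard-sigmoid activation, the nudging strength beta, the relaxation schedule, and all other hyperparameters are illustrative assumptions, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def rho(s):
    # hard-sigmoid activation (an illustrative choice, not fixed by the abstract)
    return np.clip(s, 0.0, 1.0)

class EqPropNet:
    def __init__(self, sizes=(4, 8, 2)):
        self.sizes = sizes
        # symmetric weight matrices between consecutive layers
        self.W = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

    def relax(self, x, y=None, beta=0.0, steps=60, dt=0.5):
        # Leaky-integrator dynamics run toward an (approximate) fixed point.
        # beta = 0: free phase (prediction); beta > 0: output nudged toward y.
        s = [np.zeros(n) for n in self.sizes[1:]]   # hidden and output states
        for _ in range(steps):
            acts = [x] + [rho(h) for h in s]
            for i in range(len(s)):
                # drive from the layer below plus feedback from the layer above
                drive = acts[i] @ self.W[i]
                if i + 1 < len(s):
                    drive = drive + acts[i + 2] @ self.W[i + 1].T
                grad = drive - s[i]
                if beta > 0.0 and y is not None and i == len(s) - 1:
                    grad = grad + beta * (y - rho(s[i]))   # nudge the outputs
                s[i] = s[i] + dt * grad
        return s

    def update(self, x, y, beta=0.5, lr=0.05):
        # Two-phase Equilibrium Propagation step.
        s_free = self.relax(x)                     # first phase: prediction
        s_nudged = self.relax(x, y, beta=beta)     # second phase: weak clamping
        free = [x] + [rho(h) for h in s_free]
        nudged = [x] + [rho(h) for h in s_nudged]
        for i in range(len(self.W)):
            # contrastive Hebbian-style update, scaled by 1/beta
            dW = (np.outer(nudged[i], nudged[i + 1])
                  - np.outer(free[i], free[i + 1])) / beta
            self.W[i] += lr * dW
        return rho(s_free[-1])                     # free-phase prediction

# toy usage: fit one random input to a fixed two-dimensional target
net = EqPropNet()
x = rng.random(4)
y = np.array([1.0, 0.0])
for _ in range(200):
    pred = net.update(x, y)
print("free-phase prediction after training:", pred)

The division by beta in the weight update reflects the abstract's framing of the second phase as a local perturbation of the free fixed point: contrasting the two fixed points estimates the gradient of the objective function, and a smaller beta gives a closer approximation at the cost of a weaker, noisier nudging signal.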

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. FlowEqProp: Training Flow Matching Generative Models with Gradient Equilibrium Propagation

    cond-mat.dis-nn · 2026-04 · unverdicted · novelty 7.0

    FlowEqProp trains flow matching generative models using gradient equilibrium propagation on a 25k-parameter MLP for digit generation without backpropagation, producing recognizable samples and allowing quality gains f...

  2. Selectivity and Shape in the Design of Forward-Forward Goodness Functions

    cs.LG · 2026-03 · unverdicted · novelty 7.0

    Shape- and peak-sensitive goodness functions for Forward-Forward deliver up to 72pp gains over sum-of-squares, reaching 98.2% on MNIST and 89% on Fashion-MNIST.