pith. machine review for the scientific record.

arxiv: 1605.08104 · v5 · submitted 2016-05-25 · 💻 cs.LG · cs.AI · cs.CV · cs.NE · q-bio.NC

Recognition: unknown

Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · cs.CV · cs.NE · q-bio.NC
keywords learning · networks · learn · unsupervised · movement · network · object · prediction
original abstract

While great strides have been made in using deep learning algorithms to solve supervised learning tasks, the problem of unsupervised learning - leveraging unlabeled examples to learn about the structure of a domain - remains a difficult unsolved challenge. Here, we explore prediction of future frames in a video sequence as an unsupervised learning rule for learning about the structure of the visual world. We describe a predictive neural network ("PredNet") architecture that is inspired by the concept of "predictive coding" from the neuroscience literature. These networks learn to predict future frames in a video sequence, with each layer in the network making local predictions and only forwarding deviations from those predictions to subsequent network layers. We show that these networks are able to robustly learn to predict the movement of synthetic (rendered) objects, and that in doing so, the networks learn internal representations that are useful for decoding latent object parameters (e.g. pose) that support object recognition with fewer training views. We also show that these networks can scale to complex natural image streams (car-mounted camera videos), capturing key aspects of both egocentric movement and the movement of objects in the visual scene, and the representation learned in this setting is useful for estimating the steering angle. Altogether, these results suggest that prediction represents a powerful framework for unsupervised learning, allowing for implicit learning of object and scene structure.
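The abstract's core architectural idea — each layer forms a local prediction of its input and forwards only the deviation (split into positive and negative rectified error channels, as in the paper) to the next layer — can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the paper uses convolutional LSTM units, whereas the linear maps in `weights` here are hypothetical stand-ins.

```python
import numpy as np

def predictive_coding_forward(frame, weights):
    """Sketch of a PredNet-style error-forwarding pass.

    Each layer predicts its own input and passes upward only the
    deviation from that prediction, rectified into positive and
    negative error channels. `weights` are plain linear maps standing
    in for the paper's convolutional/recurrent prediction units.
    """
    x = frame
    errors = []
    for W in weights:
        pred = W @ x                      # layer-local prediction of the input
        e = x - pred                      # deviation from the prediction
        # split the error into rectified positive/negative channels,
        # doubling the dimensionality passed to the next layer
        err = np.concatenate([np.maximum(e, 0.0), np.maximum(-e, 0.0)])
        errors.append(err)
        x = err                           # only the error moves up the stack
    return errors
```

Note the dimensional consequence of the error split: if a layer's input has size d, its output error signal has size 2d, so the next layer's prediction map must accept a 2d-dimensional input.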

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. LPC-SM: Local Predictive Coding and Sparse Memory for Long-Context Language Modeling

cs.CL · 2026-03 · unverdicted · novelty 6.0

LPC-SM is a hybrid architecture that separates local attention, persistent memory, predictive correction, and control, using ONT for memory writes; it shows loss reductions on 158M-parameter models at contexts up to 4096 tokens.

  2. Do vision models perceive illusory motion in static images like humans?

cs.CV · 2026-04 · unverdicted · novelty 4.0

    Most optical flow models do not generate flow fields matching human perception of the Rotating Snakes illusion, but a dual-channel recurrent model does during simulated saccades.