pith. machine review for the scientific record.

arxiv: 1511.05440 · v6 · submitted 2015-11-17 · 💻 cs.LG · cs.CV · stat.ML

Recognition: unknown

Deep multi-scale video prediction beyond mean square error

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CV · stat.ML
keywords future · learning · prediction · video · different · error · feature · frames
Original abstract

Learning to predict future images from a video sequence involves the construction of an internal representation that models the image evolution accurately, and therefore, to some degree, its content and dynamics. This is why pixel-space video prediction may be viewed as a promising avenue for unsupervised feature learning. In addition, while optical flow has been a widely studied problem in computer vision for a long time, future frame prediction is rarely approached. Still, many vision applications could benefit from knowledge of the next frames of a video, which does not require the complexity of tracking every pixel's trajectory. In this work, we train a convolutional network to generate future frames given an input sequence. To deal with the inherently blurry predictions obtained from the standard Mean Squared Error (MSE) loss function, we propose three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function. We compare our predictions to different published results based on recurrent neural networks on the UCF101 dataset.
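The image gradient difference loss mentioned in the abstract can be sketched in a few lines. This is an illustrative NumPy version, not the paper's training code: the function name, the `alpha` exponent default, and the use of a plain sum reduction are assumptions for the sketch.

```python
import numpy as np

def gradient_difference_loss(pred, target, alpha=1.0):
    """Sketch of an image gradient difference loss (GDL).

    pred, target: arrays of shape (H, W) or (..., H, W) holding pixel values.
    The loss penalizes the mismatch between the finite-difference gradients of
    the predicted and ground-truth frames, which discourages the edge blurring
    that a plain MSE loss tends to produce.
    """
    # Horizontal and vertical finite differences of each image.
    pred_dx = np.abs(pred[..., :, 1:] - pred[..., :, :-1])
    pred_dy = np.abs(pred[..., 1:, :] - pred[..., :-1, :])
    tgt_dx = np.abs(target[..., :, 1:] - target[..., :, :-1])
    tgt_dy = np.abs(target[..., 1:, :] - target[..., :-1, :])
    # Penalize differences between the two gradient magnitudes.
    return (np.abs(tgt_dx - pred_dx) ** alpha).sum() + \
           (np.abs(tgt_dy - pred_dy) ** alpha).sum()
```

A perfectly predicted frame incurs zero gradient loss, while a blurred (e.g. constant) prediction of a textured frame is penalized even if its mean pixel error is small — which is why the paper pairs this term with MSE and adversarial training.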

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. MTCurv: Deep learning for direct microtubule curvature mapping in noisy fluorescence microscopy images

    cs.CV · 2026-04 · unverdicted · novelty 7.0

    MTCurv regresses pixel-wise microtubule curvature maps from noisy images using an attention-based residual U-Net trained on synthetic data with a gradient-aware loss.

  2. A Mixture of Experts Foundation Model for Scanning Electron Microscopy Image Analysis

    cs.LG · 2026-04 · unverdicted · novelty 7.0

    A mixture-of-experts transformer foundation model pretrained on diverse SEM images enables generalization across materials and outperforms SOTA on unsupervised defocus-to-focus restoration.

  3. Imagen Video: High Definition Video Generation with Diffusion Models

    cs.CV · 2022-10 · unverdicted · novelty 7.0

    Imagen Video generates high-definition text-conditional videos via a cascade of base and super-resolution diffusion models, achieving high fidelity and controllability.

  4. MagicVideo: Efficient Video Generation With Latent Diffusion Models

    cs.CV · 2022-11 · unverdicted · novelty 6.0

    MagicVideo generates 256x256 text-conditioned video clips via latent diffusion with a custom 3D U-Net, achieving roughly 64 times lower compute than prior video diffusion models.