Pith · machine review for the scientific record

arxiv: 1605.06432 · v3 · submitted 2016-05-20 · 📊 stat.ML · cs.LG · cs.SY

Recognition: unknown

Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.LG · cs.SY
keywords variational · bayes · space · state · data · deep · dvbf · filters
Original abstract

We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models. Leveraging recent advances in Stochastic Gradient Variational Bayes, DVBF can overcome intractable inference distributions via variational inference. Thus, it can handle highly nonlinear input data with temporal and spatial dependencies such as image sequences without domain knowledge. Our experiments show that enabling backpropagation through transitions enforces state space assumptions and significantly improves information content of the latent embedding. This also enables realistic long-term prediction.
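The key mechanism the abstract highlights, backpropagation through transitions, can be sketched briefly: the recognition network samples only the transition noise, and a deterministic transition consumes that sample, so the reconstruction gradient flows back through every time step of the latent sequence. The following is a minimal, hypothetical PyTorch sketch and not the authors' implementation; the module sizes, the MLP transition (the paper itself uses locally linear transitions), and the standard-normal prior are assumptions made purely for illustration.

```python
# Minimal, illustrative DVBF-style training step (not the authors' code).
# All names, sizes, and the MLP transition are assumptions for clarity.
import torch
import torch.nn as nn

class ToyDVBF(nn.Module):
    def __init__(self, x_dim=64, z_dim=8, u_dim=2, h_dim=128):
        super().__init__()
        self.z_dim = z_dim
        # Recognition network: proposes transition noise beta_t from the observation.
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, 2 * z_dim))
        # Deterministic transition: next state from (z_t, u_t, beta_t).
        self.transition = nn.Sequential(nn.Linear(z_dim + u_dim + z_dim, h_dim), nn.ReLU(),
                                        nn.Linear(h_dim, z_dim))
        # Generative network: reconstructs the observation from the latent state.
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

    def elbo_loss(self, x_seq, u_seq):
        # x_seq: (T, B, x_dim), u_seq: (T, B, u_dim)
        T, B, _ = x_seq.shape
        z = torch.zeros(B, self.z_dim)
        recon, kl = 0.0, 0.0
        for t in range(T):
            mu, logvar = self.encoder(x_seq[t]).chunk(2, dim=-1)
            # Reparameterized sample of the transition noise.
            beta = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            # KL between the recognition distribution and a standard-normal prior.
            kl = kl + 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
            # Only the noise is sampled; the state update itself is deterministic,
            # so gradients of the reconstruction loss flow through the transitions.
            z = self.transition(torch.cat([z, u_seq[t], beta], dim=-1))
            recon = recon + ((self.decoder(z) - x_seq[t]) ** 2).sum(-1).mean()
        return recon + kl

# Toy usage with random data, just to show the shape of the training loop.
model = ToyDVBF()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_seq, u_seq = torch.randn(10, 16, 64), torch.randn(10, 16, 2)
loss = model.elbo_loss(x_seq, u_seq)
opt.zero_grad(); loss.backward(); opt.step()
```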

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 6 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Mastering Atari with Discrete World Models

    cs.LG 2020-10 accept novelty 7.0

    DreamerV2 reaches human-level performance on 55 Atari games by learning behaviors inside a separately trained discrete-latent world model.

  2. Dream to Control: Learning Behaviors by Latent Imagination

    cs.LG 2019-12 accept novelty 7.0

    Dreamer learns to control from images by imagining and optimizing behaviors in a learned latent world model, outperforming prior methods on 20 visual tasks in data efficiency and final performance.

  3. Debiased Model-based Representations for Sample-efficient Continuous Control

    cs.LG 2026-05 unverdicted novelty 6.0

    DR.Q debiases model-based representations for Q-learning by maximizing mutual information between state-action and next-state representations and applying faded prioritized experience replay, achieving competitive or ...

  4. Learning to Theorize the World from Observation

    cs.LG 2026-05 unverdicted novelty 6.0

    NEO induces compositional latent programs as world theories from observations and executes them to enable explanation-driven generalization.

  5. Dissipative Latent Residual Physics-Informed Neural Networks for Modeling and Identification of Electromechanical Systems

    cs.LG 2026-04 unverdicted novelty 6.0

    DiLaR-PINN learns dissipative effects in electromechanical systems via a skew-dissipative latent residual PINN that guarantees non-increasing energy and uses recurrent curriculum training for partial observations.

  6. Adaptive Learned State Estimation based on KalmanNet

    cs.RO 2026-04 unverdicted novelty 5.0

    AM-KNet adds sensor-specific modules, hypernetwork conditioning on target type and pose, and Joseph-form covariance estimation to KalmanNet, yielding better accuracy and stability than base KalmanNet on nuScenes and V...