pith. machine review for the scientific record.

arxiv: 1807.04689 · v1 · submitted 2018-07-12 · 📊 stat.ML · cs.LG

Recognition: unknown

Explorations in Homeomorphic Variational Auto-Encoding

Authors on Pith: no claims yet
classification 📊 stat.ML cs.LG
keywords: latent · manifold · data · groups · variables · density · gaussian · manifold-valued
0 comments
read the original abstract

The manifold hypothesis states that many kinds of high-dimensional data are concentrated near a low-dimensional manifold. If the topology of this data manifold is non-trivial, a continuous encoder network cannot embed it in a one-to-one manner without creating holes of low density in the latent space. This is at odds with the Gaussian prior assumption typically made in Variational Auto-Encoders (VAEs), because the density of a Gaussian concentrates near a blob-like manifold. In this paper we investigate the use of manifold-valued latent variables. Specifically, we focus on the important case of continuously differentiable symmetry groups (Lie groups), such as the group of 3D rotations $\operatorname{SO}(3)$. We show how a VAE with $\operatorname{SO}(3)$-valued latent variables can be constructed by extending the reparameterization trick to compact connected Lie groups. Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
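The core idea in the abstract, reparameterizing on a Lie group, can be sketched in a few lines: sample Gaussian noise in the Lie algebra $\mathfrak{so}(3)$, push it through the exponential map, and left-translate by a (learned) mean rotation. This is a minimal NumPy illustration of that construction, not the paper's actual implementation; the function names (`hat`, `expm_so3`, `reparameterize_so3`) are hypothetical, and the paper's full method handles the resulting density on the group, which this sketch omits.

```python
import numpy as np

def hat(v):
    """Map a 3-vector to the corresponding skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(v):
    """Exponential map from so(3) to SO(3) via the Rodrigues formula."""
    theta = np.linalg.norm(v)
    if theta < 1e-8:
        return np.eye(3) + hat(v)  # first-order approximation near the identity
    K = hat(v / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def reparameterize_so3(R_mu, sigma, rng):
    """Reparameterized sample on SO(3): R_mu @ exp(sigma * eps), eps ~ N(0, I)."""
    eps = rng.standard_normal(3)
    return R_mu @ expm_so3(sigma * eps)

rng = np.random.default_rng(0)
R = reparameterize_so3(np.eye(3), np.array([0.1, 0.1, 0.1]), rng)
# R is a valid rotation matrix: R @ R.T = I and det(R) = 1
```

Because the noise enters through a deterministic, differentiable map of `eps`, gradients can flow through `R_mu` and `sigma` exactly as in the standard Gaussian reparameterization trick, which is the property the paper extends to compact connected Lie groups.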

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read papers and Pith them without signing in.

Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Risk-Controlled Post-Processing of Decision Policies

    stat.ML 2026-05 unverdicted novelty 7.0

    Risk-controlled post-processing yields a threshold-structured policy that follows the baseline except where an oracle fallback sharply reduces conditional violation risk, achieving O(log n/n) expected excess risk in i...

  2. Beyond Gaussian Bottlenecks: Topologically Aligned Encoding of Vision-Transformer Feature Spaces

    cs.CV 2026-04 unverdicted novelty 6.0

    S²VAE replaces Gaussian bottlenecks with hyperspherical Power Spherical latents in a VAE on VGGT features, yielding better results on depth estimation, camera pose recovery, and point cloud reconstruction especially a...