pith. machine review for the scientific record.

arxiv: 1812.02230 · v1 · submitted 2018-12-05 · 💻 cs.LG · stat.ML

Recognition: unknown

Towards a Definition of Disentangled Representations

Alexander Lerchner, Danilo Rezende, David Amos, David Pfau, Irina Higgins, Loic Matthey, Sebastien Racaniere

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: world, definition, disentangled, disentangling, representation, representations, structure, transformations
abstract

How can intelligent agents solve a diverse set of tasks in a data-efficient manner? The disentangled representation learning approach posits that such an agent would benefit from separating out (disentangling) the underlying structure of the world into disjoint parts of its representation. However, there is no generally agreed-upon definition of disentangling, not least because it is unclear how to formalise the notion of world structure beyond toy datasets with a known ground truth generative process. Here we propose that a principled solution to characterising disentangled representations can be found by focusing on the transformation properties of the world. In particular, we suggest that those transformations that change only some properties of the underlying world state, while leaving all other properties invariant, are what gives exploitable structure to any kind of data. Similar ideas have already been successfully applied in physics, where the study of symmetry transformations has revolutionised the understanding of the world structure. By connecting symmetry transformations to vector representations using the formalism of group and representation theory we arrive at the first formal definition of disentangled representations. Our new definition is in agreement with many of the current intuitions about disentangling, while also providing principled resolutions to a number of previous points of contention. While this work focuses on formally defining disentangling - as opposed to solving the learning problem - we believe that the shift in perspective to studying data transformations can stimulate the development of better representation learning algorithms.
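The abstract's core idea can be made concrete with a toy example. In the paper's terms, a representation is disentangled with respect to a decomposition of the symmetry group G = G1 × G2 if the representation space splits into subspaces that are each transformed by only one subgroup. The sketch below is an illustrative assumption, not code from the paper: it takes G = C4 × C2 (quarter-turn rotations times a binary "colour flip") acting block-diagonally on a three-dimensional latent code, so rotations touch only the first two dimensions and the flip touches only the last.

```python
import numpy as np

def rho(n_rot, flip):
    """Block-diagonal representation of the group element (n_rot, flip).

    C4 (rotation by n_rot * 90 degrees) acts on dims 0-1;
    C2 (sign flip when flip=1) acts on dim 2. The block structure
    is exactly what makes this representation disentangled.
    """
    theta = n_rot * np.pi / 2
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    block = np.zeros((3, 3))
    block[:2, :2] = rot
    block[2, 2] = -1.0 if flip else 1.0
    return block

z = np.array([1.0, 0.0, 1.0])  # latent code: (position in R^2, colour in R)

z_rot = rho(1, 0) @ z   # act only with the rotation subgroup
z_flip = rho(0, 1) @ z  # act only with the flip subgroup

# Each subgroup changes its own subspace and leaves the other invariant:
assert np.allclose(z_rot[2], z[2])     # colour untouched by rotation
assert np.allclose(z_flip[:2], z[:2])  # position untouched by flip
```

An entangled representation would correspond to conjugating `rho` by a non-block-diagonal change of basis: the group action would still be a valid representation, but single transformations of the world would then move every latent dimension at once.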

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. The Linear Representation Hypothesis and the Geometry of Large Language Models

    cs.CL 2023-11 conditional novelty 8.0

    Linear representations of high-level concepts in LLMs are formalized via counterfactuals in input and output spaces, unified under a causal inner product that enables consistent probing and steering.

  2. KamonBench: A Grammar-Based Dataset for Evaluating Compositional Factor Recovery in Vision-Language Models

    cs.CV 2026-05 unverdicted novelty 7.0

    KamonBench is a grammar-generated synthetic dataset of compositional kamon crests with explicit factor annotations to evaluate factor recovery in vision-language models.

  3. A framework for analyzing concept representations in neural models

    cs.CL 2026-05 unverdicted novelty 7.0

    A new framework shows concept subspaces are not unique, estimator choice affects containment and disentanglement, LEACE works well but generalizes poorly, and HuBERT encodes phone info as contained and disentangled fr...

  4. Transformation Categorization Based on Group Decomposition Theory Using Parameter Division

    cs.LG 2026-04 unverdicted novelty 7.0

    Parameter division decomposes group transformations via parameter splitting and homomorphism constraints to enable unsupervised categorization of image transformations like rotation, translation, and scale.

  5. A renormalization-group inspired lattice-based framework for piecewise generalized linear models

    stat.ME 2026-05 unverdicted novelty 6.0

    RG-inspired lattice models for piecewise GLMs provide explicit interpretable partitions and a replica-analysis-derived scaling law for regularization that allows increasing complexity without expected rise in generali...

  6. Learning to Theorize the World from Observation

    cs.LG 2026-05 unverdicted novelty 6.0

    NEO induces compositional latent programs as world theories from observations and executes them to enable explanation-driven generalization.

  7. Continuous Limits of Coupled Flows in Representation Learning

    cs.LG 2026-04 unverdicted novelty 6.0

    Discrete decentralized learning dynamics on manifolds converge uniformly to an overdamped Langevin SDE whose stationary states produce orthogonally disentangled, linearly separable features.