pith. machine review for the scientific record.

Moto: Latent motion token as the bridging language for learning robot manipulation from videos. arXiv preprint arXiv:2412.04445

3 Pith papers cite this work. Polarity classification is still in progress.

3 Pith papers citing it

fields: cs.RO (2) · cs.CV (1)

years: 2026 (2) · 2025 (1)

verdicts: unverdicted (3)

representative citing papers

Motus: A Unified Latent Action World Model

cs.CV · 2025-12-15 · unverdicted · novelty 5.0

Motus unifies understanding, video generation, and action in one latent world model via MoT experts and optical-flow latent actions, reporting gains over prior methods in simulation and on real robots.

citing papers explorer

Showing 3 of 3 citing papers.

  • RotVLA: Rotational Latent Action for Vision-Language-Action Model cs.RO · 2026-05-13 · unverdicted · none · ref 29

    RotVLA models latent actions as continuous SO(n) rotations with triplet-frame supervision and flow-matching to reach 98.2% success on LIBERO and 89.6%/88.5% on RoboTwin2.0 using a 1.7B-parameter model.

  • Unified 4D World Action Modeling from Video Priors with Asynchronous Denoising cs.RO · 2026-04-29 · unverdicted · none · ref 36 · 2 links

    X-WAM unifies robotic action execution and 4D world synthesis by adapting video diffusion priors with a lightweight depth branch and asynchronous noise sampling, achieving 79-91% success on robot benchmarks.

  • Motus: A Unified Latent Action World Model cs.CV · 2025-12-15 · unverdicted · none · ref 15

Motus unifies understanding, video generation, and action in one latent world model via MoT experts and optical-flow latent actions, reporting gains over prior methods in simulation and on real robots.