pith — machine review for the scientific record


VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training

19 Pith papers cite this work. Polarity classification is still indexing.




representative citing papers

GazeVLA: Learning Human Intention for Robotic Manipulation

cs.RO · 2026-04-24 · unverdicted · novelty 6.0

GazeVLA pretrains on large human egocentric datasets to capture gaze-based intention, then finetunes on limited robot data with chain-of-thought reasoning, outperforming baselines on robotic manipulation tasks.

citing papers explorer

Showing 15 of 15 citing papers after filters.