Do multimodal models imagine electric sheep?
Fine-tuning VLMs to output action sequences for puzzles causes emergent internal visual representations that improve performance when integrated into reasoning.
arXiv preprint arXiv:2602.17270 (2026)
3 Pith papers cite this work.
Citations by year: 3 in 2026.

Representative citing papers:
- What Matters for Diffusion-Friendly Latent Manifold? Prior-Aligned Autoencoders for Latent Diffusion
  Prior-Aligned AutoEncoders shape latent manifolds with spatial coherence, local continuity, and global semantics to improve latent diffusion, achieving SOTA gFID 1.03 on ImageNet 256x256 with up to 13x faster convergence.
- Understanding Latent Diffusability via Fisher Geometry
  Latent diffusability is quantified by decomposing the MMSE rate along diffusion trajectories into Fisher Information and Fisher Information Rate, identifying three geometric penalties (dimensional compression, tangential distortion, curvature injection) as sources of failure; a background sketch of this kind of decomposition follows the list.
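
The Fisher-geometry summary above is compact, so here is a LaTeX sketch of one standard way an MMSE rate splits into a Fisher Information term and a Fisher Information Rate term. The variance-exploding forward process Y_t = X + sqrt(t) Z is an illustrative assumption, not necessarily the paper's setting, and the symbols mmse(t), J(Y_t), and d below are our notation rather than the paper's.

% Assumed forward process (illustrative choice, not taken from the paper):
%   Y_t = X + \sqrt{t}\,Z, with Z ~ N(0, I_d) independent of X, density p_t.
% Tweedie's formula links the posterior mean to the score of p_t:
\[
  \mathbb{E}[X \mid Y_t = y] \;=\; y + t\,\nabla \log p_t(y).
\]
% Classical MMSE--Fisher identity (for Gaussian X it recovers the
% closed form mmse(t) = d * sigma^2 t / (sigma^2 + t)):
\[
  \mathrm{mmse}(t)
  \;=\; \mathbb{E}\bigl\|X - \mathbb{E}[X \mid Y_t]\bigr\|^{2}
  \;=\; t\,d \;-\; t^{2} J(Y_t),
  \qquad
  J(Y_t) \;=\; \mathbb{E}\bigl\|\nabla \log p_t(Y_t)\bigr\|^{2}.
\]
% Differentiating in t splits the MMSE rate into a Fisher Information
% term and a Fisher Information Rate term, the shape of decomposition
% the summary names:
\[
  \frac{d}{dt}\,\mathrm{mmse}(t)
  \;=\; d \;-\; 2t\,J(Y_t) \;-\; t^{2}\,\frac{d}{dt} J(Y_t).
\]

On this reading, a latent space is diffusion-friendly to the extent that J(Y_t) and its rate stay well behaved along the trajectory; how that maps onto the three named geometric penalties is specific to the paper.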