Sparse internal snapshots at canonical low-noise levels from frozen diffusion backbones suffice for competitive out-of-distribution detection without full trajectories or large heads.
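The summary names the recipe (frozen-backbone features at a single low-noise level, no trajectories, no large detection head) but not the scoring rule. As a minimal illustrative sketch, assuming a plain Mahalanobis detector over toy numpy features standing in for the internal snapshots (all names here are hypothetical, not the paper's implementation):

```python
import numpy as np

# Hypothetical sketch: score inputs from one "snapshot" of frozen features
# (standing in for diffusion U-Net activations at a canonical low-noise level)
# with a label-free Mahalanobis detector -- no trajectories, no trained head.

def fit_snapshot_detector(id_features):
    """Fit a Gaussian (mean, precision) to in-distribution snapshot features."""
    mu = id_features.mean(axis=0)
    # Small ridge term keeps the covariance invertible.
    cov = np.cov(id_features, rowvar=False) + 1e-6 * np.eye(id_features.shape[1])
    return mu, np.linalg.inv(cov)

def ood_score(features, mu, precision):
    """Squared Mahalanobis distance per row; larger means more out-of-distribution."""
    d = features - mu
    return np.einsum("ij,jk,ik->i", d, precision, d)

# Toy features standing in for real backbone snapshots.
rng = np.random.default_rng(0)
id_feats = rng.normal(0.0, 1.0, size=(500, 16))   # in-distribution
ood_feats = rng.normal(4.0, 1.0, size=(100, 16))  # shifted distribution
mu, precision = fit_snapshot_detector(id_feats)
```

Scoring held-out inputs with `ood_score` then reduces detection to thresholding a single scalar per example, which is what makes the no-head, no-trajectory setting cheap.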
3 Pith papers cite this work.
Representative citing papers
- Backbone-Equated Diffusion OOD via Sparse Internal Snapshots
  Sparse internal snapshots at canonical low-noise levels from frozen diffusion backbones suffice for competitive out-of-distribution detection without full trajectories or large heads.
- Scaling Pretrained Representations Enables Label-Free Out-of-Distribution Detection Without Fine-Tuning
  Scaling pretrained representations improves label-free OOD detection on frozen backbones, closing the performance gap between global and local detectors across vision and language tasks.
- Geometric Decoupling: Diagnosing the Structural Instability of Latent
  Latent diffusion models exhibit geometric decoupling, in which curvature in out-of-distribution generation is misallocated to unstable semantic boundaries instead of image details; these geometric hotspots are identified as the structural cause of editing instability.