Dynamic Mode Decomposition along Depth in Vision Transformers
Dynamic Mode Decomposition shows that short contiguous spans of Vision Transformer blocks can be approximated by a low-rank linear operator K with high predictive fidelity for p ≤ 4 steps, but this approximation fails to outperform an identity baseline when propagated to the final layer.
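To make the abstract's setup concrete, the following is a minimal sketch of exact (rank-truncated) DMD: fit a reduced linear operator K from pairs of consecutive states, then propagate an initial state p steps forward. This is a generic DMD illustration on synthetic data, not the paper's actual ViT pipeline; the function names `fit_dmd` and `predict` and all dimensions are assumptions for the example.

```python
import numpy as np

def fit_dmd(X, Y, rank):
    """Fit a rank-truncated DMD operator so that Y ≈ K @ X.

    X, Y: (D, M) snapshot matrices whose columns are consecutive states
    (here, stand-ins for representations at successive depths).
    Returns the POD basis Ur (D, rank) and the reduced operator
    K_tilde (rank, rank), so that K ≈ Ur @ K_tilde @ Ur.T.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :rank], s[:rank], Vt[:rank].T
    # K_tilde = Ur^T Y Vr Sigma^{-1}; the division scales column j by 1/sr[j]
    K_tilde = (Ur.T @ Y @ Vr) / sr
    return Ur, K_tilde

def predict(Ur, K_tilde, x0, p):
    """Propagate state x0 forward p steps in the reduced subspace."""
    z = Ur.T @ x0
    for _ in range(p):
        z = K_tilde @ z
    return Ur @ z

# Synthetic check: states evolve linearly inside an r-dimensional subspace,
# so rank-r DMD should recover the p-step-ahead state almost exactly.
rng = np.random.default_rng(0)
D, r, M, p = 16, 4, 50, 4
Q, _ = np.linalg.qr(rng.standard_normal((D, r)))   # orthonormal subspace basis
L = rng.standard_normal((r, r))
L *= 0.9 / np.max(np.abs(np.linalg.eigvals(L)))    # stable latent dynamics
Z = np.empty((r, M + p + 1))
Z[:, 0] = rng.standard_normal(r)
for k in range(M + p):
    Z[:, k + 1] = L @ Z[:, k]
states = Q @ Z                                     # (D, M+p+1) trajectory

Ur, K_tilde = fit_dmd(states[:, :M], states[:, 1:M + 1], rank=r)
pred = predict(Ur, K_tilde, states[:, 0], p)
rel_err = np.linalg.norm(pred - states[:, p]) / np.linalg.norm(states[:, p])
print(f"relative {p}-step error: {rel_err:.2e}")
```

The identity-baseline comparison the abstract mentions would amount to measuring `np.linalg.norm(states[:, 0] - states[:, p])` against the DMD prediction error; on genuinely nonlinear block dynamics (unlike this synthetic linear case), the low-rank K can lose that comparison over longer horizons.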