Cited by 1 Pith paper (polarity classification is still indexing).
Representative citing paper: Super tiny language models. arXiv preprint arXiv:2405.14159.
Fields: cs.CL. Years: 2026. Verdicts: UNVERDICTED.
Stochasticity in Tokenisation Improves Robustness
Stochastic tokenisation during pre-training and fine-tuning improves LLM robustness to perturbations while preserving accuracy.
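The paper's exact method is not shown on this card; as a rough illustration of what "stochastic tokenisation" can mean, here is a minimal sketch in the spirit of BPE-dropout-style subword regularization. The function name `stochastic_tokenise`, the toy vocabulary, and the drop probability `p` are all illustrative assumptions, not the paper's implementation: with probability `p` the tokeniser backs off from the longest vocabulary match to a random shorter one, so the same word yields different segmentations across training steps.

```python
import random

def stochastic_tokenise(word, vocab, p=0.1, rng=None):
    """Illustrative sketch (not the paper's method): greedy longest-match
    segmentation that, with probability p, picks a shorter valid match
    instead, producing varied tokenisations of the same word."""
    rng = rng if rng is not None else random.Random()
    tokens = []
    i = 0
    while i < len(word):
        # All vocabulary entries matching at position i, shortest first;
        # single characters are always allowed as a fallback.
        candidates = [word[i:j] for j in range(i + 1, len(word) + 1)
                      if word[i:j] in vocab or j == i + 1]
        if len(candidates) > 1 and rng.random() < p:
            # Stochastic step: back off to a random shorter match.
            choice = rng.choice(candidates[:-1])
        else:
            # Deterministic step: take the longest match.
            choice = candidates[-1]
        tokens.append(choice)
        i += len(choice)
    return tokens
```

With `p=0.0` this reduces to ordinary greedy longest-match tokenisation; raising `p` during pre-training or fine-tuning exposes the model to many segmentations of each word, which is the kind of variation the robustness claim above refers to.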