LLM.int8() performs 8-bit inference for transformers up to 175B parameters with no accuracy loss by combining vector-wise quantization for most features with 16-bit mixed-precision handling of systematic outlier dimensions.
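The summary above describes a two-track matmul: most feature dimensions go through vector-wise int8 quantization (per-row scales for activations, per-column scales for weights), while the few outlier dimensions are computed in higher precision. A minimal NumPy sketch of that idea, assuming the paper's default outlier threshold of 6.0 (function and variable names here are illustrative, not the bitsandbytes API):

```python
import numpy as np

def int8_matmul_mixed(X, W, threshold=6.0):
    """Sketch of an LLM.int8()-style matmul: vector-wise int8 quantization
    for regular feature dimensions plus full-precision handling of outlier
    dimensions (threshold 6.0 is the paper's default; this sketch uses
    float32 where a real kernel would use fp16)."""
    # 1. Outlier dimensions: feature columns of X holding any |value| >= threshold.
    outlier_cols = np.unique(np.where(np.abs(X) >= threshold)[1])
    regular_cols = np.setdiff1d(np.arange(X.shape[1]), outlier_cols)

    out = np.zeros((X.shape[0], W.shape[1]), dtype=np.float32)

    # 2. Outlier part: plain higher-precision matmul on the few outlier columns.
    if outlier_cols.size:
        out += X[:, outlier_cols] @ W[outlier_cols, :]

    # 3. Regular part: vector-wise int8 quantization.
    if regular_cols.size:
        Xr, Wr = X[:, regular_cols], W[regular_cols, :]
        sx = np.abs(Xr).max(axis=1, keepdims=True) / 127.0  # per-row scale for X
        sw = np.abs(Wr).max(axis=0, keepdims=True) / 127.0  # per-column scale for W
        Xq = np.clip(np.round(Xr / np.maximum(sx, 1e-8)), -127, 127).astype(np.int8)
        Wq = np.clip(np.round(Wr / np.maximum(sw, 1e-8)), -127, 127).astype(np.int8)
        # Accumulate in int32, then dequantize with the outer product of scales.
        out += (Xq.astype(np.int32) @ Wq.astype(np.int32)) * (sx * sw)

    return out
```

The design point the paper makes is visible here: without step 2, the single outlier value would inflate the per-row scale `sx`, crushing all regular values in that row toward zero after rounding; routing outlier columns around the int8 path keeps the quantization grid fine for the bulk of the features.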
citation dossier
Advances in Neural Information Processing Systems (NeurIPS 2022)
1 Pith paper citing it
1 reference link
cs.LG · top field · 1 paper
CONDITIONAL · top verdict bucket · 1 paper
why this work matters in Pith
Pith has found this work in 1 reviewed paper. Its strongest current cluster is cs.LG (1 paper). The largest review-status bucket among citing papers is CONDITIONAL (1 paper). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.
fields
cs.LG · 1

years
2022 · 1

verdicts
CONDITIONAL · 1

representative citing papers
citing papers explorer
- LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale