pith. machine review for the scientific record.

citation dossier

Proceedings of the IEEE/CVF International Conference on Computer Vision

stub below hub threshold · 1 Pith inbound

Differentiable soft quantization: Bridging full-precision and low-bit neural networks

1 Pith paper citing it
1 reference link
cs.LG top field · 1 paper
CONDITIONAL top verdict bucket · 1 paper

This work is currently known to Pith only through the citation graph. Pith is enriching its metadata via Crossref/OpenAlex; full reviews of non-arXiv works require publisher or open-access PDF resolution.

why this work matters in Pith

Pith has found this work cited in 1 reviewed paper. Its strongest current cluster is cs.LG (1 paper), and the largest review-status bucket among citing papers is CONDITIONAL (1 paper). For highly cited works, this page shows a dossier first and a bounded explorer second; it never tries to render every citing paper at once.

fields

cs.LG 1

years

2022 1

verdicts

CONDITIONAL 1

representative citing papers

LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

cs.LG · 2022-08-15 · conditional · novelty 7.0

LLM.int8() performs 8-bit inference for transformers up to 175B parameters with no accuracy loss by combining vector-wise quantization for most features with 16-bit mixed-precision handling of systematic outlier dimensions.
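The decomposition described above can be illustrated with a minimal NumPy sketch. This is a hypothetical helper, not the bitsandbytes implementation: feature dimensions whose magnitude exceeds a threshold are kept in higher precision (fp16 in the paper; full precision here for simplicity), while the remaining dimensions are quantized per-vector with absmax scales and multiplied in int8.

```python
import numpy as np

def mixed_precision_matmul(x, w, outlier_threshold=6.0):
    """Sketch of an LLM.int8()-style decomposition (hypothetical helper).

    Columns of x (feature dimensions) containing any value above the
    threshold are routed through a higher-precision matmul; all other
    dimensions are quantized to int8 with vector-wise absmax scales.
    """
    # Identify systematic outlier feature dimensions.
    outliers = np.any(np.abs(x) > outlier_threshold, axis=0)
    regular = ~outliers

    # Vector-wise absmax quantization: one scale per row of x
    # and one scale per column of w.
    x_r, w_r = x[:, regular], w[regular, :]
    sx = np.abs(x_r).max(axis=1, keepdims=True) / 127.0
    sw = np.abs(w_r).max(axis=0, keepdims=True) / 127.0
    xq = np.round(x_r / sx).astype(np.int8)
    wq = np.round(w_r / sw).astype(np.int8)

    # Integer matmul accumulated in int32, then dequantized by the
    # outer product of the row/column scales.
    y_int8 = (xq.astype(np.int32) @ wq.astype(np.int32)) * (sx * sw)

    # Outlier dimensions are multiplied in higher precision.
    y_fp = x[:, outliers] @ w[outliers, :]
    return y_int8 + y_fp
```

With a threshold of 6.0 (the value the paper associates with emergent outlier features), typical activations stay in int8 while the handful of outlier dimensions avoid destroying the absmax scales for everything else.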

citing papers explorer

Showing 1 of 1 citing paper.

  • LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale cs.LG · 2022-08-15 · conditional · none · ref 100

    LLM.int8() performs 8-bit inference for transformers up to 175B parameters with no accuracy loss by combining vector-wise quantization for most features with 16-bit mixed-precision handling of systematic outlier dimensions.