A Study of BFLOAT16 for Deep Learning Training
This paper presents the first comprehensive empirical study demonstrating the efficacy of the Brain Floating Point (BFLOAT16) half-precision format for Deep Learning training across image classification, speech recognition, language modeling, generative networks, and industrial recommendation systems. BFLOAT16 is attractive for Deep Learning training for two reasons: the range of values it can represent is the same as that of the IEEE 754 single-precision floating-point format (FP32), and conversion to/from FP32 is simple. Maintaining the same range as FP32 is important to ensure that no hyper-parameter tuning is required for convergence; e.g., IEEE 754 compliant half-precision floating point (FP16) requires hyper-parameter tuning. In this paper, we discuss the flow of tensors and various key operations in mixed-precision training, and delve into details of operations such as the rounding modes for converting FP32 tensors to BFLOAT16. We have implemented a method to emulate BFLOAT16 operations in TensorFlow, Caffe2, IntelCaffe, and Neon for our experiments. Our results show that deep learning training using BFLOAT16 tensors achieves the same state-of-the-art (SOTA) results across domains as FP32 tensors in the same number of iterations and with no changes to hyper-parameters.
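The two properties the abstract highlights follow directly from the bit layout: BFLOAT16 keeps FP32's 8 exponent bits (hence the same range) and simply drops the low 16 mantissa bits (hence simple conversion). The sketch below, which is an illustration and not the paper's emulation code, shows plain truncation versus round-to-nearest-even, one of the rounding modes the paper discusses for FP32-to-BFLOAT16 conversion; widening back to FP32 is exact.

```python
import struct

def fp32_bits(x: float) -> int:
    """IEEE 754 single-precision bit pattern of x."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def fp32_to_bf16_truncate(x: float) -> int:
    """Truncation: drop the low 16 mantissa bits of the FP32 encoding."""
    return fp32_bits(x) >> 16

def fp32_to_bf16_rne(x: float) -> int:
    """Round-to-nearest-even: add a rounding bias before truncating.
    The bias is 0x7FFF plus the LSB of the surviving mantissa, so exact
    ties round toward the even (LSB = 0) BFLOAT16 value."""
    bits = fp32_bits(x)
    bias = 0x7FFF + ((bits >> 16) & 1)
    return (bits + bias) >> 16

def bf16_to_fp32(b: int) -> float:
    """Widening BFLOAT16 -> FP32 is exact: append 16 zero bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]
```

For example, 1 + 1/256 sits exactly halfway between two BFLOAT16 values: truncation and round-to-nearest-even both give 0x3F80 (the even neighbor, 1.0), while 1 + 3/256, also a tie, rounds up to the even neighbor 0x3F82.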
Forward citations
Cited by 10 Pith papers
- From Characterization to Microarchitecture: Designing an Elegant and Reliable BFP-Based NPU
  A BFP NPU microarchitecture using row/column blocking and per-path protections achieves near-DMR reliability at 3.55% geometric mean performance overhead and under 2% hardware cost.
- ENEC: A Lossless AI Model Compression Method Enabling Fast Inference on Ascend NPUs
  ENEC delivers 3.43X higher throughput than DietGPU and 1.12X better compression ratio than nvCOMP for lossless model weight compression on Ascend NPUs, yielding up to 6.3X end-to-end inference speedup.
- ShardTensor: Domain Parallelism for Scientific Machine Learning
  ShardTensor is a domain-parallelism system for SciML that enables flexible scaling of extreme-resolution spatial datasets by removing the constraint of batch size one per device.
- LLM-PRISM: Characterizing Silent Data Corruption from Permanent GPU Faults in LLM Training
  LLMs resist low-frequency permanent GPU faults, but certain datapaths and precision formats trigger catastrophic training divergence even at moderate fault rates.
- Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training
  ScaleGNN uses communication-free sampling and 4D parallelism to scale mini-batch GNN training to 2048 GPUs, achieving 3.5x speedup over prior state-of-the-art on ogbn-products.
- M$^2$RNN: Non-Linear RNNs with Matrix-Valued States for Scalable Language Modeling
  M²RNN achieves perfect state tracking at unseen lengths and outperforms Gated DeltaNet hybrids by 0.4-0.5 perplexity on 7B models with 3x smaller recurrent states.
- FP8 Formats for Deep Learning
  FP8 formats E4M3 and E5M2 match 16-bit training accuracy on CNNs, RNNs, and Transformers up to 175B parameters without hyperparameter changes.
- Reduced-Precision Stochastic Simulation for Mathematical Biology
  Mixed-precision SSA with stochastic rounding preserves ensemble statistics across five biological models while cutting memory use by 2-4x and delivering up to 1.5x CPU speedup.
- ZipCCL: Efficient Lossless Data Compression of Communication Collectives for Accelerating LLM Training
  ZipCCL delivers up to 1.35x faster communication and 1.18x end-to-end speedup in LLM training through lossless compression of near-Gaussian collectives on 64-GPU clusters.
- STEM: Structure-Tracing Evidence Mining for Knowledge Graphs-Driven Retrieval-Augmented Generation
  STEM is a framework for multi-hop KGQA that projects queries to adaptive schema graphs and uses Triple-GNN guidance to retrieve more accurate and complete evidence subgraphs, claiming SOTA results.