DLFloat: A 16-b Floating Point Format Designed for Deep Learning Training and Inference
Ankur Agrawal, Silvia M. Mueller, et al.
ARITH 2019
The resilience of Deep Learning (DL) training and inference workloads to low-precision computation, coupled with the demand for power- and area-efficient hardware accelerators for these workloads, has led to the emergence of 16-bit floating-point formats as the precision of choice for DL hardware accelerators. This paper describes our optimized 16-bit format, with 6 exponent bits and 9 fraction bits, derived from a study of the range of values encountered in DL applications. We demonstrate that our format preserves the accuracy of DL networks, and we compare its ease of use for DL against IEEE-754 half precision (5 exponent bits and 10 fraction bits) and bfloat16 (8 exponent bits and 7 fraction bits). Further, our format eliminates subnormals and simplifies rounding modes and the handling of corner cases. This streamlines the floating-point unit logic and enables the realization of a compact, power-efficient computation engine.
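As a rough illustration of the 1-sign / 6-exponent / 9-fraction bit split described in the abstract, the Python sketch below decodes such a 16-bit word into a real value. The exponent bias of 31 (2^(6-1) - 1), the all-zero encoding of 0.0, and the function name decode_1_6_9 are illustrative assumptions, not details taken from the paper; NaN, infinity, and rounding behavior are not modeled.

# Minimal sketch, assuming a 1-6-9 bit layout, a bias of 31, and no
# subnormal encodings (every nonzero value has an implicit leading 1).
def decode_1_6_9(word: int) -> float:
    sign = (word >> 15) & 0x1   # 1 sign bit
    exp = (word >> 9) & 0x3F    # 6 exponent bits
    frac = word & 0x1FF         # 9 fraction bits

    if word & 0x7FFF == 0:
        return -0.0 if sign else 0.0   # assumed zero encoding

    significand = 1.0 + frac / 2**9          # implicit leading 1
    value = significand * 2.0 ** (exp - 31)  # assumed exponent bias of 31
    return -value if sign else value

if __name__ == "__main__":
    # 0x3E00: sign=0, exponent field=31, fraction=0 -> 1.0 under the assumed bias
    print(decode_1_6_9(0x3E00))   # 1.0
    print(decode_1_6_9(0xBE00))   # -1.0 (same word with the sign bit set)

Under these assumptions, the 6-bit exponent gives a wider dynamic range than IEEE-754 half precision (5 exponent bits) while keeping two more fraction bits than bfloat16 (7 fraction bits), which is the trade-off the abstract motivates.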