[C76] TruncQuant: Truncation-Ready Quantization for DNNs with Flexible Weight Bit Precision

Abstract

Deploying deep neural networks on edge devices is challenging due to the increasing complexity of state-of-the-art models, requiring efforts to reduce model size and inference latency. Recent studies explore models operating at diverse quantization settings to find the optimal point that balances computational efficiency and accuracy. Truncation, an effective approach for achieving lower bit precision mapping, enables a single model to adapt to various hardware platforms at little to no cost. However, formulating a training scheme that allows deep neural networks to withstand the errors introduced by truncation remains a challenge, as current quantization-aware training schemes are not designed for the truncation process. We propose TruncQuant, a novel truncation-ready training scheme that allows flexible bit precision through bit-shifting at runtime. We achieve this by aligning TruncQuant with the output of the truncation process, demonstrating strong robustness across bit-width settings and offering an easily implementable training scheme within existing quantization-aware frameworks.
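To make the truncation mismatch concrete, below is a minimal Python sketch (not the paper's method): it uses an assumed uniform symmetric quantizer and a hypothetical `truncate` helper to show that dropping LSBs via an arithmetic right shift generally produces different low-bit codes than re-quantizing by round-to-nearest, which is the gap standard quantization-aware training does not account for.

```python
import numpy as np

def quantize(w, bits):
    # Illustrative uniform symmetric quantizer to signed integers;
    # the paper's actual quantizer may differ.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32), scale

def truncate(q, from_bits, to_bits):
    # Lower-precision mapping by dropping LSBs with an arithmetic right
    # shift: the near-zero-cost runtime operation the abstract refers to.
    return q >> (from_bits - to_bits)

w = np.array([0.42, -0.17, 0.91, -0.63])
q8, s8 = quantize(w, 8)          # 8-bit master weights
q4_trunc = truncate(q8, 8, 4)    # 4-bit codes obtained by bit-shifting
q4_round, _ = quantize(w, 4)     # 4-bit codes from direct rounding
print(q4_trunc)                  # e.g. [ 3 -2  7 -6]
print(q4_round)                  # e.g. [ 3 -1  7 -5]
```

The two 4-bit mappings disagree because truncation floors toward lower LSBs rather than rounding to the nearest level; a truncation-ready scheme like TruncQuant trains against the shifted values so the model tolerates exactly this discrepancy.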

Publication
IEEE/ACM International Symposium on Low Power Electronics and Design 2025
Jin Hee Kim (김진희)
PhD student at Duke University
Joo Chan Lee (이주찬)
Combined MS-PhD student
Kang Eun Jeon (전강은)
Postdoctoral researcher