
How Many Bits Does it Take to Quantize Your Neural Network?

EasyChair Preprint no. 1000, version 1

9 pages · May 25, 2019


Quantization converts neural networks from traditional floating-point to low-bit fixed-point computations, which can be carried out by efficient integer-only hardware. Quantization is standard practice for the deployment of neural networks on real-time embedded devices, as quantized networks use little memory and yield low latency when classifying inputs. However, like their real-numbered counterparts, quantized networks are not immune to malicious misclassifications caused by adversarial attacks. Our experiments show that analyzing a real-numbered network often leads to false conclusions about its quantizations, both when determining robustness and when detecting attacks. For this reason, we investigate how quantization affects a network's robustness to adversarial attacks, which is a formal verification question. We introduce a verification method for quantized neural networks which, using SMT solving over bit-vectors, accounts for their exact, bit-precise semantics. We built a tool and investigated the effect of quantization in multiple case studies. First, we analyzed a classifier for the MNIST dataset, determining how many bits are necessary for its quantization to be robust to attacks. Second, we examined a neural controller for the stabilization of an inverted pendulum, measuring the behavioral differences between quantizations of different precisions. Finally, we verified the gender bias of a student performance predictor, assessing how many bits are necessary in quantization for its safety properties to be satisfied.
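To make the first sentence of the abstract concrete, the following is a minimal sketch of uniform signed fixed-point quantization of the kind integer-only hardware executes. The function names, the saturating rounding scheme, and the choice of fractional bits are illustrative assumptions for this sketch, not the paper's exact quantization scheme:

```python
def quantize(x, bits, frac_bits):
    """Map a real value to a signed fixed-point integer with `bits` total
    bits, of which `frac_bits` are fractional; saturate on overflow."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = round(x * (1 << frac_bits))  # round to nearest representable step
    return max(lo, min(hi, q))

def dequantize(q, frac_bits):
    """Recover the real value represented by the fixed-point integer."""
    return q / (1 << frac_bits)

# Fewer bits mean a coarser grid and larger rounding error, which is why a
# property proved of the real-numbered network need not hold of its
# quantization. Example weight value (arbitrary):
w = 0.7221
for bits in (4, 6, 8):
    q = quantize(w, bits, frac_bits=bits - 2)
    print(bits, q, dequantize(q, frac_bits=bits - 2))
```

The accumulated per-weight rounding errors are exactly what a bit-precise analysis over bit-vectors captures and a real-valued analysis does not.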

Keyphrases: adversarial attacks, bit-vectors, quantized neural networks, SMT solving

BibTeX entry
BibTeX has no entry type for preprints; the following workaround produces a correct reference:

@booklet{EasyChair:1000,
  author = {Mirco Giacobbe and Thomas A. Henzinger and Mathias Lechner},
  title = {How Many Bits Does it Take to Quantize Your Neural Network?},
  howpublished = {EasyChair Preprint no. 1000},
  year = {EasyChair, 2019}}