Introduction to BNNs with Larq

This tutorial demonstrates how to train a simple binarized Convolutional Neural Network (CNN) to classify MNIST digits. This simple network reaches around 97.5% accuracy on the MNIST test set. The tutorial uses Larq and the Keras Sequential API, so creating and training the model takes only a few lines of code.

import tensorflow as tf
import larq as lq

Download and prepare the MNIST dataset

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))

# Normalize pixel values to be between -1 and 1
train_images, test_images = train_images / 127.5 - 1, test_images / 127.5 - 1
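As a quick sanity check, the rescaling x / 127.5 - 1 maps the original pixel range [0, 255] onto [-1, 1], which matches the output range of the binarization used below (the 127.5 midpoint is shown for illustration; actual pixels are integers):

```python
# x / 127.5 - 1 maps 0 -> -1, 127.5 -> 0, 255 -> 1
for pixel, expected in [(0, -1.0), (127.5, 0.0), (255, 1.0)]:
    assert pixel / 127.5 - 1 == expected
print("pixel range [0, 255] rescaled to [-1, 1]")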

Create the model

The following will create a simple binarized CNN.

The quantization function $$ q(x) = \begin{cases} -1 & x < 0 \\ 1 & x \geq 0 \end{cases} $$ is used in the forward pass to binarize the activations and the latent full-precision weights. The gradient of this function is zero almost everywhere, which prevents the model from learning.

To make the model trainable, the gradient is instead estimated using the Straight-Through Estimator (STE); on the backward pass the binarization is essentially replaced by a clipped identity: $$ \frac{\partial q(x)}{\partial x} = \begin{cases} 1 & \left|x\right| \leq 1 \\ 0 & \left|x\right| > 1 \end{cases} $$
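To make the forward/backward asymmetry concrete, here is an illustrative NumPy sketch of the sign binarization and its STE gradient (this is not Larq's internal implementation, which uses TensorFlow ops):

```python
import numpy as np

def binarize_forward(x):
    # q(x): -1 for x < 0, +1 for x >= 0
    return np.where(x < 0, -1.0, 1.0)

def binarize_backward(x, upstream_grad):
    # STE: pass the gradient through where |x| <= 1, zero it elsewhere
    return upstream_grad * (np.abs(x) <= 1).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(binarize_forward(x))                    # [-1. -1.  1.  1.  1.]
print(binarize_backward(x, np.ones_like(x)))  # [0. 1. 1. 1. 0.]
```

Note that the gradient is blocked exactly where the latent weight has saturated beyond the clipping range, which is why the weights are also clipped to [-1, 1] during training.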

In Larq this is done by setting input_quantizer="ste_sign" and kernel_quantizer="ste_sign". Additionally, the latent full-precision weights are clipped to the interval [-1, 1] using kernel_constraint="weight_clip".

# All quantized layers except the first will use the same options
kwargs = dict(input_quantizer="ste_sign",
              kernel_quantizer="ste_sign",
              kernel_constraint="weight_clip")

model = tf.keras.models.Sequential()

# In the first layer we only quantize the weights and not the input
model.add(lq.layers.QuantConv2D(32, (3, 3),
                                kernel_quantizer="ste_sign",
                                kernel_constraint="weight_clip",
                                use_bias=False,
                                input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization(scale=False))

model.add(lq.layers.QuantConv2D(64, (3, 3), use_bias=False, **kwargs))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
model.add(tf.keras.layers.BatchNormalization(scale=False))

model.add(lq.layers.QuantConv2D(64, (3, 3), use_bias=False, **kwargs))
model.add(tf.keras.layers.BatchNormalization(scale=False))
model.add(tf.keras.layers.Flatten())

model.add(lq.layers.QuantDense(64, use_bias=False, **kwargs))
model.add(tf.keras.layers.BatchNormalization(scale=False))
model.add(lq.layers.QuantDense(10, use_bias=False, **kwargs))
model.add(tf.keras.layers.BatchNormalization(scale=False))
model.add(tf.keras.layers.Activation("softmax"))

Almost all parameters in the network are binarized, i.e. either -1 or 1. This makes the network extremely fast when deployed on custom BNN hardware.

Here is the complete architecture of our model:

lq.models.summary(model)
+sequential_1 stats---------------------------------------------------------------+
| Layer                  Input prec.           Outputs  # 1-bit  # 32-bit  Memory |
|                              (bit)                                         (kB) |
+---------------------------------------------------------------------------------+
| quant_conv2d_3                   -  (-1, 26, 26, 32)      288         0    0.04 |
| max_pooling2d_2                  -  (-1, 13, 13, 32)        0         0    0.00 |
| batch_normalization_5            -  (-1, 13, 13, 32)        0        96    0.38 |
| quant_conv2d_4                   1  (-1, 11, 11, 64)    18432         0    2.25 |
| max_pooling2d_3                  -    (-1, 5, 5, 64)        0         0    0.00 |
| batch_normalization_6            -    (-1, 5, 5, 64)        0       192    0.75 |
| quant_conv2d_5                   1    (-1, 3, 3, 64)    36864         0    4.50 |
| batch_normalization_7            -    (-1, 3, 3, 64)        0       192    0.75 |
| flatten_1                        -         (-1, 576)        0         0    0.00 |
| quant_dense_2                    1          (-1, 64)    36864         0    4.50 |
| batch_normalization_8            -          (-1, 64)        0       192    0.75 |
| quant_dense_3                    1          (-1, 10)      640         0    0.08 |
| batch_normalization_9            -          (-1, 10)        0        30    0.12 |
| activation_1                     -          (-1, 10)        0         0    0.00 |
+---------------------------------------------------------------------------------+
| Total                                                   93088       702   14.11 |
+---------------------------------------------------------------------------------+
+sequential_1 summary------------+
| Total params           93790   |
| Trainable params       93322   |
| Non-trainable params   468     |
| Float-32 Equivalent    0.36 MB |
| Compression of Memory  25.97   |
+--------------------------------+
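The memory figures in the summary can be reproduced by hand: each binarized parameter costs 1 bit, each float parameter 32 bits, and the float-32 baseline stores all 93,790 parameters at 32 bits. A back-of-the-envelope check using the counts from the tables above:

```python
# Parameter counts taken from the lq.models.summary output above
n_1bit, n_32bit = 93088, 702

binary_kib = (n_1bit * 1 + n_32bit * 32) / 8 / 1024    # binarized model memory
float_mib = (n_1bit + n_32bit) * 32 / 8 / 1024 / 1024  # float-32 equivalent
compression = float_mib * 1024 / binary_kib            # memory compression ratio

print(f"{binary_kib:.2f} kB, {float_mib:.2f} MB, {compression:.2f}x")
# -> 14.11 kB, 0.36 MB, 25.97x
```

These match the 14.11 kB, 0.36 MB, and 25.97 figures reported by lq.models.summary.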

Compile and train the model

Note: This may take a few minutes depending on your system.

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, train_labels, batch_size=64, epochs=6)

test_loss, test_acc = model.evaluate(test_images, test_labels)
Epoch 1/6
60000/60000 [==============================] - 72s 1ms/sample - loss: 0.6494 - acc: 0.9070
Epoch 2/6
60000/60000 [==============================] - 67s 1ms/sample - loss: 0.4760 - acc: 0.9606
Epoch 3/6
60000/60000 [==============================] - 67s 1ms/sample - loss: 0.4480 - acc: 0.9691
Epoch 4/6
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.4365 - acc: 0.9718
Epoch 5/6
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.4329 - acc: 0.9739
Epoch 6/6
60000/60000 [==============================] - 68s 1ms/sample - loss: 0.4287 - acc: 0.9758
10000/10000 [==============================] - 6s 576us/sample - loss: 0.4283 - acc: 0.9751

Evaluate the model

print(f"Test accuracy {test_acc * 100:.2f} %")
Test accuracy 97.51 %

As you can see, our simple binarized CNN has achieved a test accuracy of over 97.5 %. Not bad for a few lines of code!
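With the trained model, predicting individual digits works as with any Keras model: the predicted class is the argmax of the softmax output. A minimal sketch (the model.predict call assumes the trained model and test data from above; the probability vector shown here is illustrative, not real model output):

```python
import numpy as np

# probs = model.predict(test_images[:1])  # shape (1, 10), requires the trained model
probs = np.array([[0.01, 0.01, 0.90, 0.01, 0.01,
                   0.01, 0.01, 0.01, 0.02, 0.01]])  # illustrative softmax output
predicted_digit = int(np.argmax(probs, axis=1)[0])
print(predicted_digit)  # -> 2
```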