Larq is an ecosystem of open-source Python packages for building, training and deploying Binarized Neural Networks to enable efficient inference on mobile and edge devices.
Most neural networks use 32, 16 or 8 bits to encode each weight and activation, making them slow and power-hungry. Binarized Neural Networks (BNNs) restrict weights and activations to only +1 or -1, drastically reducing the model's memory footprint and computational complexity.
Larq lets engineers and researchers access state-of-the-art BNNs, train their own from scratch, and deploy them on mobile and edge devices.
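The ±1 restriction at the heart of BNNs can be illustrated with a minimal sketch. This is plain NumPy for illustration only, not Larq's API: it binarizes a float32 weight matrix with the sign function, which is why each weight then needs just 1 bit instead of 32.

```python
import numpy as np

# Illustrative sketch (not Larq's API): binarize a float32 weight
# matrix to +1/-1, the core idea behind BNNs.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# Map every weight to +1 or -1 (treating 0 as +1).
binary_weights = np.where(weights >= 0, 1.0, -1.0).astype(np.float32)

# Each binary weight can be stored in 1 bit instead of 32,
# and multiply-accumulates reduce to XNOR and popcount operations.
print(np.unique(binary_weights))
```

The same shape is preserved; only the precision of each entry changes.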
Larq Zoo provides implementations and pretrained weights for cutting-edge BNNs, allowing you to effortlessly start using efficient deep learning in your projects.
Larq is a powerful yet easy-to-use library for building and training BNNs that is fully compatible with the larger tf.keras ecosystem.
Larq Compute Engine is a highly optimized inference library for deploying BNNs on mobile and edge devices.