# larq.optimizers

## Bop

```python
Bop(fp_optimizer, threshold=1e-07, gamma=0.01, name='Bop', **kwargs)
```

Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and Binary Weight Networks (BWN).

Bop maintains an exponential moving average of the gradients controlled by `gamma`. If this average exceeds the `threshold`, a weight is flipped. Additionally, Bop accepts a regular optimizer that is applied to the non-binary weights in the network.

The hyperparameter `gamma` is somewhat analogous to the learning rate in SGD methods: a high `gamma` results in rapid convergence but also makes training more noisy.

Note that the default `threshold` is not optimal for all situations. Setting the threshold too high results in little learning, while setting it too low results in overly noisy behaviour.
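To make the flipping rule concrete, here is a minimal NumPy sketch of a single Bop step on one binary weight tensor. This is an illustration, not the larq implementation: `bop_step` is a hypothetical helper, and the sign-agreement check in the flip condition is an assumption based on the Bop paper rather than the text above.

```python
import numpy as np

def bop_step(w, grad, m, gamma=0.01, threshold=1e-07):
    """Hypothetical sketch of one Bop update.

    w: current binary weights in {-1, +1}
    grad: gradient of the loss w.r.t. w
    m: exponential moving average of past gradients
    """
    # Update the gradient average, controlled by `gamma`.
    m = (1 - gamma) * m + gamma * grad
    # Flip a weight when the average exceeds `threshold` and agrees in
    # sign with the weight (sign check assumed from the Bop paper).
    flip = (np.abs(m) > threshold) & (np.sign(m) == np.sign(w))
    w = np.where(flip, -w, w)
    return w, m
```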

Example

```python
import larq as lq
import tensorflow as tf

optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))
```
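The resulting optimizer is then passed to Keras as usual; a minimal usage sketch, assuming `model` is an already-built `tf.keras.Model` containing binarized layers:

```python
model.compile(
    optimizer=optimizer,
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```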

**Arguments**

- `fp_optimizer`: a `tf.keras.optimizers.Optimizer` applied to the non-binary weights.
- `threshold`: determines whether to flip each weight.
- `gamma`: the adaptivity rate.
- `name`: name of the optimizer.

**References**

- [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107) (Helwegen et al., 2019)

## XavierLearningRateScaling

```python
XavierLearningRateScaling(optimizer, model)
```

Scales the learning rate of each weight according to the scale of its Xavier (Glorot) initialization.

This is a wrapper and does not implement any optimization algorithm itself.
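To illustrate the idea only (this is a conceptual sketch, not the larq implementation), a per-layer multiplier can be derived from the layer's fan-in and fan-out; the Glorot-normal coefficient `sqrt(2 / (fan_in + fan_out))` is assumed here for illustration, and larq's exact multiplier may differ:

```python
import numpy as np

# Conceptual sketch: scale a layer's learning rate by its Xavier
# initialization scale. The Glorot-normal coefficient is an assumption
# for illustration purposes.
def xavier_lr_multiplier(fan_in: int, fan_out: int) -> float:
    return float(np.sqrt(2.0 / (fan_in + fan_out)))

# A Dense layer mapping 784 inputs to 256 units would have its
# learning rate scaled by roughly 0.044:
print(xavier_lr_multiplier(784, 256))
```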

Example

```python
import larq as lq
import tensorflow as tf

optimizer = lq.optimizers.XavierLearningRateScaling(
    tf.keras.optimizers.Adam(0.01), model
)
```

**Arguments**

- `optimizer`: a `tf.keras.optimizers.Optimizer`.
- `model`: a `tf.keras.Model`.

**References**

- [BinaryConnect: Training Deep Neural Networks with binary weights during propagations](https://arxiv.org/abs/1511.00363) (Courbariaux et al., 2015)