larq.activations

Activations can either be used through an Activation layer, or through the activation argument supported by all forward layers:

import tensorflow as tf
import larq as lq

model = tf.keras.models.Sequential()
model.add(lq.layers.QuantDense(64))
model.add(tf.keras.layers.Activation('hard_tanh'))

This is equivalent to:

model.add(lq.layers.QuantDense(64, activation='hard_tanh'))

You can also pass an element-wise TensorFlow function as an activation:

model.add(lq.layers.QuantDense(64, activation=lq.activations.hard_tanh))

hard_tanh

hard_tanh(x)
Hard tanh activation function.

Arguments

  • x: Input tensor.

Returns

Hard tanh activation.
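Hard tanh is the identity inside [-1, 1] and saturates to -1 or 1 outside that interval. A minimal NumPy sketch of this piecewise definition (an illustration, not the larq implementation itself):

```python
import numpy as np

def hard_tanh(x):
    # Identity inside [-1, 1]; saturates to -1 / 1 outside.
    return np.clip(x, -1.0, 1.0)

print(hard_tanh(np.array([-2.5, -0.3, 0.0, 0.7, 3.0])))
# Values outside [-1, 1] are clipped; the rest pass through unchanged.
```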

leaky_tanh

leaky_tanh(x, alpha=0.2)
Leaky tanh activation function. Like hard tanh, but with a non-zero slope outside [-1, 1], analogous to leaky ReLU.

Arguments

  • x: Input tensor.
  • alpha: Slope of the activation function outside of [-1, 1].

Returns

Leaky tanh activation.
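Leaky tanh behaves like hard tanh inside [-1, 1] but continues with slope alpha outside the interval, so the function remains continuous at the kinks. A hedged NumPy sketch of this piecewise definition (for illustration; the larq implementation may differ in detail):

```python
import numpy as np

def leaky_tanh(x, alpha=0.2):
    clipped = np.clip(x, -1.0, 1.0)
    # Identity inside [-1, 1]; outside, add alpha times the overflow
    # beyond the clipping point, giving a line of slope alpha.
    return clipped + alpha * (x - clipped)

# e.g. leaky_tanh(2.0) = 1 + 0.2 * (2 - 1) = 1.2
print(leaky_tanh(np.array([-3.0, -0.5, 0.0, 2.0])))
```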