This lab shows how you can define custom layers with the Lambda layer. You can either pass a lambda function directly to the Lambda layer, or define a separate function that the Lambda layer will call. Let's get started!
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass

import tensorflow as tf
from tensorflow.keras import backend as K
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
Here, we’ll use a Lambda layer to define a custom layer in our network. We’re using a lambda function to get the absolute value of the layer input.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128),
    tf.keras.layers.Lambda(lambda x: tf.abs(x)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 5s 83us/sample - loss: 0.2233 - accuracy: 0.9369
Epoch 2/5
60000/60000 [==============================] - 5s 78us/sample - loss: 0.0942 - accuracy: 0.9726
Epoch 3/5
60000/60000 [==============================] - 5s 77us/sample - loss: 0.0648 - accuracy: 0.9801
Epoch 4/5
60000/60000 [==============================] - 5s 77us/sample - loss: 0.0497 - accuracy: 0.9844
Epoch 5/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.0396 - accuracy: 0.9874
10000/10000 [==============================] - 0s 39us/sample - loss: 0.0818 - accuracy: 0.9745
[0.08177149572630879, 0.9745]
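To see what the Lambda layer in the model above does in isolation, here is a minimal sketch that applies the same absolute-value lambda to a small tensor outside of any model:

```python
import tensorflow as tf

# A Lambda layer behaves like any other Keras layer: calling it applies
# the wrapped function element-wise to the input tensor.
abs_layer = tf.keras.layers.Lambda(lambda x: tf.abs(x))

out = abs_layer(tf.constant([[-1.0, 2.0, -3.0]]))
print(out.numpy())  # [[1. 2. 3.]]
```

Negative activations coming out of the Dense layer are flipped to positive values before reaching the softmax classifier, which is why the absolute value can stand in for a nonlinearity here.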
Another way to use the Lambda layer is to pass in a function defined outside the model. The code below defines a custom ReLU-like activation and uses it as a layer in the model. Note that K.maximum(-0.1, x) clips values at -0.1 rather than at 0, so this is a slight variant of the standard ReLU.

def my_relu(x):
    return K.maximum(-0.1, x)
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128),
    tf.keras.layers.Lambda(my_relu),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 5s 79us/sample - loss: 0.2600 - accuracy: 0.9246
Epoch 2/5
60000/60000 [==============================] - 5s 78us/sample - loss: 0.1164 - accuracy: 0.9650
Epoch 3/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.0788 - accuracy: 0.9768
Epoch 4/5
60000/60000 [==============================] - 5s 76us/sample - loss: 0.0586 - accuracy: 0.9821
Epoch 5/5
60000/60000 [==============================] - 5s 77us/sample - loss: 0.0443 - accuracy: 0.9865
10000/10000 [==============================] - 0s 29us/sample - loss: 0.0773 - accuracy: 0.9777
[0.07731671732312534, 0.9777]
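As a quick sanity check of the custom activation, here is a minimal sketch that applies my_relu to a few sample values outside of any model:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def my_relu(x):
    # Clips at -0.1 instead of 0, so small negative values pass through
    # and anything below -0.1 is floored at -0.1.
    return K.maximum(-0.1, x)

out = my_relu(tf.constant([-1.0, -0.05, 0.5]))
print(out.numpy())  # -1.0 is clipped to -0.1; -0.05 and 0.5 pass through
```

Wrapping this function in tf.keras.layers.Lambda(my_relu) gives the same layer used in the model above, while keeping the activation logic testable on its own.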