import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import feature_column
from os import getcwd
from sklearn.model_selection import train_test_split
Pandas is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to read the dataset from a CSV file into a dataframe.
filePath = f"{getcwd()}/data/heart.csv"
dataframe = pd.read_csv(filePath)
dataframe.head()
|   | age | sex | cp | trestbps | chol | fbs | restecg | thalach | exang | oldpeak | slope | ca | thal | target |
|---|-----|-----|----|----------|------|-----|---------|---------|-------|---------|-------|----|------|--------|
| 0 | 63 | 1 | 1 | 145 | 233 | 1 | 2 | 150 | 0 | 2.3 | 3 | 0 | fixed | 0 |
| 1 | 67 | 1 | 4 | 160 | 286 | 0 | 2 | 108 | 1 | 1.5 | 2 | 3 | normal | 1 |
| 2 | 67 | 1 | 4 | 120 | 229 | 0 | 2 | 129 | 1 | 2.6 | 2 | 2 | reversible | 0 |
| 3 | 37 | 1 | 3 | 130 | 250 | 0 | 0 | 187 | 0 | 3.5 | 3 | 0 | normal | 0 |
| 4 | 41 | 0 | 2 | 130 | 204 | 0 | 2 | 172 | 0 | 1.4 | 1 | 0 | normal | 0 |
The dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
193 train examples
49 validation examples
61 test examples
tf.data
Next, we will wrap the dataframes with tf.data. This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly.
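For reference, here is a minimal sketch of how tf.data can stream a CSV straight from disk (using the filePath defined earlier). We don't need this here, since our dataset fits comfortably in memory.

# Sketch (for reference only): stream the CSV directly from disk with tf.data.
# make_csv_dataset batches the rows and separates out the label column.
raw_ds = tf.data.experimental.make_csv_dataset(
    filePath,
    batch_size=5,
    label_name='target',
    num_epochs=1)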
# EXERCISE: A utility method to create a tf.data dataset from a Pandas Dataframe.
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
    dataframe = dataframe.copy()
    # Use the Pandas dataframe's pop method to extract the column of targets.
    labels = dataframe.pop("target")
    # Create a tf.data.Dataset from the dataframe and labels.
    ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
    if shuffle:
        # Shuffle the dataset.
        ds = ds.shuffle(buffer_size=len(dataframe))
    # Batch the dataset with the specified batch_size.
    ds = ds.batch(batch_size)
    return ds
batch_size = 5 # A small batch size is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
Now that we have created the input pipeline, let’s call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
for feature_batch, label_batch in train_ds.take(1):
    print('Every feature:', list(feature_batch.keys()))
    print('A batch of ages:', feature_batch['age'])
    print('A batch of targets:', label_batch)
Every feature: ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal']
A batch of ages: tf.Tensor([38 56 52 51 62], shape=(5,), dtype=int64)
A batch of targets: tf.Tensor([1 0 0 0 1], shape=(5,), dtype=int64)
We can see that the dataset returns a dictionary mapping column names (from the dataframe) to column values from rows in the dataframe.
TensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
# Get an example batch so we can demonstrate several types of feature columns.
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column and to transform a batch of data.
def demo(feature_column):
    feature_layer = layers.DenseFeatures(feature_column, dtype='float64')
    print(feature_layer(example_batch).numpy())
The output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A numeric column is the simplest type of column and is used to represent real-valued features.
# EXERCISE: Create a numeric feature column out of 'age' and demo it.
age = tf.feature_column.numeric_column("age")
demo(age)
[[58.]
[50.]
[44.]
[53.]
[54.]]
In the heart disease dataset, most columns from the dataframe are numeric.
Often, you don’t want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person’s age. Instead of representing age as a numeric column, we could split the age into several buckets using a bucketized column.
# EXERCISE: Create a bucketized feature column out of 'age' with
# the following boundaries and demo it.
boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
age_buckets = tf.feature_column.bucketized_column(
    source_column=age,
    boundaries=boundaries)
demo(age_buckets)
[[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]
Notice the one-hot values above describe which age range each row matches.
In this dataset, thal is represented as a string (e.g. ‘fixed’, ‘normal’, or ‘reversible’). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets).
Note: You will probably see some warning messages when running some of the code cells below. These warnings have to do with software updates and should not cause any errors or prevent your code from running.
# EXERCISE: Create a categorical vocabulary column out of the
# above mentioned categories with the key specified as 'thal'.
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    key="thal",
    vocabulary_list=["fixed", "normal", "reversible"])
# EXERCISE: Create an indicator column out of the created categorical column.
thal_one_hot = tf.feature_column.indicator_column(thal)
demo(thal_one_hot)
[[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]
[0. 1. 0.]]
The vocabulary can be passed as a list using categorical_column_with_vocabulary_list, or loaded from a file using categorical_column_with_vocabulary_file.
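For illustration, here is a sketch of the file-based variant; 'thal_vocab.txt' is a hypothetical file containing one category per line (fixed, normal, reversible):

# Sketch: load the vocabulary from a file instead of a list.
# 'thal_vocab.txt' is a hypothetical file with one category per line.
thal_from_file = tf.feature_column.categorical_column_with_vocabulary_file(
    key='thal',
    vocabulary_file='thal_vocab.txt',
    vocabulary_size=3)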
Suppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grows large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an embedding column represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. You can tune the size of the embedding with the dimension parameter.
# EXERCISE: Create an embedding column out of the categorical
# vocabulary you just created (thal). Set the size of the
# embedding to 8, by using the dimension parameter.
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
[[-0.49215958 0.46196535 -0.38441586 -0.60815907 0.208964 -0.4181664
0.5831625 -0.46737102]
[ 0.00818959 0.4802443 0.07624934 -0.37232238 -0.02753487 -0.03232663
0.10810377 -0.49013278]
[ 0.00818959 0.4802443 0.07624934 -0.37232238 -0.02753487 -0.03232663
0.10810377 -0.49013278]
[ 0.00818959 0.4802443 0.07624934 -0.37232238 -0.02753487 -0.03232663
0.10810377 -0.49013278]
[ 0.00818959 0.4802443 0.07624934 -0.37232238 -0.02753487 -0.03232663
0.10810377 -0.49013278]]
Another way to represent a categorical column with a large number of values is to use a categorical_column_with_hash_bucket. This feature column calculates a hash value of the input, then selects one of the hash_bucket_size buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
# EXERCISE: Create a hashed feature column with 'thal' as the key and
# 1000 hash buckets.
thal_hashed = tf.feature_column.categorical_column_with_hash_bucket(
    'thal',
    hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
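Note that an important downside of this technique is that there may be collisions, in which different strings are mapped to the same bucket; in practice, hashed columns can still work well for some datasets.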
Combining features into a single feature, better known as feature crosses, enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that crossed_column does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a hashed_column, so you can choose how large the table is.
# EXERCISE: Create a crossed column using the bucketized column (age_buckets),
# the categorical vocabulary column (thal) previously created, and 1000 hash buckets.
crossed_feature = tf.feature_column.crossed_column(
    [age_buckets, thal],
    hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
We have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this exercise is to show you the complete code needed to work with feature columns. We have arbitrarily selected a few columns below to train our model.
If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
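As one example of choosing a representation, numeric_column accepts a normalizer_fn. Below is a minimal sketch (illustrative, not part of the assignment) that standardizes the 'chol' column using statistics from the training split; the names chol_mean, chol_std, and chol_scaled are our own.

# Sketch (illustrative, not part of the assignment): standardize 'chol'
# with a normalizer_fn, using statistics computed from the training split.
chol_mean = train['chol'].mean()
chol_std = train['chol'].std()
chol_scaled = tf.feature_column.numeric_column(
    'chol',
    normalizer_fn=lambda x: (x - chol_mean) / chol_std)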
dataframe.dtypes
age int64
sex int64
cp int64
trestbps int64
chol int64
fbs int64
restecg int64
thalach int64
exang int64
oldpeak float64
slope int64
ca int64
thal object
target int64
dtype: object
You can use the above list of column datatypes to map the appropriate feature column to every column in the dataframe.
# EXERCISE: Fill in the missing code below
feature_columns = []
# Numeric Cols.
# Create a list of numeric columns. Use the following list of columns
# that have a numeric datatype: ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca'].
numeric_columns = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']
for header in numeric_columns:
    # Create a numeric feature column out of the header.
    numeric_feature_column = tf.feature_column.numeric_column(header)
    feature_columns.append(numeric_feature_column)
# Bucketized Cols.
# Create a bucketized feature column out of the age column (numeric column)
# that you've already created. Use the following boundaries:
# [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
age_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# Indicator Cols.
# Create a categorical vocabulary column out of the categories
# ['fixed', 'normal', 'reversible'] with the key specified as 'thal'.
thal = tf.feature_column.categorical_column_with_vocabulary_list(
    key="thal",
    vocabulary_list=["fixed", "normal", "reversible"])
# Create an indicator column out of the created thal categorical column
thal_one_hot = tf.feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# Embedding Cols.
# Create an embedding column out of the categorical vocabulary you
# just created (thal). Set the size of the embedding to 8, by using
# the dimension parameter.
thal_embedding = tf.feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# Crossed Cols.
# Create a crossed column using the bucketized column (age_buckets),
# the categorical vocabulary column (thal) previously created, and 1000 hash buckets.
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
# Create an indicator column out of the crossed column created above to one-hot encode it.
crossed_feature = tf.feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
Now that we have defined our feature columns, we will use a DenseFeatures layer to input them to our Keras model.
# EXERCISE: Create a Keras DenseFeatures layer and pass the feature_columns you just created.
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
Earlier, we used a small batch size to demonstrate how feature columns work. Now we create a new input pipeline with a larger batch size.
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_ds,
          validation_data=val_ds,
          epochs=100)
Epoch 1/100
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor. Received: inputs={'age': <tf.Tensor 'IteratorGetNext:0' shape=(None,) dtype=int64>, 'sex': <tf.Tensor 'IteratorGetNext:8' shape=(None,) dtype=int64>, 'cp': <tf.Tensor 'IteratorGetNext:3' shape=(None,) dtype=int64>, 'trestbps': <tf.Tensor 'IteratorGetNext:12' shape=(None,) dtype=int64>, 'chol': <tf.Tensor 'IteratorGetNext:2' shape=(None,) dtype=int64>, 'fbs': <tf.Tensor 'IteratorGetNext:5' shape=(None,) dtype=int64>, 'restecg': <tf.Tensor 'IteratorGetNext:7' shape=(None,) dtype=int64>, 'thalach': <tf.Tensor 'IteratorGetNext:11' shape=(None,) dtype=int64>, 'exang': <tf.Tensor 'IteratorGetNext:4' shape=(None,) dtype=int64>, 'oldpeak': <tf.Tensor 'IteratorGetNext:6' shape=(None,) dtype=float64>, 'slope': <tf.Tensor 'IteratorGetNext:9' shape=(None,) dtype=int64>, 'ca': <tf.Tensor 'IteratorGetNext:1' shape=(None,) dtype=int64>, 'thal': <tf.Tensor 'IteratorGetNext:10' shape=(None,) dtype=string>}. Consider rewriting this model with the Functional API.
1/7 [===>..........................] - ETA: 7s - loss: 2.3398 - accuracy: 0.7188
7/7 [==============================] - 2s 51ms/step - loss: 1.2491 - accuracy: 0.6321 - val_loss: 1.1033 - val_accuracy: 0.6939
Epoch 2/100
7/7 [==============================] - 0s 7ms/step - loss: 0.6739 - accuracy: 0.6891 - val_loss: 1.1093 - val_accuracy: 0.6939
Epoch 3/100
7/7 [==============================] - 0s 8ms/step - loss: 0.7438 - accuracy: 0.7358 - val_loss: 0.6243 - val_accuracy: 0.5714
Epoch 4/100
7/7 [==============================] - 0s 11ms/step - loss: 0.7406 - accuracy: 0.7202 - val_loss: 0.8554 - val_accuracy: 0.4898
Epoch 5/100
7/7 [==============================] - 0s 11ms/step - loss: 0.8869 - accuracy: 0.6218 - val_loss: 1.4996 - val_accuracy: 0.6939
Epoch 6/100
7/7 [==============================] - 0s 7ms/step - loss: 1.1536 - accuracy: 0.7358 - val_loss: 1.0770 - val_accuracy: 0.4898
Epoch 7/100
7/7 [==============================] - 0s 11ms/step - loss: 1.2174 - accuracy: 0.4922 - val_loss: 1.1844 - val_accuracy: 0.6939
Epoch 8/100
7/7 [==============================] - 0s 11ms/step - loss: 1.0940 - accuracy: 0.7358 - val_loss: 0.5311 - val_accuracy: 0.6531
Epoch 9/100
7/7 [==============================] - 0s 8ms/step - loss: 1.0743 - accuracy: 0.4870 - val_loss: 0.5701 - val_accuracy: 0.7143
Epoch 10/100
7/7 [==============================] - 0s 10ms/step - loss: 0.4635 - accuracy: 0.7824 - val_loss: 0.6246 - val_accuracy: 0.5918
Epoch 11/100
7/7 [==============================] - 0s 10ms/step - loss: 0.6229 - accuracy: 0.6891 - val_loss: 0.7447 - val_accuracy: 0.6939
Epoch 12/100
7/7 [==============================] - 0s 7ms/step - loss: 0.8387 - accuracy: 0.7358 - val_loss: 0.4935 - val_accuracy: 0.7551
Epoch 13/100
7/7 [==============================] - 0s 8ms/step - loss: 1.0397 - accuracy: 0.4352 - val_loss: 0.5438 - val_accuracy: 0.7143
Epoch 14/100
7/7 [==============================] - 0s 13ms/step - loss: 0.8072 - accuracy: 0.7513 - val_loss: 1.0075 - val_accuracy: 0.6939
Epoch 15/100
7/7 [==============================] - 0s 10ms/step - loss: 0.5591 - accuracy: 0.7720 - val_loss: 0.5986 - val_accuracy: 0.6735
Epoch 16/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4474 - accuracy: 0.7979 - val_loss: 0.5564 - val_accuracy: 0.6939
Epoch 17/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4226 - accuracy: 0.7979 - val_loss: 0.5263 - val_accuracy: 0.7143
Epoch 18/100
7/7 [==============================] - 0s 10ms/step - loss: 0.4295 - accuracy: 0.7979 - val_loss: 0.5085 - val_accuracy: 0.7551
Epoch 19/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4268 - accuracy: 0.7824 - val_loss: 0.4881 - val_accuracy: 0.7551
Epoch 20/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3940 - accuracy: 0.8031 - val_loss: 0.4814 - val_accuracy: 0.7551
Epoch 21/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3759 - accuracy: 0.7824 - val_loss: 0.5095 - val_accuracy: 0.7347
Epoch 22/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3910 - accuracy: 0.8031 - val_loss: 0.4912 - val_accuracy: 0.7347
Epoch 23/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3777 - accuracy: 0.8187 - val_loss: 0.4975 - val_accuracy: 0.7347
Epoch 24/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3965 - accuracy: 0.8187 - val_loss: 0.4952 - val_accuracy: 0.7551
Epoch 25/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3749 - accuracy: 0.8083 - val_loss: 0.4859 - val_accuracy: 0.7347
Epoch 26/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3601 - accuracy: 0.7979 - val_loss: 0.4936 - val_accuracy: 0.7347
Epoch 27/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3631 - accuracy: 0.8031 - val_loss: 0.5018 - val_accuracy: 0.7551
Epoch 28/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3588 - accuracy: 0.7979 - val_loss: 0.5173 - val_accuracy: 0.7143
Epoch 29/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4712 - accuracy: 0.7979 - val_loss: 0.5569 - val_accuracy: 0.7143
Epoch 30/100
7/7 [==============================] - 0s 11ms/step - loss: 0.4983 - accuracy: 0.7720 - val_loss: 0.5088 - val_accuracy: 0.7959
Epoch 31/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4345 - accuracy: 0.7824 - val_loss: 0.5476 - val_accuracy: 0.6939
Epoch 32/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4286 - accuracy: 0.7979 - val_loss: 0.4682 - val_accuracy: 0.7551
Epoch 33/100
7/7 [==============================] - 0s 10ms/step - loss: 0.4018 - accuracy: 0.7979 - val_loss: 0.7092 - val_accuracy: 0.6939
Epoch 34/100
7/7 [==============================] - 0s 11ms/step - loss: 0.9867 - accuracy: 0.7358 - val_loss: 0.4593 - val_accuracy: 0.7551
Epoch 35/100
7/7 [==============================] - 0s 7ms/step - loss: 0.8727 - accuracy: 0.5803 - val_loss: 0.4935 - val_accuracy: 0.7143
Epoch 36/100
7/7 [==============================] - 0s 10ms/step - loss: 1.0973 - accuracy: 0.7461 - val_loss: 1.3884 - val_accuracy: 0.6939
Epoch 37/100
7/7 [==============================] - 0s 10ms/step - loss: 0.8404 - accuracy: 0.7668 - val_loss: 0.8680 - val_accuracy: 0.5102
Epoch 38/100
7/7 [==============================] - 0s 7ms/step - loss: 0.8560 - accuracy: 0.5285 - val_loss: 0.6477 - val_accuracy: 0.6939
Epoch 39/100
7/7 [==============================] - 0s 10ms/step - loss: 0.6821 - accuracy: 0.7358 - val_loss: 0.4947 - val_accuracy: 0.7551
Epoch 40/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3962 - accuracy: 0.7979 - val_loss: 0.4898 - val_accuracy: 0.7959
Epoch 41/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3672 - accuracy: 0.8238 - val_loss: 0.4579 - val_accuracy: 0.7755
Epoch 42/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3508 - accuracy: 0.8187 - val_loss: 0.4528 - val_accuracy: 0.7755
Epoch 43/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3520 - accuracy: 0.8290 - val_loss: 0.4650 - val_accuracy: 0.7755
Epoch 44/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3688 - accuracy: 0.8135 - val_loss: 0.4353 - val_accuracy: 0.7551
Epoch 45/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3505 - accuracy: 0.8394 - val_loss: 0.4276 - val_accuracy: 0.7551
Epoch 46/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3326 - accuracy: 0.8342 - val_loss: 0.4366 - val_accuracy: 0.7755
Epoch 47/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3388 - accuracy: 0.8238 - val_loss: 0.4615 - val_accuracy: 0.7755
Epoch 48/100
7/7 [==============================] - 0s 7ms/step - loss: 0.4715 - accuracy: 0.7617 - val_loss: 0.4279 - val_accuracy: 0.7755
Epoch 49/100
7/7 [==============================] - 0s 10ms/step - loss: 0.4040 - accuracy: 0.7876 - val_loss: 0.4166 - val_accuracy: 0.7755
Epoch 50/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3626 - accuracy: 0.8031 - val_loss: 0.4791 - val_accuracy: 0.7959
Epoch 51/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3783 - accuracy: 0.7876 - val_loss: 0.4255 - val_accuracy: 0.7959
Epoch 52/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3334 - accuracy: 0.8394 - val_loss: 0.4073 - val_accuracy: 0.7755
Epoch 53/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3238 - accuracy: 0.8549 - val_loss: 0.5078 - val_accuracy: 0.7755
Epoch 54/100
7/7 [==============================] - 0s 8ms/step - loss: 0.4985 - accuracy: 0.7617 - val_loss: 0.4427 - val_accuracy: 0.7551
Epoch 55/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3338 - accuracy: 0.8342 - val_loss: 0.4481 - val_accuracy: 0.7959
Epoch 56/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3479 - accuracy: 0.8290 - val_loss: 0.4315 - val_accuracy: 0.7551
Epoch 57/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3324 - accuracy: 0.8031 - val_loss: 0.4302 - val_accuracy: 0.7959
Epoch 58/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3962 - accuracy: 0.8135 - val_loss: 0.4966 - val_accuracy: 0.7755
Epoch 59/100
7/7 [==============================] - 0s 11ms/step - loss: 0.7145 - accuracy: 0.7461 - val_loss: 0.5461 - val_accuracy: 0.7755
Epoch 60/100
7/7 [==============================] - 0s 7ms/step - loss: 0.6679 - accuracy: 0.6684 - val_loss: 0.4669 - val_accuracy: 0.7551
Epoch 61/100
7/7 [==============================] - 0s 7ms/step - loss: 0.5107 - accuracy: 0.7461 - val_loss: 0.7235 - val_accuracy: 0.7143
Epoch 62/100
7/7 [==============================] - 0s 10ms/step - loss: 0.4449 - accuracy: 0.7927 - val_loss: 0.4416 - val_accuracy: 0.7755
Epoch 63/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3489 - accuracy: 0.8290 - val_loss: 0.4690 - val_accuracy: 0.7755
Epoch 64/100
7/7 [==============================] - 0s 8ms/step - loss: 0.3523 - accuracy: 0.8083 - val_loss: 0.4118 - val_accuracy: 0.8163
Epoch 65/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3506 - accuracy: 0.8238 - val_loss: 0.4225 - val_accuracy: 0.7755
Epoch 66/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3348 - accuracy: 0.8238 - val_loss: 0.4280 - val_accuracy: 0.7755
Epoch 67/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3274 - accuracy: 0.8290 - val_loss: 0.4205 - val_accuracy: 0.7755
Epoch 68/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3234 - accuracy: 0.8238 - val_loss: 0.4212 - val_accuracy: 0.7755
Epoch 69/100
7/7 [==============================] - 0s 8ms/step - loss: 0.3252 - accuracy: 0.8446 - val_loss: 0.4154 - val_accuracy: 0.7755
Epoch 70/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3341 - accuracy: 0.8238 - val_loss: 0.4993 - val_accuracy: 0.7755
Epoch 71/100
7/7 [==============================] - 0s 10ms/step - loss: 0.4140 - accuracy: 0.7876 - val_loss: 0.4048 - val_accuracy: 0.7959
Epoch 72/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3374 - accuracy: 0.8290 - val_loss: 0.3996 - val_accuracy: 0.7755
Epoch 73/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3257 - accuracy: 0.8290 - val_loss: 0.4092 - val_accuracy: 0.7959
Epoch 74/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3220 - accuracy: 0.8446 - val_loss: 0.4034 - val_accuracy: 0.7959
Epoch 75/100
7/7 [==============================] - 0s 8ms/step - loss: 0.4542 - accuracy: 0.7409 - val_loss: 0.4144 - val_accuracy: 0.8163
Epoch 76/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3264 - accuracy: 0.8497 - val_loss: 0.3965 - val_accuracy: 0.7959
Epoch 77/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3100 - accuracy: 0.8290 - val_loss: 0.4605 - val_accuracy: 0.7755
Epoch 78/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3269 - accuracy: 0.8135 - val_loss: 0.4125 - val_accuracy: 0.7551
Epoch 79/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3312 - accuracy: 0.8549 - val_loss: 0.4237 - val_accuracy: 0.7755
Epoch 80/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3243 - accuracy: 0.8135 - val_loss: 0.4174 - val_accuracy: 0.7959
Epoch 81/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3076 - accuracy: 0.8497 - val_loss: 0.4207 - val_accuracy: 0.7755
Epoch 82/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3824 - accuracy: 0.8342 - val_loss: 0.4021 - val_accuracy: 0.7959
Epoch 83/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3338 - accuracy: 0.8135 - val_loss: 0.4205 - val_accuracy: 0.7959
Epoch 84/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3077 - accuracy: 0.8601 - val_loss: 0.4022 - val_accuracy: 0.8163
Epoch 85/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3395 - accuracy: 0.8083 - val_loss: 0.4377 - val_accuracy: 0.7755
Epoch 86/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3103 - accuracy: 0.8342 - val_loss: 0.4143 - val_accuracy: 0.7959
Epoch 87/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3194 - accuracy: 0.8238 - val_loss: 0.4359 - val_accuracy: 0.7959
Epoch 88/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3195 - accuracy: 0.8394 - val_loss: 0.4132 - val_accuracy: 0.7755
Epoch 89/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3239 - accuracy: 0.8446 - val_loss: 0.4193 - val_accuracy: 0.7755
Epoch 90/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3382 - accuracy: 0.8290 - val_loss: 0.4134 - val_accuracy: 0.7755
Epoch 91/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3232 - accuracy: 0.8549 - val_loss: 0.4281 - val_accuracy: 0.7755
Epoch 92/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3205 - accuracy: 0.8187 - val_loss: 0.4210 - val_accuracy: 0.7755
Epoch 93/100
7/7 [==============================] - 0s 11ms/step - loss: 0.2980 - accuracy: 0.8497 - val_loss: 0.4091 - val_accuracy: 0.8367
Epoch 94/100
7/7 [==============================] - 0s 9ms/step - loss: 0.2928 - accuracy: 0.8342 - val_loss: 0.4381 - val_accuracy: 0.7755
Epoch 95/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3928 - accuracy: 0.7927 - val_loss: 0.3943 - val_accuracy: 0.8163
Epoch 96/100
7/7 [==============================] - 0s 12ms/step - loss: 0.3183 - accuracy: 0.8756 - val_loss: 0.4019 - val_accuracy: 0.8367
Epoch 97/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3008 - accuracy: 0.8290 - val_loss: 0.4378 - val_accuracy: 0.7959
Epoch 98/100
7/7 [==============================] - 0s 7ms/step - loss: 0.3431 - accuracy: 0.8238 - val_loss: 0.4211 - val_accuracy: 0.7755
Epoch 99/100
7/7 [==============================] - 0s 11ms/step - loss: 0.3383 - accuracy: 0.8031 - val_loss: 0.4232 - val_accuracy: 0.8163
Epoch 100/100
7/7 [==============================] - 0s 10ms/step - loss: 0.3509 - accuracy: 0.8290 - val_loss: 0.4145 - val_accuracy: 0.7959
<keras.callbacks.History at 0x7fb02828eac0>
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
2/2 [==============================] - 0s 5ms/step - loss: 0.4076 - accuracy: 0.8361
Accuracy 0.8360655903816223
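If you want to inspect individual predictions, model.predict returns the sigmoid outputs as probabilities. A minimal sketch (the 0.5 threshold is an illustrative choice, not part of the assignment):

# Sketch: predicted probabilities on the test set; threshold at 0.5
# (an illustrative choice) to obtain class labels.
probs = model.predict(test_ds)
preds = (probs > 0.5).astype(int)
print(preds[:5].flatten())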
# Now click the 'Submit Assignment' button above.
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
<IPython.core.display.Javascript object>
%%javascript
<!-- Shutdown and close the notebook -->
window.onbeforeunload = null
window.close();
IPython.notebook.session.delete();