
AI Platform: Qwik Start

This lab gives you an introductory, end-to-end experience of training and prediction on AI Platform, using a census dataset to train a model that predicts a person's income category.

Start by importing the os module; you'll use it later to set environment variables:

import os

Step 1: Get your training data

The relevant data files, adult.data and adult.test, are hosted in a public Cloud Storage bucket.

You can read the files directly from Cloud Storage or copy them to your local environment. For this lab you will download the samples for local training, and later upload them to your own Cloud Storage bucket for cloud training.
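For example, if you'd rather read the training file directly from the public bucket, a minimal sketch like the following should work, assuming pandas can read gs:// paths in your environment (for example via the gcsfs package):

import pandas as pd

# Column names follow the Census dataset schema defined later in trainer/util.py.
CSV_COLUMNS = ['age', 'workclass', 'fnlwgt', 'education', 'education_num',
               'marital_status', 'occupation', 'relationship', 'race', 'gender',
               'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
               'income_bracket']

df = pd.read_csv(
    'gs://cloud-samples-data/ml-engine/census/data/adult.data.csv',
    names=CSV_COLUMNS, na_values='?', skipinitialspace=True)
print(df.head())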

Run the following command to download the data to a local file directory and set variables that point to the downloaded data files:

%%bash

mkdir data
gsutil -m cp gs://cloud-samples-data/ml-engine/census/data/* data/
Copying gs://cloud-samples-data/ml-engine/census/data/test.json...
Copying gs://cloud-samples-data/ml-engine/census/data/adult.data.csv...         
Copying gs://cloud-samples-data/ml-engine/census/data/adult.test.csv...         
Copying gs://cloud-samples-data/ml-engine/census/data/census.test.csv...        
Copying gs://cloud-samples-data/ml-engine/census/data/census.train.csv...       
Copying gs://cloud-samples-data/ml-engine/census/data/test.csv...               
/ [6/6 files][ 10.7 MiB/ 10.7 MiB] 100% Done                                    
Operation completed over 6 objects/10.7 MiB.                                     
%%bash

export TRAIN_DATA=$(pwd)/data/adult.data.csv
export EVAL_DATA=$(pwd)/data/adult.test.csv

Inspect the data by looking at the first few rows:

%%bash

head data/adult.data.csv
39, State-gov, 77516, Bachelors, 13, Never-married, Adm-clerical, Not-in-family, White, Male, 2174, 0, 40, United-States, <=50K
50, Self-emp-not-inc, 83311, Bachelors, 13, Married-civ-spouse, Exec-managerial, Husband, White, Male, 0, 0, 13, United-States, <=50K
38, Private, 215646, HS-grad, 9, Divorced, Handlers-cleaners, Not-in-family, White, Male, 0, 0, 40, United-States, <=50K
53, Private, 234721, 11th, 7, Married-civ-spouse, Handlers-cleaners, Husband, Black, Male, 0, 0, 40, United-States, <=50K
28, Private, 338409, Bachelors, 13, Married-civ-spouse, Prof-specialty, Wife, Black, Female, 0, 0, 40, Cuba, <=50K
37, Private, 284582, Masters, 14, Married-civ-spouse, Exec-managerial, Wife, White, Female, 0, 0, 40, United-States, <=50K
49, Private, 160187, 9th, 5, Married-spouse-absent, Other-service, Not-in-family, Black, Female, 0, 0, 16, Jamaica, <=50K
52, Self-emp-not-inc, 209642, HS-grad, 9, Married-civ-spouse, Exec-managerial, Husband, White, Male, 0, 0, 45, United-States, >50K
31, Private, 45781, Masters, 14, Never-married, Prof-specialty, Not-in-family, White, Female, 14084, 0, 50, United-States, >50K
42, Private, 159449, Bachelors, 13, Married-civ-spouse, Exec-managerial, Husband, White, Male, 5178, 0, 40, United-States, >50K

Step 2: Run a local training job

A local training job loads your Python training program and starts a training process in an environment similar to that of a live AI Platform cloud training job.

Step 2.1: Create files to hold the Python program

To do that, let's create three files. The first, called util.py, contains utility methods for cleaning and preprocessing the data, as well as for feature engineering such as transforming and normalizing the data.

%%bash
mkdir -p trainer
touch trainer/__init__.py
%%writefile trainer/util.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
from six.moves import urllib
import tempfile

import numpy as np
import pandas as pd
import tensorflow as tf

# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')

# Download options.
DATA_URL = (
    'https://storage.googleapis.com/cloud-samples-data/ai-platform/census'
    '/data')
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)

# These are the features in the dataset.
# Dataset information: https://archive.ics.uci.edu/ml/datasets/census+income
_CSV_COLUMNS = [
    'age', 'workclass', 'fnlwgt', 'education', 'education_num',
    'marital_status', 'occupation', 'relationship', 'race', 'gender',
    'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
    'income_bracket'
]

# This is the label (target) we want to predict.
_LABEL_COLUMN = 'income_bracket'

# These are columns we will not use as features for training. There are many
# reasons not to use certain attributes of data for training. Perhaps their
# values are noisy or inconsistent, or perhaps they encode bias that we do not
# want our model to learn. For a deep dive into the features of this Census
# dataset and the challenges they pose, see the Introduction to ML Fairness
# Notebook: https://colab.research.google.com/github/google/eng-edu/blob
# /master/ml/cc/exercises/intro_to_fairness.ipynb
UNUSED_COLUMNS = ['fnlwgt', 'education', 'gender']

_CATEGORICAL_TYPES = {
    'workclass': pd.api.types.CategoricalDtype(categories=[
        'Federal-gov', 'Local-gov', 'Never-worked', 'Private', 'Self-emp-inc',
        'Self-emp-not-inc', 'State-gov', 'Without-pay'
    ]),
    'marital_status': pd.api.types.CategoricalDtype(categories=[
        'Divorced', 'Married-AF-spouse', 'Married-civ-spouse',
        'Married-spouse-absent', 'Never-married', 'Separated', 'Widowed'
    ]),
    'occupation': pd.api.types.CategoricalDtype([
        'Adm-clerical', 'Armed-Forces', 'Craft-repair', 'Exec-managerial',
        'Farming-fishing', 'Handlers-cleaners', 'Machine-op-inspct',
        'Other-service', 'Priv-house-serv', 'Prof-specialty', 'Protective-serv',
        'Sales', 'Tech-support', 'Transport-moving'
    ]),
    'relationship': pd.api.types.CategoricalDtype(categories=[
        'Husband', 'Not-in-family', 'Other-relative', 'Own-child', 'Unmarried',
        'Wife'
    ]),
    'race': pd.api.types.CategoricalDtype(categories=[
        'Amer-Indian-Eskimo', 'Asian-Pac-Islander', 'Black', 'Other', 'White'
    ]),
    'native_country': pd.api.types.CategoricalDtype(categories=[
        'Cambodia', 'Canada', 'China', 'Columbia', 'Cuba', 'Dominican-Republic',
        'Ecuador', 'El-Salvador', 'England', 'France', 'Germany', 'Greece',
        'Guatemala', 'Haiti', 'Holand-Netherlands', 'Honduras', 'Hong',
        'Hungary',
        'India', 'Iran', 'Ireland', 'Italy', 'Jamaica', 'Japan', 'Laos',
        'Mexico',
        'Nicaragua', 'Outlying-US(Guam-USVI-etc)', 'Peru', 'Philippines',
        'Poland',
        'Portugal', 'Puerto-Rico', 'Scotland', 'South', 'Taiwan', 'Thailand',
        'Trinadad&Tobago', 'United-States', 'Vietnam', 'Yugoslavia'
    ]),
    'income_bracket': pd.api.types.CategoricalDtype(categories=[
        '<=50K', '>50K'
    ])
}


def _download_and_clean_file(filename, url):
    """Downloads data from url, and makes changes to match the CSV format.

    The CSVs may use spaces after the comma delimiters (non-standard) or include
    rows which do not represent well-formed examples. This function strips out
    some of these problems.

    Args:
      filename: filename to save url to
      url: URL of resource to download
    """
    temp_file, _ = urllib.request.urlretrieve(url)
    with tf.io.gfile.GFile(temp_file, 'r') as temp_file_object:
        with tf.io.gfile.GFile(filename, 'w') as file_object:
            for line in temp_file_object:
                line = line.strip()
                line = line.replace(', ', ',')
                if not line or ',' not in line:
                    continue
                if line[-1] == '.':
                    line = line[:-1]
                line += '\n'
                file_object.write(line)
    tf.io.gfile.remove(temp_file)


def download(data_dir):
    """Downloads census data if it is not already present.

    Args:
      data_dir: directory where we will access/save the census data
    """
    tf.io.gfile.makedirs(data_dir)

    training_file_path = os.path.join(data_dir, TRAINING_FILE)
    if not tf.io.gfile.exists(training_file_path):
        _download_and_clean_file(training_file_path, TRAINING_URL)

    eval_file_path = os.path.join(data_dir, EVAL_FILE)
    if not tf.io.gfile.exists(eval_file_path):
        _download_and_clean_file(eval_file_path, EVAL_URL)

    return training_file_path, eval_file_path


def preprocess(dataframe):
    """Converts categorical features to numeric. Removes unused columns.

    Args:
      dataframe: Pandas dataframe with raw data

    Returns:
      Dataframe with preprocessed data
    """
    dataframe = dataframe.drop(columns=UNUSED_COLUMNS)

    # Convert integer valued (numeric) columns to floating point
    numeric_columns = dataframe.select_dtypes(['int64']).columns
    dataframe[numeric_columns] = dataframe[numeric_columns].astype('float32')

    # Convert categorical columns to numeric
    cat_columns = dataframe.select_dtypes(['object']).columns
    dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.astype(
        _CATEGORICAL_TYPES[x.name]))
    dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.cat.codes)
    return dataframe


def standardize(dataframe):
    """Scales numerical columns using their means and standard deviation to get
    z-scores: the mean of each numerical column becomes 0, and the standard
    deviation becomes 1. This can help the model converge during training.

    Args:
      dataframe: Pandas dataframe

    Returns:
      Input dataframe with the numerical columns scaled to z-scores
    """
    dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))
    # Normalize numeric columns.
    for column, dtype in dtypes:
        if dtype == 'float32':
            dataframe[column] -= dataframe[column].mean()
            dataframe[column] /= dataframe[column].std()
    return dataframe


def load_data():
    """Loads data into preprocessed (train_x, train_y, eval_y, eval_y)
    dataframes.

    Returns:
      A tuple (train_x, train_y, eval_x, eval_y), where train_x and eval_x are
      Pandas dataframes with features for training and train_y and eval_y are
      numpy arrays with the corresponding labels.
    """
    # Download Census dataset: Training and eval csv files.
    training_file_path, eval_file_path = download(DATA_DIR)

    # This census data uses the value '?' for missing entries. We use
    # na_values to
    # find ? and set it to NaN.
    # https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv
    # .html
    train_df = pd.read_csv(training_file_path, names=_CSV_COLUMNS,
                           na_values='?')
    eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values='?')

    train_df = preprocess(train_df)
    eval_df = preprocess(eval_df)

    # Split train and eval data with labels. The pop method copies and removes
    # the label column from the dataframe.
    train_x, train_y = train_df, train_df.pop(_LABEL_COLUMN)
    eval_x, eval_y = eval_df, eval_df.pop(_LABEL_COLUMN)

    # Join train_x and eval_x to normalize on overall means and standard
    # deviations. Then separate them again.
    all_x = pd.concat([train_x, eval_x], keys=['train', 'eval'])
    all_x = standardize(all_x)
    train_x, eval_x = all_x.xs('train'), all_x.xs('eval')

    # Reshape label columns for use with tf.data.Dataset
    train_y = np.asarray(train_y).astype('float32').reshape((-1, 1))
    eval_y = np.asarray(eval_y).astype('float32').reshape((-1, 1))

    return train_x, train_y, eval_x, eval_y
Writing trainer/util.py

The second file, called model.py, defines the input function and the model architecture. In this example, we use the tf.data API for the data pipeline and create the model using the Keras Sequential API. We define a DNN with an input layer and three additional hidden layers using the ReLU activation function. Since the task is binary classification, the output layer uses the sigmoid activation.

%%writefile trainer/model.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf


def input_fn(features, labels, shuffle, num_epochs, batch_size):
    """Generates an input function to be used for model training.

    Args:
      features: numpy array of features used for training or inference
      labels: numpy array of labels for each example
      shuffle: boolean for whether to shuffle the data or not (set True for
        training, False for evaluation)
      num_epochs: number of epochs to provide the data for
      batch_size: batch size for training

    Returns:
      A tf.data.Dataset that can provide data to the Keras model for training or
        evaluation
    """
    if labels is None:
        inputs = features
    else:
        inputs = (features, labels)
    dataset = tf.data.Dataset.from_tensor_slices(inputs)

    if shuffle:
        dataset = dataset.shuffle(buffer_size=len(features))

    # We call repeat after shuffling, rather than before, to prevent separate
    # epochs from blending together.
    dataset = dataset.repeat(num_epochs)
    dataset = dataset.batch(batch_size)
    return dataset


def create_keras_model(input_dim, learning_rate):
    """Creates Keras Model for Binary Classification.

    The single output node + Sigmoid activation makes this a Logistic
    Regression.

    Args:
      input_dim: How many features the input has
      learning_rate: Learning rate for training

    Returns:
      The compiled Keras model (still needs to be trained)
    """
    Dense = tf.keras.layers.Dense
    model = tf.keras.Sequential(
        [
            Dense(100, activation=tf.nn.relu, kernel_initializer='uniform',
                  input_shape=(input_dim,)),
            Dense(75, activation=tf.nn.relu),
            Dense(50, activation=tf.nn.relu),
            Dense(25, activation=tf.nn.relu),
            Dense(1, activation=tf.nn.sigmoid)
        ])

    # Custom Optimizer:
    # https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer
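    # Note: newer TensorFlow versions prefer the learning_rate= keyword; the lr=
    # argument still works but emits a deprecation warning (you'll see it in the
    # local training logs later in this lab).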
    optimizer = tf.keras.optimizers.RMSprop(lr=learning_rate)

    # Compile Keras model
    model.compile(
        loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
Writing trainer/model.py

The last file, called task.py, trains on the data loaded and preprocessed in util.py. Using the tf.distribute.MirroredStrategy() scope, it is possible to train in a distributed fashion. The trained model is then saved in the TensorFlow SavedModel format.

%%writefile trainer/task.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import os

from . import model
from . import util

import tensorflow as tf


def get_args():
    """Argument parser.

    Returns:
      Dictionary of arguments.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--job-dir',
        type=str,
        required=True,
        help='local or GCS location for writing checkpoints and exporting '
             'models')
    parser.add_argument(
        '--num-epochs',
        type=int,
        default=20,
        help='number of times to go through the data, default=20')
    parser.add_argument(
        '--batch-size',
        default=128,
        type=int,
        help='number of records to read during each training step, default=128')
    parser.add_argument(
        '--learning-rate',
        default=.01,
        type=float,
        help='learning rate for gradient descent, default=.01')
    parser.add_argument(
        '--verbosity',
        choices=['DEBUG', 'ERROR', 'FATAL', 'INFO', 'WARN'],
        default='INFO')
    args, _ = parser.parse_known_args()
    return args


def train_and_evaluate(args):
    """Trains and evaluates the Keras model.

    Uses the Keras model defined in model.py and trains on data loaded and
    preprocessed in util.py. Saves the trained model in TensorFlow SavedModel
    format to the path defined in part by the --job-dir argument.

    Args:
      args: dictionary of arguments - see get_args() for details
    """

    train_x, train_y, eval_x, eval_y = util.load_data()

    # dimensions
    num_train_examples, input_dim = train_x.shape
    num_eval_examples = eval_x.shape[0]

    # Create the Keras Model
    keras_model = model.create_keras_model(
        input_dim=input_dim, learning_rate=args.learning_rate)

    # Pass a numpy array by passing DataFrame.values
    training_dataset = model.input_fn(
        features=train_x.values,
        labels=train_y,
        shuffle=True,
        num_epochs=args.num_epochs,
        batch_size=args.batch_size)

    # Pass a numpy array by passing DataFrame.values
    validation_dataset = model.input_fn(
        features=eval_x.values,
        labels=eval_y,
        shuffle=False,
        num_epochs=args.num_epochs,
        batch_size=num_eval_examples)

    # Setup Learning Rate decay.
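    # The lambda receives the zero-based epoch index: epoch 0 yields
    # args.learning_rate + 0.01, and the added term halves every epoch, so the
    # rate decays toward the base args.learning_rate.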
    lr_decay_cb = tf.keras.callbacks.LearningRateScheduler(
        lambda epoch: args.learning_rate + 0.02 * (0.5 ** (1 + epoch)),
        verbose=True)

    # Setup TensorBoard callback.
    tensorboard_cb = tf.keras.callbacks.TensorBoard(
        os.path.join(args.job_dir, 'keras_tensorboard'),
        histogram_freq=1)

    # Train model
    keras_model.fit(
        training_dataset,
        steps_per_epoch=int(num_train_examples / args.batch_size),
        epochs=args.num_epochs,
        validation_data=validation_dataset,
        validation_steps=1,
        verbose=1,
        callbacks=[lr_decay_cb, tensorboard_cb])

    export_path = os.path.join(args.job_dir, 'keras_export')
    tf.keras.models.save_model(keras_model, export_path)
    print('Model exported to: {}'.format(export_path))



if __name__ == '__main__':
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        args = get_args()
        tf.compat.v1.logging.set_verbosity(args.verbosity)
        train_and_evaluate(args)

Writing trainer/task.py

Step 2.2: Run a training job locally using the Python training program

Note: When you run the same training job on AI Platform later in the lab, you'll see that the command is not much different from the one below.

Specify an output directory and set a MODEL_DIR variable to hold the trained model, then run the training job locally with the following command. (By default, verbose logging is turned off; you can enable it by setting the --verbosity flag to DEBUG.)

%%bash

MODEL_DIR=output
gcloud ai-platform local train \
    --module-name trainer.task \
    --package-path trainer/ \
    --job-dir $MODEL_DIR \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA \
    --train-steps 1000 \
    --eval-steps 100
2023-08-29 15:01:35.423754: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64
2023-08-29 15:01:35.423822: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2023-08-29 15:01:35.423864: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (instance-20230829-215700): /proc/driver/nvidia/version does not exist
2023-08-29 15:01:35.426622: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:There are non-GPU devices in `tf.distribute.Strategy`, not using nccl allreduce.
/opt/conda/lib/python3.9/site-packages/keras/optimizer_v2/optimizer_v2.py:355: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  warnings.warn(
2023-08-29 15:01:37.360095: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2023-08-29 15:01:37.360348: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2023-08-29 15:01:37.363896: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2023-08-29 15:01:37.430871: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2"
op: "TensorSliceDataset"
input: "Placeholder/_0"
input: "Placeholder/_1"
attr {
  key: "Toutput_types"
  value {
    list {
      type: DT_FLOAT
      type: DT_FLOAT
    }
  }
}
attr {
  key: "output_shapes"
  value {
    list {
      shape {
        dim {
          size: 11
        }
      }
      shape {
        dim {
          size: 1
        }
      }
    }
  }
}

2023-08-29 15:01:37.650991: W tensorflow/core/framework/dataset.cc:679] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
2023-08-29 15:01:37.656802: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:185] None of the MLIR Optimization Passes are enabled (registered 2)


Epoch 1/20

Epoch 00001: LearningRateScheduler setting learning rate to 0.02.


2023-08-29 15:01:40.623244: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2023-08-29 15:01:40.624507: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2023-08-29 15:01:40.632396: I tensorflow/core/profiler/lib/profiler_session.cc:66] Profiler session collecting data.
2023-08-29 15:01:40.642718: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
2023-08-29 15:01:40.656697: I tensorflow/core/profiler/rpc/client/save_profile.cc:136] Creating directory: output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40

2023-08-29 15:01:40.666562: I tensorflow/core/profiler/rpc/client/save_profile.cc:142] Dumped gzipped tool data for trace.json.gz to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.trace.json.gz
2023-08-29 15:01:40.686438: I tensorflow/core/profiler/rpc/client/save_profile.cc:136] Creating directory: output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40

2023-08-29 15:01:40.687840: I tensorflow/core/profiler/rpc/client/save_profile.cc:142] Dumped gzipped tool data for memory_profile.json.gz to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.memory_profile.json.gz
2023-08-29 15:01:40.688641: I tensorflow/core/profiler/rpc/client/capture_profile.cc:251] Creating directory: output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40
Dumped tool data for xplane.pb to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.xplane.pb
Dumped tool data for overview_page.pb to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.overview_page.pb
Dumped tool data for input_pipeline.pb to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.input_pipeline.pb
Dumped tool data for tensorflow_stats.pb to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.tensorflow_stats.pb
Dumped tool data for kernel_stats.pb to output/keras_tensorboard/train/plugins/profile/2023_08_29_15_01_40/instance-20230829-215700.kernel_stats.pb

2023-08-29 15:01:41.316648: W tensorflow/core/grappler/optimizers/data/auto_shard.cc:695] AUTO sharding policy will apply DATA sharding policy as it failed to apply FILE sharding policy because of the following reason: Found an unshardable source dataset: name: "TensorSliceDataset/_2"
op: "TensorSliceDataset"
input: "Placeholder/_0"
input: "Placeholder/_1"
attr {
  key: "Toutput_types"
  value {
    list {
      type: DT_FLOAT
      type: DT_FLOAT
    }
  }
}
attr {
  key: "output_shapes"
  value {
    list {
      shape {
        dim {
          size: 11
        }
      }
      shape {
        dim {
          size: 1
        }
      }
    }
  }
}



254/254 [==============================] - 4s 5ms/step - loss: 0.5243 - accuracy: 0.7871 - val_loss: 0.3796 - val_accuracy: 0.8167
Epoch 2/20

Epoch 00002: LearningRateScheduler setting learning rate to 0.015.
254/254 [==============================] - 1s 3ms/step - loss: 0.3643 - accuracy: 0.8333 - val_loss: 0.3370 - val_accuracy: 0.8432
Epoch 3/20

Epoch 00003: LearningRateScheduler setting learning rate to 0.0125.
254/254 [==============================] - 1s 3ms/step - loss: 0.3461 - accuracy: 0.8409 - val_loss: 0.3288 - val_accuracy: 0.8460
Epoch 4/20

Epoch 00004: LearningRateScheduler setting learning rate to 0.01125.
254/254 [==============================] - 1s 3ms/step - loss: 0.3394 - accuracy: 0.8450 - val_loss: 0.3270 - val_accuracy: 0.8508
Epoch 5/20

Epoch 00005: LearningRateScheduler setting learning rate to 0.010625.
254/254 [==============================] - 1s 4ms/step - loss: 0.3335 - accuracy: 0.8441 - val_loss: 0.3232 - val_accuracy: 0.8507
Epoch 6/20

Epoch 00006: LearningRateScheduler setting learning rate to 0.0103125.
254/254 [==============================] - 1s 4ms/step - loss: 0.3336 - accuracy: 0.8450 - val_loss: 0.3244 - val_accuracy: 0.8507
Epoch 7/20

Epoch 00007: LearningRateScheduler setting learning rate to 0.01015625.
254/254 [==============================] - 1s 3ms/step - loss: 0.3306 - accuracy: 0.8466 - val_loss: 0.3288 - val_accuracy: 0.8460
Epoch 8/20

Epoch 00008: LearningRateScheduler setting learning rate to 0.010078125.
254/254 [==============================] - 1s 3ms/step - loss: 0.3295 - accuracy: 0.8472 - val_loss: 0.3198 - val_accuracy: 0.8519
Epoch 9/20

Epoch 00009: LearningRateScheduler setting learning rate to 0.0100390625.
254/254 [==============================] - 1s 4ms/step - loss: 0.3268 - accuracy: 0.8485 - val_loss: 0.3191 - val_accuracy: 0.8532
Epoch 10/20

Epoch 00010: LearningRateScheduler setting learning rate to 0.01001953125.
254/254 [==============================] - 1s 4ms/step - loss: 0.3269 - accuracy: 0.8479 - val_loss: 0.3337 - val_accuracy: 0.8497
Epoch 11/20

Epoch 00011: LearningRateScheduler setting learning rate to 0.010009765625.
254/254 [==============================] - 1s 4ms/step - loss: 0.3284 - accuracy: 0.8486 - val_loss: 0.3234 - val_accuracy: 0.8500
Epoch 12/20

Epoch 00012: LearningRateScheduler setting learning rate to 0.010004882812500001.
254/254 [==============================] - 1s 4ms/step - loss: 0.3281 - accuracy: 0.8477 - val_loss: 0.3237 - val_accuracy: 0.8518
Epoch 13/20

Epoch 00013: LearningRateScheduler setting learning rate to 0.01000244140625.
254/254 [==============================] - 1s 4ms/step - loss: 0.3264 - accuracy: 0.8499 - val_loss: 0.3435 - val_accuracy: 0.8510
Epoch 14/20

Epoch 00014: LearningRateScheduler setting learning rate to 0.010001220703125.
254/254 [==============================] - 1s 3ms/step - loss: 0.3239 - accuracy: 0.8505 - val_loss: 0.3364 - val_accuracy: 0.8476
Epoch 15/20

Epoch 00015: LearningRateScheduler setting learning rate to 0.0100006103515625.
254/254 [==============================] - 1s 3ms/step - loss: 0.3264 - accuracy: 0.8500 - val_loss: 0.3262 - val_accuracy: 0.8496
Epoch 16/20

Epoch 00016: LearningRateScheduler setting learning rate to 0.01000030517578125.
254/254 [==============================] - 1s 3ms/step - loss: 0.3253 - accuracy: 0.8496 - val_loss: 0.3207 - val_accuracy: 0.8500
Epoch 17/20

Epoch 00017: LearningRateScheduler setting learning rate to 0.010000152587890625.
254/254 [==============================] - 1s 3ms/step - loss: 0.3240 - accuracy: 0.8493 - val_loss: 0.3204 - val_accuracy: 0.8524
Epoch 18/20

Epoch 00018: LearningRateScheduler setting learning rate to 0.010000076293945313.
254/254 [==============================] - 1s 3ms/step - loss: 0.3223 - accuracy: 0.8504 - val_loss: 0.3359 - val_accuracy: 0.8441
Epoch 19/20

Epoch 00019: LearningRateScheduler setting learning rate to 0.010000038146972657.
254/254 [==============================] - 1s 4ms/step - loss: 0.3243 - accuracy: 0.8488 - val_loss: 0.3199 - val_accuracy: 0.8494
Epoch 20/20

Epoch 00020: LearningRateScheduler setting learning rate to 0.010000019073486329.
254/254 [==============================] - 1s 3ms/step - loss: 0.3230 - accuracy: 0.8487 - val_loss: 0.3280 - val_accuracy: 0.8443


2023-08-29 15:02:00.272251: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
INFO:tensorflow:Assets written to: output/keras_export/assets


Model exported to: output/keras_export


Exception ignored in: <function Pool.__del__ at 0x7f8a22d1a430>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/multiprocessing/pool.py", line 268, in __del__
  File "/opt/conda/lib/python3.9/multiprocessing/queues.py", line 371, in put
AttributeError: 'NoneType' object has no attribute 'dumps'

Check if the output has been written to the output folder:

%%bash

ls output/keras_export/
assets
keras_metadata.pb
saved_model.pb
variables

Step 2.3: Prepare input for prediction

To receive valid and useful predictions, you must preprocess input for prediction in the same way that training data was preprocessed. In a production system, you may want to create a preprocessing pipeline that can be used identically at training time and prediction time.
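For instance, a minimal sketch of such a shared pipeline (a hypothetical helper, not part of the lab code) could simply reuse the helper functions from util.py so raw rows get the same treatment at prediction time:

import pandas as pd
from trainer import util

def preprocess_for_prediction(raw_csv_path):
    """Hypothetical helper: apply the training-time cleanup to raw census rows."""
    raw_df = pd.read_csv(raw_csv_path, names=util._CSV_COLUMNS,
                         na_values='?', skipinitialspace=True)
    df = util.preprocess(raw_df)   # drop unused columns, encode categorical values
    df = df.drop(columns=[util._LABEL_COLUMN], errors='ignore')
    # Caveat: standardize() recomputes means and standard deviations from the data
    # it is given; a real production pipeline would reuse the statistics computed
    # on the training set.
    return util.standardize(df)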

For this exercise, use the training package’s data-loading code to select a random sample from the evaluation data. This data is in the form that was used to evaluate accuracy after each epoch of training, so it can be used to send test predictions without further preprocessing.

Run the following snippet of code to preprocess the raw data from the adult.test.csv file. Here, we are grabbing 5 examples to run predictions on:

from trainer import util
_, _, eval_x, eval_y = util.load_data()

prediction_input = eval_x.sample(5)
prediction_targets = eval_y[prediction_input.index]

Check the numerical representation of the features by printing the preprocessed data:

print(prediction_input)
            age  workclass  education_num  marital_status  occupation  \
12978 -0.411611         -1      -0.419265               0          -1   
10135 -1.140958          3      -0.030304               2           5   
12474  0.171866          3      -0.030304               2           3   
347    1.484690          3      -2.364072               2           2   
11442 -1.432697          3      -0.030304               4           7   

       relationship  race  capital_gain  capital_loss  hours_per_week  \
12978             4     4     -0.144792     -0.217132        0.611568   
10135             0     4     -0.144792     -0.217132       -0.195441   
12474             0     4     -0.144792     -0.217132        1.983484   
347               0     4      0.550035     -0.217132        0.772970   
11442             1     4     -0.144792     -0.217132       -0.437544   

       native_country  
12978              38  
10135              38  
12474              38  
347                38  
11442              38  

Notice that categorical fields, like occupation, have already been converted to integers (with the same mapping that was used for training). Numerical fields, like age, have been scaled to a z-score. Some fields have been dropped from the original data.
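If you're curious how a categorical code maps back to its original value, you can look it up in the CategoricalDtype definitions in util.py. A quick optional check (not part of the lab code):

from trainer import util

# The integer code assigned by preprocess() is the category's position in the
# CategoricalDtype's category list; a code of -1 means the value was missing.
occupations = util._CATEGORICAL_TYPES['occupation'].categories
print(dict(enumerate(occupations)))   # e.g. 5 -> 'Handlers-cleaners'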

Export the prediction input to a newline-delimited JSON file:

import json

with open('test.json', 'w') as json_file:
  for row in prediction_input.values.tolist():
    json.dump(row, json_file)
    json_file.write('\n')

Inspect the .json file:

%%bash

cat test.json
[-0.4116113781929016, -1.0, -0.41926509141921997, 0.0, -1.0, 4.0, 4.0, -0.1447920799255371, -0.2171318680047989, 0.6115681529045105, 38.0]
[-1.1409581899642944, 3.0, -0.030303768813610077, 2.0, 5.0, 0.0, 4.0, -0.1447920799255371, -0.2171318680047989, -0.1954410821199417, 38.0]
[0.1718660145998001, 3.0, -0.030303768813610077, 2.0, 3.0, 0.0, 4.0, -0.1447920799255371, -0.2171318680047989, 1.983483910560608, 38.0]
[1.4846901893615723, 3.0, -2.3640716075897217, 2.0, 2.0, 0.0, 4.0, 0.5500345826148987, -0.2171318680047989, 0.7729700207710266, 38.0]
[-1.43269681930542, 3.0, -0.030303768813610077, 4.0, 7.0, 1.0, 4.0, -0.1447920799255371, -0.2171318680047989, -0.4375438690185547, 38.0]

Step 2.4: Use your trained model for prediction

Once you’ve trained your TensorFlow model, you can use it for prediction on new data. In this case, you’ve trained a census model to predict income category given some information about a person.

Run the following command to run prediction on the test.json file we created above:

Note: If you get a “Bad magic number in .pyc file” error, go to the terminal and run:

cd ../../usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine/

sudo rm *.pyc

%%bash

gcloud ai-platform local predict \
    --model-dir output/keras_export/ \
    --json-instances ./test.json
If the signature defined in the model is not serving_default then you must specify it via --signature-name flag, otherwise the command may fail.
WARNING: WARNING:tensorflow:From /opt/conda/lib/python3.9/site-packages/tensorflow/python/compat/v2_compat.py:101: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
2023-08-29 15:02:07.915615: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64
2023-08-29 15:02:07.915719: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2023-08-29 15:02:07.915769: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (instance-20230829-215700): /proc/driver/nvidia/version does not exist
2023-08-29 15:02:07.916122: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:From /usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py:235: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
WARNING:tensorflow:From /usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py:235: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
WARNING:root:Error updating signature __saved_model_init_op: The name 'NoOp' refers to an Operation, not a Tensor. Tensor names must be of the form "<op_name>:<output_index>".



DENSE_4
[0.00899195671081543]
[0.09160658717155457]
[0.45643091201782227]
[0.396020770072937]
[0.0019281506538391113]

Since the model's last layer uses a sigmoid activation, outputs between 0 and 0.5 represent negative predictions ("<=50K") and outputs between 0.5 and 1 represent positive ones (">50K").
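As a quick illustration (not part of the lab code), here's how you could turn those sigmoid outputs into income-bracket labels with a 0.5 threshold:

# Sigmoid outputs copied from the local prediction run above.
sigmoid_outputs = [0.0090, 0.0916, 0.4564, 0.3960, 0.0019]
labels = ['>50K' if p >= 0.5 else '<=50K' for p in sigmoid_outputs]
print(labels)   # all five sampled examples are predicted '<=50K'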

Step 3: Run your training job in the cloud

Now that you've validated your model by running it locally, you're ready to get practice training it using Cloud AI Platform.

Note: The initial job request will take several minutes to start, but subsequent jobs run more quickly. This enables quick iteration as you develop and validate your training job.

First, set the following variables:

%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
Your current GCP Project Name is: qwiklabs-gcp-04-bffeced66bf5
PROJECT = "qwiklabs-gcp-04-bffeced66bf5"  # Replace with your project name
BUCKET_NAME=PROJECT+"-aiplatform"
REGION="us-central1"
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET_NAME"] = BUCKET_NAME
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.6"
os.environ["PYTHONVERSION"] = "3.7"

Step 3.1: Set up a Cloud Storage bucket

The AI Platform services need to access Cloud Storage (GCS) to read and write data during model training and batch prediction.

Create a bucket using BUCKET_NAME as the name for the bucket and copy the data into it.

%%bash

if ! gsutil ls | grep -q gs://${BUCKET_NAME}; then
    gsutil mb -l ${REGION} gs://${BUCKET_NAME}
fi
gsutil cp -r data gs://$BUCKET_NAME/data
Creating gs://qwiklabs-gcp-04-bffeced66bf5-aiplatform/...
Copying file://data/census.test.csv [Content-Type=text/csv]...
Copying file://data/test.csv [Content-Type=text/csv]...                         
Copying file://data/adult.data.csv [Content-Type=text/csv]...                   
Copying file://data/census.train.csv [Content-Type=text/csv]...                 
\ [4 files][  8.8 MiB/  8.8 MiB]                                                
==> NOTE: You are performing a sequence of gsutil operations that may
run significantly faster if you instead use gsutil -m cp ... Please
see the -m section under "gsutil help options" for further information
about when gsutil -m can be advantageous.

Copying file://data/adult.test.csv [Content-Type=text/csv]...
Copying file://data/test.json [Content-Type=application/json]...                
\ [6 files][ 10.7 MiB/ 10.7 MiB]                                                
Operation completed over 6 objects/10.7 MiB.                                     

Set the TRAIN_DATA and EVAL_DATA variables to point to the files:

%%bash

export TRAIN_DATA=gs://$BUCKET_NAME/data/adult.data.csv
export EVAL_DATA=gs://$BUCKET_NAME/data/adult.test.csv

Use gsutil again to copy the JSON test file test.json to your Cloud Storage bucket:

%%bash

gsutil cp test.json gs://$BUCKET_NAME/data/test.json
Copying file://test.json [Content-Type=application/json]...
/ [1 files][  685.0 B/  685.0 B]                                                
Operation completed over 1 objects/685.0 B.                                      

Set the TEST_JSON variable to point to that file:

%%bash

export TEST_JSON=gs://$BUCKET_NAME/data/test.json

Go back to the lab instructions and check your progress by testing the completed tasks:

- “Set up a Google Cloud Storage bucket”.

- “Upload the data files to your Cloud Storage bucket”.

Step 3.2: Run a single-instance trainer in the cloud

With your training job validated locally, you're now ready to run it in the cloud. For this example, you will request a single-instance training job.

Use the default BASIC scale tier to run a single-instance training job. The initial job request can take a few minutes to start, but subsequent jobs run more quickly. This enables quick iteration as you develop and validate your training job.

Select a name for the initial training run that distinguishes it from any subsequent training runs. For example, we can use date and time to compose the job id.

Specify a directory for output generated by AI Platform by setting an OUTPUT_PATH variable to include when requesting training and prediction jobs. The OUTPUT_PATH represents the fully qualified Cloud Storage location for model checkpoints, summaries, and exports. You can use the BUCKET_NAME variable you defined in a previous step. It’s a good practice to use the job name as the output directory.

Run the following command to submit a training job in the cloud that uses a single process. This time, set the --verbosity flag to DEBUG so that you can inspect the full logging output and retrieve accuracy, loss, and other metrics. The output also contains a number of other warning messages that you can ignore for the purposes of this sample:

%%bash

JOB_ID=census_$(date -u +%y%m%d_%H%M%S)
OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID
gcloud ai-platform jobs submit training $JOB_ID \
    --job-dir $OUTPUT_PATH \
    --runtime-version $TFVERSION \
    --python-version $PYTHONVERSION \
    --module-name trainer.task \
    --package-path trainer/ \
    --region $REGION \
    -- \
    --train-files $TRAIN_DATA \
    --eval-files $EVAL_DATA \
    --train-steps 1000 \
    --eval-steps 100 \
    --verbosity DEBUG
Job [census_230829_150257] submitted successfully.
Your job is still active. You may view the status of your job with the command

  $ gcloud ai-platform jobs describe census_230829_150257

or continue streaming the logs with the command

  $ gcloud ai-platform jobs stream-logs census_230829_150257


jobId: census_230829_150257
state: QUEUED

Set an environment variable with the jobId generated above:

os.environ["JOB_ID"] = "census_230829_150257" # Replace with your job id

You can monitor the progress of your training job by streaming its logs on the command line:

gcloud ai-platform jobs stream-logs $JOB_ID

Or monitor it in the Console at AI Platform > Jobs. Wait until your AI Platform training job is done. It is finished when you see a green check mark by the job name in the Cloud Console, or when you see the message Job completed successfully from the Cloud Shell command line.

Wait for the job to complete before proceeding to the next step. Go back to the lab instructions and check your progress by testing the completed task:

- “Run a single-instance trainer in the cloud”.

Step 3.3: Deploy your model to support prediction

By deploying your trained model to AI Platform to serve online prediction requests, you get the benefit of scalable serving. This is useful if you expect your trained model to be hit with many prediction requests in a short period of time.

Note: You will see Using endpoint [https://ml.googleapis.com/] in the output after running the next cells. If you try to open that link, you will see a 404 error message. You can ignore it and move forward.

Create an AI Platform model:

os.environ["MODEL_NAME"] = "census"
%%bash

gcloud ai-platform models create $MODEL_NAME --regions=$REGION
Using endpoint [https://ml.googleapis.com/]
Created ai platform model [projects/qwiklabs-gcp-04-bffeced66bf5/models/census].

Set the MODEL_BINARIES variable to the full path of your exported trained model binaries, $OUTPUT_PATH/keras_export/.

You’ll deploy this trained model.

Run the following command to create a version v1 of your model:

%%bash

OUTPUT_PATH=gs://$BUCKET_NAME/$JOB_ID
MODEL_BINARIES=$OUTPUT_PATH/keras_export/
gcloud ai-platform versions create v1 \
--model $MODEL_NAME \
--origin $MODEL_BINARIES \
--runtime-version $TFVERSION \
--python-version $PYTHONVERSION \
--region=global
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......
......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done.

It may take several minutes to deploy your trained model. When done, you can see a list of your models using the models list command:

%%bash

gcloud ai-platform models list --region=global
Using endpoint [https://ml.googleapis.com/]


NAME    DEFAULT_VERSION_NAME
census  v1

Go back to the lab instructions and check your progress by testing the completed tasks:

- “Create an AI Platform model”.

- “Create a version v1 of your model”.

Step 3.4: Send an online prediction request to your deployed model

You can now send prediction requests to your deployed model. The following command sends a prediction request using the test.json file you created earlier.

The response contains the model's sigmoid output for each instance in test.json: the predicted probability that the person's income is greater than 50,000 dollars.

%%bash

gcloud ai-platform predict \
--model $MODEL_NAME \
--version v1 \
--json-instances ./test.json \
--region global
Using endpoint [https://ml.googleapis.com/]


DENSE_4
[0.007440894842147827]
[0.13361433148384094]
[0.4129127860069275]
[0.8567488789558411]
[0.007680416107177734]

Note: AI Platform supports batch prediction, too, but it’s not included in this lab. See the documentation for more info.
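If you prefer to call the online prediction service from Python instead of the gcloud CLI, a minimal sketch using the google-api-python-client library (assumed to be installed in your environment) looks roughly like this:

import json
from googleapiclient import discovery

# Read the same newline-delimited JSON instances used above.
with open('test.json') as f:
    instances = [json.loads(line) for line in f]

service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT, 'census', 'v1')
response = service.projects().predict(name=name, body={'instances': instances}).execute()
print(response.get('predictions', response))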

Go back to the lab instructions to answer some multiple choice questions that reinforce your understanding of this lab's concepts.

Congratulations!

In this lab, you've learned how to train a TensorFlow model both locally and on AI Platform, how to prepare data for prediction, and how to perform predictions both locally and on Cloud AI Platform.