Build, Train, Deploy (Sample Lesson)

The following sample lesson is a quick introduction to the machine learning concepts covered in the mini-course. You will develop an image classification model that can recognize handwritten numbers using TensorFlow 2.x, then deploy the model as a mobile app using PalletML.

Click the button below to open the lesson in a Colab notebook and build and train the model interactively:

Alternatively, you can download a pre-built model that you can deploy by following the steps at the end of the lesson:

MNIST Model.zip

Build, Train, and Deploy a Handwritten Digit Classifier

Table of Contents

  1. Intro
  2. Setup
  3. Prepare the Data
  4. Build the Model
  5. Train the Model
  6. Evaluate the Model
  7. Export to TensorFlow Lite
  8. Deploy with PalletML

Intro

In this quick sample lesson you will use TensorFlow + Keras to navigate an end-to-end machine learning workflow by:

  1. Building an image classification model that can identify handwritten digits
  2. Training your model on a dataset of 60,000 sample images
  3. Evaluating your model on a test set of 10,000 example images
  4. Deploying your model to a mobile app using the no-code tool PalletML

Keras is a high-level API for TensorFlow that enables developers to rapidly build, train, and iterate on machine learning models for various tasks.

We’ll use Keras to build a simple image classification model and train it on the MNIST dataset, which contains 70,000 28x28 grayscale images of individual handwritten digits in 10 categories (0, 1,…, 9), as seen here:

This classic MNIST dataset is often used as the “Hello, World” of machine learning programs for computer vision. Most often, 60,000 images from the dataset are used to train a neural network and 10,000 images are used to evaluate how accurately the network learned to classify images.


Setup

Import TensorFlow and supporting libraries.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import logging

print("TF\t", tf.__version__)
print("GPU\t", 'yes' if tf.config.list_physical_devices('GPU') else 'no')
tf.get_logger().setLevel(logging.ERROR)
TF   2.4.0
GPU  no

Prepare the data

Load the MNIST dataset from the built-in Keras datasets module, then preprocess the data by reshaping each image to include a single channel dimension and converting the pixel values from integers to floating-point numbers scaled to the range 0 to 1.

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255

print()
print('Image shape:', x_train.shape[1:])
print(x_train.shape[0], "Train samples")
print(x_test.shape[0], "Test samples")
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step

Image shape: (28, 28, 1)
60000 Train samples
10000 Test samples

In the full Quick TensorFlow mini-course, you learn how to load data from multiple other sources, including your own data sources and TensorFlow Datasets, a vast repository of production-ready datasets you can use for training and testing your own custom models.

We also teach you how to create data pipelines using the tf.data API that efficiently scale to hundreds of thousands or millions of training samples, and how to use these pipelines for both preprocessing and data augmentation.
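For a rough preview, here is a minimal sketch of that workflow, assuming the tensorflow-datasets package is installed (it isn't used elsewhere in this lesson):

import tensorflow as tf
import tensorflow_datasets as tfds

# Load MNIST from TensorFlow Datasets as (image, label) pairs
ds_train = tfds.load('mnist', split='train', as_supervised=True)

def scale(image, label):
    # Convert pixels to floats in the range 0 to 1, as we do below
    return tf.cast(image, tf.float32) / 255.0, label

# A simple tf.data pipeline: preprocess in parallel, shuffle, batch, prefetch
ds_train = (ds_train
            .map(scale, num_parallel_calls=tf.data.experimental.AUTOTUNE)
            .shuffle(10000)
            .batch(128)
            .prefetch(tf.data.experimental.AUTOTUNE))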

Let’s look at a sample image and label from the training dataset.

sample_index = 2 # from 0 to 59999
sample_image, sample_label = x_train[sample_index], y_train[sample_index]

plt.subplot(1, 1, 1)
plt.axis('off')
plt.title('True Label: {}'.format(sample_label))
plt.imshow(sample_image.reshape(28, 28), cmap=plt.cm.gray)
<matplotlib.image.AxesImage at 0x7fc130342c88>

Note that unlike the picture at the beginning of this lesson, the colors in the MNIST dataset are actually inverted (light digits on a dark background), which has been shown to improve model performance.
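If you ever feed the model a digit drawn the usual way (dark ink on a light background), you'll want to invert it first. A minimal sketch, assuming pixel values already scaled to the range 0 to 1 as above:

# Flip light and dark: a 0 (black) pixel becomes 1 (white) and vice versa
inverted_image = 1.0 - sample_image

plt.axis('off')
plt.imshow(inverted_image.reshape(28, 28), cmap=plt.cm.gray)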


Build the model

Here we build a simple convolutional neural network (CNN) that takes a 28x28 image as input and outputs a list of 10 predictions, one per digit, where each value represents the model’s confidence that the input image contains that digit.

Note that because the MNIST dataset is grayscale, the images only have 1 color channel.

img_shape = (28, 28, 1) # height x width x channels
num_classes = 10

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=img_shape),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation='softmax')
])

We then configure the model with a cross-entropy loss function that accepts integer class labels (instead of one-hot encoded vectors), and a standard optimization algorithm.

model.compile(loss='sparse_categorical_crossentropy',
              optimizer='rmsprop', 
              metrics=["accuracy"])

Take the mini-course to learn how to build more powerful CNNs and experiment with different optimizers like Adam, which can significantly improve image classification performance. Most importantly, learn how to use transfer learning to create state-of-the-art classification models for color images of any size.
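For reference, switching the optimizer is a one-line change in the compile step; here is a sketch using Adam (the rest of this lesson keeps rmsprop):

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              metrics=['accuracy'])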

Let’s look at a summary of your model so far:

model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_2 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 32)        0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 5408)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 10)                54090     
=================================================================
Total params: 54,410
Trainable params: 54,410
Non-trainable params: 0
_________________________________________________________________

Note that we’ll be training fewer than 55,000 parameters in the next section, a small fraction of the number of parameters in state-of-the-art image classification models - but the results here may still surprise you.


Train the model

We train the model using Model.fit, which iterates over the 60,000 training images in batches of 128, learning the features of the data and adjusting the model parameters to minimize the loss. Each full pass over the dataset during training is called an epoch, and here we train our model for 5 epochs.

One of the first lectures in the mini-course teaches you how to split training data into an additional subset of data called a validation set, and use it during training to report your model’s performance at the end of each epoch.
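As a quick sketch of what that looks like with Keras's built-in validation_split argument (we don't use it in this lesson, so there's no need to run this):

# Hold out the last 10% of the training samples and report
# validation loss and accuracy at the end of each epoch
model.fit(x_train, y_train, batch_size=128, epochs=5, validation_split=0.1, verbose=1)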

model.fit(x_train, y_train, batch_size=128, epochs=5, verbose=1)
Epoch 1/5
469/469 [==============================] - 18s 37ms/step - loss: 0.5686 - accuracy: 0.8518
Epoch 2/5
469/469 [==============================] - 18s 37ms/step - loss: 0.1363 - accuracy: 0.9609
Epoch 3/5
469/469 [==============================] - 17s 37ms/step - loss: 0.0857 - accuracy: 0.9758
Epoch 4/5
469/469 [==============================] - 17s 37ms/step - loss: 0.0719 - accuracy: 0.9793
Epoch 5/5
469/469 [==============================] - 17s 37ms/step - loss: 0.0598 - accuracy: 0.9832

After just 5 epochs your model is over 98% accurate in classifying images from the training dataset! Now let’s test it on images it hasn’t seen.


Evaluate the trained model

Use the test set to evaluate your model on data it’s never seen. This is the best way to accurately judge its predictive power.

score = model.evaluate(x_test, y_test)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
313/313 [==============================] - 2s 5ms/step - loss: 0.0596 - accuracy: 0.9801
Test loss: 0.05959947034716606
Test accuracy: 0.9800999760627747

Show a sample prediction

test_index = 0 # from 0 to 9999
test_image, test_label = x_test[test_index], y_test[test_index]

# Models work with batches of data, so add an extra dimension to the test image
# and make a prediction
prediction = model.predict(np.expand_dims(test_image, axis=0))
predicted_label = np.argmax(prediction)
color = 'green' if predicted_label == test_label else 'red'

plt.subplot(1, 1, 1)
plt.axis('off')
plt.title('Prediction: {}, True: {}'.format(predicted_label, test_label), color=color)
plt.imshow(test_image.reshape(28, 28), cmap=plt.cm.gray)
<matplotlib.image.AxesImage at 0x7fc12ae49d68>

Looks like your model is pretty robust!

But this notebook is a controlled setting. Let’s deploy the model to a device and try it for ourselves.


Export the model to TensorFlow Lite

In order to deploy your model to a mobile device, we need to convert it to a TensorFlow Lite model and export it as a file.

We also need to export a set of labels corresponding to the numerical output of our model so the mobile app knows how to display the result.

Save the TensorFlow Lite model as a binary file with the conventional .tflite extension.

model.save('mnist_model')
!tflite_convert --saved_model_dir=mnist_model --output_file=mnist_model.tflite
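If you'd rather stay in Python than shell out to the command-line converter, the tf.lite.TFLiteConverter API performs the same conversion (a sketch equivalent to the command above):

converter = tf.lite.TFLiteConverter.from_saved_model('mnist_model')
tflite_model = converter.convert()

with open('mnist_model.tflite', 'wb') as f:
    f.write(tflite_model)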

Save the labels as a plain text file, one label per row.

labels = list('0123456789')
with open('mnist_labels.txt', 'w') as f:
    for label in labels[:-1]:
        f.write(label + '\n')
    f.write(labels[-1])

Download the model and labels

Now download your model (mnist_model.tflite) and labels (mnist_labels.txt) from the Files pane on the left:

  • To download a file, right-click on the name of the file in the pane and select Download.
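You can also trigger the downloads from code using the google.colab helper (this only works inside Colab):

from google.colab import files

files.download('mnist_model.tflite')
files.download('mnist_labels.txt')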


Deploy the model with PalletML

PalletML is a no-code platform that turns image classification models into mobile apps. You can learn more about it on the website. (It currently supports Android; iOS support is coming soon.)

Let’s use Pallet to deploy your model as a shareable app:

  1. Install Pallet from the Google Play Store

    and Sign Up to create a new account

  2. In your computer browser, visit app.palletml.com in a new tab and Log In to the account you just created.

  3. Every model you deploy with Pallet belongs to a Project. Create a New Project for your model.

  4. Choose a name for your project and click Create.

  5. Browse or Drag & Drop your model and labels, then click Upload.

    Once the upload finishes, your model has been deployed ✅.

  6. Return to the Pallet app, navigate to your Profile 👤, and pull to refresh your list of Projects.

    Your new Project will appear. ✨

  7. With Pallet, most image classification models work out of the box, but in this case we need to configure your Project to preprocess input images according to your model’s expectations: with inversion and grayscale (a code sketch of this preprocessing appears after this list).

    Tap your Project to open a detailed view.

    Then tap the Edit icon to access Project Settings.

  8. Under Input Settings:

    Toggle the Invert Colors switch to On.

    Toggle the Grayscale switch to On.

  9. Tap Update to save these settings.

  10. Now Launch your app 🚀

    Test drive your model by Drawing different digits to see how well it performs.

    (For best results, leave a little space around the sides, just like the images your model was trained on.)

  • Pro Tip: Tap the classification to see a preprocessed version of the input image. This is a great way to double-check exactly what image your model is running inference on.
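To make the Invert Colors and Grayscale settings from step 8 concrete, here is roughly what that preprocessing does to a photo before inference, as a NumPy/TensorFlow sketch (Pallet's actual implementation may differ):

def preprocess_for_mnist(rgb_image):
    # rgb_image: uint8 array of shape (height, width, 3) with values 0-255
    # Grayscale: average the color channels (a simple approximation)
    gray = rgb_image.mean(axis=-1) / 255.0
    # Invert: dark ink on light paper becomes light digits on a dark background,
    # matching the MNIST training images
    inverted = 1.0 - gray
    # Resize to 28x28 and add the channel and batch dimensions the model expects
    resized = tf.image.resize(inverted[..., np.newaxis], (28, 28)).numpy()
    return resized[np.newaxis, ...]  # shape (1, 28, 28, 1), ready for model.predict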

The ability to quickly deploy models with tools like Pallet gives you an opportunity to get early feedback on how your models perform on devices in the real world. Despite the high training and testing accuracy, you may notice a skew in your model’s performance on-device. This skew is caused by various factors that you learn to mitigate in the mini-course (for example, by building a more powerful model).


Up Next

Congratulations! You’ve made it to the end of the lesson.

In just a short period of time, you learned how to build an image classification model that can identify handwritten digits, train a model on thousands of images, evaluate models for predictive power, and deploy image classification models to mobile using PalletML.

Join the full Quick TensorFlow mini-course to pick up advanced techniques for improving your model’s performance, and expand your knowledge of building, training, and deploying performant image classification models. Inside, you will learn how to:

  • Load data from multiple sources, including your own data sources, and TensorFlow Datasets.
  • Split data into subsets for training, testing, and validation
  • Create data pipelines using the tf.data API that efficiently scale to hundreds of thousands or millions of training samples.
  • Use convolutional layers to build convolutional neural networks (CNNs)
  • Experiment with optimizers like Adam to improve model performance.
  • Use transfer learning to create state-of-the-art classification models for color images of any shape.
  • Optimize TensorFlow Lite models for size and accuracy before deployment