TensorFlow is a popular open-source framework for deep learning, based on dataflow programming. TensorFlow uses a dataflow graph to represent your computation in terms of the dependencies between individual operations. This leads to a low-level programming model in which you first define the dataflow graph, then create a TensorFlow session to run parts of the graph across a set of local and remote devices. The objective of this tutorial is to demonstrate this low-level programming model of the TensorFlow package.

So, how does TensorFlow execute a program?

  1. It’s a graph-based way of conceptualizing mathematical calculations.
  2. It does not compute any code as it goes through it. Instead, it creates a static computational graph.
  3. It loads the actual values into variables and executes the actual computation only when we run the graph within a TensorFlow session, as the short sketch below illustrates.
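
A minimal sketch of this lazy behaviour, using the TF 1.x API that this tutorial is based on:

import tensorflow as tf

x = tf.constant(5.0)
y = x * 3.0   # adds a multiply node to the graph; nothing is computed yet

print(y)      # prints a Tensor description, not 15.0

with tf.Session() as sess:
    print(sess.run(y))   # the graph is executed here and prints 15.0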

The dataflow graph is a common programming model for parallel computing.

Advantages?

Parallelism. By using explicit edges to represent dependencies between operations, it is easy for the system to identify operations that can execute in parallel.

Distributed execution. By using explicit edges to represent the values that flow between operations, it is possible for TensorFlow to partition your program across multiple devices (CPUs, GPUs) attached to different machines.
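
As a small illustration (a sketch, assuming the TF 1.x API), tf.device pins parts of the graph to a named device, here the first CPU:

import tensorflow as tf

# pin these ops to the first CPU; '/gpu:0' would target the first GPU
with tf.device('/cpu:0'):
    x = tf.constant([1.0, 2.0])
    y = tf.constant([3.0, 4.0])
    s = tf.add(x, y)

with tf.Session() as sess:
    print(sess.run(s))   # [4. 6.]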

Compilation. TensorFlow’s XLA compiler can use the information in your dataflow graph to generate faster code, for example, by fusing together adjacent operations.
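
In TF 1.x, XLA's JIT compilation can be requested through the session config. A sketch; whether XLA actually kicks in depends on your TensorFlow build:

import tensorflow as tf

config = tf.ConfigProto()
# ask TensorFlow to JIT-compile eligible subgraphs with XLA
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

with tf.Session(config=config) as sess:
    pass   # run your graph as usual; adjacent eligible ops may get fused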

Portability. The dataflow graph is a language-independent representation of the code in your model. You can build a dataflow graph in Python, store it in a SavedModel, and restore it in a C++ program for low-latency inference.
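
For instance, a minimal export sketch with TF 1.x's tf.saved_model.simple_save (the './export_dir' path is arbitrary and must not already exist):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1], name='x')
y = tf.multiply(x, 2.0, name='y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # writes a language-independent SavedModel that, e.g., a C++ program can load
    tf.saved_model.simple_save(sess, "./export_dir",
                               inputs={"x": x}, outputs={"y": y})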

In this tutorial, we will first go through some basic examples using TensorFlow and then build a deep learning classification model on the Fashion-MNIST dataset.

1. Basic Example

Consider the following expression:
a = (b+c) ∗ (c+2)

Let us write the Python code to evaluate the above expression using TensorFlow.

import tensorflow as tf
import numpy as np

# graph construction only: these lines create nodes, they do not compute anything
const = tf.constant(2.0, name="constant")
b = tf.Variable(2.0, name="b")
c = tf.Variable(1.0, name="c")
d = tf.add(b, c, name="d")            # d = b + c
e = tf.add(c, const, name="e")        # e = c + 2
a = tf.multiply(d, e, name="a")       # a = d * e

It is important to remember that the above code does not actually instantiate the variables in memory; it creates a computational graph like the one below instead.

[Figure: simple TensorFlow graph of the expression a = (b + c) ∗ (c + 2)]

Further, the tf.global_variables_initializer() function can be invoked to create an operation that initializes all the variables declared in the computation graph.

# an op that, when run, initializes all the declared variables
init_op = tf.global_variables_initializer()

Coming to the TensorFlow session: a Session object encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated.

Control: a TensorFlow graph is a description of computations. To compute anything, a graph must be launched in a Session.

State: a Session places the graph ops onto devices, such as CPUs or GPUs, and provides methods to execute them. These methods return the tensors produced by ops as NumPy ndarray objects in Python.

The Python code below runs the computational graph within a session.

with tf.Session() as sess:

    # initialize the variables
    sess.run(init_op)

    # run the operation; the `with` block closes the session automatically,
    # so an explicit sess.close() is not needed
    a_out = sess.run(a)

    print("Value of the equation is : {}".format(a_out))

Value of the equation is : 9.0

2. Placeholder Array Example

  • A lot of times, we only get to know the values of the input variables at execution time.
  • The values of the variables can always change depending on the data (not known beforehand).
  • Keeping this in mind, TensorFlow provides placeholders, for which we only define the data type (and shape) of the tensor objects.
  • The actual data is fed in at run time with the keyword ‘feed_dict’.

The code below creates a similar computation graph, but with a placeholder array ‘d’ of undefined length in place of the fixed variable used in the Basic Example.

const = tf.constant(2.0, name="constant")
# a placeholder: any number of rows, one column; the values are supplied at run time
d = tf.placeholder(tf.float32, [None, 1], name='d')
c = tf.Variable(1.0, name="c")

a_arr = tf.multiply(tf.add(d, c), tf.add(c, const), name="a_arr")

init_op = tf.global_variables_initializer()

In the code below, we feed the values of the placeholder through a dictionary at run time. The resultant array is returned once the graph is executed within a session.

with tf.Session() as sess:

    # initialize variables
    sess.run(init_op)

    # run the operation, feeding a 10x1 array into the placeholder
    a_arr_out = sess.run(a_arr, feed_dict={d: np.arange(0, 10)[:, np.newaxis]})

    print("Value of the equation is : {}".format(a_arr_out))

Value of the equation is : [[ 3.]
 [ 6.]
 [ 9.]
 [12.]
 [15.]
 [18.]
 [21.]
 [24.]
 [27.]
 [30.]]
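
Since ‘d’ is only a placeholder, the very same graph can be re-run with any array of compatible shape, for example a single value:

with tf.Session() as sess:
    sess.run(init_op)
    out = sess.run(a_arr, feed_dict={d: np.array([[5.0]])})
    print(out)   # [[18.]] because (5 + 1) * (1 + 2) = 18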

3. Deep Learning with TensorFlow

As said earlier, we will implement each step of building a deep learning model from scratch using Python and TensorFlow. Most tutorials, blogs, and implementations import the dataset through APIs like tensorflow/keras. We won't; instead, we will write our own data loader. We will also not use any high-level APIs like Keras or tf.keras and will stick to basic TensorFlow.

Moving ahead, we will introduce the Fashion-MNIST dataset in this blog post. The neural network model implementation using TensorFlow will be described in the subsequent post.

Fashion MNIST Dataset

Fashion-MNIST is a dataset of Zalando’s article images consisting of a training set of 60,000 examples and a test set of 10,000 examples.

Each example is a 28×28 grayscale image, associated with a label from one of 10 classes.

Fashion-MNIST is a direct drop-in replacement for the original MNIST digit dataset for benchmarking machine learning algorithms. It shares the same image size and the same structure of training and testing splits.

Why Fashion MNIST?

  • Too easy: Digit MNIST
  • Overused: Digit MNIST
  • Digit MNIST cannot represent modern CV tasks; many good ideas (like batch norm) show no benefit on it.

Deep learning heroes like Ian Goodfellow & François Chollet have advised serious researchers to stay away from digit-recognition MNIST. One can download the .gz files of the train and test data along with the labels from https://github.com/zalandoresearch/fashion-mnist#get-the-data
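
If you would rather fetch the files from a script, a sketch like the one below works; note that the base URL is assumed to be the download mirror listed in the repository README, so verify it there:

import os
import urllib.request

# assumed download mirror from the Fashion-MNIST README; check the repo for current links
BASE_URL = "http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/"
FILES = ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz",
         "t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz"]

os.makedirs("data", exist_ok=True)
for fname in FILES:
    urllib.request.urlretrieve(BASE_URL + fname, os.path.join("data", fname))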

Each training and test example is assigned to one of the following labels:

0 T-shirt/top
1 Trouser
2 Pullover
3 Dress
4 Coat
5 Sandal
6 Shirt
7 Sneaker
8 Bag
9 Ankle boot

We will write the data loader for the Fashion-MNIST data-set as a Python class. There are 3 functions in the Dataset class.

  1. def load_data(self) reads the downloaded .gz train and test image data and labels if found in the given directory. If the validation flag is set, it uses the train_test_split() method to create a validation set from the training data-set. It returns train, test, and (optionally) validation numpy arrays of images and labels.
  2. def show_samples_in_grid(self, w, h) displays images from the training set in a grid layout. One has to pass the width (w) and height (h) of the grid. This is just for visualization purposes.
  3. def create_label_dict(self) returns a dictionary mapping label indices to class names.

import os
import gzip
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

class Dataset(object):

    def __init__(self, data_download_path="", validation_flag=False, verbose=False):
        self.data_download_path = data_download_path
        self.validation_flag=validation_flag
        self.verbose = verbose
        self.X_train = None
        self.X_val = None
        self.X_test = None
        self.y_train = None
        self.y_val = None
        self.y_test = None

    def load_data(self):

        if not os.path.exists(self.data_download_path):
            raise ValueError('No "%s" directory found, keep data in the given directory path' %self.data_download_path)

        if not os.path.exists(os.path.join(self.data_download_path,"train-images-idx3-ubyte.gz")):
            raise ValueError('No train images data found, Kindly download the data-set.')

        if not os.path.exists(os.path.join(self.data_download_path,"train-labels-idx1-ubyte.gz")):
            raise ValueError('No train labels data found, Kindly download the data-set.')

        if not os.path.exists(os.path.join(self.data_download_path,"t10k-images-idx3-ubyte.gz")):
            raise ValueError('No test images data found, Kindly download the data-set.')

        if not os.path.exists(os.path.join(self.data_download_path,"t10k-labels-idx1-ubyte.gz")):
            raise ValueError('No test labels data found, Kindly download the data-set.')

        train_images_path = os.path.join(self.data_download_path, "train-images-idx3-ubyte.gz")
        train_label_path = os.path.join(self.data_download_path, "train-labels-idx1-ubyte.gz")
        test_images_path = os.path.join(self.data_download_path, "t10k-images-idx3-ubyte.gz")
        test_label_path = os.path.join(self.data_download_path, "t10k-labels-idx1-ubyte.gz")

        with gzip.open(train_label_path) as train_labelpath:
            y_train = np.frombuffer(train_labelpath.read(), dtype=np.uint8, offset=8)

        with gzip.open(train_images_path) as train_imgpath:
            X_train = np.frombuffer(train_imgpath.read(), dtype=np.uint8, offset=16).reshape(len(y_train),784)

        with gzip.open(test_label_path) as test_labelpath:
            y_test = np.frombuffer(test_labelpath.read(), dtype=np.uint8, offset=8)

        with gzip.open(test_images_path) as test_imgpath:
            X_test = np.frombuffer(test_imgpath.read(), dtype=np.uint8, offset=16).reshape(len(y_test),784)

        if self.validation_flag:
            if self.verbose:
                print("Dataset split is Train : 54k, Val: 6k, Test: 10k")
            X_train, X_val, y_train, y_val = train_test_split(X_train,
                                                      y_train,
                                                      stratify = y_train,
                                                      test_size = 0.1,
                                                      random_state = 42)
            self.X_train, self.X_val, self.X_test, self.y_train, self.y_val, self.y_test = X_train, X_val, X_test, y_train, y_val, y_test
            return X_train, X_val, X_test, y_train, y_val, y_test

        if self.verbose:
            print("Dataset split is Train : 60k, Val: 10k")
        self.X_train, self.X_test, self.y_train, self.y_test = X_train, X_test, y_train, y_test
        return X_train, X_test, y_train, y_test

    def show_samples_in_grid(self, w=0, h=0):

        # plot the first w*h training images in a w x h grid
        k = 0
        for i in range(w):
            for j in range(h):
                plt.subplot2grid((w, h), (i, j))
                plt.imshow(self.X_train[k].reshape(28, 28), cmap='Greys')
                plt.axis('off')
                k = k + 1
        plt.show()

    def create_label_dict(self):

        label_dict = {
         0: 'T-shirt/top',
         1: 'Trouser',
         2: 'Pullover',
         3: 'Dress',
         4: 'Coat',
         5: 'Sandal',
         6: 'Shirt',
         7: 'Sneaker',
         8: 'Bag',
         9: 'Ankle boot'
        }
        return label_dict


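Putting the loader to use (the "data" directory below is illustrative; point data_download_path at wherever you kept the four .gz files):

dataset = Dataset(data_download_path="data", validation_flag=True, verbose=True)
X_train, X_val, X_test, y_train, y_val, y_test = dataset.load_data()

print(X_train.shape, y_train.shape)     # (54000, 784) (54000,)

label_dict = dataset.create_label_dict()
print(label_dict[y_train[0]])           # class name of the first training image

dataset.show_samples_in_grid(w=4, h=4)  # 4x4 grid of training images
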
What’s Next?

I hope it was easy to go through this tutorial on TensorFlow, as I have tried to keep it short and simple. In the subsequent tutorial, we will see the implementation of the following steps in TensorFlow.

  1. Building the NN model
  2. Defining the loss function
  3. Creating the optimizer
  4. One-hot encoding
  5. Training the model
  6. Validating and testing the model

If you liked the post, follow this blog to get updates about upcoming articles. Also, share it so that it can reach the readers who can actually gain from it. Please feel free to discuss anything regarding the post; I would love to hear your feedback.

Happy deep learning 🙂