Fast Inference: TFLite GPU Delegate!!

Running inference on edge devices, especially mobile devices, is very demanding. When you have a really big machine learning model, running inference with limited resources becomes a critical task.

Many mobile devices, however, have hardware accelerators such as a GPU. The TensorFlow Lite delegate lets us optimize our trained model and leverage the benefits of hardware acceleration.

What is a TensorFlow Lite Delegate?

A delegate's job, in general, is to hand your work over to someone else. TensorFlow Lite supports several hardware accelerators.

A TensorFlow Lite delegate is a way to delegate part or all of graph execution to another executor.

Why should you use delegates?

Running inference on compute-heavy deep learning models on edge devices is resource-demanding due to mobile devices’ limited processing power, memory, and battery. Instead of relying on the device CPU, some devices have hardware accelerators, such as a GPU or DSP (Digital Signal Processor), that allow for better performance and higher energy efficiency.

How does a TFLite delegate work?

How a TFLite delegate works (image source: tensorflow.org)

Let us consider the graph on the left side. It has an input node where we receive the input for inference. The input goes through a convolution operation and then a mean operation, and the graph uses the output of these two operations to compute the SquareDifference.

Let us assume we have a hardware accelerator that can perform Conv2d and mean operations very fast and efficiently; the above graph then becomes the one on the right:

In this case, we delegate these two operations, conv2d and mean, to a specialized hardware accelerator using the TFLite delegate.

The TFLite GPU delegate will delegate the operations to the GPU if one is available.

TFLite allows us to provide delegates for specific operations, in which case the graph is split into multiple subgraphs, where each subgraph is handled by a delegate. Every subgraph that is handled by a delegate is replaced with a node that evaluates that subgraph when it is invoked. Depending on the model, the final graph can end up with a single node, meaning the entire graph was delegated, or with several nodes handling the subgraphs. In general, you don’t want multiple subgraphs handled by the delegate, since each switch from the delegate back to the main graph incurs an overhead for passing results from the subgraph to the main graph.

It’s not always safe to share memory.

How to add a delegate?

  1. Define a kernel node that is responsible for evaluating the delegate subgraph.
  2. Create an instance of TfLiteDelegate, which will register the kernel and claim the nodes that the delegate can execute.

Android:

TensorFlow provides a demo app for Android:

In your application, add the AAR as above, import the org.tensorflow.lite.gpu.GpuDelegate module, and use the addDelegate function to register the GPU delegate with the interpreter:

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Initialize interpreter with GPU delegate
GpuDelegate delegate = new GpuDelegate();
Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
Interpreter interpreter = new Interpreter(model, options);

// Run inference
while (true) {
  writeToInput(input);
  interpreter.run(input, output);
  readFromOutput(output);
}

// Clean up
delegate.close();

iOS:

Include the GPU delegate header and call the Interpreter::ModifyGraphWithDelegate function to register the GPU delegate to the interpreter:

#import "tensorflow/lite/delegates/gpu/metal_delegate.h"

// Initialize interpreter with GPU delegate
std::unique_ptr<Interpreter> interpreter;
InterpreterBuilder(*model, resolver)(&interpreter);
auto* delegate = NewGpuDelegate(nullptr);  // default config
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;

// Run inference
while (true) {
  WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
  if (interpreter->Invoke() != kTfLiteOk) return false;
  ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
}

// Clean up
interpreter = nullptr;
DeleteGpuDelegate(delegate);

Note:-

Some operations that are trivial on the CPU may have a high cost for the GPU.

Reference Link:

https://www.tensorflow.org/lite/performance/gpu

For more such stories

Colab getting started!!

Train deep neural networks for free using Google Colaboratory.

GPU and TPU compute for free? Are you kidding?

Google Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.

With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. If you don’t have the money to procure a GPU but want to train neural networks and get your hands dirty with zero investment, then this is for you. Colab started out as a Google internal research tool for data science.

You can use GPU as a backend for free for 12 hours at a time.

It supports Python 2.7 and 3.6, but not R or Scala yet.

Many people want to train machine learning or deep learning models, but doing so requires GPU computation and huge resources, which blocks many people from trying these things out and getting their hands dirty.

Google Colab is essentially a cloud-hosted Jupyter notebook.

Colaboratory is a free Jupyter notebook environment provided by Google where you can use free GPUs and TPUs, which solves all of these issues. The best thing about Colab is the TPU (Tensor Processing Unit), special hardware designed by Google to process tensors.

Let’s Start:- 

To start with this you should know Jupyter notebooks and have a Google account.

http://colab.research.google.com/

Click on the above link to access Google Colaboratory. This is not just a static page but an interactive environment that lets you write and execute code in Python and other languages. You can create a new Jupyter notebook via File → New Python 3 notebook, or by clicking New Python3 Notebook or New Python2 Notebook.

We will create a Python 3 notebook; Colab creates it for us and saves it on Google Drive.

Colab is an ideal way to start everything from improving your Python coding skills to working with deep learning frameworks like PyTorch, Keras, and TensorFlow, and you can install any Python package your code requires, from simple sklearn and numpy up to TensorFlow.

You can create notebooks in Colab, upload existing notebooks, store notebooks, share notebooks with anyone, mount your Google Drive and use whatever you’ve got stored in there, import most of your directories, upload notebooks directly from GitHub, upload Kaggle files, download your notebooks, and do whatever you’re doing with your local Jupyter notebook.

On the top right you can choose to connect to a hosted runtime or to a local runtime.

Set up GPU or TPU:-

It’s as simple and straightforward as going to the “Runtime” dropdown menu, selecting “Change runtime type”, and selecting GPU or TPU in the hardware accelerator drop-down menu!
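Once the runtime is switched, a quick way to confirm that a GPU is actually attached (a small check, not part of the original walkthrough) is:

import tensorflow as tf
print(tf.test.gpu_device_name())   # prints something like '/device:GPU:0' on a GPU runtime, '' otherwise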

Now you can start coding and start executing your code !!

How to install a framework or libraries?

It’s as simple as writing an import statement in Python!

!pip install fastai

Use the normal pip install command to install different packages such as TensorFlow or PyTorch and start playing with them.
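For example, to install MXNet (the framework used in the later chapters; assuming the CPU build published on PyPI) and confirm it imports:

!pip install mxnet
import mxnet as mx
print(mx.__version__)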

For more details and information

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=gJr_9dXGpJ05

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=-Rh3-Vt9Nev9

Implementing neural networks using Gluon API


In the previous chapter, we discussed the basics of deep learning and gave an overview of the Gluon API and MxNet. This chapter explains how to use the Gluon API to create different neural networks by exploring the API.

Gluon API is an abstraction over the mathematical computation of the deep learning framework MxNet. In the last chapter we discussed the different types of machine learning and the different algorithms used to implement each method. As part of this chapter, we will look into linear regression, binary classification, and multiclass classification using Gluon.

Neural Network using Gluon:

Gluon takes a hybrid approach to deep learning programming: it supports both the symbolic and the imperative style. There are different machine learning algorithms to address different problems. As we stated in the last chapter, an artificial neural network is a mathematical model inspired by the human brain. The mathematics boils down to matrix or tensor manipulation, and for that we have the Gluon API and the NDArray API. An artificial neural network contains nodes; each node has a weight and a bias, and the data is transformed layer by layer up to the output layer. The layers between the input layer and the output layer are called hidden layers. Let us explore further:

Linear Regression:

Linear regression is a very basic algorithm in the field of machine learning. Everyone comes across this algorithm, whether they are a novice or an expert machine learning engineer or data scientist. Linear regression is categorized under supervised machine learning. As the name states, linear regression is used to identify the relationship between two continuous variables. In this case there are two variables: one is the predictor (independent) variable and the other is the dependent (response) variable. We model the relationship between the two variables by fitting them to a linear equation.

A linear regression line has an equation of the form Y = a + bX, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0). This is in a mathematical way.

In the above diagram you can see a linear equation fitted to the data points. On the X-axis we have the independent variable and on the Y-axis the dependent variable. The plotted points illustrate the linear relationship between the two variables, one dependent and one independent.

To understand this, let us take a small example. There are real-life examples such as predicting the sales of a product based on buying history, or predicting a house price based on the size of the house, the location of the property, amenities, demand, and historical records.

There are two different types of Linear regression.

  1. Simple Linear Regression:– there are two variables, one dependent and one independent. X is used to predict the dependent variable Y, for example predicting the total fuel expense based on the distance in kilometers.
  2. Multiple (Multi-Variable) Linear Regression:- we have one dependent variable and two or more independent variables. In certain cases two or more features affect the dependent variable, as when predicting the house price based on the size of the house, its location, construction year, etc. There are several independent variables (X1, X2, ...) used to predict Y.

Let us consider a problem we face in academics: predicting marks based on how we solved the paper. Or take a small example: you are planning a road trip to Shimla (a city in India) with your two siblings, as you recently watched Tripling (an Indian web series). You start from Pune and the total distance to travel is 1790 km. It is a long journey, so you have to plan every expense such as fuel, meals, and halts. On a blank sheet of paper you note down when to start and stop, how much fuel is required, and how much money to reserve for meals and hotel charges; based on your car's mileage and current fuel prices you can predict the total amount paid for fuel. So it’s a simple linear relationship between two variables: if I drive 1790 km, how much will I pay for fuel? If you want to predict the overall expense of the trip, you can extend this simple linear regression into a multiple linear regression model by adding more independent variables such as meal cost, lodging charges, other expenses, and historical data from previous trips.
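As a toy illustration (the mileage and fuel price below are made-up numbers), the fuel cost is simply a linear function of the distance:

distance_km = 1790
mileage_km_per_litre = 15.0     # assumed mileage
fuel_price_per_litre = 100.0    # assumed price per litre
fuel_cost = distance_km / mileage_km_per_litre * fuel_price_per_litre
print(round(fuel_cost))         # ~11933; doubling the distance would double the cost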

This is how we can forecast the trip expenses and plan accordingly. The core idea is to obtain a line that best fits the data. Linear regression is the simplest and by far the most popular method in machine learning for problem-solving.

Linear regression using Gluon:

Linear regression is the entry pass to the journey of machine learning: it is a very straightforward problem and we can solve it using the Gluon API. The linear equation is y = Wx + b, and we construct a graph that learns the slope (W) and bias (b) through a number of iterations. The target of each iteration is to reduce the loss between the actual y and the predicted y, and to achieve this we modify W and b so that the inputs x give us the y we want. Let us take a small example and implement linear regression using the Gluon API. In this example we are not developing everything from scratch; instead we take advantage of the Gluon API for our implementation.

# imports
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon
# neural network layers and training
from mxnet.gluon import nn, Trainer
# data loading utilities
from mxnet.gluon.data import DataLoader, ArrayDataset

In the above code block we have just imported the required modules. If you observe carefully, the gluon API is part of the mxnet package. We imported ndarray for numerical tensor processing and autograd for automatic differentiation of a graph of NDArray operations. mxnet.gluon.data is the module containing APIs that help us load and process common public datasets such as MNIST.

from mxnet.gluon import nn, Trainer

Gluon provides the nn API to define the different layers of a neural network, and the Trainer API helps us train the defined network. Data is an important part, so let us build the dataset.

We start by generating a synthetic dataset from a known linear function plus a little noise.

# set the context for computation
data_ctx = mx.cpu()
model_ctx = mx.cpu()
# problem dimensions
number_inputs = 2
number_outputs = 1
number_examples = 10000
def real_fn(X):
    return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
# generate 10000 random records
X = nd.random_normal(shape=(number_examples, number_inputs))
noise = 0.01 * nd.random_normal(shape=(number_examples,))
y = real_fn(X) + noise

The above code can generate the dataset for the problem.

Now that the data is ready, load it using the DataLoader API.

batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
                                      batch_size=batch_size, shuffle=True)

Let us build a neural network with two inputs and one output, defined with nn.Dense(1, in_units=2). It’s called a dense layer because every node in the input is connected to every node in the subsequent layer.

net = gluon.nn.Dense(1, in_units=2)
# dense layer with 2 inputs and 1 output layer
# print just weight and bias for neural network
print(net.weight)
print(net.bias)
# output of above print statements
Parameter dense6_weight (shape=(1, 2), dtype=float32)
Parameter dense6_bias (shape=(1,), dtype=float32)
mxnet.gluon.parameter.ParameterDict

The weight and bias printed above are not actually NDArrays; they are instances of the Parameter class. We use Parameter instead of NDArray for distinct reasons: parameters can be associated with multiple contexts, unlike an NDArray. As we discussed in the first chapter, Block is the basic building block of a neural network in Gluon; a Block takes input and generates output. We can collect all parameters using net.collect_params() irrespective of how complex the neural network is. This method returns a dictionary of parameters.

The next step is the initialization of the network's parameters. The initialization step is very important: here we specify the context, and afterwards we can feed data to the neural network.

net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
# Deferred initialization
example_data = nd.array([[4,7]])
net(example_data) 
# access the weight and bias data
print(net.weight.data())
print(net.bias.data())
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)

Let us observe the difference between net = gluon.nn.Dense(1) and the earlier layer definition net = gluon.nn.Dense(1, in_units=2): in the first case Gluon infers the input shape of the parameters from the data on the first forward pass. Next, we define the loss; for linear regression the squared (L2) loss is the natural choice.

square_loss = gluon.loss.L2Loss()

Now we need to optimize the neural network. Rather than implementing stochastic gradient descent from scratch every time, we can reuse gluon.Trainer and pass it the parameter dictionary to optimize the network.

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})

sgd is the stochastic gradient descent implementation provided by Gluon; the learning rate is 0.0001 and we pass the dictionary of parameters to optimize. We now have an actual y and a predicted y, and we want to know how far the predicted y is from the generated y. The difference between the two is measured by the loss function, and to reduce this loss we use SGD.
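On every call to trainer.step(batch_size), the 'sgd' optimizer nudges each parameter against its gradient, scaled by the learning rate. A rough from-scratch sketch of that update (ignoring the batch-size rescaling the Trainer also applies) would be:

def sgd_step(params, lr):
    for param in params:
        param[:] = param - lr * param.grad   # move each parameter against its gradient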

epochs = 10
loss_sequence = []
num_batches = number_examples / batch_size
for e in range(epochs):
    cumulative_loss = 0
    # inner loop
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(model_ctx)
        label = label.as_in_context(model_ctx)
        with autograd.record():
            output = net(data)
            loss = square_loss(output, label)
        loss.backward()
        trainer.step(batch_size)
        cumulative_loss += nd.mean(loss).asscalar()
    print("Epoch %s, loss: %s" % (e, cumulative_loss / num_examples))
    loss_sequence.append(cumulative_loss)

Let us visualize the learning loss.

# plot the convergence of the estimated loss function 
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
# Adding some bells and whistles to the plot
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)

The learning curve shows how SGD learns the linear regression model. The graph plots the average loss over each epoch, and the loss decreases over the iterations.

Now our model is ready and everything works as expected, but we should do some sanity testing for validation purposes.

params = net.collect_params()
print('The type of "params" is a ',type(params))
# A ParameterDict is a dictionary of Parameter class objects
# we will iterate over the dictionary and print the parameters.
for param in params.values():
    print(param.name,param.data())

From this example we can say that Gluon helps us build quick, easy prototypes.

In this example we used a few APIs that help build a neural network without writing everything from scratch. Gluon gives us a concise way to express a model, and the API is powerful enough to prototype and build models quickly and easily. Linear regression can be used in many real-life scenarios:

  1. Predict the house price
  2. Predict the weather conditions
  3. Predict the stock price

These are just a few scenarios where you can apply linear regression to predict values. The predicted values in linear regression are continuous.

Binary Classification:

In the above section we explored linear regression with sample code. The output there was a continuous value, but there are real-life problems where we don't predict continuous values but instead need to classify: whether an email is spam or not, which party will be elected in the next elections, whether a customer should buy an insurance policy or not. A classification problem may be binary or multiclass, where you have more than two classes, and the number of output neurons is two or more. In classification problems the predicted values are categorical. Logistic regression is the machine learning technique used to solve such classification problems; basically, logistic regression is an algorithm for binary classification.

Let us consider a problem where we provide an image as input to the neural network and the output is labeled as either dog (1) or non-dog (0). In supervised learning there are two types of problems: regression and classification. In regression problems the output is a rational number, whereas in classification problems the output is categorical. There are different algorithms available to solve such classification problems, such as support vector machines, discriminant analysis, naive Bayes, nearest neighbor, and logistic regression. Solving a classification problem means identifying to which category a new observation belongs.

In the above diagram you can easily separate the data into two classes, one marked with circles and the other with crosses. This is called binary classification.

Binary classification using logistic regression:

Logistic regression is a very popular and powerful machine learning technique for solving classification problems. It measures the relationship between a categorical dependent variable and one or more independent variables, and answers questions like: how likely is it? You might then ask why we are not using linear regression. Suppose we have a tumor dataset where each tumor is labeled malignant or not, denoted by one or zero. If we use linear regression we can fit a line y = wx + b and decide that all values left of the line are non-malignant and values right of the line are malignant based on a threshold (e.g. 0.5). But what if there is an outlier, i.e. some positive-class values fall among the negative class? We need a way to deal with outliers, and logistic regression gives us that power. Logistic regression does not try to predict a rational value for a given set of inputs. Instead, the output is the probability that the given input belongs to a certain category, and based on a threshold we can easily categorize the input observation. Logistic regression is a type of classification algorithm involving a linear discriminant: the input space is separated into two regions by a linear boundary, and the model can differentiate between points belonging to different categories.
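A minimal sketch (NumPy only; the weights, bias, and input below are made-up values) of how logistic regression turns a linear score into a probability and then into a class label:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.8, -1.5])       # example weights (assumed)
b = 0.3                         # example bias (assumed)
x = np.array([2.0, 1.0])        # one input observation

p = sigmoid(np.dot(w, x) + b)   # probability of the positive class
label = 1 if p >= 0.5 else 0    # apply a 0.5 threshold to get a hard class
print(p, label)                 # ~0.60 and 1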

The logistic regression technique is useful when several independent variables determine a single outcome variable. Suppose we are watching cricket world cup matches and want to predict whether a match will be scheduled or not based on weather conditions:

Outlook    Temperature  Humidity  Windy  Play
sunny      hot          high      false  no
sunny      hot          high      true   no
overcast   hot          high      false  yes
rainy      mild         high      false  yes
rainy      cool         normal    false  yes
rainy      cool         normal    true   no
overcast   cool         normal    true   yes
sunny      mild         high      false  no
sunny      cool         normal    false  yes
rainy      mild         normal    false  yes
sunny      mild         normal    true   yes
overcast   mild         high      true   yes
overcast   hot          normal    false  yes
rainy      mild         high      true   no

In the above dataset the output is yes (1) or no (0). The output is categorical with two classes, which is why this is also known as binary classification.

Let us start with some code. For this example we use the breast cancer dataset (https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html) with 569 samples, 30 features, and two classes.

Import the required modules. Here we need the sklearn Python library, which contains the breast cancer data built in; we can use this dataset and apply logistic regression for binary classification.

import mxnet as mx
from mxnet import gluon, autograd, ndarray
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

Load the data set and use the pandas data frame to hold the data for further processing.

# the dataset is part of below module
from sklearn.datasets import load_breast_cancer 
# load data 
data = load_breast_cancer()
# use pandas data frame to hold the dataset
df = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X = data.data
# print first five records
df.head()
# display the shape, i.e. the number of rows and columns
df.shape
# number of dimensions
df.ndim

Now the data is available, but it is in a human-readable format that is not directly useful for training a neural network. Before we start training we need to normalize the data. Here we normalize it with pandas; we can also do this directly with MXNet NDArrays, as sketched after the pandas one-liner below.

df_norm = (df - df.mean()) / (df.max() - df.min())
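A rough MXNet equivalent of the same normalisation (assuming X is the NumPy feature matrix loaded above) is:

from mxnet import nd

X_nd = nd.array(X)   # convert the NumPy features to an NDArray
X_norm = (X_nd - X_nd.mean(axis=0)) / (X_nd.max(axis=0) - X_nd.min(axis=0))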

Before training any machine learning algorithm the critical part is the dataset: we need to split it into a training and a testing dataset. Let us do that.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=12345)

Tuning the hyperparameters is another important aspect of training an artificial neural network.

BATCH_SIZE = 32
LEARNING_R = 0.001
EPOCHS = 150

Let us prepare the data according to the Gluon API so that we can feed it to the network and train; for that we use the mx.gluon.data module.

train_dataset = mx.gluon.data.ArrayDataset(X_train, y_train)
test_dataset = mx.gluon.data.ArrayDataset(X_test, y_test)
train_data = mx.gluon.data.DataLoader(train_dataset,
                                      batch_size=BATCH_SIZE, shuffle=True)
test_data = mx.gluon.data.DataLoader(test_dataset,
                                     batch_size=BATCH_SIZE, shuffle=False)

Let us use Gluon's plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers. It has predefined layers such as Dense, Sequential, etc.

net = gluon.nn.Sequential()
# Define the model architecture
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(32, activation="relu"))
    net.add(gluon.nn.BatchNorm())
    net.add(gluon.nn.Dense(1, activation="sigmoid"))
# Initialize the parameters of the model
net.collect_params().initialize(mx.init.Uniform())
# Add binary loss function, sigmoid binary cross Entropy
binary_cross_entropy = gluon.loss.SigmoidBinaryCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': LEARNING_R})

The neural network contains four layers. We use ‘relu’ as an activation function; ReLU (rectified linear unit) is an activation function also known as a ramp function. The third layer (gluon.nn.BatchNorm()) is a batch normalisation layer. The other activation function we use is ‘sigmoid’; the sigmoid is a non-linear activation function with a characteristic S-shaped curve. For binary classification the loss function we use is binary cross entropy, which measures the performance of a model whose output is a probability between 0 and 1.
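For a predicted probability p and a true label y (either 0 or 1), the standard form of this loss is:

loss = -( y * log(p) + (1 - y) * log(1 - p) )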

Then we use gluon.Trainer() to train the model.

Now it is time to train the model.

for e in range(EPOCHS):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(mx.cpu()).astype('float32')
        label = label.as_in_context(mx.cpu()).astype('float32')
        with autograd.record(): # Start recording the derivatives
            output = net(data) # the forward iteration
            loss = binary_cross_entropy(output, label)
            loss.backward()
        trainer.step(data.shape[0])
        # Provide stats on the improvement of the model over each epoch
        curr_loss = ndarray.mean(loss).asscalar()
    if e % 20 == 0:
        print("Epoch {}. Current Loss: {}.".format(e, curr_loss))

The sigmoid activation produces the characteristic S-shaped curve shown above. Let us now calculate the accuracy on the test set; for that we first need the predicted labels (see the sketch below).
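The variable y_pred_labels used below is not defined by the snippets above; a minimal way to obtain it (assuming the trained net and the X_test split from earlier) is to push the test features through the network and threshold the sigmoid outputs at 0.5:

y_pred = net(mx.nd.array(X_test).astype('float32'))               # probabilities from the sigmoid output layer
y_pred_labels = (y_pred.asnumpy().flatten() >= 0.5).astype(int)   # hard 0/1 labels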

print(accuracy_score(y_test, y_pred_labels))

This is a binary classification problem: we observed the breast cancer dataset with its input features and an output that is one of two categories, malignant or benign.

Multiclass classification:

So far we have discussed the linear regression problem, where the output is a single rational number, and then we saw categorical problems, also known as classification problems. There are generally two types of classification problems:

  1. Binary Classification
  2. MultiClass Classification

A binary classification problem means two categories, such as whether an email is spam or not, whether a tumor is cancerous, or whether a cricket match will be played based on weather conditions. In all these scenarios the output is one of two categories (yes/no), but there are real-life scenarios where you have more than two categories; those problems are classified as multiclass classification (more than two classes), also known as multinomial classification. In multiclass classification we classify an observation into one of three or more classes. Don't confuse multi-label classification with multiclass classification.

Suppose you went to the grocery shop and stopped at the fruit stall to buy some fruit; you pick up your phone and try a machine learning algorithm to identify a fruit based on colour, shape, etc., classifying images of fruits which may be banana, apple, orange, guava, and so on. We will use the same logistic regression idea to address this multiclass classification problem. Logistic regression is the classic algorithm for solving classification problems in supervised learning. As we have seen, binary classification is quite useful when we have a dataset with two categories, for example spam vs. not spam or cancer vs. not cancer. But that does not cover every problem: sometimes each observation can belong to one of n classes. For example, an image might depict a lion, a cat, a dog, a zebra, and so on.

Let us dive deeper into the multiclass classification problem; for this we will use the MNIST (Modified National Institute of Standards and Technology) dataset of handwritten digits. This dataset is widely used as the "hello world" of deep learning. MNIST contains 60,000 training images and 10,000 testing images and is a nice toy dataset for testing new ideas.

Let us get our hands dirty with a Gluon multiclass classification implementation.

from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd
from mxnet import gluon
import numpy as np

We import the modules we require, such as mxnet, gluon, NDArray, autograd for differentiation, and numpy.

Set the context. In all previous examples we set the CPU for simplicity; you can set the GPU if you want to execute the code on a GPU, for which you have to install the GPU-enabled MXNet package (e.g. model_ctx = mx.gpu()).

data_ctx = mx.cpu()
model_ctx = mx.cpu()

For multiclass classification we use the MNIST dataset. We are not explaining MNIST in detail here; for more information see https://en.wikipedia.org/wiki/MNIST_database.

batch_size = 64
num_inputs = 784
num_outputs = 10
num_examples = 60000
def transform(data, label):
    return data.astype(np.float32)/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
                                      batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                              batch_size, shuffle=False)

Load the dataset: the number of inputs is 784 and the number of outputs is 10 (the digits 0, 1, ..., 9), with 60,000 examples and a batch size of 64. The mx.gluon.data.vision.MNIST module, part of the Gluon API, contains the MNIST dataset. For training and validation purposes the dataset is split into a training set and a testing set.

The data is loaded successfully; the next step is to define our model. Recall the linear regression code, where we defined the Dense layer with the number of inputs and outputs; here gluon.nn.Dense(num_outputs) defines the layer with only the output shape, and Gluon infers the input shape from the data.

net = gluon.nn.Dense(num_outputs)

Parameter initialization is the next step, but note that when we register an initializer for the parameters, Gluon doesn't yet know the shape of the input parameters because we have only specified the shape of the outputs. The parameters are actually initialized during the first call to the forward method.

net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)

When you need the output as probabilities, the softmax cross entropy loss function is useful.

Softmax is an activation that allows us to interpret the outputs as probabilities, while cross entropy is what we use to measure the error at the softmax layer.

Let us consider the softmax code snippet below:

# just for understanding.
def softmax(z):
    """Softmax function"""
    return np.exp(z) / np.sum(np.exp(z))

As the name suggests, the softmax function is a "soft" version of the max function. Instead of selecting the single maximal value, it spreads the probability mass so that the maximal element gets the largest portion of the distribution, which is why it is very good for obtaining probabilities over the inputs. From the above code you can see that the softmax function takes an N-dimensional vector of real numbers as input and transforms it into a vector of real numbers in the range (0, 1) that sum to 1.
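For example, running the toy softmax above on a small vector:

z = np.array([1.0, 2.0, 3.0])
print(softmax(z))   # ~[0.090, 0.245, 0.665] -- positive entries that sum to 1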

softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()

Now instantiate an optimizer, sgd (stochastic gradient descent), with a learning rate of 0.1.

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

Before training, we need a way to evaluate the model's accuracy. For this we use MXNet's built-in metric package. We should expect accuracy in the ballpark of 0.10 at first, because the model is initialized randomly.

def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(model_ctx).reshape((-1,784))
        label = label.as_in_context(model_ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]
# call the above function with the test data
evaluate_accuracy(test_data, net)

Now execute the training loop for 10 epochs.

epochs = 10
moving_loss = 0.
for e in range(epochs):
    cumulative_loss = 0
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(model_ctx).reshape((-1,784))
        label = label.as_in_context(model_ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(batch_size)
        cumulative_loss += nd.sum(loss).asscalar()
    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" % (e, cumulative_loss/num_examples, train_accuracy, test_accuracy))
# output
Epoch 0. Loss: 2.1415544213612874, Train_acc 0.7918833333333334, Test_acc 0.8015
Epoch 1. Loss: 0.9146347909927368, Train_acc 0.8340666666666666, Test_acc 0.8429
Epoch 2. Loss: 0.7468763765970866, Train_acc 0.8524333333333334, Test_acc 0.861
Epoch 3. Loss: 0.65964135333697, Train_acc 0.8633333333333333, Test_acc 0.8696
Epoch 4. Loss: 0.6039828490893046, Train_acc 0.8695833333333334, Test_acc 0.8753
Epoch 5. Loss: 0.5642358363191287, Train_acc 0.8760166666666667, Test_acc 0.8819
Epoch 6. Loss: 0.5329904221892356, Train_acc 0.8797, Test_acc 0.8849
Epoch 7. Loss: 0.5082313110192617, Train_acc 0.8842166666666667, Test_acc 0.8866
Epoch 8. Loss: 0.4875676867882411, Train_acc 0.8860333333333333, Test_acc 0.8891
Epoch 9. Loss: 0.47050906361341477, Train_acc 0.8895333333333333, Test_acc 0.8902

Visualize the prediction

import matplotlib.pyplot as plt
def model_predict(net,data):
    output = net(data.as_in_context(model_ctx))
    return nd.argmax(output, axis=1)
# let's sample 10 random data points from the test set
sample_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                              10, shuffle=True)
for i, (data, label) in enumerate(sample_data):
    data = data.as_in_context(model_ctx)
    print(data.shape)
    im = nd.transpose(data,(1,0,2,3))
    im = nd.reshape(im,(28,10*28,1))
    imtiles = nd.tile(im, (1,1,3))
    plt.imshow(imtiles.asnumpy())
    plt.show()
    pred=model_predict(net,data.reshape((-1,784)))
    print('model predictions are:', pred)
    break

# output of the above code snippet

(10, 28, 28, 1)
model predictions are: 
[3. 6. 7. 8. 3. 8. 1. 8. 2. 1.]
<NDArray 10 @cpu(0)>

From the output of the above program we can see that our model is able to solve the multiclass classification problem. We solved it with softmax regression (also called multinomial regression) rather than plain linear regression: the softmax activation forces the outputs into the range (0, 1), which allows us to interpret them as probabilities. In the above example we used sgd (stochastic gradient descent), whose from-scratch form is:

def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad

Overfitting and regularization:

Overfitting

So far we have solved regression and classification problems on three different datasets and achieved roughly 90% accuracy on the test sets. Sometimes a model fits a limited set of data points too closely; in that case we speak of overfitting. The regression and classification algorithms above work fine in these examples, but on certain datasets they run into overfitting, which can cause them to perform very poorly. In this section I would like to explain what the overfitting problem is and present regularization, a technique that allows us to reduce overfitting and make the learning algorithm perform much better.

I find that this joke from “Plato and a Platypus Walk Into a Bar” is the best analogy for the overfitting problem.

“A man tries on a made-to-order suit and says to the tailor, “I need this sleeve taken in! It’s two inches too long!”

The tailor says, “No, just bend your elbow like this. See, it pulls up the sleeve.”

The man says, “Well, okay, but now look at the collar! When I bend my elbow, the collar goes halfway up the back of my head.”

The tailor says, “So? Raise your head up and back. Perfect.”

The man says, “But now the left shoulder is three inches lower than the right one!”

The tailor says, “No problem. Bend at the waist way over to the left and it evens out.”

The man leaves the store wearing the suit, his right elbow crooked and sticking out, his head up and back, all the while leaning down to the left. The only way he can walk is with a choppy, atonic walk.

This suit fits that man perfectly, but it has been overfitted: it would be useful neither to him nor to anyone else. I think this is the best analogy for the overfitting problem.

Overfitting and underfitting are also known as overtraining and undertraining. Overfitting occurs when an algorithm captures the noise of the data; underfitting occurs when the model does not fit well enough. Not every algorithm that performs well on training data will also perform well on test data. To detect overfitting and underfitting we use validation and cross-validation datasets. Both overfitting and underfitting lead to poor predictions on new observations.

Underfitting occurs when the model shows high bias and low variance; overfitting occurs when the model shows high variance. If we have too many features, the learned model may fit the training set very well but fail to predict new observations.

Let us revisit our MNIST dataset and see how things can go wrong.

from __future__ import print_function
import mxnet as mx
import mxnet.ndarray as nd
from mxnet import autograd
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
ctx = mx.cpu() 
# load the MNIST data set and split it into the training and testing
mnist = mx.test_utils.get_mnist()
num_examples = 1000
batch_size = 64
train_data = mx.gluon.data.DataLoader(
    mx.gluon.data.ArrayDataset(mnist["train_data"][:num_examples],
                               mnist["train_label"][:num_examples].astype(np.float32)),
                               batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(
    mx.gluon.data.ArrayDataset(mnist["test_data"][:num_examples],
                               mnist["test_label"][:num_examples].astype(np.float32)),
                               batch_size, shuffle=False)

We are using a linear model with softmax. Allocate the parameters and define the model.

# weight
W = nd.random_normal(shape=(784,10))
# bias
b = nd.random_normal(shape=10)
params = [W, b]
for param in params:
    param.attach_grad()
def net(X):
    y_linear = nd.dot(X, W) + b
    yhat = nd.softmax(y_linear, axis=1)
    return yhat

Define the loss function to calculate the average loss and the optimizer to optimize it; we have already seen this cross entropy loss function and SGD in the multiclass classification section.

# cross entropy 
def cross_entropy(yhat, y):
    return - nd.sum(y * nd.log(yhat), axis=0, exclude=True)
# stochastic gradient descent 
def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad
def evaluate_accuracy(data_iterator, net):
    numerator = 0.
    denominator = 0.
    loss_avg = 0.
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        label_one_hot = nd.one_hot(label, 10)
        output = net(data)
        loss = cross_entropy(output, label_one_hot)
        predictions = nd.argmax(output, axis=1)
        numerator += nd.sum(predictions == label)
        denominator += data.shape[0]
        loss_avg = loss_avg*i/(i+1) + nd.mean(loss).asscalar()/(i+1)
    return (numerator / denominator).asscalar(), loss_avg

Plot the loss function and visualize the learning curves using matplotlib.

def plot_learningcurves(loss_tr,loss_ts, acc_tr,acc_ts):
    xs = list(range(len(loss_tr)))
    f = plt.figure(figsize=(12,6))
    fg1 = f.add_subplot(121)
    fg2 = f.add_subplot(122)
    fg1.set_xlabel('epoch',fontsize=14)
    fg1.set_title('Comparing loss functions')
    fg1.semilogy(xs, loss_tr)
    fg1.semilogy(xs, loss_ts)
    fg1.grid(True,which="both")
    fg1.legend(['training loss', 'testing loss'],fontsize=14)
    fg2.set_title('Comparing accuracy')
    fg2.set_xlabel('epoch',fontsize=14)
    fg2.plot(xs, acc_tr)
    fg2.plot(xs, acc_ts)
    fg2.grid(True,which="both")
    fg2.legend(['training accuracy', 'testing accuracy'],fontsize=14)

Let us iterate.

epochs = 1000
moving_loss = 0.
niter=0
loss_seq_train = []
loss_seq_test = []
acc_seq_train = []
acc_seq_test = []

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        label_one_hot = nd.one_hot(label, 10)
        with autograd.record():
            output = net(data)
            loss = cross_entropy(output, label_one_hot)
        loss.backward()
        SGD(params, .001)
        ##########################
        # Keep a moving average of the losses
        ##########################
        niter +=1
        moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
        est_loss = moving_loss/(1-0.99**niter)
    test_accuracy, test_loss = evaluate_accuracy(test_data, net)
    train_accuracy, train_loss = evaluate_accuracy(train_data, net)
    # save them for later
    loss_seq_train.append(train_loss)
    loss_seq_test.append(test_loss)
    acc_seq_train.append(train_accuracy)
    acc_seq_test.append(test_accuracy)

    if e % 100 == 99:
        print("Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s" %
              (e+1, train_loss, test_loss, train_accuracy, test_accuracy))

## Plotting the learning curves
plot_learningcurves(loss_seq_train,loss_seq_test,acc_seq_train,acc_seq_test)
# output
Completed epoch 100. Train Loss: 0.5582709927111864, Test Loss 1.4102623425424097, Train_acc 0.862, Test_acc 0.725
Completed epoch 200. Train Loss: 0.2390711386688053, Test Loss 1.2993220016360283, Train_acc 0.94, Test_acc 0.734
Completed epoch 300. Train Loss: 0.13671867409721014, Test Loss 1.2758532278239725, Train_acc 0.971, Test_acc 0.748
Completed epoch 400. Train Loss: 0.09426628216169773, Test Loss 1.2602066472172737, Train_acc 0.989, Test_acc 0.758
Completed epoch 500. Train Loss: 0.05988468159921467, Test Loss 1.2470015566796062, Train_acc 0.996, Test_acc 0.764
Completed epoch 600. Train Loss: 0.043480587191879756, Test Loss 1.2396155279129744, Train_acc 0.998, Test_acc 0.762
Completed epoch 700. Train Loss: 0.032956544135231525, Test Loss 1.234715297818184, Train_acc 0.999, Test_acc 0.764
Completed epoch 800. Train Loss: 0.0268415825557895, Test Loss 1.2299001738429072, Train_acc 1.0, Test_acc 0.768
Completed epoch 900. Train Loss: 0.022739565349183977, Test Loss 1.2265239153057337, Train_acc 1.0, Test_acc 0.77
Completed epoch 1000. Train Loss: 0.019902906555216763, Test Loss 1.2242997065186503, Train_acc 1.0, Test_acc 0.772

From the above graph and output you can see how the model is performing: by around the 800th epoch the model reaches 100% accuracy on the training set, yet it classifies only about 77% of the test examples correctly and gets the rest wrong. This gap is a clear sign of high variance, i.e. overfitting. Methods to avoid overfitting:

  1. Cross-Validation
  2. Drop out
  3. Regularization

Regularization:

In the above section we identified the problem of overfitting; we know the problem and the reasons for it. Now let us talk about the solution. With regularisation we keep all the features but reduce the magnitude of the parameters. Regularisation keeps the weights small, keeping the model simpler and less prone to overfitting; an overfitted model is less accurate on new data.

Suppose we have a linear regression predicting y from many x inputs.

y = a1x1 + a2x2  + a3x3 + a4x4 + a5x5.....

In the above equation a1, a2, ... are the coefficients and x1, x2, ... are the independent variables used to predict the dependent variable y.

“Regularisation means generalize the model for the better. “

“Mastering the trade-off between bias and variance is necessary to become a machine learning champion.”

Regularization is a technique to discourage the complexity of the model (reduce the magnitude of its parameters). It does this by penalizing the loss function. What does penalizing the loss function mean? Penalizing the weights makes them very small, almost near zero; terms near zero become almost negligible, which helps simplify the model.

The loss function is the sum of the squared differences between the predicted value and the actual value. ƛ is the regularization parameter that determines how much to penalize the weights; the right value of ƛ lies somewhere between 0 (zero) and a large value.
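Written out, the L2-regularised loss is the usual squared error plus a weighted penalty on the coefficients:

loss = Σ (y_actual - y_predicted)² + ƛ * Σ (w²)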

There are a few regularisation techniques:

  1. L1 Regularization or Lasso Regularization
  2. L2 Regularization or Ridge Regularization
  3. Dropout
  4. Data Augmentation
  5. Early stopping

We will solve the above overfitting problem using the L2 regularisation technique.

Let us implement and solve the overfitting problem.

Penalize the coefficients:

# penalizes the coefficients
def l2_penalty(params):
    penalty = nd.zeros(shape=1)
    for param in params:
        penalty = penalty + nd.sum(param ** 2)
    return penalty

Reinitialize the parameters so that we start training from scratch and get a fair comparison.

for param in params:
    param[:] = nd.random_normal(shape=param.shape)

L2 regularised logistic regression,

L2 regularization adds the sum of the squares of all the feature weights to the loss, i.e. a penalty term of the form ƛ * Σ (w²). L2 regularization performs better when all the input features influence the output and the weights are of approximately equal size.

Let us implement this L2 regularisation.

epochs = 1000
moving_loss = 0.
l2_strength = .1
niter=0
loss_seq_train = []
loss_seq_test = []
acc_seq_train = []
acc_seq_test = []

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        label_one_hot = nd.one_hot(label, 10)
        with autograd.record():
            output = net(data)
            loss = nd.sum(cross_entropy(output, label_one_hot)) + l2_strength * l2_penalty(params)
        loss.backward()
        SGD(params, .001)
        ##########################
        # Keep a moving average of the losses
        ##########################
        niter +=1
        moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
        est_loss = moving_loss/(1-0.99**niter)

    test_accuracy, test_loss = evaluate_accuracy(test_data, net)
    train_accuracy, train_loss = evaluate_accuracy(train_data, net)
    # save them for later
    loss_seq_train.append(train_loss)
    loss_seq_test.append(test_loss)
    acc_seq_train.append(train_accuracy)
    acc_seq_test.append(test_accuracy)
    if e % 100 == 99:
        print("Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s" %
              (e+1, train_loss, test_loss, train_accuracy, test_accuracy))

## Plotting the learning curves
plot_learningcurves(loss_seq_train,loss_seq_test,acc_seq_train,acc_seq_test)

Let us look at the graph for more understanding. From the graph you can easily see how much closer the training loss and the testing loss are to each other now.

Summary:

This chapter gave a bit of insight into the Gluon API and NDArray, along with some of the built-in neural network modules from Gluon. With the completion of this chapter you now know how to create a simple artificial neural network using the Gluon abstraction, and when to use regression versus classification techniques, along with some real datasets.

As machine learning developers, the major problems we face are overfitting and underfitting, and this chapter gave us regularisation as a tool to address overfitting. Gluon is a very concise, powerful abstraction that helps us design, prototype, build, deploy, and test machine learning models on GPU and CPU. We now know how to set the context (GPU, CPU). We solved classification problems, both binary and multiclass, using the logistic regression technique. Let us move on to the next adventure.

Overview of Deep learning with Gluon


This chapter introduces fundamental concepts and jargon that every machine learning engineer and data scientist should know. We will discuss some basic concepts of machine learning, deep learning, and AI; in subsequent chapters of this book we will dive deeper and get our hands dirty. Deep learning (DL) has been a groundbreaking technology for all industries and a booster for AI adoption.

Andrew Ng once said, “Artificial Intelligence is the new electricity!”

AI, DL, and ML are used interchangeably, but there are substantial differences between the three; we will start with a brief definition of each. This chapter covers the basics of machine learning, deep learning, and AI, and some foundational terminology needed to understand deep learning, and then takes a glance over the Gluon API. We will also cover some of the MXNet deep learning framework along with the Gluon API. This book is for any technical person who wants to get up to speed on machine learning and deep learning quickly, and for anyone who is new to the technology but curious about how machines think and act. In this book we dig deeper into deep learning neural networks using the Gluon API, with MxNet as the underlying deep learning framework. Gluon is packaged along with MxNet and is an abstraction layer over the Apache MxNet deep learning framework. Gluon is named after the subatomic particle: a gluon is an elementary particle that acts as an exchange particle. This book is for data scientists, machine learning engineers, and aspiring data scientists.

This chapter covers the following points:

  • Artificial Intelligence
  • Machine learning
  • Deep learning
  • Neural Network Architectures
  • Gluon API overview and environment setup

Artificial Intelligence(AI)

Artificial intelligence is where a machine can think, act, fail, learn, and react without human intervention. Artificial intelligence is the hype in the industry now, and there are tons of articles available that teach us, make us dream about the future, and scare us as well; but above all, AI is a revolutionary technology. The progress made in the last couple of years has been remarkable thanks to the growth in computation power and the vast amount of data. At the very highest level, AI is about creating machines capable of solving problems like a human; as humans we learn through reasoning, intuition, cognitive thinking, and creativity. There are several definitions of AI floating around; my favorite one is “the science and engineering of making intelligent machines”.

The history of AI:-

During the Second World War, the Germans built the Enigma machine for military communications, to send messages securely. Alan Turing and his team built the machine that was used to decipher Enigma messages; cracking the Enigma code by hand was very challenging due to the number of permutations and combinations. The question of whether machines can think and act like a human started much earlier than that. In the early days of AI, machines were able to solve problems that were difficult for humans or that involved mundane industrial work. There are different aspects of human intelligence, and AI asks how to mimic humans and build an intelligent machine.

In 1956, American computer scientist John McCarthy organized the Dartmouth Conference, at which the term 'Artificial Intelligence' was first coined. Researchers Allen Newell and Herbert Simon were instrumental in promoting AI as a field of computer science that could transform the world. McCarthy, often called the father of AI, developed the LISP programming language, which became important in AI. In 1951, a machine known as the Ferranti Mark 1 successfully used an algorithm to master checkers. Subsequently, Newell and Simon developed the General Problem Solver algorithm to solve mathematical problems. It was also in the late 1960s that the first mobile decision-making robot capable of various actions was made; its name was Shakey, and it could create a map of its surroundings prior to moving. The first 'intelligent' humanoid robot was built in Japan in 1972.

In the early days of AI, researchers believed AI could solve problems by hard-coding a rule-based system like a decision tree. This kind of AI system, also known as symbolic AI, was very successful at solving well-defined logical problems but failed at complex problems such as natural language understanding, image detection, scene understanding, object detection, and time-based forecasting. Despite decades of well-funded global effort, researchers found it incredibly difficult to create intelligent machines, for reasons such as the unavailability of computing power and the lack of data.

In 1997, IBM's Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov. AI technology continued its march, largely thanks to improvements in computer hardware, and people applied AI methods in narrow domains instead of aiming for general intelligence, which helped researchers solve some complex problems. Exponential gains in processing power and storage allowed companies to store vast quantities of data. Today's AI touches almost every aspect of human life, from the military and entertainment to our cell phones and driverless cars, from real-time voice translation to a vacuum that knows where and how to clean the floor without you, from our own computers to the doctor's office: autonomous (driverless) cars, facial recognition for authentication, and more. So where is AI going in the future? Is it scary or not? No one can tell you for sure.

AI-powered machines are usually classified into two groups: general and narrow. Narrow AI machines can perform specific tasks very well, sometimes better than humans.
The technology used for classifying images on Airbnb is an example of narrow AI.
This is how AI, ML, and DL fit together.

Machine learning:-

Machine learning is a branch of computer science that deals with methods and techniques for implementing algorithms that learn from data. Machine learning is inferential learning from a descriptive data set.
This is the era of data mining, and data is the fuel of the 21st century. If you have data (fuel), then you can develop an AI system that electrifies your business. Generally, in traditional programming we have data and rules, and we expect some result from them; this is the paradigm we follow as programmers. Say we want to write a program to convert a temperature from Fahrenheit to Celsius: we need the value in Fahrenheit and the conversion formula, and with their help we write a code snippet whose result is the temperature in Celsius.
Machine learning has shifted this paradigm: it takes data and answers as input and returns the rules as a result. Consider the same Fahrenheit-to-Celsius conversion in an ML context: we provide pairs of Fahrenheit and Celsius values and ask the ML program to find the relationship between them, that is, to learn the formula. This is just a simple example, but many far more complex problems are addressed with the help of ML in the same way.
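
Below is a minimal sketch (my own illustration, not part of the original text) of this paradigm shift: instead of hard-coding the Fahrenheit-to-Celsius formula, a tiny linear model learns it from example pairs using plain gradient descent.

import numpy as np

# example pairs: the "data" and the "answers"
fahrenheit = np.array([-40.0, 14.0, 32.0, 46.0, 59.0, 72.0, 100.0])
celsius    = np.array([-40.0, -10.0, 0.0, 7.78, 15.0, 22.22, 37.78])

w, b = 0.0, 0.0          # the "rules" the model has to discover
lr = 1e-4                # learning rate
for _ in range(200000):  # gradient descent on the mean squared error
    error = (w * fahrenheit + b) - celsius
    w -= lr * 2 * np.mean(error * fahrenheit)
    b -= lr * 2 * np.mean(error)

print(w, b)  # approaches 0.5556 and -17.78, i.e. C = (F - 32) / 1.8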

There are plenty of definitions and articles available on the internet that can explain what machine learning is. When I fired a quick query at Google and Wikipedia, this is the very simple definition of machine learning I came across.

“Machine learning gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959). It is a subfield of computer science. The idea came from work in artificial intelligence. Machine learning explores the study and construction of algorithms which can learn and make predictions on data.”

A more engineering-oriented definition:

A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
 — Tom Mitchell, 1997

Likewise, in the example we discussed above, your program should identify the relationship between the Fahrenheit and Celsius values on its own and act accordingly in the future, instead of being given an explicit conversion formula. Machine learning is not only a science but also an art. In machine learning, data is the challenging part: if we have data for training the algorithm, we also need a test data set to validate it, so in ML we need two data sets, one to train the algorithm and another to test it.

Types of Machine learning:

Less is More

1. Supervised Machine learning:

Supervised machine learning is the technique of inferring a function from a labeled dataset. Supervised machine learning means machine learning with some amount of human supervision: we have input data along with output labels. In supervised learning, the data set comes with the expected output, also known as the label. From the data, the machine learning algorithm learns which output corresponds to which input. Typical supervised learning addresses classification and regression problems such as spam filtering and prediction of house prices. To train these systems we need a huge amount of labeled data.

Below are some supervised machine learning algorithms; a minimal training sketch follows the list.

  • Linear Regression
  • Logistic Regression
  • Support Vector Machines (SVMs)
  • Decision Trees and Random Forests
  • k-Nearest Neighbors
  • Neural networks
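
As a minimal sketch (my own illustration, not from the original text), here is supervised learning in its simplest form: a 1-nearest-neighbour classifier that predicts the label of a new point from a small labeled data set.

import numpy as np

# labeled training data: two features per sample, label 0 or 1
X_train = np.array([[1.0, 1.2], [0.9, 0.8], [4.0, 4.2], [4.1, 3.9]])
y_train = np.array([0, 0, 1, 1])

def predict(x):
    # distance from the query point to every training sample
    distances = np.linalg.norm(X_train - x, axis=1)
    # the label of the closest training sample is the prediction
    return y_train[np.argmin(distances)]

print(predict(np.array([1.1, 1.0])))  # expected: 0
print(predict(np.array([3.8, 4.0])))  # expected: 1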

2. Unsupervised Machine learning:

Unsupervised machine learning is the technique of inferring rules and finding meaningful patterns from a data set. In this type of machine learning, the data set consists of input data without labeled results. Unsupervised learning is learning without supervision, or learning without a teacher. To train an unsupervised algorithm, the given data are not annotated, which means only input values are provided. This technique is useful for grouping data, that is, clustering, and for finding common patterns in the data.

Some unsupervised algorithms are listed below; a small clustering sketch follows the list.

  • Clustering:
    k-Means
    Hierarchical Cluster Analysis (HCA)
    Expectation Maximization
  • Visualization and dimensionality reduction:
    Principal Component Analysis (PCA) — Kernel PCA
    Locally-Linear Embedding (LLE)
    t-distributed Stochastic Neighbor Embedding (t-SNE)
  • Association rule learning:
    Apriori
    Eclat
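
As a minimal sketch (my own illustration, not from the original text), here are a few iterations of k-means clustering: the points are grouped without any labels being provided.

import numpy as np

points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])
k = 2
# start from two randomly chosen points as the centroids
centroids = points[np.random.choice(len(points), k, replace=False)]

for _ in range(10):
    # assign every point to its nearest centroid
    distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(distances, axis=1)
    # move each centroid to the mean of the points assigned to it
    centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])

print(labels)     # two groups emerge on their own
print(centroids)  # roughly [1, 1] and [5, 5]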

3. Self-supervised learning:

Self-supervised learning is a very recent technique in machine learning. It is supervised learning, but instead of humans providing labeled data as input, the data set is labeled automatically. The self-supervised learning technique has the potential to solve problems that are not addressed by supervised learning. As I mentioned earlier, in machine learning the data set is the challenging part: providing a huge amount of labeled data is a very demanding task.

Self-supervised learning is autonomous supervised learning. It is a representation learning approach that eliminates the human supervision needed to label data. Self-supervised learning is very relevant to how humans learn: we learn a few things in a supervised manner and a few in unsupervised ways, but we learn from very few examples and generalize exceptionally well.

4. Reinforcement Learning:

Reinforcement learning is another technique in machine learning. Have you ever visited a circus, where the ringmaster trains the tiger? For positive behavior the ringmaster rewards the tiger, and negative behavior can bring punishment; in a way, this is also how we learn in academia.
Reinforcement learning means the agent learns how to behave in a particular environment: it is rewarded for positive behavior and punished for negative behavior.
Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is useful in gaming; it is goal-oriented learning where an agent learns how to behave in the environment by performing actions and accumulating the maximum reward to reach the goal.
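
A minimal sketch (my own illustration, not from the original text) of this reward-driven loop: an epsilon-greedy agent learning which of three slot machines, i.e. actions, pays out the most.

import numpy as np

true_reward = np.array([0.2, 0.5, 0.8])   # hidden payout probability of each action
estimates = np.zeros(3)                   # the agent's estimate of each action's value
counts = np.zeros(3)
epsilon = 0.1                             # how often the agent explores

for step in range(5000):
    # explore occasionally, otherwise exploit the best-known action
    if np.random.rand() < epsilon:
        action = np.random.randint(3)
    else:
        action = int(np.argmax(estimates))
    reward = float(np.random.rand() < true_reward[action])   # 1 = reward, 0 = punishment
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # approaches the hidden payout probabilities; the best action is the third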

Here is a very interesting analogy used by Yann LeCun to put these learning types in perspective.

“ Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. “

Deep Learning:

Deep learning mimics the layered way human beings gain knowledge. Over the past couple of years, deep learning has revolutionized many areas of research and industry, including autonomous driving vehicles, healthcare, reinforcement learning, generative modeling, NLP, robotics, and fintech. The deep learning technique is part of the broader family of machine learning and takes its inspiration from how the human brain works. Deep neural networks are structured hierarchically, and each level of the hierarchy represents a different level of abstraction. Deep learning has become the hype now because of advancements in hardware and software. The Artificial Neural Network (ANN) is the core building block of deep learning; an ANN is inspired by the neurons of the human brain. In the human brain there are billions of neurons, they are interconnected, and their structure and hierarchy are very deep and complex. Deep neural networks take inspiration from the brain, for example from how we understand a scene or how our visual cortex identifies an object; the CNN (Convolutional Neural Network) is the best example of this. Before deep learning, object detection or detecting a human face was a laborious task: you had to extract features and create a template for them, for example detecting the nose, the left eye, and the right eye, defining every single step needed to reach an outcome. With the help of deep learning we can understand almost any scene, and object detection has become very easy. A deep neural network has many levels of layers arranged in a hierarchical, abstract way to understand things and finally combine the results.
Deep learning is a subset of machine learning which takes ML one step further to process and understand data and find meaningful insights.

Our brain consists of a large network of interconnected neurons, which act as a roadway for information to be transmitted from point A to point B. To send different kinds of information from A to B, the brain activates a different set of neurons, and so essentially uses a different route to get from A to B. Biological neurons are interconnected and understand things by exchanging signals. A cell consists of a cell body, with dendrites acting as connecting wires for other neurons to connect to. In most cases, a neuron has one axon capable of actively transmitting electric signals to other connecting cells. The connections between neurons are established using synapses located at the end of the axon; these synapses are responsible for a lot of the magic of computation and memory in the nervous system. The ANN model is modeled after the biological neural network: just like a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to send signals out to other neurons, the artificial neuron has a number of input channels, a processing stage, and one output that can fan out to multiple other artificial neurons.
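
A minimal sketch (my own illustration, not from the original text) of a single artificial neuron: the inputs are weighted, summed together with a bias, and passed through a non-linear activation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

inputs  = np.array([0.5, -1.2, 3.0])   # signals arriving on the input channels
weights = np.array([0.4,  0.7, -0.2])  # strength of each connection (the "synapses")
bias = 0.1

activation = sigmoid(np.dot(weights, inputs) + bias)
print(activation)  # the single output that fans out to the next layer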

A bit of history:

Deep learning has changed every industry, but in general it addressed major problems such as speech recognition, image recognition, and object detection in images.

The term “deep learning” was first used when talking about Artificial Neural Networks (ANNs) by Igor Aizenberg and colleagues in or around 2000. In deep learning, “deep” typically refers to the number of layers.

1960s: Shallow neural networks
1960–70s: Backpropagation emerges
1974–80: First AI Winter
1980s: Convolution emerges
1987–93: Second AI Winter
1990s: Unsupervised deep learning
1990s-2000s: Supervised deep learning back in vogue
2006-present: Modern deep learning

Neural Network architectures:

A neural network is designed to solve complex tasks; some tasks are more complex to solve but not impossible, such as building a recommendation system based on shopping history. As programmers, we could write some sort of hard-coded rules to fulfill this requirement, but this is mundane work, so machine learning algorithms come in to help us explore the data and find meaningful patterns. A machine learning or AI system comes into the picture where there is more uncertainty, for example:

  1. It is hard to identify a fraudulent transaction in a digital money transfer, where the end user is not in front of a system but is virtual.
  2. It is very hard for a machine to detect a pedestrian.

Artificial Neural Networks are a first-class model for predicting such uncertain results. An ANN takes its inspiration from the human brain; it is, loosely, a simulation of it. Neural network architectures are very complex, highly adaptive, and perform parallel computation.

Neural network research is motivated by two desires:

  1. To understand the human brain in a better way.
  2. To mimic human activity and intelligence in computers that can deal with complex problems.

Different neural network architectures address domain-specific problems. Human intelligence is general intelligence, and it is very tough to develop artificial general intelligence that addresses all, or even most, problems. Neural network architectures consist of three major kinds of layers: the input layer, the hidden layers, and the output layer. The number of hidden layers defines the depth of the neural network architecture.
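
A minimal sketch (my own illustration, not from the original text) of a forward pass through these three kinds of layers, input to hidden to output, using plain numpy.

import numpy as np

def relu(z):
    return np.maximum(0, z)

x = np.array([0.2, 0.7, 0.1, 0.5])        # input layer: 4 features
W1 = np.random.randn(4, 3) * 0.1          # weights into a hidden layer of 3 neurons
b1 = np.zeros(3)
W2 = np.random.randn(3, 2) * 0.1          # weights into an output layer of 2 neurons
b2 = np.zeros(2)

hidden = relu(x @ W1 + b1)                # hidden layer activations
output = hidden @ W2 + b2                 # output layer (e.g. class scores)
print(output)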

Below is a brief introduction to some ANN architectures.

  1. Perceptrons
  2. Hopfield Neural Network
  3. CNN
  4. Recurrent Neural Networks

Gluon API

Overview

Gluon API is a high-level, simple, concise, and efficient deep learning API. Amazon and Microsoft research groups developed the Gluon API specification. It is the product of a joint effort by both leading tech companies to make AI accessible to any developer. Gluon is an open-source deep learning interface, jointly developed by the companies to let developers “prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps.”

Gluon is an API, not another deep learning framework; it provides a concise and clear abstraction layer that helps improve the speed, flexibility, and accessibility of deep learning technology for all developers, regardless of their deep learning framework of choice. Gluon offers an interface that allows developers to prototype, build, and train deep learning models.

Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure. Seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

Gluon is imperative for developing but symbolic for deploying.

Before we dive deeper into the Gluon API, we should know at least one of the underlying frameworks Gluon relies upon. Gluon is an abstraction layer for deep learning frameworks such as MXNet.

Distinct Advantages:

  1. Friendly API: simple, easy-to-understand code
  2. Flexible, imperative structure
  3. Build graphs on the fly
  4. High-performance operators for training

MxNet:

MXNet is an open-source deep learning library backed by Amazon. It was founded at the University of Washington and Carnegie Mellon University. It is a portable, efficient, and scalable deep learning framework, and it supports Python, JavaScript, Scala, Julia, and R. The best thing about MXNet is that it allows both imperative (define-by-run) and symbolic programming. It has a vibrant community backed by Amazon.

Installing Gluon on MacOS:

The Gluon specification has already been implemented in Apache MXNet, so we need to install Apache MXNet to set up the environment. It is easy to set up an environment for the Gluon API using different options such as Docker, pip, or a virtual environment. MXNet supports different languages along with different OS platforms. I will show here the installation for macOS.

You can refer to this link to do the installation for your respective platform. (https://mxnet.incubator.apache.org/versions/master/install/index.html?platform=MacOS&language=Python&processor=CPU)

By default, MXNet is installed for CPU, but you can also install it in GPU-enabled mode.

Pip mode

$ pip install mxnet

MXNet offers MKL pip packages that will be much faster when running on Intel hardware.

Using Docker

Docker images with MXNet are available at Docker Hub(https://hub.docker.com/).

Step 1 Install Docker on your machine. For more detail (https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac)

Step 2 Pull the MXNet docker image.

$ docker pull mxnet/python

Verify your docker pull command:

$ docker images

I recommend using Python version 3.3 or greater and setting up the environment using a Jupyter Notebook.

# I used miniconda and a virtual environment
# source activate gluon
# optional: update pip to the newest version
sudo pip install --upgrade pip
# install jupyter
pip install jupyter --user
# install the nightly-built mxnet
pip install mxnet --pre --user
# by default MXNet is CPU-only; install a GPU build if a GPU is available
pip install mxnet-cu75 --pre --user  # for CUDA 7.5
# for CUDA 8.0 use mxnet-cu80 --pre --user
# start the notebook and enjoy coding
jupyter notebook

Validate the installation:

To validate the installation, these are the simple steps.

For a pip installation, start a terminal and type:

$ python
>>> import mxnet as mx
>>> from mxnet import gluon

In the same way, you can validate the Docker setup by starting the Docker image and executing the same commands in a bash shell.

MXNet should work on any cloud provider’s CPU-only instances. You can also set up Gluon and MXNet on any cloud platform; it is easy to set up on Amazon AWS.

AWS Deep learning AMI (Amazon Machine Images) — Preinstalled Conda environments for Python 2 or 3 with MXNet and MKL-DNN.

Also, MXNet supports different edge platforms such as the Raspberry Pi and NVIDIA Jetson devices.

The architecture of MxNet:

The key modules of the MXNet framework and their relationships are summarized below (in the original architecture diagram, solid arrows indicate concrete dependencies and dotted lines indicate light dependencies). The lower modules are system modules, and the higher-level modules are the user-facing modules, the actual APIs the programmer interacts with. The modules are:

KVStore:- key-value store interface for parameter synchronization

Data Loading:- Efficient distributed data loading and augmentation

NDArray:- Dynamic, asynchronous n-dimensional arrays

Symbolic Execution:- Static symbolic graph executor

Symbolic Construction:- provides a way to construct a computation graph

Operators:- Operators that define static forward and gradient calculations

Storage Allocator:- Allocates and recycles memory blocks

Runtime Dependency Engine:- Schedules and executes the operations

Resource Manager:- Manages global resources

Gluon Package

The Gluon package comes with four key modules; a short illustrative sketch follows the list.

  1. Parameter:- Parameter is a basic component; a Parameter can hold the weights of a block. There are two standard APIs: Parameter itself, and ParameterDict to manage a set of parameters.
  2. Containers:- Containers are the blocks that help you build a neural network; they are the blocks which hold the parameters.
  3. Trainer:- Trainer helps you with parameter optimization; it applies an optimizer over the parameters held in the containers.
  4. Utilities:- Utilities contains small utils that help with certain operations, such as splitting and rescaling a dataset for data parallelism.
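
A minimal sketch (my own illustration, not from the original text) of how the four modules fit together: a container of layers, its Parameters, a Trainer that will optimize them, and a utility that splits data across devices.

import mxnet as mx
from mxnet import nd, gluon

net = gluon.nn.Sequential()          # Container holding the blocks
net.add(gluon.nn.Dense(4, activation='relu'))
net.add(gluon.nn.Dense(1))
net.initialize()

params = net.collect_params()        # ParameterDict: all Parameters of the container
trainer = gluon.Trainer(params, 'sgd', {'learning_rate': 0.1})  # Trainer applies the optimizer

x = nd.random.uniform(shape=(8, 3))
batches = gluon.utils.split_and_load(x, [mx.cpu()])  # Utilities: split data for data parallelism
print(net(batches[0]), params)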

Gluon APIs:

The Gluon API contains the APIs below; a tiny sketch using a couple of them follows the list.

  1. Gluon Neural Network Layers API:- The Gluon neural network layers API provides the building blocks of a neural network. It contains APIs to directly add blocks to a neural network, such as Dense layers, Convolution layers, Activation layers, and Max Pooling layers.
  2. Gluon Recurrent Neural Network API:- This API provides the building blocks to define Recurrent Neural Networks, for example RNNs with LSTM cells.
  3. Gluon Loss API:- This API contains the different loss functions required while building different neural networks; it can help you calculate, for example, mean squared loss or mean absolute loss.
  4. Gluon Data API:- This API is very useful for people who want to get their hands dirty but don’t have a dataset. It contains dataset utilities and common public datasets.
  5. Gluon Model Zoo:- The Gluon model zoo contains pre-trained and pre-defined models that help us bootstrap our development.
  6. Gluon Contrib API:- This is for those who have mastered Gluon and want to contribute to the Gluon API. This API is for the community to try out new features and give feedback.
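
A minimal sketch (my own illustration, not from the original text) using two of these APIs: pulling a pre-trained network from the Model Zoo and a public dataset from the Data API.

from mxnet.gluon.model_zoo import vision
from mxnet.gluon.data.vision import MNIST

net = vision.resnet18_v1(pretrained=True)   # downloads pre-trained weights on first use
dataset = MNIST(train=True)                 # downloads a common public dataset on first use
print(net, len(dataset))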

Deep learning Programming style:

One of my favorite things about the Gluon API is that it offers multiple levels of abstraction, so you can choose the right one for your project. Gluon offers two styles to create your neural network: the first is the symbolic, or declarative, style and the second is the imperative style.
These are the two deep learning programming styles. Each one has its own pros and cons, which is why almost all deep learning frameworks offer both styles of programming.

Imperative Programming:

Imperative programming means define-by-run, that is, dynamic programming: the computation graph is constructed at run time. Imperative programming is flexible and straightforward. In this style, we can take advantage of native language features such as iteration, conditionals, the debugger, etc.
The imperative style is nothing new for you; the way you write NumPy code is the imperative style of programming. Imperative-style programs perform operations directly.
Most Python code is imperative, for example the following NumPy code. In this style of programming, the state of the program changes as each statement executes.

import numpy as np
a = np.ones(20)
b = np.ones(20) * 2
c = b * a
d = c + 1

In the above code snippet, when the program reaches c = b * a, the actual operation is executed immediately.

PROS:

  1. Straightforward and flexible, because the execution flow is that of the host programming language.
  2. Takes advantage of native language features.

Cons:

  1. Optimization has to be done manually.
  2. Less efficient in terms of memory usage and speed.

Symbolic Style of programming:

Symbolic programming, aka declarative programming, is the opposite of the imperative style. In this style of programming, execution is performed only after the computational process has been fully defined. In this paradigm you first define and then run, which gives a static computation graph; the graph is immutable and does not change at run time. Symbolic-style programs include a compilation step, either explicit or implicit, which converts the graph into a function that can then be called at any time. In this style of programming, we define a function with placeholder values, and afterwards we compile the function and evaluate it with the actual inputs. Below is a code snippet converting the above imperative code to symbolic pseudocode; symbolic programming generally requires three steps:

#Step 1:- Define the computation graph.
a = Variable('A')
b = Variable('B')
c = b * a
d = c + Constant(1)
#Step 2:- Compile the computation process into an executable program.
f = compile(d)
#Step 3:- Provide the required inputs and call the compiled program for execution.
g = f(a=np.ones(20), b=np.ones(20)*2)

In this code snippet, c = b * a does not actually perform the operation; instead, it generates the computation graph that represents the computation process.
The full computation graph is only generated once the output operation d is defined.

PROS:

  1. Optimizations are inferred automatically from the dependency graph.
  2. More opportunities for memory reuse.
  3. More efficient and easier to port.

Cons:

  1. Less flexible

Hybrid Programming style:

Gluon comes with a hybrid programming style, which is a strong point in its favor; from the description above you cannot conclude that one programming style is strictly better for deep learning.
Gluon’s hybrid approach gives us the flexibility to harness the benefits of both imperative and symbolic programming. You should use imperative programming to build and test a prototype on the fly, and while deploying or serving in production you can convert the program into symbolic programming to achieve production-level computing performance.
This is possible thanks to the Gluon API’s hybrid programming support.

In hybrid programming, we build models using either the HybridBlock or the HybridSequential Gluon API classes. By default, Gluon uses the Block or Sequential classes, the same ones used in imperative programming. When we call the hybridize function, Gluon converts the program’s execution into the symbolic programming style.

Let us first contrast the two styles with a small MXNet example; a hybrid sketch follows it.

# imperative
import mxnet as mx
from mxnet import nd
a = mx.nd.zeros((120, 60))
b = mx.nd.zeros((120, 60))
c = a + b
c += 1
print(c)
# symbolic
net = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=net, num_hidden=10)
net = mx.sym.SoftmaxOutput(data=net, name='softmax')
mod = mx.mod.Module(symbol=net)
mod.bind(data_shapes=[('data', (120, 60))], label_shapes=[('softmax_label', (120,))])
mod.init_params()
mod.forward(mx.io.DataBatch(data=[c], label=[nd.zeros(120)]))
mod.backward()
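
Here is a minimal hybrid sketch (my own illustration, not from the original text): build the network imperatively with HybridSequential, then call hybridize() so Gluon caches and optimizes a symbolic graph under the hood.

import mxnet as mx
from mxnet import nd, gluon

net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Dense(10, activation='relu'))
    net.add(gluon.nn.Dense(2))
net.initialize()

x = nd.random.uniform(shape=(4, 8))
print(net(x))      # runs imperatively, easy to debug

net.hybridize()    # from now on execution goes through a compiled symbolic graph
print(net(x))      # same result, faster repeated execution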

The NDArray API:

In this section, we will introduce the NDArray API. In MXNet, the NDArray API is the primary tool to store, transform, and manipulate data; it is the core data structure for all computation. An NDArray is a multi-dimensional array similar to a NumPy array: it represents a multi-dimensional, fixed-size, homogeneous array. Basically, NDArray provides an API for imperative tensor operations. mxnet.ndarray is similar to numpy.ndarray, but not identical; there are some differences.

Array creation:-

We can create an NDArray from a Python tuple or list with the NDArray array function.

import mxnet as mx
from mxnet import nd
# create a 1D array from a python list
x = mx.nd.array([4,3,9])
# create a 2D array from a nested python list
z = mx.nd.array([[4,3,6], [5,1,8]])
# display the shapes of the arrays
{'x.shape':x.shape, 'z.shape':z.shape}

We can also create an NDArray from a numpy.ndarray.

# import the numpy and mxnet packages
import numpy as np
import mxnet as mx
from mxnet import nd
# create a numpy array
d = np.arange(15).reshape(3,5)
# create a 2D NDArray from the numpy.ndarray object
y = mx.nd.array(d)
# display the shape
{'y.shape':y.shape}

We can optionally specify the dtype while creating an NDArray; by default, float32 is used. We can also create NDArrays as placeholders with the help of different functions such as zeros, ones, etc. NDArray also offers essentially all the APIs required to manipulate data, such as slicing, indexing, shape manipulation, basic arithmetic, copies, reductions, etc.

# basic operations on NDArray
import numpy as np
import mxnet as mx
from mxnet import nd
# float32 is used by default
a = mx.nd.array([1,2,3])
b = mx.nd.array([4,5,6])
# create a 16-bit float array
c = mx.nd.array([1.2, 2.3], dtype=np.float16)
(a.dtype, c.dtype)
# create an empty (uninitialised) array
d = mx.nd.empty((2,3))
# create an array of all zeros
e = mx.nd.zeros((2,3))
# create an array filled with the value 5
f = mx.nd.full((2,3), 5)
# elementwise plus
g = a + b
# elementwise minus and negation
h = f - e
i = -e
# reductions such as sum or mean
j = mx.nd.sum(e)
# exponential
f.exp()
# matrix product (transpose so the shapes match)
nd.dot(e, f.T)
# indexing a single element
f[1,2]
# slicing, for a more advanced selection
f[:,1:2]

NDArray has some key advantages. First, NDArrays support asynchronous mathematical computation on CPU, GPU, and distributed cloud architectures. Second, they provide support for automatic differentiation. These properties make NDArray a vital choice for deep learning. As we saw, we can create vectors, matrices, and tensors and manipulate them with the help of NDArray.

We can easily convert an NDArray to a NumPy array for scenarios where you want to use a NumPy array instead of an NDArray.

Note:- the converted array does not share memory with the original.

# convert x into a numpy array z
z = x.asnumpy()
# display the type of z for verification
(type(z), z)
# convert the numpy array back to an NDArray
nd.array(z)

The Symbol API:

In the previous section, we learned how to store and manipulate data with NDArray. In this section, we will explore the Symbol API, the basic interface for symbolic programming. The Symbol API follows the declarative approach: instead of executing the program step by step, you first define a computation graph, which contains placeholders for the inputs and the desired outputs. The Gluon API takes advantage of this approach under the hood once you hybridize. Your computation graph is a composition of symbols, operators, and network layers. With the Symbol API, the computation graph can be optimized, and it uses a smaller memory footprint because memory from intermediate steps can be recycled. NDArray allows writing a program in an imperative fashion, while the Symbol API allows writing a program in a declarative fashion, but most of the operators supported by NDArray are also supported by the Symbol API. A symbol represents a multi-output symbolic expression.

We will build a simple example, 3 * a + b. With the symbolic API we need to declare placeholders for the inputs using mx.sym.Variable, giving them the names a and b respectively.

import mxnet as mx
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = 3 * a + b
type(c)
# output 
mxnet.symbol.symbol.Symbol

The Symbol API also supports a rich set of neural network operators, with the help of which we can define entire neural networks as well; a small sketch follows.
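
A minimal sketch (my own illustration, not from the original text) of defining a small network declaratively with the Symbol API’s neural network operators.

import mxnet as mx

data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, name='fc1', num_hidden=64)
act1 = mx.sym.Activation(data=fc1, name='relu1', act_type='relu')
fc2 = mx.sym.FullyConnected(data=act1, name='fc2', num_hidden=10)
out = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
print(out.list_arguments())   # the placeholders and parameters the graph expects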

First Gluon Example:

Let us create a simple neural network layer using the gluon nn package.

# import mxnet and its ndarray module
import mxnet as mx
from mxnet import nd
# import the gluon nn package
from mxnet.gluon import nn
# define a layer; Dense is a subclass of Block
layer = nn.Dense(2)
layer
# initialise the weights (by default, uniformly sampled from [-0.07, 0.07])
layer.initialize(ctx=mx.cpu(0))
# random (3,4) matrix with values ranging from -1 to 1
x = nd.random.uniform(-1,1,shape=(3,4))
layer(x)
# print the weight data
layer.weight.data()
# collect the parameters
layer.collect_params()
# type of the params collected from the layer
type(layer.collect_params())

In this example, we saw how to define a simple layer using the Gluon API.

Summary:

In this chapter, we introduced some of the fundamental concepts such as Artificial intelligence, deep learning, machine learning, and Gluon API along with MxNet.

It covered the different machine learning types and deep learning techniques, as well as recent research in machine learning such as self-supervised learning. Deep learning is achieved by adding more hidden layers, and this has become practical because of the availability of huge data sets and advances in computation. With the help of different deep learning frameworks and cloud computing, these techniques are now at any software engineer’s fingertips.

In this chapter, we began our journey into deep learning using the Gluon API, introducing Gluon along with the different deep learning programming paradigms. The chapter ended with the installation of Gluon, environment setup, and a few small API examples. Let us get ready with the Gluon API toolset to conquer the deep learning world.

Dynamic Computation Graphs(DCG) with Tensorflow Fold!!

Google has introduced a new tool under the TensorFlow umbrella: TensorFlow Fold.

If you are familiar with deep learning libraries such as TensorFlow, Chainer, Theano, Caffe, and many more, you know each has its own approach to building graph-based computation. But somehow almost all machine learning/deep learning frameworks operate on static computation graphs and can’t handle dynamic computation graphs (PyTorch, DyNet, and Chainer are exceptions).

TensorFlow Fold is based on the paper “Deep Learning with Dynamic Computation Graphs”. What an idea!

Ref from research.googleblog.com

Why tensorflow fold?

We already have one beautiful tool, TensorFlow, which addresses some cool problems, but it has limitations in terms of dynamic graph computation: TensorFlow uses static graph computation. Batch processing of dynamic graphs is a very common technique for a variety of applications, such as computer vision and natural language processing. However, because the type and shape vary between distinct data points, batch processing over such data sets with a static graph is almost impossible with the current TensorFlow framework.

TensorFlow Fold is not another deep learning framework. It is an extension to TensorFlow that provides a TensorFlow implementation of the dynamic batching algorithm. Dynamic batching is an execution strategy for dynamic computation graphs.

Computation over data-flow graphs is a popular approach to deep learning with neural networks, especially in the fields of cheminformatics and natural language understanding. In most frameworks, such as TensorFlow, the graphs are static, which means batch processing is only available for a set of data with the same type and shape. However, in most real data sets, each data point has its own type or shape, which leads to a problem: the neural network cannot batch such data with a static graph.

TensorFlow Fold was introduced to overcome the above problem.

Getting started!!!

Fold runs under Linux; Python 2.7 and Python 3.3+ are recommended. Install it using either virtualenv or pip.

Please note that Fold requires TensorFlow 1.0; it is not compatible with earlier versions due to breaking API changes.

First install Python, pip, and Virtualenv:

sudo apt-get install python-pip python-dev python-virtualenv
#create virtualenv
virtualenv foo             # for Python 2.7
virtualenv -p python3 foo  # for Python 3.3+
#Activate environment
source ./foo/bin/activate      # if using bash
source ./foo/bin/activate.csh  # if using csh
#  Install the pip package for TensorFlow. For Python 2.7 CPU-only, this will be:
pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-1.0.0rc0-cp27-none-linux_x86_64.whl
#For Python 3.3+ and/or GPU, see here for the full list of available TF binaries.
#Check that TensorFlow can load:
python -c 'import tensorflow'
# Now install TensorFlow Fold
#Install the pip package for Fold. For Python 2.7, this will be:
pip install https://storage.googleapis.com/tensorflow_fold/tensorflow_fold-0.0.1-cp27-none-linux_x86_64.whl
#for python 3.3
pip install https://storage.googleapis.com/tensorflow_fold/tensorflow_fold-0.0.1-py3-none-linux_x86_64.whl
# Test whether the installation succeeded
python -c 'import tensorflow_fold'

If everything goes well, try the small example below.
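
A minimal sketch based on the Fold quickstart (my own reconstruction, since the original embedded example is not reproduced here; treat the exact block names as assumptions if your Fold version differs): a block that sums an input sequence of any length, something a single static graph cannot express directly.

import tensorflow as tf
import tensorflow_fold as td

sess = tf.InteractiveSession()
# map each element of the input sequence to a scalar tensor, then reduce them with tf.add
sum_block = td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.add))
print(sum_block.eval([1.0, 2.0, 3.0]))   # 6.0
print(sum_block.eval([10.0, 20.0]))      # 30.0 (same block, different sequence length)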

Next steps:

  1. Quickstart notebook
  2. Tensorflow fold Documentation
  3. TensorFlow: Concepts, Tools, and Techniques

There are other libraries and frameworks which also support dynamic graph computation. TensorFlow Fold is TensorFlow-based and has its own approach to tackling this problem.

In the paper, Google introduced a new algorithm called ‘dynamic batching’ and developed a TensorFlow-based library called ‘TensorFlow Fold’, which addresses the DCG problem both theoretically and empirically.
With their experimental implementation, they showed that their method is effective and more efficient and concise than previous work.

Paper is here for more details.

The moral of the story is that TensorFlow no longer supports only static graphs!

Apply these ideas and let me know your experience.

If you enjoyed this article, please don’t forget to Clap.

For more stories.

Let’s connect on Stack Overflow, LinkedIn, Facebook & Twitter.

Gluon -API for Deep learning.

Amazon Web Services and Microsoft’s AI and Research Group this morning announced a new open-source deep learning interface called Gluon, jointly developed by the companies to let developers “prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps,” according to an announcement.

https://mli.github.io

Gluon is a clear, concise, simple yet powerful and efficient API for deep learning. Gluon is an API, not another deep learning framework; it provides a concise and clear abstraction layer that helps improve the speed, flexibility, and accessibility of deep learning technology for all developers, regardless of their deep learning framework of choice. In principle it can sit on top of frameworks such as MXNet, TensorFlow, or PyTorch.

Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure.

More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

The Gluon API offers a flexible interface that simplifies the process of prototyping, building, and training deep learning models without sacrificing training speed.

Gluon is imperative for developing but symbolic for deploying.

The imperative style is flexible but may be slow; the symbolic style is efficient and portable but harder to use. Gluon lets you combine the two.

Sample code with MxNet:-
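
A minimal Gluon-on-MXNet sketch (the originally embedded sample is not reproduced here, so this is an illustrative stand-in): defining and running a small network imperatively.

from mxnet import nd
from mxnet.gluon import nn

# define a small network imperatively
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'))
net.add(nn.Dense(10))
net.initialize()
# run a random batch through it
print(net(nd.random.uniform(shape=(2, 20))))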

Distinct Advantages:-

  1. Friendly API: simple, easy-to-understand code
  2. Flexible, imperative structure
  3. Dynamic networks
  4. High-performance operators for training

Setup environment:-

The Gluon specification has already been implemented in Apache MXNet, so you can start using the Gluon interface by following these easy steps for installing the latest master version of MXNet. I recommend using Python version 3.3 or greater and implementing this example using a Jupyter notebook.

# I used miniconda and a virtual environment
# source activate gluon
# optional: update pip to the newest version
sudo pip install --upgrade pip
# install jupyter
pip install jupyter --user
# install the nightly built mxnet
pip install mxnet --pre --user

By default MXNet comes CPU-only; you can also install a GPU build if you have a GPU available.

pip install mxnet-cu75 --pre --user  # for CUDA 7.5
# for CUDA 8.0 use this mxnet-cu80 --pre --user
#start notebook and enjoy coding
jupyter notebook

Multilayer Perceptron using gluon

Using gluon, compared to a plain logistic regression model, we only need a couple of additional lines of code to turn it into a multilayer perceptron.

from __future__ import print_function
import mxnet as mx
import numpy as np
from mxnet import nd, autograd
from mxnet import gluon

You can run the computation on either the CPU or a GPU:

ctx = mx.cpu() # or GPU mx.gpu(0)

The most popular deep learning hello world dataset is MNIST.

mnist = mx.test_utils.get_mnist()
batch_size = 64
num_inputs = 784
num_outputs = 10
def transform(data, label):
    return data.astype(np.float32)/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
                                      batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                                     batch_size, shuffle=False)

Then here is your model:

num_hidden = 256
net = gluon.nn.Sequential()
# two hidden Dense layers with ReLU activation, plus the output layer
with net.name_scope():
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_outputs))
# initialization of the parameters
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
# softmax cross-entropy loss
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
# evaluate accuracy
def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]

Everything is ready; now train the model:

epochs = 15
smoothing_constant = .01

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (curr_loss if ((i == 0) and (e == 0))
                       else (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss)

    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
          (e, moving_loss, train_accuracy, test_accuracy))

This is just a simple idea. Apply this and let me know.

For more stories.

Let’s connect on Stack Overflow, LinkedIn, Facebook & Twitter.