Getting started with Google Colab

Train deep neural networks for free using Google Colaboratory.

GPU and TPU compute for free? Are you kidding?

Google Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.

With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. If you don't have the money to procure a GPU and want to train a neural network or get your hands dirty with zero investment, then this is for you. Colab began as an internal Google research tool for data science.

You can use GPU as a backend for free for 12 hours at a time.

It supports Python 2.7 and 3.6, but not R or Scala yet.

Many people want to train machine learning or deep learning models, but doing so requires GPU compute and substantial resources, which blocks many people from trying these things out and getting their hands dirty.

Google Colab is essentially a cloud-hosted Jupyter notebook.

Colaboratory is a free Jupyter notebook environment provided by Google where you can use free GPUs and TPUs, which solves all these issues. One of the best things about Colab is the TPU (Tensor Processing Unit), special hardware designed by Google to accelerate tensor computations.

Let's start:

To get started you should be familiar with Jupyter notebooks and have a Google account.

http://colab.research.google.com/

Click the above link to access Google Colaboratory. This is not just a static page but an interactive environment that lets you write and execute code in Python and other languages. You can create a new Jupyter notebook via File → New Python 3 notebook (or New Python 2 notebook).

We will create a Python 3 notebook; Colab creates it for us and saves it to Google Drive.

Colab is an ideal way to start everything from improving your Python coding skills to working with deep learning frameworks like PyTorch, Keras, and TensorFlow, and you can install any Python package your work requires, from scikit-learn and NumPy to TensorFlow.

You can create notebooks in Colab, upload existing notebooks, store notebooks, share notebooks with anyone, mount your Google Drive and use whatever you have stored there, import most of your directories, upload notebooks directly from GitHub, upload Kaggle files, download your notebooks, and do whatever you would do with your local Jupyter notebook.

On the top right you can choose to connect to a hosted runtime or to a local runtime.

Set up GPU or TPU:

It is as simple as going to the "Runtime" dropdown menu, selecting "Change runtime type", and choosing GPU or TPU in the hardware accelerator drop-down menu.
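
Once the runtime is set, you can verify from a notebook cell that the accelerator is actually visible. A minimal check, assuming TensorFlow is preinstalled (as it normally is on Colab):

import tensorflow as tf
# prints something like '/device:GPU:0' when a GPU runtime is active, or an empty string otherwise
print(tf.test.gpu_device_name())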

Now you can start coding and executing your code!

How to install a framework or libraries?

It's as simple as writing an import statement in Python.

!pip install fastai

Use the normal pip install command to install different packages such as TensorFlow or PyTorch and start playing with them.
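
For example, a typical first cell installs and then imports a package; the package below is just an illustration:

!pip install mxnet
import mxnet as mx
print(mx.__version__)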

For more details and information, see:

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=gJr_9dXGpJ05

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=-Rh3-Vt9Nev9

Implementing neural networks using Gluon API

In the previous chapter, we discussed the basics of deep learning and gave an overview of the Gluon API and MXNet. This chapter explains how to use the Gluon API to create different neural networks.

The Gluon API is an abstraction over MXNet, the mathematical computation framework for deep learning. In the last chapter we discussed the different types of machine learning and the algorithms used to implement each. In this chapter, we will look into linear regression, binary classification, and multiclass classification using Gluon.

Neural Network using Gluon:

Gluon takes a hybrid approach to deep learning programming: it supports both the symbolic and the imperative styles. There are different machine learning algorithms to address different problems. As we stated in the last chapter, an artificial neural network is a mathematical model inspired by the human brain. The underlying mathematics involves matrix and tensor manipulation, and for that we have the Gluon API and the NDArray API. An artificial neural network contains nodes; each node has a weight and a bias, and the data is transformed layer by layer up to the output layer. The layers between the input layer and the output layer are called hidden layers. Let us explore.

Linear Regression:

Linear regression is a very basic algorithm in the field of machine learning. Everyone comes across this algorithm, whether you are a novice or an expert machine learning engineer or data scientist. Linear regression is categorized as supervised machine learning. As the name states, linear regression is used to model the relationship between two continuous variables: a predictor (independent) variable and a dependent (response) variable. We model the relationship between the two variables by fitting them to a linear equation.

A linear regression line has an equation of the form Y = a + bX, where X is the explanatory variable and Y is the dependent variable. The slope of the line is b, and a is the intercept (the value of Y when X = 0).

In the above diagram, you can see a linear equation fitted to the data points. The X-axis holds the independent variable and the Y-axis the dependent variable; the plotted points illustrate the linear relationship between the two.

To understand this, let us take a small example. There are real-life examples such as predicting the sales of a product based on buying history, or predicting a house price based on the size of the house, the location of the property, amenities, demand, and historical records.

There are two different types of linear regression:

  1. Simple linear regression:- Simple linear regression involves two variables, one dependent and one independent. X is used to predict the dependent variable Y, for example predicting total fuel expense based on the distance in kilometers.
  2. Multiple (multi-variable) linear regression:- Multiple linear regression has one dependent variable and two or more independent variables. In some cases two or more features affect the dependent variable, for example predicting a house price based on the size of the house, its location, the construction year, etc. Here several independent variables (X1, X2, ...) are used to predict Y, as in the sketch below.
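
A minimal sketch of such a multi-variable prediction in plain Python (the coefficients and the price formula are made-up, purely illustrative values):

# hypothetical model: price = b + w1*size + w2*age, with made-up coefficients
def predict_price(size_sqft, age_years, b=50.0, w1=0.12, w2=-0.8):
    return b + w1 * size_sqft + w2 * age_years

print(predict_price(1200, 10))  # 50 + 0.12*1200 - 0.8*10 = 186.0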

Consider a problem from academics: predicting marks based on how well we solved the paper. Or take another small example: you are planning a road trip to Shimla (a city in India) with your two siblings, since you recently watched Tripling (an Indian web series). You start from Pune, and the total distance to travel is 1790 km. It is a long journey, so you have to plan every expense: fuel, meals, halts, and so on. On a blank sheet of paper you note when to start and stop, how much fuel is required, and how much money to reserve for meals and hotel charges. Based on your car's mileage and current fuel prices you can predict the total amount you will pay for fuel. So it is a simple linear relationship between two variables: if I drive 1790 km, how much will I pay for fuel? If you want to predict the overall expense of the trip, you can extend this simple linear regression into a multiple linear regression model by adding more independent variables such as meal cost, lodging charges, other expenses, and historical data from previous trips.
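
A quick back-of-the-envelope version of the fuel question, with made-up mileage and fuel-price numbers just to show the arithmetic:

distance_km = 1790
mileage_km_per_litre = 15     # assumed mileage
fuel_price_per_litre = 100    # assumed price per litre
fuel_cost = distance_km / mileage_km_per_litre * fuel_price_per_litre
print(round(fuel_cost))       # roughly 11933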

This is how we can forecast the trip expenses and plan accordingly. The core idea is to obtain a line that best fits the data. Linear regression is the simplest and by far the most popular method in machine learning for problem-solving.

Linear regression using Gluon:

Linear regression is the entry pass to the journey of machine learning: it is a very straightforward problem, and we can solve it using the Gluon API. The linear equation is y = Wx + b, and training learns the slope (W) and bias (b) over a number of iterations. The target of each iteration is to reduce the loss between the actual y and the predicted y, and to achieve this we adjust W and b so that the inputs x give us the y we want. Let us take a small example and implement linear regression using the Gluon API. In this example we do not develop everything from scratch, but instead take advantage of the Gluon API for our implementation.

# import the required modules
import numpy as np
import mxnet as mx
from mxnet import nd, autograd, gluon
# neural network layers and the trainer
from mxnet.gluon import nn, Trainer
# data loading utilities
from mxnet.gluon.data import DataLoader, ArrayDataset

In the above code block we have just imported the required modules. If you observe carefully, the Gluon API is part of the mxnet package. We imported ndarray for numerical tensor processing and autograd for automatic differentiation of a graph of NDArray operations. mxnet.gluon.data is the module containing APIs that help us load and process common public datasets such as MNIST.

from mxnet.gluon import nn, Trainer

Gluon provides the nn API to define the different layers of a neural network, and the Trainer API helps us train the defined network. Data is an important part, so let us build the dataset.

We start by generating a synthetic dataset.

# set context for optimisation
data_ctx = mx.cpu()
model_ctx = mx.cpu()
# to generate random data
number_inputs = 2
number_outputs = 1
number_examples = 10000
def real_fn(X):
    return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
# generate 10000 random records
X = nd.random_normal(shape=(number_examples, number_inputs))
noise = 0.01 * nd.random_normal(shape=(number_examples,))
y = real_fn(X) + noise

The above code generates the dataset for the problem: the features X are drawn from a normal distribution, and the targets y follow the linear function real_fn plus a small amount of noise.

Now that the data is ready, load it using the DataLoader API.

batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
                                      batch_size=batch_size, shuffle=True)

Let us build a neural network with two inputs and one output, which we define using nn.Dense(1, in_units=2). It's called a dense layer because every node in the input is connected to every node in the subsequent layer.

net = gluon.nn.Dense(1, in_units=2)
# a dense layer with 2 inputs and 1 output
# print the weight and bias parameters of the network
print(net.weight)
print(net.bias)
# output of the above print statements
Parameter dense6_weight (shape=(1, 2), dtype=float32)
Parameter dense6_bias (shape=(1,), dtype=float32)

The weight and bias printed here are actually not NDArrays; they are instances of the Parameter class. Gluon uses Parameter rather than NDArray for a distinct reason: a Parameter can be associated with multiple contexts, unlike an NDArray. As we discussed in the first chapter, a Block is the basic building block of a neural network in Gluon: a Block takes input and generates output. We can collect all parameters with net.collect_params(), no matter how complex the neural network is; this method returns a dictionary of parameters.

The next step is to initialize the parameters of the neural network. The initialization step is important: here we choose the context and can then feed data to the network.

net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)
# Deferred initialization
example_data = nd.array([[4,7]])
net(example_data) 
# access the weight and bias data
print(net.weight.data())
print(net.bias.data())
net = gluon.nn.Dense(1)
net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)

Observe the difference between net = gluon.nn.Dense(1) and the earlier net = gluon.nn.Dense(1, in_units=2): when in_units is omitted, Gluon infers the shape of the parameters from the first data that flows through the layer. Next we define the loss function; for regression we use the squared (L2) loss.

square_loss = gluon.loss.L2Loss()

Now we need to optimize the neural network. Rather than implementing stochastic gradient descent from scratch every time, we can reuse gluon.Trainer and pass it the parameter dictionary of the network to optimize.

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.0001})

'sgd' selects the stochastic gradient descent implementation provided by Gluon, the learning rate is 0.0001, and we pass the dictionary of parameters to optimize. We now have the actual y and the predicted y, and we want to know how far the prediction is from the generated target. That difference is measured by the loss function, and SGD adjusts the parameters to reduce this loss.

epochs = 10
loss_sequence = []
num_batches = number_examples / batch_size
for e in range(epochs):
    cumulative_loss = 0
    # inner loop
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(model_ctx)
        label = label.as_in_context(model_ctx)
        with autograd.record():
            output = net(data)
            loss = square_loss(output, label)
        loss.backward()
        trainer.step(batch_size)
        cumulative_loss += nd.mean(loss).asscalar()
    print("Epoch %s, loss: %s" % (e, cumulative_loss / number_examples))
    loss_sequence.append(cumulative_loss)

Let us visualize the learning loss.

# plot the convergence of the estimated loss function 
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.figure(num=None,figsize=(8, 6))
plt.plot(loss_sequence)
# Adding some bells and whistles to the plot
plt.grid(True, which="both")
plt.xlabel('epoch',fontsize=14)
plt.ylabel('average loss',fontsize=14)

The learning curve shows how SGD learns the linear regression model: the graph plots the average loss over each epoch, and the loss decreases with every iteration.

Now our model is ready and everything works as expected, but we should do a quick sanity check on the learned parameters for validation purposes.

params = net.collect_params()
print('The type of "params" is a ',type(params))
# A ParameterDict is a dictionary of Parameter class objects
# we will iterate over the dictionary and print the parameters.
for param in params.values():
    print(param.name,param.data())

From this example, we can say that Gluon helps us build prototypes quickly and easily.

In this example, we used a few APIs that let us build a neural network without writing everything from scratch. Gluon gives us a more concise way to express a model, and the API is powerful enough to prototype and build models quickly and easily. Linear regression can be used in many real-life scenarios:

  1. Predicting house prices
  2. Predicting weather conditions
  3. Predicting stock prices

These are just a few scenarios where you can apply linear regression to predict values. The values predicted by linear regression are continuous.

Binary Classification:

In the above section, we explored linear regression with sample code. In linear regression the output is a continuous value, but there are many real-life problems where we instead need to classify: is an email spam or not, which party will win the next election, should a customer buy an insurance policy or not. A classification problem may be binary or multiclass, where there are more than two classes; in such problems there are two or more output neurons, and the predicted values are categorical. Logistic regression is the machine learning technique used to solve such classification problems; basically, logistic regression is an algorithm for solving binary classification problems.

Let us consider a problem: we provide an image as input to the neural network, and the output is a label indicating whether it is a dog (1) or not a dog (0). In supervised learning there are two types of problems: regression and classification. In regression problems the output is a rational number, whereas in classification problems the output is categorical. Different algorithms are available to solve classification problems, such as support vector machines, discriminant analysis, naive Bayes, nearest neighbors, and logistic regression. Solving a classification problem means identifying to which category a new observation belongs.

In the above diagram, you can easily categorize the data into two classes, one marked with circles and the other with crosses. This is called binary classification.

Binary classification using logistic regression:

Logistic regression is a very popular and powerful machine learning technique for solving classification problems. Logistic regression measures the relationship between a categorical dependent variable and one or more independent variables. Logistic regression answers the question "how likely is it?". Then you might ask: why not use linear regression? Suppose we have a tumor dataset where each tumor is labeled malignant or not, denoted by one or zero. If we use linear regression, we can fit a line y = wx + b and decide, based on a threshold (e.g. 0.5), that all values on one side of the line are non-malignant and values on the other side are malignant. But what if there is an outlier that pushes some positive-class values into the negative class? We need a way to deal with outliers, and logistic regression gives us that power. Logistic regression does not try to predict a rational value for a given set of inputs; instead, the output is the probability that the given input belongs to a certain category, and based on a threshold we can easily categorize the input observation. Logistic regression is a type of classification algorithm involving a linear discriminant: the input space is separated into two regions by a linear boundary, and the model learns to differentiate between points belonging to different categories.
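
As a quick illustration of how a linear score is turned into a probability, here is the logistic (sigmoid) function in plain NumPy; this is a standalone sketch, not part of the Gluon example that follows:

import numpy as np

def sigmoid(z):
    # maps any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5
print(sigmoid(4.0))   # ~0.98, very likely the positive class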

The logistic regression technique is useful when several independent variables determine a single outcome variable. Suppose we are watching cricket world cup matches and want to predict whether a match will be played based on the weather conditions:

Outlook    Temperature  Humidity  Windy  Play
sunny      hot          high      false  no
sunny      hot          high      true   no
overcast   hot          high      false  yes
rainy      mild         high      false  yes
rainy      cool         normal    false  yes
rainy      cool         normal    true   no
overcast   cool         normal    true   yes
sunny      mild         high      false  no
sunny      cool         normal    false  yes
rainy      mild         normal    false  yes
sunny      mild         normal    true   yes
overcast   mild         high      true   yes
overcast   hot          normal    false  yes
rainy      mild         high      true   no

In the above dataset, the output is yes (1) or no (0). The output is categorical with two classes, which is why this is also known as binary classification.

Let us start with some code. For this example, we use the breast cancer dataset from scikit-learn (https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_breast_cancer.html), which has 569 samples, 30 features, and two classes.

Import the required modules. We need the scikit-learn Python library, which ships with the breast cancer dataset; we can load this dataset and apply logistic regression for binary classification.

import mxnet as mx
from mxnet import gluon, autograd, ndarray
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

Load the dataset and use a pandas DataFrame to hold the data for further processing.

# the dataset is part of the module below
from sklearn.datasets import load_breast_cancer
# load the data
data = load_breast_cancer()
# use a pandas DataFrame to hold the dataset
df = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target
X = data.data
# print the first five records
df.head()
# shape of the data: number of rows and columns
df.shape
# number of dimensions
df.ndim

Now the data is available, but in this human-readable form it is not ideal for training a neural network. Before we start training we need to normalize the data; here we use pandas, though we could also use Gluon to normalize the dataset.

df_norm = (df - df.mean()) / (df.max() - df.min())

Before training any machine learning algorithm, a critical step is splitting the dataset into a training set and a testing set. Let us do that.

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state=12345)

Tuning the hyperparameters is another important aspect of training an artificial neural network.

BATCH_SIZE = 32
LEARNING_R = 0.001
EPOCHS = 150

Let us prepare the data according to the Gluon API so that we can feed it to the network and train. To do that we use the mx.gluon.data module.

train_dataset = mx.gluon.data.ArrayDataset(X_train, y_train)
test_dataset = mx.gluon.data.ArrayDataset(X_test, y_test)
train_data = mx.gluon.data.DataLoader(train_dataset,
                                      batch_size=BATCH_SIZE, shuffle=True)
test_data = mx.gluon.data.DataLoader(test_dataset,
                                     batch_size=BATCH_SIZE, shuffle=False)

Let us use Gluon's plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers. It offers predefined layers such as Dense, Sequential, BatchNorm, and so on.

net = gluon.nn.Sequential()
# define the model architecture
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(32, activation="relu"))
    net.add(gluon.nn.BatchNorm())
    net.add(gluon.nn.Dense(1, activation="sigmoid"))
# initialize the parameters of the model
net.collect_params().initialize(mx.init.Uniform())
# binary loss function: sigmoid binary cross entropy
binary_cross_entropy = gluon.loss.SigmoidBinaryCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': LEARNING_R})

The neural network contains four layers. We use 'relu' as the activation function: ReLU (rectified linear unit) is an activation function also known as a ramp function. The third layer (gluon.nn.BatchNorm()) is a batch normalization layer. The other activation function we use is 'sigmoid', a non-linear activation function with a characteristic S-shaped curve. For binary classification the loss function is binary cross entropy, which measures the performance of a model whose output is a probability between 0 and 1. For a predicted probability p and a true label y it is Loss = -(y * log(p) + (1 - y) * log(1 - p)).
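
A plain NumPy sketch of this formula for a single probability p and label y (just for intuition; Gluon's SigmoidBinaryCrossEntropyLoss works on batches and, by default, applies the sigmoid to raw scores itself):

import numpy as np

def binary_cross_entropy_single(p, y):
    # p: predicted probability of the positive class, y: true label (0 or 1)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

print(binary_cross_entropy_single(0.9, 1))  # small loss: confident and correct
print(binary_cross_entropy_single(0.9, 0))  # large loss: confident but wrong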

We then use gluon.Trainer() to train the model.

Now it is time to train the model.

for e in range(EPOCHS):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(mx.cpu()).astype('float32')
        label = label.as_in_context(mx.cpu()).astype('float32')
        with autograd.record(): # Start recording the derivatives
            output = net(data) # the forward iteration
            loss = binary_cross_entropy(output, label)
            loss.backward()
        trainer.step(data.shape[0])
        # Provide stats on the improvement of the model over each epoch
        curr_loss = ndarray.mean(loss).asscalar()
    if e % 20 == 0:
        print("Epoch {}. Current Loss: {}.".format(e, curr_loss))

The sigmoid output is a probability between 0 and 1. Let us now compute the accuracy on the test set.
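
Before calling accuracy_score we need predicted labels for the test set; the snippet below is a minimal sketch, assuming a 0.5 decision threshold on the network's sigmoid output:

# predicted probabilities for the test set
y_pred = net(mx.nd.array(X_test).astype('float32'))
# threshold at 0.5 to obtain hard class labels
y_pred_labels = (y_pred.asnumpy().flatten() > 0.5).astype(int)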

print(accuracy_score(y_test, y_pred_labels))

This is a binary classification problem: we observed the breast cancer dataset, where the input is the set of features and the output is one of two categories, malignant or benign.

Multiclass classification:

So far we have discussed the linear regression problem, where the output is a single rational number, and then we looked at categorical problems, also known as classification problems. There are generally two types of classification problems:

  1. Binary Classification
  2. MultiClass Classification

A binary classification problem has two categories, such as whether an email is spam or not, whether a tumor is breast cancer or not, or whether a cricket match will be played or not based on weather conditions. In all these scenarios the output is one of two categories (yes/no), but there are real-life scenarios with more than two categories; such problems are classified as multiclass classification (more than two classes), also known as multinomial classification. In multiclass classification we classify an observation into one of three or more classes. Do not confuse multi-label classification with multiclass classification.

Imagine you go to the grocery shop, stop at the fruit stall to buy some fruit, and pick up your phone to try a machine learning algorithm that identifies a fruit based on color, shape, and so on: it classifies images of fruit that may be a banana, apple, orange, guava, etc. We will use the same logistic regression idea to address this multiclass classification problem. Logistic regression is the classic algorithm for solving classification problems in supervised learning. As we have seen, binary classification is quite useful when we have a dataset with two categories, such as spam vs. not spam or cancer vs. not cancer, but that does not fit every problem. Sometimes we encounter a problem where each observation could belong to one of n classes; for example, an image might depict a lion, a cat, a dog, or a zebra.

Let us dive deeper into the multiclass classification problem. For this we will use the MNIST (Modified National Institute of Standards and Technology) dataset of handwritten digits. This dataset is widely used as the 'hello world' of deep learning: MNIST contains 60,000 training images and 10,000 testing images, and it is a nice toy dataset for testing new ideas.

Let us get our hands dirty with a Gluon multiclass classification implementation.

from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd
from mxnet import gluon
import numpy as np

We import the required modules: mxnet, gluon, nd (NDArray), autograd for differentiation, and numpy.

Set the context. In all previous examples we set the CPU for simplicity; you can set the GPU if you want to execute the code on a GPU, for which you have to install the GPU-enabled MXNet package.

(e.g. model_ctx = mx.gpu()).

data_ctx = mx.cpu()
model_ctx = mx.cpu()

For multiclass classification we use the MNIST dataset; we will not explain the dataset itself here, for more details see https://en.wikipedia.org/wiki/MNIST_database.

batch_size = 64
num_inputs = 784
num_outputs = 10
num_examples = 60000
def transform(data, label):
    return data.astype(np.float32)/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
                                      batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                              batch_size, shuffle=False)

Load the dataset: the number of inputs is 784 (28 x 28 pixels), the number of outputs is 10 (the digits 0, 1, ..., 9), there are 60,000 examples, and the batch size is 64. The mx.gluon.data.vision.MNIST module, part of the Gluon API, provides the MNIST dataset. For training and validation purposes the dataset is split into a training set and a testing set.

The data is loaded successfully; the next step is to define our model. Recall the linear regression code, where we defined the Dense layer with both the number of inputs and the number of outputs. Here gluon.nn.Dense(num_outputs) defines the layer with only the output shape, and Gluon infers the input shape from the input data.

net = gluon.nn.Dense(num_outputs)

Parameter initialization is the next step. Before registering an initializer for the parameters, note that Gluon does not yet know the shape of the input, because we only specified the shape of the output. The parameters will actually be initialized during the first call to the forward method.

net.collect_params().initialize(mx.init.Normal(sigma=1.), ctx=model_ctx)

When you need the output as probabilities, the softmax cross entropy loss function is useful.

Softmax is an activation layer that allows us to interpret the outputs as probabilities, while cross entropy is what we use to measure the error at the softmax layer.

Let us consider the softmax code snippet below.

# just for understanding.
def softmax(z):
    """Softmax function"""
    return np.exp(z) / np.sum(np.exp(z))

As the name suggests, the softmax function is a "soft" version of the max function: instead of selecting the single maximal value, it distributes the probability mass, with the maximal element getting the largest share, which is why it is well suited for producing probabilities. From the code above you can see that the softmax function takes an N-dimensional vector of real numbers as input and transforms it into a vector of real numbers in the range (0, 1) that sum to 1.
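
For example, calling the sketch above on a small vector (the numbers are purely illustrative):

z = np.array([1.0, 2.0, 3.0])
print(softmax(z))           # approximately [0.09, 0.245, 0.665]
print(np.sum(softmax(z)))   # 1.0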

softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()

Now create an optimizer, SGD (stochastic gradient descent), with learning rate 0.1.

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

The model is not trained yet, but we need a way to evaluate its accuracy, and for that we use MXNet's built-in metric package. At this point we should expect an accuracy in the ballpark of 0.10, because the model was initialized randomly.

def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(model_ctx).reshape((-1,784))
        label = label.as_in_context(model_ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]
# call the above function with test data
evaluate_accuracy(test_data, net)

Now execute the training loop for 10 epochs.

epochs = 10
moving_loss = 0.
for e in range(epochs):
    cumulative_loss = 0
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(model_ctx).reshape((-1,784))
        label = label.as_in_context(model_ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(batch_size)
        cumulative_loss += nd.sum(loss).asscalar()
    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" % (e, cumulative_loss/num_examples, train_accuracy, test_accuracy))
# output
Epoch 0. Loss: 2.1415544213612874, Train_acc 0.7918833333333334, Test_acc 0.8015
Epoch 1. Loss: 0.9146347909927368, Train_acc 0.8340666666666666, Test_acc 0.8429
Epoch 2. Loss: 0.7468763765970866, Train_acc 0.8524333333333334, Test_acc 0.861
Epoch 3. Loss: 0.65964135333697, Train_acc 0.8633333333333333, Test_acc 0.8696
Epoch 4. Loss: 0.6039828490893046, Train_acc 0.8695833333333334, Test_acc 0.8753
Epoch 5. Loss: 0.5642358363191287, Train_acc 0.8760166666666667, Test_acc 0.8819
Epoch 6. Loss: 0.5329904221892356, Train_acc 0.8797, Test_acc 0.8849
Epoch 7. Loss: 0.5082313110192617, Train_acc 0.8842166666666667, Test_acc 0.8866
Epoch 8. Loss: 0.4875676867882411, Train_acc 0.8860333333333333, Test_acc 0.8891
Epoch 9. Loss: 0.47050906361341477, Train_acc 0.8895333333333333, Test_acc 0.8902

Visualize the predictions.

import matplotlib.pyplot as plt
def model_predict(net,data):
    output = net(data.as_in_context(model_ctx))
    return nd.argmax(output, axis=1)
# let's sample 10 random data points from the test set
sample_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                              10, shuffle=True)
for i, (data, label) in enumerate(sample_data):
    data = data.as_in_context(model_ctx)
    print(data.shape)
    im = nd.transpose(data,(1,0,2,3))
    im = nd.reshape(im,(28,10*28,1))
    imtiles = nd.tile(im, (1,1,3))
    plt.imshow(imtiles.asnumpy())
    plt.show()
    pred=model_predict(net,data.reshape((-1,784)))
    print('model predictions are:', pred)
    break

# output of the above code snippet

(10, 28, 28, 1)
model predictions are: 
[3. 6. 7. 8. 3. 8. 1. 8. 2. 1.]
<NDArray 10 @cpu(0)>

From the output of the above program, we can see that our model is able to solve the multiclass classification problem. We solved it with a logistic regression approach generalized to multiple classes: the activation function used here is softmax, which forces the outputs into the range (0, 1) so that we can interpret them as probabilities. This approach is also called softmax regression or multinomial regression. In the above example we used 'sgd' (stochastic gradient descent), which, written from scratch, looks like this:

def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad

Overfitting and regularization:

Overfitting

So far we have applied regression and classification algorithms to three different datasets and achieved roughly 90% accuracy on the testing sets. Sometimes a model fits a limited set of data points too closely; in that case we say it is overfitting. The regression and classification algorithms above worked fine in our examples, but on certain datasets they run into overfitting and perform very poorly. In this section, I would like to explain what the overfitting problem is and introduce regularization, a technique that reduces overfitting and makes the learning algorithm perform much better.

I find that this joke from "Plato and a Platypus Walk Into a Bar" is the best analogy for explaining the overfitting problem.

“A man tries on a made-to-order suit and says to the tailor, “I need this sleeve taken in! It’s two inches too long!”

The tailor says, “No, just bend your elbow like this. See, it pulls up the sleeve.”

The man says, “Well, okay, but now look at the collar! When I bend my elbow, the collar goes halfway up the back of my head.”

The tailor says, “So? Raise your head up and back. Perfect.”

The man says, “But now the left shoulder is three inches lower than the right one!”

The tailor says, “No problem. Bend at the waist way over to the left and it evens out.”

The man leaves the store wearing the suit, his right elbow crooked and sticking out, his head up and back, all the while leaning down to the left. The only way he can walk is with a choppy, atonic walk.

This suit fits that man perfectly, but it has been overfitted: it is useful neither to him nor to anyone else. I think this is the best analogy for explaining the overfitting problem.

Overfitting and underfitting are also known as overtraining and undertraining. Overfitting occurs when an algorithm captures the noise of the data; underfitting occurs when the model does not fit the data well enough. Not every algorithm that performs well on training data will also perform well on test data. We identify overfitting and underfitting using validation and cross-validation datasets. Both overfitting and underfitting lead to poor predictions on new observations.

Underfitting occurs when the model shows high bias and low variance. Overfitting occurs when the model shows high variance. If we have too many features, the learned model may fit the training set very well but fail to predict new observations.

Let us revisit our MNIST dataset and see how things can go wrong.

from __future__ import print_function
import mxnet as mx
import mxnet.ndarray as nd
from mxnet import autograd
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
ctx = mx.cpu() 
# load the MNIST data set and split it into the training and testing
mnist = mx.test_utils.get_mnist()
num_examples = 1000
batch_size = 64
train_data = mx.gluon.data.DataLoader(
    mx.gluon.data.ArrayDataset(mnist["train_data"][:num_examples],
                               mnist["train_label"][:num_examples].astype(np.float32)),
                               batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(
    mx.gluon.data.ArrayDataset(mnist["test_data"][:num_examples],
                               mnist["test_label"][:num_examples].astype(np.float32)),
                               batch_size, shuffle=False)

We use a linear model with softmax. Allocate the parameters and define the model.

# weight
W = nd.random_normal(shape=(784,10))
# bias
b = nd.random_normal(shape=10)
params = [W, b]
for param in params:
    param.attach_grad()
def net(X):
    y_linear = nd.dot(X, W) + b
    yhat = nd.softmax(y_linear, axis=1)
    return yhat

Define the loss function to calculate the average loss, and the optimizer to minimize it. We have already seen this cross-entropy loss function and SGD in the multiclass classification example.

# cross entropy 
def cross_entropy(yhat, y):
    return - nd.sum(y * nd.log(yhat), axis=0, exclude=True)
# stochastic gradient descent 
def SGD(params, lr):
    for param in params:
        param[:] = param - lr * param.grad
def evaluate_accuracy(data_iterator, net):
    numerator = 0.
    denominator = 0.
    loss_avg = 0.
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        label_one_hot = nd.one_hot(label, 10)
        output = net(data)
        loss = cross_entropy(output, label_one_hot)
        predictions = nd.argmax(output, axis=1)
        numerator += nd.sum(predictions == label)
        denominator += data.shape[0]
        loss_avg = loss_avg*i/(i+1) + nd.mean(loss).asscalar()/(i+1)
    return (numerator / denominator).asscalar(), loss_avg

Plot the loss and accuracy curves to visualize the model's learning using matplotlib.

def plot_learningcurves(loss_tr,loss_ts, acc_tr,acc_ts):
    xs = list(range(len(loss_tr)))
    f = plt.figure(figsize=(12,6))
    fg1 = f.add_subplot(121)
    fg2 = f.add_subplot(122)
    fg1.set_xlabel('epoch',fontsize=14)
    fg1.set_title('Comparing loss functions')
    fg1.semilogy(xs, loss_tr)
    fg1.semilogy(xs, loss_ts)
    fg1.grid(True,which="both")
    fg1.legend(['training loss', 'testing loss'],fontsize=14)
    fg2.set_title('Comparing accuracy')
    fg2.set_xlabel('epoch',fontsize=14)
    fg2.plot(xs, acc_tr)
    fg2.plot(xs, acc_ts)
    fg2.grid(True,which="both")
    fg2.legend(['training accuracy', 'testing accuracy'],fontsize=14)

Let us iterate.

epochs = 1000
moving_loss = 0.
niter=0
loss_seq_train = []
loss_seq_test = []
acc_seq_train = []
acc_seq_test = []

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        label_one_hot = nd.one_hot(label, 10)
        with autograd.record():
            output = net(data)
            loss = cross_entropy(output, label_one_hot)
        loss.backward()
        SGD(params, .001)
        ##########################
        # Keep a moving average of the losses
        ##########################
        niter +=1
        moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
        est_loss = moving_loss/(1-0.99**niter)
    test_accuracy, test_loss = evaluate_accuracy(test_data, net)
    train_accuracy, train_loss = evaluate_accuracy(train_data, net)
    # save them for later
    loss_seq_train.append(train_loss)
    loss_seq_test.append(test_loss)
    acc_seq_train.append(train_accuracy)
    acc_seq_test.append(test_accuracy)

    if e % 100 == 99:
        print("Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s" %
              (e+1, train_loss, test_loss, train_accuracy, test_accuracy))

## Plotting the learning curves
plot_learningcurves(loss_seq_train,loss_seq_test,acc_seq_train,acc_seq_test)
# output
Completed epoch 100. Train Loss: 0.5582709927111864, Test Loss 1.4102623425424097, Train_acc 0.862, Test_acc 0.725
Completed epoch 200. Train Loss: 0.2390711386688053, Test Loss 1.2993220016360283, Train_acc 0.94, Test_acc 0.734
Completed epoch 300. Train Loss: 0.13671867409721014, Test Loss 1.2758532278239725, Train_acc 0.971, Test_acc 0.748
Completed epoch 400. Train Loss: 0.09426628216169773, Test Loss 1.2602066472172737, Train_acc 0.989, Test_acc 0.758
Completed epoch 500. Train Loss: 0.05988468159921467, Test Loss 1.2470015566796062, Train_acc 0.996, Test_acc 0.764
Completed epoch 600. Train Loss: 0.043480587191879756, Test Loss 1.2396155279129744, Train_acc 0.998, Test_acc 0.762
Completed epoch 700. Train Loss: 0.032956544135231525, Test Loss 1.234715297818184, Train_acc 0.999, Test_acc 0.764
Completed epoch 800. Train Loss: 0.0268415825557895, Test Loss 1.2299001738429072, Train_acc 1.0, Test_acc 0.768
Completed epoch 900. Train Loss: 0.022739565349183977, Test Loss 1.2265239153057337, Train_acc 1.0, Test_acc 0.77
Completed epoch 1000. Train Loss: 0.019902906555216763, Test Loss 1.2242997065186503, Train_acc 1.0, Test_acc 0.772

From the above graph and output you can see how the model is performing: by around the 800th epoch the model reaches 100% accuracy on the training set, yet it classifies only about 77% of the test examples correctly. This large gap between training and test accuracy is high variance, which means the model is overfitting. Methods to avoid overfitting:

  1. Cross-Validation
  2. Drop out
  3. Regularization

Regularization:

In the above section we identified the problem of overfitting. Now that we know the problem and its causes, let us talk about the solution. With regularization, we keep all the features but reduce the magnitude of the parameters. Regularization keeps the weights small, keeping the model simpler and helping it avoid overfitting; a model that overfits will be less accurate on new data.

Suppose we have a linear regression that predicts y from several inputs x:

y = a1x1 + a2x2  + a3x3 + a4x4 + a5x5.....

In the above equation a1, a2, ... are the coefficients and x1, x2, ... are the independent variables used to predict the dependent variable y.

“Regularisation means generalize the model for the better. “

“Mastering the trade-off between bias and variance is necessary to become a machine learning champion.”

Regularization is a technique to discourage the complexity of the model (i.e. reduce the magnitude of its weights). It does this by penalizing the loss function. What does penalizing the loss function mean? Penalizing the weights pushes them toward very small values, close to zero, making those terms nearly negligible and helping to simplify the model.

The regularized loss function is the sum of the squared differences between the predicted and actual values plus a penalty on the weights, for example Loss = Σ (y_pred - y_actual)² + λ Σ w². Here λ is the regularization parameter, which determines how much to penalize the weights; the right value of λ lies somewhere between 0 (no regularization) and a large value.

There are a few regularization techniques:

  1. L1 Regularization or Lasso Regularization
  2. L2 Regularization or Ridge Regularization
  3. Dropout
  4. Data Augmentation
  5. Early stopping

We will solve the above overfitting problem using the L2 regularization technique.

Let us implement and solve the overfitting problem.

Penalize the coefficients:

# penalize the coefficients: sum of squared parameter values
def l2_penalty(params):
    penalty = nd.zeros(shape=1)
    for param in params:
        penalty = penalty + nd.sum(param ** 2)
    return penalty

Reinitialize the parameters so that we measure the effect of regularization from a fresh start.

for param in params:
    param[:] = nd.random_normal(shape=param.shape)

L2-regularized logistic regression:

L2 regularization adds the sum of the squares of all the feature weights to the loss (the l2_penalty term above, scaled by a strength factor, as in the training loop below). L2 regularization performs better when all the input features influence the output and the weights are all of roughly equal size.

Let us implement this L2 regularisation.

epochs = 1000
moving_loss = 0.
l2_strength = .1
niter=0
loss_seq_train = []
loss_seq_test = []
acc_seq_train = []
acc_seq_test = []

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1,784))
        label = label.as_in_context(ctx)
        label_one_hot = nd.one_hot(label, 10)
        with autograd.record():
            output = net(data)
            loss = nd.sum(cross_entropy(output, label_one_hot)) + l2_strength * l2_penalty(params)
        loss.backward()
        SGD(params, .001)
        ##########################
        # Keep a moving average of the losses
        ##########################
        niter +=1
        moving_loss = .99 * moving_loss + .01 * nd.mean(loss).asscalar()
        est_loss = moving_loss/(1-0.99**niter)

    test_accuracy, test_loss = evaluate_accuracy(test_data, net)
    train_accuracy, train_loss = evaluate_accuracy(train_data, net)
    # save them for later
    loss_seq_train.append(train_loss)
    loss_seq_test.append(test_loss)
    acc_seq_train.append(train_accuracy)
    acc_seq_test.append(test_accuracy)
    if e % 100 == 99:
        print("Completed epoch %s. Train Loss: %s, Test Loss %s, Train_acc %s, Test_acc %s" %
              (e+1, train_loss, test_loss, train_accuracy, test_accuracy))

## Plotting the learning curves
plot_learningcurves(loss_seq_train,loss_seq_test,acc_seq_train,acc_seq_test)

Let us look at the graph for a better understanding. From the graph you can easily see the difference between the training loss and the testing loss, and how much closer the two curves are with regularization.

Summary:

This chapter gave some insight into the Gluon API and NDArray, along with some of Gluon's built-in neural network modules. Having completed this chapter, you now know how to create a simple artificial neural network using the Gluon abstraction, and when to use regression versus classification techniques, along with some real datasets.

As machine learning developers, the major problems we face are overfitting and underfitting, and this chapter gave us regularization as a tool to address overfitting. Gluon is a very concise, powerful abstraction that helps us design, prototype, build, deploy and test machine learning models on GPU and CPU. We now know how to set the context (GPU or CPU). We solved classification problems, both binary and multiclass, using the logistic regression technique. Let us move on to the next adventure.

CNN (Convolutional Neural Network) using Gluon

Introduction:

A convolutional neural network (CNN) is a deep learning network that has achieved excellent results in image recognition, image classification, object detection, face recognition, and more. CNNs are everywhere and are the most popular deep learning architecture. CNNs are mostly used for image data challenges and video analytics; any data that has spatial relationships is ripe for applying a CNN.

In the previous chapter, we covered basic machine learning techniques and algorithms to solve regression and classification problems. In this chapter, we explore a deep learning architecture: the CNN (convolutional neural network). CNNs are a biologically inspired variant of MLPs. A CNN is also known as a ConvNet, and in this chapter we use the terms interchangeably. We will explore the following points:

  • Introduction of CNN
  • CNN architecture
  • Gluon API for CNN
  • CNN implementation with gluon
  • Image segmentation using CNN

CNN Architecture:

CNNs are a regularized version of multilayer perceptrons. MLPs are fully connected neural networks, meaning each neuron in one layer is connected to every neuron in the next layer. The design of CNNs was inspired by the visual processing of living organisms. Without conscious effort, we make predictions about everything we see and act upon them; when we see something, we label every object based on what we have learned in the past.

Hubel and Wiesel showed in the 1950s and 1960s how the cat's visual cortex works. The animal visual cortex is the most powerful visual processing system in existence. The visual cortex contains a complex arrangement of cells that are sensitive to small sub-regions of the visual field, called receptive fields. The sub-regions are tiled to cover the entire visual field. These cells act as local filters over the input space and are well suited to exploiting the strong spatially local correlation present in natural images. This is only a high-level introduction to how the cortex works. CNNs are designed to recognize visual patterns directly from pixel images with minimal preprocessing.

Now let us make things simple and think about how our brain thinks; the human brain is a very powerful machine. Everyone works differently, and it is clear that we all have our own ways of learning and taking in new information. "A picture is worth a thousand words" is an English-language adage referring to the notion that a complex idea can be conveyed with a single picture, which conveys its meaning or essence more effectively than a description does. We see plenty of images every day; our brain processes and stores them. But what about the machine? How can a machine understand, process, and store meaningful insight from an image? In simple terms, each image is an arrangement of pixels in a particular order; if the order or colors change, the image changes as well. So images in a machine are represented and processed in the form of pixels. Before CNNs came along, image processing of this kind was very hard. Scientists around the world have been trying to find ways for computers to extract meaning from visual data (images, video) for over 60 years, and the history of computer vision (CV) is deeply fascinating.

The most fascinating paper was published by two neurophysiologists, David Hubel and Torsten Wiesel, in 1959; as mentioned above, the paper was titled "Receptive fields of single neurons in the cat's striate cortex". The duo ran experiments on a cat: they placed electrodes into the primary visual cortex of an anesthetized cat's brain and observed, or at least tried to observe, the neuronal activity in that region while showing the animal various images. Their first efforts were fruitless; they couldn't get the nerve cells to respond to anything. After a few months of work, they noticed by accident that one neuron fired as they were slipping a new slide into the projector. Hubel and Wiesel realized that what excited the neuron was the movement of the line created by the shadow of the sharp edge of the glass slide.

[Image Source: https://commons.wikimedia.org/wiki/File:Human_visual_pathway.svg]

Through their experimentation, the researchers observed that there are simple and complex neurons in the primary visual cortex and that visual processing always starts with simple structures such as oriented edges. This is the simpler, familiar explanation. Inventions do not happen overnight; it took years of evolutionary progress to reach the groundbreaking result.

After Hubel and Wiesel, nothing groundbreaking happened with their idea for a long time. In 1982, David Marr, a British neuroscientist, published another influential work, "Vision: A computational investigation into the human representation and processing of visual information". Marr gave us the next important insight: vision is hierarchical. He introduced a framework for vision in which low-level algorithms that detect edges, curves, corners, etc. are used as stepping stones towards a high-level understanding of the image.

David Marr’s representational framework:

  • A Primal Sketch of an image, where edges, bars, boundaries, etc., are represented (inspired by Hubel and Wiesel’s research);
  • A 2½D sketch representation where surfaces, information about depth and discontinuities on an image are pieced together;
  • A 3D model that is hierarchically organized in terms of surface and volumetric primitives.

Marr's framework was very abstract and high-level, and no mathematical model was given that could be used in artificial learning; it was a hypothesis. Around the same time, the Japanese computer scientist Kunihiko Fukushima developed a framework inspired by Hubel and Wiesel: a self-organizing artificial network of simple and complex cells that could recognize patterns and was unaffected by position shifts. This network, the Neocognitron, included several convolutional layers whose receptive fields had weights. Fukushima's Neocognitron was the first deep neural network and is the grandfather of today's convnets. A few years later, in 1989, the French scientist Yann LeCun applied a backpropagation-style learning algorithm to Fukushima's Neocognitron architecture. After a few more trials and errors, LeCun released LeNet-5 and applied his architecture in a commercial product for reading zip codes. Around 1999, scientists and researchers were still trying to do visual data analysis using Marr's proposed methods rather than feature-based object recognition.

This is just a brief overview of the important milestones that help us understand how CNNs evolved. Let us now talk about the CNN architecture. Like every artificial neural network architecture, it has an input layer, hidden layers, and an output layer. The hidden layers consist of a series of convolutional layers that convolve with a multiplication or other dot product. CNNs are a specialized kind of neural network for processing data that has a grid-like topology: time-series data can be thought of as a one-dimensional grid (a vector) of samples taken at regular time intervals, while image data can be thought of as a 2-D grid of pixels (a matrix). The name "convolutional neural network" indicates that the network employs a mathematical operation called convolution. Depending on whether we are looking at a black-and-white or a color image, each pixel is represented by either one or several numerical values. CNN-based architectures now dominate the field of computer vision to such a degree that hardly anyone these days would develop a commercial application, or enter a competition or hackathon related to image recognition, object detection, or semantic segmentation, without basing their approach on them. Many modern CNN networks owe their designs to inspirations from biology. CNNs deliver strong predictive performance and tend to be computationally efficient, because they are easy to parallelize and have far fewer parameters than dense layers. If we used a fully connected neural network for image recognition, we would need a huge number of parameters and hidden layers: even for a small 28 x 28 x 3 image, a single fully connected neuron in the first hidden layer would already need 2352 weights, which quickly leads to overfitting. That is why we do not use fully connected networks to process image data, as the quick calculation below shows.
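
A quick back-of-the-envelope comparison (the 3 x 3 x 3 receptive field below is just an illustrative choice):

# fully connected: one hidden neuron looks at every pixel of a 28x28x3 image
full_weights_per_neuron = 28 * 28 * 3   # 2352 weights for a single neuron
# convolutional: one filter looks only at a small receptive field, e.g. 3x3 over 3 channels,
# and the same weights are shared across every position in the image
conv_weights_per_filter = 3 * 3 * 3     # 27 shared weights
print(full_weights_per_neuron, conv_weights_per_filter)   # 2352 27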

In a convolutional neural network, a neuron in one layer is connected only to a small region of the layer before it, instead of to every neuron as in a fully connected network.

The figure above shows the general architecture of a CNN. A CNN is a type of feed-forward artificial neural network in which the connectivity pattern between neurons is inspired by the animal visual cortex. The basic idea is that some neurons in the cortex fire when exposed to horizontal edges, some fire when exposed to vertical edges, and some fire when exposed to diagonal edges, and this is the motivation behind the connectivity pattern.

In general, CNN has four layers.

  1. Convolution layer
  2. Max Pooling layer
  3. ReLU layer
  4. Fully connected

The main problem with image data is that objects won't always appear the same way in every image; there can be various deformations. It is similar to how a child learns to recognize objects: we can show a child a black dog and tell them it is a dog, and the next day, when some other black four-legged pet arrives at the house, the child may call it a dog even though it is actually a goat. Similarly, we have to show an algorithm many samples so that it can find a common pattern to identify an object. We have to show millions of pictures to an algorithm so that it can generalize the inputs and make predictions for new observations.

Machines see in a different way than humans do: their world consists of only 0s and 1s. CNNs have a different architecture from regular artificial neural networks. In a regular fully connected neural network, we put the input through a series of hidden layers and reach a fully connected output layer that represents the predictions. CNNs follow a slightly different approach. The layers of a CNN are organized in 3 dimensions (width, height, and depth), the neurons in one layer connect only to a small portion of the next layer rather than to all of it, and the output layer is reduced to a single vector of probability scores organized along the depth dimension. The figure below illustrates an NN (neural network) vs a CNN.

As we said earlier, the output can be a single class or a probability of classes that best describes the image. Now, the hard part is understanding what each of these layers does. Let us understand this.

CNNs have two components

  1. Feature extraction part (the hidden layers): the hidden layers perform a series of convolution and pooling operations during which the features are detected. If you had a picture of a human face, this is the part where the network would recognize the two eyes, nose, lips, etc.
  2. The classification part (fully connected output layers): as we said, the last classification layers are fully connected layers that serve as a classifier on top of the extracted features.

Convolution layer:

The convolution layer is the main building block of a CNN. As we said, convolution refers to the combination of two mathematical functions to produce a third function. Convolution is performed on the input data with the use of filters or kernels (the terms filter and kernel are used interchangeably). We apply filters over the input data to produce a feature map: the filter slides over the input, and at each location a matrix multiplication is performed and the result is summed into the feature map.

Note that in the above example the image is 2-dimensional, with width and height (a black-and-white image). If the image is colored, it is considered to have one more dimension for the RGB colors. For that reason, 2-D convolutions are usually used for black-and-white images, while 3-D convolutions are used for colored images. Let us start with a (5*5) input image with no padding and use a (3*3) convolution filter to get an output image. In the first step, the filter slides over the matrix, and each element of the filter is multiplied by the element in the corresponding location. Then you sum all the results, which gives one output value. You repeat this process by moving the filter by one column, and you get the second output. The step size as the filter slides across the image is called a stride; here, the stride is 1. The same operation is repeated to get the third output. A stride size greater than 1 will always downsize the image; if the stride is 1 and we pad the input, the size of the image stays the same. Above we have shown the operation in 2-D, but in real-life applications convolutions are mostly performed on a 3-D matrix with dimensions for width, height, and depth, where depth comes from the color channels of the image (red, green, blue).
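This sliding multiply-and-sum is easy to sketch in plain NumPy. The helper below is our own illustration rather than code from the original text; the 5*5 input values and the averaging filter are made up:

import numpy as np

def conv2d(image, kernel, stride=1):
    # Slide `kernel` over `image`; multiply element-wise and sum into a feature map.
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            feature_map[i, j] = np.sum(patch * kernel)
    return feature_map

image = np.arange(25).reshape(5, 5)   # made-up 5x5 "image"
kernel = np.ones((3, 3)) / 9.0        # a simple 3x3 averaging filter
print(conv2d(image, kernel).shape)    # (3, 3) feature map, as in the walk-through above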

We perform a number of convolutions on our input matrix, and each operation uses a different kernel (filter); the results are stored in feature maps. All the feature maps are put together as the final output of the convolutional layer. CNNs use ReLU as the activation function, and the output of the convolution is passed through it. As mentioned earlier, the convolution filter slides over the input matrix. The stride is the size of the step the convolution filter moves each time; a stride value of 1 is typical, meaning the filter slides pixel by pixel.

The animation above shows a stride size of 1. If you increase the stride size, your filter slides over the input with a larger gap and thus has less overlap between the cells. With no padding, the size of the feature map is always smaller than the input matrix, which shrinks our feature map. To prevent this shrinking, we use padding: a layer of zero-value pixels is added to surround the input. Padding helps to improve performance, makes sure the kernel and stride size fit the input, and keeps the spatial size constant after performing the convolution.
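To see how stride and padding change the output size, here is a quick Gluon check; the 28x28 input and the layer settings are our own illustrative choices:

from mxnet import nd
from mxnet.gluon import nn

x = nd.random.uniform(shape=(1, 1, 28, 28))   # one 28x28 single-channel image
no_pad = nn.Conv2D(channels=1, kernel_size=3, strides=1, padding=0)
same_pad = nn.Conv2D(channels=1, kernel_size=3, strides=1, padding=1)
strided = nn.Conv2D(channels=1, kernel_size=3, strides=2, padding=1)
for layer in (no_pad, same_pad, strided):
    layer.initialize()
    print(layer(x).shape)   # (1, 1, 26, 26), (1, 1, 28, 28), (1, 1, 14, 14)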

Max Pooling layer:

After the convolution operation, the next layer is the pooling layer. Max pooling is a sample-based discretization process. As you can see in the first diagram, after every convolution layer there is a max pooling layer. The max pooling layer helps control overfitting and shortens the training time. The pooling function progressively reduces the dimensionality to lower the number of parameters and the amount of computation in the network. Max pooling is done by applying a max filter to (usually) non-overlapping subregions of the initial representation. It reduces the computational cost by reducing the number of parameters to learn, and provides basic translation invariance to the internal representation.

Let’s say we have a 4×4 matrix representing our initial input.
Let’s say, as well, that we have a 2×2 filter that we’ll run over our input. We’ll have a stride of 2 (meaning the (dx, dy) for stepping over our input will be (2, 2)) and won’t overlap regions. For each of the regions represented by the filter, we will take the max of that region and create a new, output matrix where each element is the max of a region in the original input.

Max Pooling takes the maximum value in each window. These window sizes need to be specified beforehand. This decreases the feature map size while at the same time keeping the significant information.
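A tiny sketch of that 4x4 example in Gluon; the input values are made up:

from mxnet import nd
from mxnet.gluon import nn

x = nd.array([[1, 3, 2, 1],
              [4, 6, 5, 0],
              [7, 2, 9, 8],
              [3, 1, 4, 2]]).reshape((1, 1, 4, 4))   # (batch, channel, height, width)
pool = nn.MaxPool2D(pool_size=2, strides=2)
print(pool(x))   # [[6, 5], [7, 9]]: the max of each non-overlapping 2x2 region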

ReLU layer:

The Rectified Linear Unit (ReLU) has become very popular in the last few years. ReLU is an activation function, just as we have used different activation functions in other artificial neural networks (an activation function is also known as a transfer function). ReLU is the most used activation function in the world right now, since it is used in almost all convolutional neural networks and deep learning models.

The ReLU function is f(z) = max(0, z). As you can see, the ReLU is half rectified (from the bottom): f(z) is zero when z is less than zero, and f(z) is equal to z when z is greater than or equal to zero.

The ReLU's range is from 0 to infinity. One way ReLUs improve neural networks is by speeding up training. ReLU is idempotent. ReLU is the function max(x, 0) applied to an input x, e.g. a matrix from a convolved image: it sets all negative values in the matrix x to zero and keeps all other values constant. ReLU is executed after the convolution and is a nonlinear activation function, like tanh or sigmoid. Each activation function takes a single number and performs a certain fixed mathematical operation on it. In simple words, what the rectifier function does to an image is remove all the black (negative) elements from it, keeping only the positive values: we expect any positive value to be returned unchanged, whereas an input value of 0 or a negative value is returned as 0. ReLU allows your model to account for non-linearities and interactions. In the Gluon API we can use the built-in ReLU implementation.

net.add(gluon.nn.Dense(64, activation="relu"))

Here is a simple sample implementation of the ReLU function in Python.

# rectified linear function
def rectified(x):
  return max(0.0, x)

Fully connected layer:

The fully connected layer is a regular fully connected neural network layer; it is also referred to as the classification layer. After the convolutional, ReLU, and max-pooling layers, the classification part consists of a few fully connected layers. Fully connected layers can only accept 1-dimensional data, so to convert our 3-D data to 1-D we use a flatten operation (nn.Flatten in Gluon). This essentially arranges our 3-D volume into a 1-D vector.

This layer returns the output, which is a vector of probability-like scores.
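To make the 3-D to 1-D step concrete, Gluon's nn.Flatten turns a (batch, channels, height, width) volume into a (batch, channels*height*width) vector that a Dense layer can consume; the shape below is just an example:

from mxnet import nd
from mxnet.gluon import nn

x = nd.random.uniform(shape=(1, 50, 4, 4))   # e.g. the output of a last pooling layer
flat = nn.Flatten()
print(flat(x).shape)                          # (1, 800), ready for a Dense classifier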

Types of CNN Architectures:

In the above section, we explained the general CNN architecture, but there are different flavors of CNN based on different combinations of layers. Let us explore some useful and famous CNN architectural styles for solving complex problems. CNNs are designed to recognize visual patterns from pixel images with minimal preprocessing. The ImageNet project is a large visual database designed for object recognition research. The project runs an annual software contest, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where software programmers and researchers compete to correctly detect objects. In this section, we explore the CNN architectures of the top ILSVRC competitors.

Let us look at the picture below; it gives a broad overview of how this evolution happened.

1. LeNet-5 — LeCun et al

LeNet-5 is a 7-layer convolutional neural network by LeCun et al from 1998. It was deployed in a real-life financial banking project to recognize handwritten digits on cheques, with images digitized into 32×32 pixel greyscale inputs. The ability to process higher-resolution images requires larger and more numerous convolutional layers, so the technique is constrained by the availability of computing resources. At that time, computational capacity was limited, and hence the technique wasn't scalable to large images.

2. AlexNet — Krizhevsky et al

AlexNet is a convolutional neural network by Krizhevsky et al from 2012. It significantly outperformed all prior competitors and won the ILSVRC challenge by reducing the top-5 error from 26% to 15.3%. The network was very similar to LeNet but was much deeper, with more filters per layer, and had around 60 million parameters.

It consisted of 11×11, 5×5, and 3×3 convolutions, max pooling, dropout, data augmentation, ReLU activations, and SGD with momentum. A ReLU activation is attached after every convolutional and fully connected layer except the last softmax layer. The figure certainly looks a bit scary; this is because the network was split into two halves, each trained simultaneously on a different GPU. AlexNet was trained for 6 days on two Nvidia GeForce GTX 580 GPUs. It was designed by the SuperVision group, consisting of Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever. A simpler picture:

AlexNet consists of 5 convolutional layers and 3 fully connected layers. These 8 layers, combined with two concepts that were new at the time (MaxPooling and ReLU activation), gave the model its edge.

3. ZFNet –

The ILSVRC 2013 winner was also a CNN, known as ZFNet. It achieved a top-5 error rate of 14.8%, which is already about half of the previously mentioned non-neural error rate. The authors achieved this by tweaking the hyper-parameters of AlexNet while maintaining the same structure, together with additional deep learning elements such as dropout, augmentation, and stochastic gradient descent with momentum.

4. VGGNet — Simonyan et al

The runner-up of the 2014 ILSVRC challenge is named VGGNet. Because of the simplicity of its uniform architecture, it appeals as a simpler form of deep convolutional neural network. VGGNet was developed by Simonyan and Zisserman. It consists of 16 convolutional layers and is very appealing because of its very uniform architecture: it is very similar to AlexNet but with only 3×3 convolutions and lots of filters. VGGNet was trained on 4 GPUs for 2–3 weeks. The weight configuration of VGGNet is publicly available and has been used in many other applications and challenges as a baseline feature extractor. VGGNet has 138 million parameters, which can be a bit challenging to handle. Because the weight configurations are publicly available, this network is one of the most common choices for feature extraction from images.

VGGNet has 2 simple rules (a Gluon sketch follows the list):

  1. Each convolutional layer has the configuration: kernel size = 3×3, stride = 1×1, padding = same. The only thing that differs is the number of filters.
  2. Each max pooling layer has the configuration: window size = 2×2 and stride = 2×2. Thus, we halve the size of the image at every pooling layer.
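As a rough illustration (not the original VGG code), these two rules can be captured in a small Gluon helper; the function name and the ReLU activations are our assumptions:

from mxnet.gluon import nn

def vgg_block(num_convs, channels):
    # num_convs 3x3 convolutions (stride 1, "same" padding), followed by a
    # 2x2 max pool with stride 2 that halves the spatial size
    block = nn.HybridSequential()
    for _ in range(num_convs):
        block.add(nn.Conv2D(channels, kernel_size=3, strides=1, padding=1, activation='relu'))
    block.add(nn.MaxPool2D(pool_size=2, strides=2))
    return block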

5. GoogLeNet/Inception –

The winner of the 2014 ILSVRC competition was GoogLeNet (Inception v1), which achieved a top-5 error rate of 6.67%. GoogLeNet used an inception module, a novel concept with smaller convolutions that allowed a reduction of the number of parameters to a mere 4 million. GoogLeNet came very close to human-level performance, which the organizers of the challenge were now forced to evaluate. GoogLeNet was inspired by LeNet but implemented this novel element nicknamed the inception module; it also used batch normalization, image distortions, and RMSprop.

The two diagrams below help to understand and visualize GoogLeNet.

6. ResNet — Kaiming He et al

The 2015 ILSVRC competition brought a top-5 error rate of 3.57%, which is lower than the human top-5 error. The ResNet (Residual Network) model was used by Kaiming He et al at the competition. The network introduced a novel approach called skip connections, reminiscent of the gating used in gated units such as gated recurrent units. With this technique they were able to train a network with 152 layers while still having lower complexity than VGGNet.

It achieved a top-5 error rate of 3.57%, which beats human-level performance on this dataset. ResNet has residual connections. The idea came out of an observation: deep neural networks perform worse as we keep adding layers. That observation led to a hypothesis: direct mappings are hard to learn. So instead of learning the mapping between the output of a layer and its input, learn the difference between them, i.e., learn the residual.

The residual neural network uses 1×1 convolutions to increase and decrease the number of channels (see the sketch below).
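A minimal sketch of the residual idea in Gluon, our own simplification without batch normalization or strides; the class name is hypothetical:

from mxnet.gluon import nn

class ResidualBlock(nn.HybridBlock):
    # Two 3x3 convolutions plus a skip connection; a 1x1 convolution on the
    # shortcut changes the number of channels when needed.
    def __init__(self, channels, use_1x1conv=False, **kwargs):
        super(ResidualBlock, self).__init__(**kwargs)
        self.conv1 = nn.Conv2D(channels, kernel_size=3, padding=1, activation='relu')
        self.conv2 = nn.Conv2D(channels, kernel_size=3, padding=1)
        self.shortcut = nn.Conv2D(channels, kernel_size=1) if use_1x1conv else None

    def hybrid_forward(self, F, x):
        out = self.conv2(self.conv1(x))
        if self.shortcut is not None:
            x = self.shortcut(x)
        return F.relu(out + x)   # add the skip connection, then apply ReLU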

CNN using Gluon:

As part of this example, we are exploring the MNIST data set using a CNN. This is the best example to get our hands dirty with the Gluon API layers for building CNNs. There are four important parts we always have to consider while building any CNN; a small Gluon illustration follows the list.

  1. The kernel size
  2. The filter count (i.e., how many filters we want to use)
  3. Stride (how big the steps of the filter are)
  4. Padding
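In Gluon's nn.Conv2D these four knobs map directly to constructor arguments; the values below are purely illustrative:

from mxnet.gluon import nn

conv = nn.Conv2D(channels=32,          # 2. filter count
                 kernel_size=(3, 3),   # 1. kernel size
                 strides=(1, 1),       # 3. stride
                 padding=(1, 1))       # 4. padding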

Let us deep dive into MNIST: recognizing handwritten digits with CNNs using the Gluon API.

To start with the example, we need the MNIST data set and a few Python and Gluon modules.

import mxnet as mx
import numpy as np
from mxnet import nd, gluon, autograd
from mxnet.gluon import nn
# Select a fixed random seed for reproducibility
mx.random.seed(42)
def data_xform(data):
    """Move channel axis to the beginning, cast to float32, and normalize to [0, 1]."""
    return nd.moveaxis(data, 2, 0).astype('float32') / 255
train_data = mx.gluon.data.vision.MNIST(train=True).transform_first(data_xform)
val_data = mx.gluon.data.vision.MNIST(train=False).transform_first(data_xform)

The above code downloads the MNIST data set to the default location (typically .mxnet/datasets/mnist/ in the home directory) and creates Dataset objects: a training data set (train_data) and a validation data set (val_data); we need both of them. The transform_first() method moves the channel axis of the images to the beginning ((28, 28, 1) → (1, 28, 28)), casts them to float32, and rescales them from [0, 255] to [0, 1]. The MNIST dataset is very small, which is why we can load it entirely into memory.

Set the context (a GPU if one is available, otherwise the CPU):

ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu(0)

Then we wrap the training and validation data sets in DataLoader objects, shuffling the training set but not the validation set.
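The loaders themselves are not shown in the original listing; assuming a batch size of 100 (our choice, any reasonable value works), they could look like this:

batch_size = 100  # assumed value; the text does not fix a batch size
train_loader = mx.gluon.data.DataLoader(train_data, batch_size=batch_size, shuffle=True)
val_loader = mx.gluon.data.DataLoader(val_data, batch_size=batch_size, shuffle=False)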

conv_layer = nn.Conv2D(kernel_size=(3, 3), channels=32, in_channels=16, activation='relu')
print(conv_layer.params)

Here we define the convolutional layer. In this example we are working with 2-D data, so this is a 2-D convolution with a ReLU activation function. A CNN is a more structured weight representation: instead of connecting all inputs to all outputs, each output is connected only to a small local region of the input, and the same filter weights are reused at every location.

# define the evaluation metric and the loss function
metric = mx.metric.Accuracy()
loss_function = gluon.loss.SoftmaxCrossEntropyLoss()

We are using softmax cross-entropy as a loss function.

lenet = nn.HybridSequential(prefix='LeNet_')
with lenet.name_scope():
    lenet.add(
        nn.Conv2D(channels=20, kernel_size=(5, 5), activation='tanh'),
        nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
        nn.Conv2D(channels=50, kernel_size=(5, 5), activation='tanh'),
        nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
        nn.Flatten(),
        nn.Dense(500, activation='tanh'),
        nn.Dense(10, activation=None),
    )

Filters can learn to detect small local structures like edges, whereas later layers become sensitive to more and more global structures. Since images often contain a rich set of such features, it is customary to have each convolution layer employ and learn many different filters in parallel, so as to detect many different image features on their respective scales. The above code defines a CNN architecture called LeNet. The LeNet architecture is a popular network known to work well on digit classification tasks. We will use a version that differs slightly from the original in its usage of tanh activations instead of sigmoid.

Likewise, the input can already have multiple channels. In the example above, the convolution layer takes an input image with 16 channels and maps it to an image with 32 channels, by convolving each of the input channels with a different set of 32 filters and then summing over the 16 input channels. Therefore, the total number of filter parameters in the convolution layer is channels * in_channels * prod(kernel_size), which amounts to 4608 in this example (see the quick check below). Another characteristic feature of CNNs is the use of pooling, meaning summarizing patches to a single number. This step lowers the computational burden of training the network, but the main motivation for pooling is the assumption that it makes the network less sensitive to small translations, rotations or deformations of the image. Popular pooling strategies are max-pooling and average-pooling, and they are usually performed after convolution.
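We can sanity-check that count from the layer defined earlier; its weight shape is already known because in_channels was specified at construction:

import numpy as np
# conv_layer was created above with channels=32, in_channels=16, kernel_size=(3, 3)
print(conv_layer.weight.shape)           # (32, 16, 3, 3)
print(np.prod(conv_layer.weight.shape))  # 4608 filter parameters (plus 32 biases)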

lenet.initialize(mx.init.Xavier(), ctx=ctx)
lenet.summary(nd.zeros((1, 1, 28, 28), ctx=ctx))

The summary() method can be a great help; it requires the network parameters to be initialized and an input array to infer the sizes.

output:- 
--------------------------------------------------------------------------------
        Layer (type)                                Output Shape         Param #
================================================================================
               Input                              (1, 1, 28, 28)               0
        Activation-1                <Symbol eNet_conv0_tanh_fwd>               0
        Activation-2                             (1, 20, 24, 24)               0
            Conv2D-3                             (1, 20, 24, 24)             520
         MaxPool2D-4                             (1, 20, 12, 12)               0
        Activation-5                <Symbol eNet_conv1_tanh_fwd>               0
        Activation-6                               (1, 50, 8, 8)               0
            Conv2D-7                               (1, 50, 8, 8)           25050
         MaxPool2D-8                               (1, 50, 4, 4)               0
           Flatten-9                                    (1, 800)               0
       Activation-10               <Symbol eNet_dense0_tanh_fwd>               0
       Activation-11                                    (1, 500)               0
            Dense-12                                    (1, 500)          400500
            Dense-13                                     (1, 10)            5010
================================================================================
Parameters in forward computation graph, duplicate included
   Total params: 431080
   Trainable params: 431080
   Non-trainable params: 0
Shared params in forward computation graph: 0
Unique parameters in model: 431080

First conv + pooling layer in LeNet.

Now we train LeNet with hyperparameters such as a learning rate of 0.04. Note that it is advisable to use a GPU if possible, since this model is significantly more computationally demanding to evaluate and train.

trainer = gluon.Trainer(
    params=lenet.collect_params(),
    optimizer='sgd',
    optimizer_params={'learning_rate': 0.04},
)
metric = mx.metric.Accuracy()
num_epochs = 10
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        inputs = inputs.as_in_context(ctx)
        labels = labels.as_in_context(ctx)
        with autograd.record():
            outputs = lenet(inputs)
            loss = loss_function(outputs, labels)
        loss.backward()
        metric.update(labels, outputs)
        trainer.step(batch_size=inputs.shape[0])
    name, acc = metric.get()
    print('After epoch {}: {} = {}'.format(epoch + 1, name, acc))
    metric.reset()
for inputs, labels in val_loader:
    inputs = inputs.as_in_context(ctx)
    labels = labels.as_in_context(ctx)
    metric.update(labels, lenet(inputs))
print('Validaton: {} = {}'.format(*metric.get()))
assert metric.get()[1] > 0.985

Let us visualize the network's mistakes: some wrong predictions on the training and validation sets.

def get_mislabeled(loader):
    """Return list of ``(input, pred_lbl, true_lbl)`` for mislabeled samples."""
    mislabeled = []
    for inputs, labels in loader:
        inputs = inputs.as_in_context(ctx)
        labels = labels.as_in_context(ctx)
        outputs = lenet(inputs)
        # Predicted label is the index where the output is maximal
        preds = nd.argmax(outputs, axis=1)
        for i, p, l in zip(inputs, preds, labels):
            p, l = int(p.asscalar()), int(l.asscalar())
            if p != l:
                mislabeled.append((i.asnumpy(), p, l))
    return mislabeled
import numpy as np
sample_size = 8
wrong_train = get_mislabeled(train_loader)
wrong_val = get_mislabeled(val_loader)
wrong_train_sample = [wrong_train[i] for i in np.random.randint(0, len(wrong_train), size=sample_size)]
wrong_val_sample = [wrong_val[i] for i in np.random.randint(0, len(wrong_val), size=sample_size)]
import matplotlib.pyplot as plt
fig, axs = plt.subplots(ncols=sample_size)
for ax, (img, pred, lbl) in zip(axs, wrong_train_sample):
    fig.set_size_inches(18, 4)
    fig.suptitle("Sample of wrong predictions in the training set", fontsize=20)
    ax.imshow(img[0], cmap="gray")
    ax.set_title("Predicted: {}\nActual: {}".format(pred, lbl))
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)
fig, axs = plt.subplots(ncols=sample_size)
for ax, (img, pred, lbl) in zip(axs, wrong_val_sample):
    fig.set_size_inches(18, 4)
    fig.suptitle("Sample of wrong predictions in the validation set", fontsize=20)
    ax.imshow(img[0], cmap="gray")
    ax.set_title("Predicted: {}\nActual: {}".format(pred, lbl))
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)

The Cambridge Analytica Scandal. How to prevent?



In this post, we are not discussing the Cambridge Analytica scandal itself; instead, we look at how we can prevent such scandals with the help of blockchain technology. Cambridge Analytica and Facebook are at the center of an ongoing dispute over the alleged harvesting and use of personal data.

The core issues are data security and data leakage.

‘If you are not part of the solution, you must be part of the problem’?

I don’t want to be part of the problem. I always want to find some sort of solution, whether a preventive one or a foolproof, concrete one. We shouldn’t be part of a problem; we should build a strong understanding of it, and then we will know how to solve it. A solution won’t be found in one step; finding a solution is an iterative process.

“The first rule on breaking a rule is to know everything about the rule.” — Nuno Roque

A dark side of social media:-

I don’t want to blame social media. There has been a really positive impact, and much change has happened due to its extensive use. The role of social media in the Arab Spring has been extensively debated, and huge political and social changes have happened in the world because of it.

Social networks these days are anything but decentralized. From time to time, social networks have also shown us their dark and ugly side. Though most social media are free to sign up for and use, they come with a price. Some serious issues with social media right now are unethical social experiments, identity theft, privacy issues, and a few more. Users hand over their personal details willingly, and social media also track cookies, locations, behaviors and search preferences, mostly without users’ knowledge. This can be very destructive in the wrong hands.

As an Indian businessman, I would want our own social networking company; maybe everyone wants their own social networking site. Data is a precious resource, the fuel of the 21st century, but who is the owner, and how will we use this data? AI/ML processes are fueled by data, but which set of stakeholders owns this data?

It’s not clear which parties would be held accountable when something goes wrong with data. Who bears the risk? Who is responsible, and who pays for damages? Just because something is free doesn’t mean that it is healthy. Only the owner of the data should be the accountable one.

Remember that social networking sites are owned by private businesses and that they make their money by collecting data about individuals and selling that data on, particularly to third party advertisers. When you enter a social networking site, you are leaving the freedoms of the internet behind and are entering a network that is governed and ruled by the owners of the site. Privacy settings are only meant to protect you from other members of the social network, but they do not shield your data from the owners of the service. Essentially you are giving all your data over to the owners and trusting them with it.

BlockChain for social networking:-

It is a game-changing technology!! Here we will discuss how it fits social networking. As blockchain is based on a decentralized approach, there is no central entity controlling or collecting data from the other connections on the network. In the absence of a centralized server monitoring every activity, users can use, share and communicate more freely, knowing that the network is at its most secure. It is a peer-to-peer network, and the ownership of the data lies only with the user.

We can combine social networking with blockchain and give ownership of data to the individual user, which solves the major data-leakage problem.

Here are some blockchain based popular social networking sites.

  1. Obsidian Messenger
  2. Nexus
  3. Indorse
  4. Synereo
  5. Steemit
  6. Golos
  7. E-chat
  8. Akasha

The future of social media is decentralization. Changing the business model of social media will be an interesting challenge. Blockchain technology can effectively remove the “middlemen” from the equation. Additionally, the users themselves will retain access to their data at all times, which is well worth looking into. There is no longer a need for centralized servers.

Social networks are now well established; here are some figures for social networks in 2017.

  • Global users: 2.8 billion
  • Advertising revenue: 41 billion (USD)
  • Time spent: 2 hours and 19 minutes (daily)
  • Snapchat hits: 10 billion daily video views
  • Social media ad spend surpasses TV

These statistics show that social media has a huge impact on humans’ daily lives.

Data is a precious thing and will last longer than the systems themselves. Data is everything, and the race between the big tech companies is based on the data they have. Data privacy rules exist, but rules are breakable. I think blockchain-based social networking is the best solution.

The greatest value of an online company lies in the consumer data it collects.

There is a lot more to explore in blockchain-based social networking; this is only an outline of it.


Let’s connect on Stack Overflow, LinkedIn, Facebook & Twitter.

Gluon -API for Deep learning.


Amazon Web Services and Microsoft’s AI and Research Group this morning announced a new open-source deep learning interface called Gluon, jointly developed by the companies to let developers “prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps,” according to an announcement.

https://mli.github.io

Gluon is a clear, concise, simple yet powerful and efficient API for deep learning. Gluon is an API, not another deep learning framework; it provides a concise and clear abstraction layer that helps improve the speed, flexibility, and accessibility of deep learning technology for all developers, regardless of their framework of choice. You can use it with frameworks such as MXNet, TensorFlow, or PyTorch.

Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure.

More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

The Gluon API offers a flexible interface that simplifies the process of prototyping, building, and training deep learning models without sacrificing training speed.

Gluon is imperative for development but efficient for deployment.

Imperative code is flexible but can be slow; symbolic code is efficient and portable but harder to use. Gluon aims to give you both: you write your model imperatively, then hybridize it into a symbolic graph for speed.

Sample code with MxNet:-
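The original sample is not reproduced here; as a stand-in, a minimal sketch of Gluon's develop-imperatively, hybridize-for-speed workflow might look like this (the layer sizes are our assumptions):

import mxnet as mx
from mxnet import nd
from mxnet.gluon import nn

net = nn.HybridSequential()
with net.name_scope():
    net.add(nn.Dense(128, activation='relu'),
            nn.Dense(10))
net.initialize(mx.init.Xavier())

net.hybridize()                        # compile the imperative model into a symbolic graph
x = nd.random.uniform(shape=(4, 784))  # a dummy batch of four flattened 28x28 images
print(net(x).shape)                    # (4, 10)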

Distinct Advantages:-

  1. Friendly API: simple, easy-to-understand code
  2. Flexible, Imperative Structure
  3. Dynamic networks
  4. High-performance operators for training

Setup environment:-

The Gluon specification has already been implemented in Apache MXNet, so you can start using the Gluon interface by following these easy steps for installing the latest master version of MXNet. I recommend using Python version 3.3 or greater and implementing this example using a Jupyter notebook.

# I used miniconda and a virtual environment
# source activate gluon
# optional: update pip to the newest version
sudo pip install --upgrade pip
# install jupyter
pip install jupyter --user
# install the nightly built mxnet
pip install mxnet --pre --user

By default, MXNet comes with CPU support; you can install a GPU build as well if you have a GPU available.

pip install mxnet-cu75 --pre --user  # for CUDA 7.5
# for CUDA 8.0 use this mxnet-cu80 --pre --user
#start notebook and enjoy coding
jupyter notebook

Multilayer Perceptron using gluon

Using gluon, we only need two additional lines of code to transform our logistic regression model into a multilayer perceptron.

from __future__ import print_function
import mxnet as mx
import numpy as np
from mxnet import nd, autograd
from mxnet import gluon

You can run the computation on either the CPU or a GPU:

ctx = mx.cpu() # or GPU mx.gpu(0)

The most popular deep learning hello world dataset is MNIST.

mnist = mx.test_utils.get_mnist()
batch_size = 64
num_inputs = 784
num_outputs = 10
def transform(data, label):
    return data.astype(np.float32)/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
                                      batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                                     batch_size, shuffle=False)

Then here is your model:

num_hidden = 256
net = gluon.nn.Sequential()
# ReLU to activate
with net.name_scope():
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_outputs))
# initialization of the parameters
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
# calculate cross entropy loss
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
# evaluate accuracy
def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]

Everything is ready; now we can train the network:

epochs = 15
smoothing_constant = .01

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
            loss.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (curr_loss if ((i == 0) and (e == 0))
                       else (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss)

    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
          (e, moving_loss, train_accuracy, test_accuracy))

This is just a simple idea. Apply this and let me know.


Let’s connect on Stack Overflow, LinkedIn, Facebook & Twitter.