Gluon framework

Gluon: an API for deep learning.

Amazon Web Services and Microsoft’s AI and Research Group announced a new open-source deep learning interface called Gluon, jointly developed by the two companies to let developers “prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps,” according to the announcement.

https://mli.github.io

Gluon is a clear, concise, simple yet powerful and efficient API for deep learning. Gluon is not another deep learning framework; it is an API, a clear and concise abstraction layer that improves the speed, flexibility, and accessibility of deep learning for all developers, regardless of their framework of choice. The specification is framework-agnostic: today it is implemented in Apache MXNet, with support for Microsoft Cognitive Toolkit planned.

Developers who are new to machine learning will find this interface feels more like traditional code, since machine learning models can be defined and manipulated just like any other data structure.

More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

The Gluon API offers a flexible interface that simplifies the process of prototyping, building, and training deep learning models without sacrificing training speed.

Gluon is imperative for development but efficient for deployment.

Imperative programming is flexible but can be slow; symbolic programming is efficient and portable but harder to use. Gluon lets you develop in the imperative style and then hybridize the network into a symbolic graph when you deploy, giving you both.
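Here is a minimal sketch of that idea (assuming the standard HybridSequential and hybridize() APIs in MXNet's Gluon): you develop imperatively, then a single call caches a symbolic graph.

import mxnet as mx
from mxnet import nd, gluon

# build the network imperatively, exactly as you would while debugging
net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Dense(256, activation="relu"))
    net.add(gluon.nn.Dense(10))
net.collect_params().initialize()

# one call compiles a symbolic graph for faster execution and easy deployment
net.hybridize()

x = nd.random.uniform(shape=(1, 784))
print(net(x))  # same imperative call syntax, now running the cached graph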

Sample code with MXNet:-
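A minimal sketch of defining and running a small network with the Gluon API (the layer sizes here are purely illustrative):

import mxnet as mx
from mxnet import nd, gluon

# define a small feed-forward network like any other Python object
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(10))

# initialize the parameters and run a forward pass on random data
net.collect_params().initialize(mx.init.Xavier())
x = nd.random.uniform(shape=(4, 20))
print(net(x).shape)  # (4, 10)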

Distinct advantages:-

  1. Friendly API: simple, easy-to-understand code
  2. Flexible, imperative structure
  3. Dynamic networks (see the sketch after this list)
  4. High-performance operators for training
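To illustrate point 3, here is a toy, hypothetical Block whose forward pass uses ordinary Python control flow, so the computation graph can change from batch to batch:

import mxnet as mx
from mxnet import nd, gluon

class DynamicNet(gluon.Block):
    def __init__(self, **kwargs):
        super(DynamicNet, self).__init__(**kwargs)
        with self.name_scope():
            self.hidden = gluon.nn.Dense(16, activation="relu", in_units=8)
            self.extra = gluon.nn.Dense(16, activation="relu", in_units=16)
            self.out = gluon.nn.Dense(2, in_units=16)

    def forward(self, x):
        h = self.hidden(x)
        # ordinary Python control flow decides the graph at run time
        if nd.mean(x).asscalar() > 0.5:
            h = self.extra(h)  # take an extra step only for "large" inputs
        return self.out(h)

net = DynamicNet()
net.collect_params().initialize()
for _ in range(3):
    print(net(nd.random.uniform(shape=(4, 8))))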

Set up the environment:-

The Gluon specification has already been implemented in Apache MXNet, so you can start using the Gluon interface by following these easy steps to install the latest master version of MXNet. I recommend using Python 3.3 or greater and running this example in a Jupyter notebook.

# I used miniconda and a virtual environment
# source activate gluon
# optional: update pip to the newest version
sudo pip install --upgrade pip
# install jupyter
pip install jupyter --user
# install the nightly build of mxnet
pip install mxnet --pre --user

By default, the MXNet package is CPU-only; you can install a GPU build as well if you have a GPU available.

pip install mxnet-cu75 --pre --user  # for CUDA 7.5
# for CUDA 8.0: pip install mxnet-cu80 --pre --user
# start the notebook and enjoy coding
jupyter notebook

Multilayer Perceptron using gluon

Using gluon, we need only two additional lines of code to turn our logistic regression model into a multilayer perceptron.
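For reference, a minimal sketch of what the logistic regression version looks like (the same Sequential pattern used below, just with no hidden layers); the MLP simply adds two net.add(gluon.nn.Dense(num_hidden, activation="relu")) lines:

from mxnet import gluon

num_outputs = 10  # ten MNIST digit classes, as defined later in the post

# logistic regression: a single output layer, no hidden layers
lr_net = gluon.nn.Sequential()
with lr_net.name_scope():
    lr_net.add(gluon.nn.Dense(num_outputs))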

from __future__ import print_function
import mxnet as mx
import numpy as np
from mxnet import nd, autograd
from mxnet import gluon

You can run the computation on a CPU or a GPU:

ctx = mx.cpu()  # or mx.gpu(0) to use a GPU

The most popular deep learning hello world dataset is MNIST.

mnist = mx.test_utils.get_mnist()
batch_size = 64
num_inputs = 784
num_outputs = 10

def transform(data, label):
    # scale pixel values to [0, 1] and cast labels to float32
    return data.astype(np.float32)/255, label.astype(np.float32)

train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
                                      batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                                     batch_size, shuffle=False)

Then here is your model:

num_hidden = 256
net = gluon.nn.Sequential()
# two hidden layers with ReLU activations, plus an output layer
with net.name_scope():
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_outputs))
# initialize the parameters
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
# softmax cross-entropy loss
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
# plain SGD optimizer
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
# evaluate accuracy over a data iterator
def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]

Everything is ready; now train the network:

epochs = 15
smoothing_constant = .01

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (curr_loss if ((i == 0) and (e == 0))
                       else (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss)

    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
          (e, moving_loss, train_accuracy, test_accuracy))
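Once training finishes, you can sanity-check the model by predicting on a single batch from the test set (a small usage sketch, reusing the net, ctx, and test_data defined above):

# predict on one batch from the test set and compare with the true labels
for data, label in test_data:
    data = data.as_in_context(ctx).reshape((-1, 784))
    predictions = nd.argmax(net(data), axis=1)
    print("predicted:", predictions[:10].asnumpy())
    print("actual:   ", label[:10].asnumpy())
    break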

This is just a simple example. Try it out and let me know how it goes.

For more stories, let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.