Gluon: API for Deep Learning

Amazon Web Services and Microsoft’s AI and Research Group this morning announced a new open-source deep learning interface called Gluon, jointly developed by the companies to let developers “prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps,” according to an announcement.

https://mli.github.io

Gluon is a clear, concise, simple yet powerful and efficient API for deep learning. Gluon is an API, not another deep learning framework: it provides a concise and clear abstraction layer that improves the speed, flexibility, and accessibility of deep learning for all developers, regardless of their deep learning framework of choice. The specification can be implemented on top of frameworks such as MXNet, TensorFlow, or PyTorch.

Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure.

More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

The Gluon API offers a flexible interface that simplifies the process of prototyping, building, and training deep learning models without sacrificing training speed.

Gluon is imperative while you develop, yet simple and efficient when you deploy.

Imperative code is flexible and easy to debug, but it can be slow. Symbolic code is efficient and portable, but harder to work with. Gluon aims to give you both.

Sample code with MxNet:-
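Here is a minimal sketch of my own (assuming only that MXNet is installed) of what this looks like: you define and debug the network imperatively, then a single hybridize() call converts the same model into an optimized symbolic graph for deployment.

from mxnet import nd
from mxnet.gluon import nn

# Define the network imperatively, like any other Python object.
net = nn.HybridSequential()
with net.name_scope():
    net.add(nn.Dense(128, activation="relu"))
    net.add(nn.Dense(10))
net.initialize()

# Imperative mode: run it eagerly, inspect intermediate results, debug line by line.
x = nd.random.uniform(shape=(1, 784))
print(net(x))

# One call compiles the same model into a symbolic graph for fast, portable deployment.
net.hybridize()
print(net(x))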

Distinct Advantages:-

  1. Friendly API: simple, easy-to-understand code
  2. Flexible, Imperative Structure
  3. Dynamic networks
  4. High-performance operators for training

Setup environment:-

The Gluon specification has already been implemented in Apache MXNet, so you can start using the Gluon interface by following these easy steps for installing the latest master version of MXNet. I recommend using Python version 3.3 or greater and implementing this example using a Jupyter notebook.

# I used miniconda and a virtual environment
# source activate gluon
# optional: update pip to the newest version
sudo pip install --upgrade pip
# install jupyter
pip install jupyter --user
# install the nightly built mxnet
pip install mxnet --pre --user

By default MXNet ships with CPU support; you can install the GPU build as well if you have a GPU available.

pip install mxnet-cu75 --pre --user  # for CUDA 7.5
# for CUDA 8.0: pip install mxnet-cu80 --pre --user
# start the notebook and enjoy coding
jupyter notebook

Multilayer Perceptron using Gluon

Using gluon, we only need two additional lines of code to transform our logistic regression model into a multilayer perceptron.

from __future__ import print_function
import mxnet as mx
import numpy as np
from mxnet import nd, autograd
from mxnet import gluon

You can run the computation on either the CPU or a GPU:

ctx = mx.cpu() # or GPU mx.gpu(0)
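If you are not sure whether a GPU is available, a small convenience snippet of my own keeps the rest of the code unchanged:

try:
    ctx = mx.gpu(0)
    _ = mx.nd.zeros((1,), ctx=ctx)  # probe: raises if there is no GPU or no CUDA build
except Exception:
    ctx = mx.cpu()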

The most popular deep learning hello world dataset is MNIST.

mnist = mx.test_utils.get_mnist()
batch_size = 64
num_inputs = 784
num_outputs = 10
def transform(data, label):
    return data.astype(np.float32)/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
                                      batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
                                     batch_size, shuffle=False)
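Before defining the model, it can help to peek at one batch to confirm the shapes the loaders produce (a quick illustrative check):

for data, label in train_data:
    print(data.shape, label.shape)  # expect (64, 28, 28, 1) and (64,)
    break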

Then here is your model:

num_hidden = 256
net = gluon.nn.Sequential()
# ReLU activation for the hidden layers
with net.name_scope():
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))
    net.add(gluon.nn.Dense(num_hidden, activation="relu"))         
    net.add(gluon.nn.Dense(num_outputs))
#initialization of the parameters 
net.collect_params().initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
#calculate cross entropy loss
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
#evaluate accuracy
def evaluate_accuracy(data_iterator, net):
    acc = mx.metric.Accuracy()
    for i, (data, label) in enumerate(data_iterator):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        output = net(data)
        predictions = nd.argmax(output, axis=1)
        acc.update(preds=predictions, labels=label)
    return acc.get()[1]

Everything is ready; now train the network:

epochs = 15
smoothing_constant = .01

for e in range(epochs):
    for i, (data, label) in enumerate(train_data):
        data = data.as_in_context(ctx).reshape((-1, 784))
        label = label.as_in_context(ctx)
        with autograd.record():
            output = net(data)
            loss = softmax_cross_entropy(output, label)
        loss.backward()
        trainer.step(data.shape[0])

        ##########################
        #  Keep a moving average of the losses
        ##########################
        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (curr_loss if ((i == 0) and (e == 0))
                       else (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss)

    test_accuracy = evaluate_accuracy(test_data, net)
    train_accuracy = evaluate_accuracy(train_data, net)
    print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" %
          (e, moving_loss, train_accuracy, test_accuracy))
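After training, you can sanity-check the network by predicting on a single test batch (an illustrative snippet):

for data, label in test_data:
    data = data.as_in_context(ctx).reshape((-1, 784))
    predictions = nd.argmax(net(data), axis=1)
    print("predicted:", predictions[:10].asnumpy())
    print("actual:   ", label[:10].asnumpy())
    break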

This is just a simple idea. Apply this and let me know.


Let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.

Microservice architecture for a legacy code base

Refactoring legacy code is an art.

There are no prerequisites for understanding this article, but reading Manage Big Ball of Mud first could be advantageous.

As new features and new functionality are introduced, the complexity of an application can increase dramatically, and the code base becomes harder to maintain and harder to extend. The application becomes a Big Ball of Mud. Teams struggle to maintain such complex applications, and some suggest replacing the whole application with new technology, new hosting, and/or a new architecture pattern. Replacing the complete solution is really hard.

Microservices and serverless architecture have been big buzzwords in the industry. Starting greenfield development with them is a fairly easy task. But what about a legacy application?

Strangulation

From Michiel Rook’s blog: gradually converting a monolith into a micro-service app.

Martin Fowler describes the Strangler Application:

One of the natural wonders of this area are the huge strangler vines. They seed in the upper branches of a fig tree and gradually work their way down the tree until they root in the soil. Over many years they grow into fantastic and beautiful shapes, meanwhile strangling and killing the tree that was their host.

It is a long journey to convert a monolithic application into micro-services, nano-services, or any other architectural pattern that suits your application's context.

Solution

In a 2004 article on his website, Martin Fowler defined the Strangler Application pattern as a way of handling the release of refactored code in a large web application. The fundamental strategy is EventInterception, which can be used to gradually move functionality to the strangler and to enable AssetCapture.

The Strangler Application is based on an analogy to a vine that strangles the tree it's wrapped around. The idea is that you use the structure of an application, namely the fact that large apps are built out of individual URIs that map functionally to different dimensions of a business domain, to divide the application into different functional domains, and replace those domains with a new micro-services-based implementation one domain at a time. This creates two separate applications that live side by side in the same URI space. Over time, the newly refactored application "strangles" or replaces the original application until finally you can shut off the monolithic application. Thus, the Strangler Application steps are:

  1. Choose a specific piece of functionality that stands alone within the application.
  2. Modify or refactor it and rebuild it as a service.
  3. Deploy it behind a proxy that routes traffic, so that both the legacy and the new implementation can serve users, or create a simple facade to intercept and filter the requests going to the backend legacy system (see the sketch after this list).
  4. Repeat the steps above until the application is fully migrated.
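As a rough illustration of step 3, here is a minimal Python facade. This is a sketch of the idea only: the service URLs, ports, and path prefixes are hypothetical, and Flask plus requests are just one convenient way to write such a proxy. Requests for already-migrated domains go to the new micro-service; everything else falls through to the legacy monolith.

from flask import Flask, request, Response
import requests

app = Flask(__name__)

LEGACY_URL = "http://localhost:8080"       # legacy monolith (hypothetical address)
NEW_SERVICE_URL = "http://localhost:9090"  # new micro-service (hypothetical address)
MIGRATED_PREFIXES = ("/orders", "/billing")  # domains already strangled out (hypothetical)

@app.route("/", defaults={"path": ""}, methods=["GET", "POST", "PUT", "DELETE"])
@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    # Route to the new service if the path belongs to a migrated domain, else to the legacy app.
    target = NEW_SERVICE_URL if any(("/" + path).startswith(p) for p in MIGRATED_PREFIXES) else LEGACY_URL
    resp = requests.request(
        method=request.method,
        url=f"{target}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(),
        params=request.args,
    )
    return Response(resp.content, status=resp.status_code)

if __name__ == "__main__":
    app.run(port=8000)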

The great thing about applying this pattern is that it creates incremental value in a much faster timeframe than if you tried a "big bang" migration in which you update all the code of your application before you release any of the new functionality. It also gives you a gradual approach for adopting micro-services, one where, if you find that the approach doesn't work in your environment, you have a simple way to change direction.

The Strangler pattern addresses the following problems:

  1. Legacy code or a Big Ball of Mud
  2. Complicated and complex architecture
  3. Monolithic design
  4. Fragmented business rules
  5. Painful deployment process

There are certain aspects that are specific to each particular application. If the code is really old, we first need to figure it out and refactor it into some functional-level separation, and then apply the Strangler pattern.

There are two sides: either complain about things and be part of the problem, or take it up, find a solution, and be part of the solution.

For more reference:-

https://docs.microsoft.com/en-us/azure/architecture/patterns/strangler

This is just a simple idea. Apply it and let me know.


Let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.