Kubernetes: 5 Free Learning Resources

If you don’t know Docker yet, read about it here first.

Kubernetes (K8s) is an orchestration platform for managing containers.

Kubernetes has become a buzzword thanks to the boom in containerization and microservices. A microservices architecture, with all its pros and cons, leaves us with plenty of containers, and the question becomes how to manage them; Kubernetes is the answer.

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.

The definition above is from the official Kubernetes website.

In this article, we will list five free resources that will help you learn Kubernetes.

Kubernetes is container orchestration software.

  1. Learn Kubernetes Basics:- Learn Kubernetes Basics is the official tutorial from the developers of Kubernetes. It walks you step by step through the basics of the Kubernetes cluster orchestration system. Each module contains background information on major Kubernetes features and concepts and includes an interactive online tutorial. I think this is the best place to start learning Kubernetes.
  2. Kubernetes Learning Path:- This course takes you from Kubernetes basics to advanced networking and workloads. The tutorial series is published by IBM and is a very good resource for diving deep into Kubernetes. If you are new to Kubernetes and container orchestration and want to begin learning about it, this learning path covers everything from basic prerequisites to the more advanced skills needed for containerization. After completing this course, you will be able to understand the basics of containers, build containerized applications and deploy them onto Kubernetes, understand the advantages of a deployment that uses Helm with Kubernetes, deploy various microservices with Kubernetes, understand basic networking for applications running in Kubernetes, and much more.
  3. A Tutorial Introduction to Kubernetes:- A Tutorial Introduction to Kubernetes is provided by Ulaş Türkmen on his blog. In this tutorial series, you will learn how to use Kubernetes with Minikube, how to configure kubectl, how to work with nodes and namespaces, how to use the dashboard, how to deploy various container images to demonstrate Kubernetes features, how to run services, and more.
  4. Coursera:- Architecting with Google Kubernetes Engine Specialization is the course name; it is designed and developed by Google. In this course you will learn how to implement solutions using Google Kubernetes Engine (GKE), including building, scheduling, load balancing, and monitoring workloads, as well as providing for service discovery, managing role-based access control and security, and providing persistent storage to these applications.
  5. Fundamentals of Containers, Kubernetes, and Red Hat OpenShift:- This course provides an introduction to container and container orchestration technology using Docker, Kubernetes, and Red Hat OpenShift Container Platform. You will learn how to containerize applications and services, test them using Docker, and deploy them on a Kubernetes cluster using Red Hat OpenShift. Additionally, you will build and deploy an application from source code using the Source-to-Image facility of Red Hat OpenShift. After completing this course, you will be able to create containerized services, manage containers and container images, create custom container images, and deploy containerized applications on Red Hat OpenShift.

Kubernetes — Explained Like You’re Five

Summary:- Modern applications are increasingly built using containers: microservices packaged with their dependencies and configurations. Kubernetes is open-source orchestration software for deploying and managing those containers at scale. With Kubernetes, you can build, deploy, deliver, and scale containerized apps faster and more smoothly.

This is just a list of sources from which we can start our journey and get our hands dirty. There is a more complete list of concepts available on the Kubernetes website. I suggest you give them a quick look if you want a better grasp of who does what and what goes where.

That is it for now. Please let me know if you know of other resources that are simple and effective.

JUnit5: Parameterized Tests

As we saw in JUnit 5 part 1 and part 2, JUnit 5 is very impressive in its extension model and architectural style, as well as in its assumptions. Another great aspect of JUnit 5 is its compatibility with lambda expressions. In this section, let us start looking at JUnit 5 parameterized tests.

The term parameter is often used to refer to the variable as found in the function definition, while argument refers to the actual input passed.

Maven Dependency:

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <version>5.4.2</version>
    <scope>test</scope>
</dependency>

Gradle Dependency:

testCompile("org.junit.jupiter:junit-jupiter-params:5.4.2")

The basic difference is in how the method is annotated: a parameterized test is declared with @ParameterizedTest instead of @Test. There are scenarios where we want to pass values dynamically as method arguments and unit test each of them; parameterized tests are useful for exactly this.

The code below looks incomplete. Where will the value of word come from? How would JUnit know which arguments the parameter word should take? Indeed, the Jupiter engine does not execute the test and instead throws a PreconditionViolationException.

@ParameterizedTest
void parameterizedTest(String word) {
 assertNotNull(word);
}
Configuration error: You must provide at least
one argument for this @ParameterizedTest

Let us start correcting the above exception.

@ParameterizedTest
@ValueSource(strings = {"JUnit5 ParamTest" , "Welcome"})
void withValueSource(String word) {
 assertNotNull(word);
}

Now the above code executes successfully. This is just a simple use case, but in a real-life project you need more tools, so you should know the @ValueSource annotation in detail.

ValueSource:

The @ValueSource annotation provides a source of arguments to the parameterized test method. A source can be a single value, an array of values, a null source, a CSV file, and so on. As we have seen in the example above, with @ValueSource we can pass an array of literal values to the test method.

public class Strings {
    public static boolean isEmptyString(String str) {
        return str == null || str.trim().isEmpty();
    }
}

// A test case for the above method could be:
@ParameterizedTest
@ValueSource(strings = {"", "  "})
void isEmptyStringReturnsTrueForBlankStrings(String str) {
    assertTrue(Strings.isEmptyString(str));
}

Limitations of @ValueSource:

1. It supports only the following literal types (see the sketch after this list).

  • short (with the shorts attribute)
  • byte (with the bytes attribute)
  • int (with the ints attribute)
  • long (with the longs attribute)
  • float (with the floats attribute)
  • double (with the doubles attribute)
  • char (with the chars attribute)
  • java.lang.String (with the strings attribute)
  • java.lang.Class (with the classes attribute)

2. We can pass only one argument to the test method each time.

3. We cannot pass null as an argument to the test method.
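As a quick hedged sketch (the isEven method below is made up for illustration), the numeric attributes work just like the strings attribute:

// Hypothetical production method under test.
static boolean isEven(int number) {
    return number % 2 == 0;
}

// One test invocation per int literal supplied via the ints attribute.
@ParameterizedTest
@ValueSource(ints = {2, 4, 100})
void isEvenReturnsTrueForEvenNumbers(int number) {
    assertTrue(isEven(number));
}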

@NullSource and @EmptySource:

We can pass a single null value to a parameterized test method using @NullSource; it cannot be used for parameters of primitive data types.

@EmptySource passes a single empty argument; you can use it for String, collection types, and arrays too.

In order to pass both null and empty values, we can use the composed @NullAndEmptySource annotation.

@ParameterizedTest
@NullAndEmptySource
@ValueSource(strings = {" ", "\t", "\n"})
void isEmptyStringReturnTrueForAllTypesOfBlankStrings(String input) {
    assertTrue(Strings.isEmptyString(input));
}

EnumSource:

As the name implies, if we want to test different values from an enumeration, we can use @EnumSource.

@ParameterizedTest
@EnumSource(WeekDay.class)
void getValueForADay_IsAlwaysBetweenOneAndSeven(WeekDay day) {
    int dayNumber = day.getValue();
    assertTrue(dayNumber >= 1 && dayNumber <= 7);
}

We can filter out a few constants by using the names attribute. The @EnumSource annotation also has a mode option for selecting enum constants: you can either include or exclude the named constants, for example with EnumSource.Mode.EXCLUDE.

We can pass both string literals and regular expressions to the names attribute, as in the sketch below.
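A minimal hedged sketch, assuming the same WeekDay enum as above with constants MONDAY through SUNDAY and getValue() returning 1 for Monday up to 7 for Sunday:

// Runs only for the two named constants.
@ParameterizedTest
@EnumSource(value = WeekDay.class, names = {"MONDAY", "TUESDAY"})
void runsForNamedDaysOnly(WeekDay day) {
    assertTrue(day.getValue() <= 2);
}

// Runs for every constant except the excluded ones.
@ParameterizedTest
@EnumSource(value = WeekDay.class, mode = EnumSource.Mode.EXCLUDE, names = {"SATURDAY", "SUNDAY"})
void skipsWeekendDays(WeekDay day) {
    assertTrue(day.getValue() <= 5);
}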

See the JUnit 5 reference documentation for more details.

CsvSource:

We also need argument sources capable of passing multiple arguments. As we know, @ValueSource and @EnumSource allow only one argument at a time. In a real-life project we often want to read raw input values, manipulate them, and unit test the result; @CsvSource exists for that purpose.

@ParameterizedTest
@CsvSource(value = {"juniT:junit", "MaN:man", "Java:java"}, delimiter = ':')
void toLowerCaseValue(String input, String expected) {
    String actualValue = input.toLowerCase();
    assertEquals(expected, actualValue);
}

In the above example each entry is a key-value pair with a colon as the delimiter. We can also pass a CSV file as a resource:

//CSV file
input,expected
Ram,RAM
tYpE,TYPE
Java,JAVA
koTliN,KOTLIN

@ParameterizedTest
@CsvFileSource(resources = "/testdata.csv", numLinesToSkip = 1)
void toUpperCaseValueCSVFile(String input, String expected) {
    String actualValue = input.toUpperCase();
    assertEquals(expected, actualValue);
}

The resources attribute points to CSV file resources on the classpath, and we can pass multiple CSV files too. Let us take a few more examples.

@ParameterizedTest
@CsvSource({
    "2019-09-21, 2018-09-21",
    "null, 2018-08-15",
    "2017-04-01, null"
})
void shouldCreateValidDateRange(LocalDate startDate, LocalDate endDate) {
    new DateRange(startDate, endDate);
}

@ParameterizedTest
@CsvSource({
    "2019-09-21, 2017-09-21",
    "null, null"
})
void shouldNotCreateInvalidDateRange(LocalDate startDate, LocalDate endDate) {
    assertThrows(IllegalArgumentException.class, () -> new DateRange(startDate, endDate));
}

When you execute the above tests, you will end up with an exception:

org.junit.jupiter.api.extension.ParameterResolutionException: Error converting parameter at index 0: Failed to convert String "null" to type java.time.LocalDate

A null value is not accepted by @ValueSource or @CsvSource.
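As a side note, a minimal hedged sketch assuming a newer JUnit Jupiter version (5.6 or later), where @CsvSource offers a nullValues attribute that converts chosen tokens into real null arguments:

@ParameterizedTest
@CsvSource(value = {
    "2019-09-21, 2018-09-21",
    "null, 2018-08-15"
}, nullValues = "null") // "null" tokens become actual null arguments (JUnit 5.6+)
void shouldCreateValidDateRangeWithNulls(LocalDate startDate, LocalDate endDate) {
    new DateRange(startDate, endDate);
}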

Method Source:

@ValueSource and @EnumSource are pretty simple and share one limitation: they do not support complex types. @MethodSource allows us to provide a more complex argument source. The @MethodSource annotation takes the name of a method as its argument, and this must match an existing static method that returns a Stream.

@ParameterizedTest
@MethodSource("wordsWithLength")
void withMethodSource(String word, int length) { }

private static Stream<Arguments> wordsWithLength() {
    return Stream.of(
        Arguments.of("JavaTesting", 11),
        Arguments.of("JUnit 5", 7));
}

When we do not provide a name to @MethodSource, JUnit will search for a source method with the same name as the parameterized test method.

@ParameterizedTest
@MethodSource
void wordsWithLength(String word, int length) { }

Custom Argument Provider:

So far we have covered the built-in argument providers, but in a few scenarios they may not be enough; for those cases JUnit lets you create your own argument provider. To achieve this, we have to implement an interface called ArgumentsProvider.

public interface ArgumentsProvider {
    Stream<? extends Arguments> provideArguments(
        ExtensionContext context) throws Exception;
}

Example:

As an example, let us test with a custom blank-string provider.

class EmptyStringsArgumentProvider implements ArgumentsProvider {

    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
        return Stream.of(
            Arguments.of(""),
            Arguments.of("   "),
            Arguments.of((String) null)
        );
    }
}

We can use this custom argument provider with the @ArgumentsSource annotation.

@ParameterizedTest
@ArgumentsSource(EmptyStringsArgumentProvider.class)
void isEmptyStringsArgProvider(String input) {
    assertTrue(Strings.isEmptyString(input));
}

Summary:

As part of this article, we have discussed parameterized test cases, the built-in argument providers, and custom argument providers using the ArgumentsProvider interface and @ArgumentsSource.

There are different source providers, from primitive values to CSV files and method sources. That is it for now.

Junit5 Assumptions

Assumptions are used to run tests only if certain conditions are met. This is typically used for external conditions that are required for the test to execute properly, but which are not directly related to whatever is being unit tested.

If the assumeTrue() condition is true, the test runs; otherwise, the test is aborted.

If the assumeFalse() condition is false, the test runs; otherwise, the test is aborted.

assumingThat() is much more flexible: it allows part of the test code to run conditionally.

When an assumption fails, a TestAbortedException is thrown and the test execution is aborted.

@Test
void trueAssumption() {
    assumeTrue(6 > 2);
    assertEquals(6 + 2, 8);
}

@Test
void falseAssumption() {
    assumeFalse(4 < 1);
    assertEquals(4 + 2, 6);
}

@Test
void assumptionThat() {
    String str = "a simple string";
    assumingThat(
        str.equals("a simple string"),
        () -> assertEquals(3 + 2, 5)
    );
}

https://www.techwasti.com/junit5-tutorial-part-1/
https://www.techwasti.com/junit5-part2/

Junit5 tutorial: Part2

Before exploring this part, please read part 1 first (linked here).

This is JUnit 5 for beginners. In this tutorial, let us get our hands dirty and gain some practical experience. Whether you use Maven or Gradle as your build tool, you can add the dependency as shown below.

Maven dependency for Junit 5.0:

Add the below dependency to pom.xml.

<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.1.0</version>
    <scope>test</scope>
</dependency>

JUnit 5 with Gradle:

Add the below configuration to the build.gradle file. We start by telling the build tool which test platform to use; here we specify the JUnit Platform.

test {
    useJUnitPlatform()
}

After the above step, we need to provide the JUnit 5 dependencies, and here is where JUnit 4 and JUnit 5 differ. As we discussed in the previous article, JUnit 5 is modular, so there are different modules and each one has a different purpose.

dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.3.1'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.3.1'
}

In JUnit 5, though, the API is separated from the runtime, meaning two dependencies are needed, provided via testImplementation and testRuntimeOnly respectively.

The API lives in junit-jupiter-api. The runtime is junit-jupiter-engine for JUnit 5, and junit-vintage-engine for JUnit 3 or 4 tests.

1. JUnit Jupiter:

In the first article, we explored the differences between JUnit 4 and JUnit 5. New annotations were introduced as part of this module. The new JUnit 5 annotations, in comparison to JUnit 4, are listed below (a short sketch follows the list):

@Tag — Marks tags on test methods or test classes, for filtering tests.

@ExtendWith — Registers custom extensions.

@Nested — Used to create nested test classes.

@TestFactory — Denotes that a method is a test factory for dynamic tests.

@BeforeEach — The annotated method will be run before each test method in the test class. (Similar to JUnit 4 @Before)

@AfterEach — The annotated method will be executed after each test method. (Similar to JUnit 4 @After)

@BeforeAll — The annotated method will be executed before all test methods in the current class. (Similar to JUnit 4 @BeforeClass)

@AfterAll — The annotated method will be executed after all test methods in the current class. (Similar to JUnit 4 @AfterClass)

@Disabled — Used to disable a test class or method. (Similar to JUnit 4 @Ignore)

@DisplayName — Defines a custom display name for a test class or a test method.
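As a quick hedged sketch (the class and method names below are made up for illustration), here is how a few of these annotations look in a test class:

@DisplayName("Order service tests")
class OrderServiceTest {

    @Test
    @Tag("fast")
    void createsOrder() {
        // tagged so it can be filtered, e.g. run only the "fast" tests
    }

    @Nested
    @DisplayName("when the order is empty")
    class WhenEmpty {

        @Test
        @DisplayName("total is zero")
        void totalIsZero() {
            // nested classes group related test cases together
        }
    }
}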

2. JUnit Vintage:

This module introduces no new annotations; its purpose is to support running JUnit 3 and JUnit 4 based tests on the JUnit 5 platform.

Let us deep dive and do some coding:

@DisplayName and @Disabled:

As you can see from the code snippet below, I have given two flavors of @DisplayName on two methods. This way you can provide your own custom display name to identify a test easily.

@DisplayName("Happy Scenario")
@Test
void testSingleSuccessTest() 
{
System.out.println("Happy Scenario");
}

@Test
@DisplayName("Failure scenario")
void testFailScenario() {
System.out.println("Failure scenario")
}

To disable test cases whose implementation is not yet complete, or to skip them for some other reason:

@Test
@Disabled("Under construction")
void testSomething() {
}

@BeforeAll and @BeforeEach:

@BeforeAll is like a setup method: it is invoked once before all test methods in the test class, while @BeforeEach is invoked before each test method in the class.

A method annotated with @BeforeAll must be static, and it runs once before any test method is run.

@BeforeAll
static void setup() {
    System.out.println("@BeforeAll: executes once before all test methods in this class");
    System.out.println("This is like a setup for the test methods");
}

@BeforeEach
void init() {
    System.out.println("@BeforeEach: executes before each test method in this class");
    System.out.println("Initialisation before each test method");
}

@AfterEach and @AfterAll:

@AfterEach is invoked after each test method in the class, and @AfterAll is invoked once after all test methods have run; @AfterAll is like a finalization task.

A method annotated with @AfterAll must be static, and it runs once after all test methods have been run.

@AfterEach
void tearDown() {
    System.out.println("@AfterEach: executed after each test method.");
}

@AfterAll
static void finish() {
    System.out.println("@AfterAll: executed after all test methods.");
}

Assertions and Assumptions:

Assertions and assumptions are the base of unit testing. JUnit 5 takes full advantage of Java 8 features, such as lambda expressions, to make assertions simple and effective.

Assertions:

JUnit 5 assertions are part of the org.junit.jupiter.api.Assertions API and have improved significantly; since Java 8 is the base of JUnit 5, you can leverage all of its features, primarily lambda expressions. Assertions help in validating the expected output against the actual output of a test case.

@Test
void testLambdaExpression() {
    assertTrue(Stream.of(4, 5, 9)
        .mapToInt(i -> i)
        .sum() > 15, () -> "Sum should be greater than 15");
}

Notice the lambda expression above: because the failure message is supplied as a lambda, it is only evaluated if the assertion actually fails.

All JUnit Jupiter assertions are static methods.

@Test
void testCase() {
    // Pass
    Assertions.assertNotEquals(3, Calculator.add(2, 2));

    // Fail
    Assertions.assertNotEquals(4, Calculator.add(2, 2), "Calculator.add(2, 2) test failed");

    // Fail
    Supplier<String> messageSupplier = () -> "Calculator.add(2, 2) test failed";
    Assertions.assertNotEquals(4, Calculator.add(2, 2), messageSupplier);
}

assertAll() groups several assertions and reports any failed assertions within the group with a single MultipleFailuresError, as sketched below.
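A minimal hedged sketch of assertAll, reusing the Calculator class assumed in the previous example:

@Test
void groupedAssertions() {
    // All assertions run; failures are collected and reported together
    // in one MultipleFailuresError instead of stopping at the first failure.
    Assertions.assertAll("calculator",
        () -> Assertions.assertEquals(4, Calculator.add(2, 2)),
        () -> Assertions.assertEquals(0, Calculator.add(2, -2))
    );
}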

Junit5 tutorial: Part-1

JUnit 5 tutorials for beginners.

JUnit is Java’s most popular unit testing library, and it recently released a new major version, 5. JUnit 5 is a combination of several modules from three different sub-projects.

JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage

JUnit is an open-source unit testing framework for Java. It helps Java developers write and run unit tests. Erich Gamma and Kent Beck initially developed it.

JUnit 5 is an evolution of JUnit 4 that further improves the testing experience. As a major version release, it aims to adopt Java 8 styles of coding and to be more robust and flexible than previous releases.

JUnit 5 is a complete rewrite of JUnit 4.

Advantages:- Below are a few advancements in this version over the older ones.

  1. In JUnit 4, the entire framework was contained in a single jar library.
  2. In JUnit 5, we get more granularity and can import only what is necessary.
  3. JUnit 5 makes good use of Java 8 styles of programming and features.
  4. JUnit 5 allows multiple runners to work simultaneously.
  5. The best thing about JUnit 5 is backward compatibility with JUnit 4.

Note:- JUnit 5 requires Java 8 (or higher) at runtime.

Moving from JUnit 4 to JUnit 5:-

In this section, let us explore the motivation behind JUnit 5.

  1. JUnit 4 was developed a decade ago; the context has changed a bit since then, and so have programming styles.
  2. JUnit 4 does not take advantage of JDK 8 and its new functional programming paradigm.
  3. JUnit 4 is not modular; a single jar is the dependency for everything.
  4. Test discovery and execution are tightly coupled in JUnit 4.
  5. Most importantly, nowadays developers want not only unit testing but also integration testing and system testing.

These are a few of the reasons JUnit 5 was rewritten from scratch using Java 8, introducing some new features along the way.

You can still execute JUnit 3 and JUnit 4 test cases using the Vintage module in JUnit 5.

Architecture:-

JUnit 5 has a modular architecture, and the three main components are Platform, Jupiter, and Vintage.

Let us understand these three modules.

  1. Platform:- The Platform serves as a foundation for launching testing frameworks on the JVM. It also provides an API to launch tests from the console, IDEs, or build tools.
  2. Jupiter:- Jupiter is the combination of the new programming model and extension model for writing tests and extensions in JUnit 5. The name was chosen from the 5th planet of our Solar System, which is also the largest one.
  3. Vintage:- Vintage, in general, means something from the past. Vintage provides a test engine for running JUnit 3 and JUnit 4 based tests on the platform, ensuring the necessary backward compatibility.

As part of this tutorial, we have seen what JUnit 5 is and what is new in it. In the next tutorial, we will explore more and work through some examples.

Colab getting started!!

Train deep neural networks for free using Google Colaboratory.

GPU and TPU compute for free? Are you kidding?

Google Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.

With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. If you don’t have money to procure a GPU and want to train neural networks or get your hands dirty with zero investment, then this is for you. Colab started out as an internal Google research tool for data science.

You can use GPU as a backend for free for 12 hours at a time.

It supports Python 2.7 and 3.6, but not R or Scala yet.

Many people want to train a machine learning or deep learning model, but doing so requires GPU computation and large resources, which blocks many people from trying these things out and getting their hands dirty.

Google Colab is nothing but a cloud-hosted Jupyter notebook.

Colaboratory is a free Jupyter notebook environment provided by Google where you can use free GPUs and TPUs, which solves all of these issues. The best thing about Colab is the TPUs (Tensor Processing Units), the special hardware designed by Google to process tensors.

Let’s Start:- 

To start, you should know Jupyter notebooks and have a Google account.

http://colab.research.google.com/

Click the above link to access Google Colaboratory. This is not a static page but an interactive environment that lets you write and execute code in Python and other languages. You can create a new Jupyter notebook via File → New Python 3 notebook, or by clicking New Python 3 Notebook or New Python 2 Notebook.

We will create a Python 3 notebook; Colab creates it for us and saves it on Google Drive.

Colab is an ideal way to start everything from improving your Python coding skills to working with deep learning frameworks like PyTorch, Keras, and TensorFlow, and you can install any Python package required for your work, from simple scikit-learn and NumPy to TensorFlow.

You can create notebooks in Colab, upload existing notebooks, store notebooks, share notebooks with anyone, mount your Google Drive and use whatever you have stored there, import most of your directories, upload notebooks directly from GitHub, upload Kaggle files, download your notebooks, and do whatever you would do with your local Jupyter notebook.

On the top right you can choose to connect to a hosted runtime or connect to a local runtime.

Set up GPU or TPU:-

It is as simple as going to the “Runtime” dropdown menu, selecting “Change runtime type”, and selecting GPU or TPU in the hardware accelerator drop-down menu.

Now you can start coding and executing your code!

How to install a framework or libraries?

It is as simple as writing an import statement in Python.

!pip install fastai

Use the normal pip install command to install different packages, like TensorFlow or PyTorch, and start playing with them.

For more details and information

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=gJr_9dXGpJ05

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=-Rh3-Vt9Nev9

Github Package Registry

Ship your software like a Pro!!!


GitHub recently announced the GitHub Package Registry for publishing and consuming packages on GitHub: a one-stop solution for open source projects.

Why Github Package Registry?

GitHub Package Registry is a software package hosting service, similar to npmjs.org, rubygems.org, or hub.docker.com, that allows you to host your packages and code in one place. You can host software packages privately or publicly and use them as dependencies in your projects.

Over the last decade, we have been using GitHub to maintain open-source projects; there are millions of public and private repositories on GitHub. Software development is a collaborative activity, it is teamwork. Irrespective of the language, we have to publish source code as a bundle so that other users can consume it as a dependency, and for this we have always relied on different registries such as Maven, Gradle, npm, Docker Hub, and so on. Now you can manage your source code as well as your packages under one umbrella.

GitHub is committed to serving developers and giving them tools that improve the developer experience.

It’s your code, your packages, and one login.

Developers collaborate in open source either by committing code to a repository or by importing open source packages into their project. It is critical to find open source packages that we can trust before importing them into the dependency graph; we need someone we can rely on. While using open source packages we always consider aspects such as trust, community, support for new features, and compliance.

GitHub Package Registry Goals:-

The GitHub Package Registry launched with three main goals.

  1. Sharing:- You can share and manage your packages the same way you manage your code.
  2. Productivity:- Improve your productivity while managing the software development lifecycle.
  3. Trust:- Develop, maintain, and store your packages in the same secure environment, with a single login.

Features:-

“A picture is worth a thousand words.”

GitHub Package Registry is free for all repositories during the beta, and it will always be free for public and open source repositories.

To explore more please refer to this link.

Managing packages with GitHub Package Registry – GitHub Help: Configuring Docker for use with GitHub Package Registry (help.github.com)

Enabling ProGuard for Android.


How to de-obfuscate a stack trace is covered here.

Enabling ProGuard in Android Studio is a really easy task, but I encounter this question frequently on Stack Overflow, which motivated me to write this simple article.

How do you enable ProGuard obfuscation in Android Studio?

The question is here on Stack Overflow. I have already provided my answer there, but now I want to explore a little: what is ProGuard, and how do you enable it?

What is this ProGuard?

Ref – Wikipedia.

ProGuard is an open source command-line tool that shrinks, optimizes, and obfuscates Java code. It is able to optimize bytecode as well as detect and remove unused instructions.

Proguard was developed by Eric P.F. Lafortune.

As per the definition, ProGuard helps not only with obfuscation but also optimizes, shrinks, and removes unused instructions.

I asked many developers why they want to apply ProGuard to their project, and I came across only one answer: for security purposes. ProGuard not only provides security for your code but also offers many more features. Let us understand what ProGuard is and how to use it.

Features of ProGuard:

  1. ProGuard also optimizes the bytecode, removes unused code instructions, and obfuscates the remaining classes, fields, and methods with short names.
  2. The obfuscated code makes your APK difficult to reverse engineer, which is especially valuable when your app uses security-sensitive features, such as licensing verification.

Here we will look at how to enable it in Android Studio rather than exploring how ProGuard works internally.

To enable ProGuard in Android Studio.

Below is a sample of how to enable the default ProGuard configuration in Android Studio.

  1. Go to the build.gradle file of the app.
  2. Enable ProGuard with minifyEnabled true.
  3. Enable shrinkResources true to reduce the APK size by shrinking resources.
  4. Use proguardFiles getDefaultProguardFile('proguard-android.txt') to enable the default rules. If you want to use your own ProGuard file as well, use the rules below.
buildTypes {
    release {
        debuggable false
        minifyEnabled true
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }

    debug {
        debuggable true
        minifyEnabled true
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
}

The link contains ProGuard settings for Android and for other libraries as well:

This is a simple attempt to explain enabling ProGuard in Android Studio.

I have been following this path; I would love to hear from you.

Overview of Deep learning with Gluon

This chapter introduces fundamental concepts and jargon that every machine learning engineer and data scientist should know. We will discuss some basic concepts of machine learning, deep learning, and AI. In subsequent chapters of this book, we will dive deeper and get our hands dirty. Deep learning (DL) has been a breakthrough technology across industries and a booster for AI adoption.

Andrew Ng once said Artificial Intelligence is the new electricity!

AI, DL, and ML are often used interchangeably, but there are substantial differences between the three. We will start with a brief definition of each. This chapter covers the basics of machine learning, deep learning, and AI, along with some foundational terminology needed to understand deep learning; then we will have a glance over the Gluon API. We will also cover some parts of the MXNet deep learning framework along with the Gluon API.

This book is for any technical person who wants to get up to speed on machine learning and deep learning quickly, and for anyone who is new to the technology but curious about how machines think and act. In this book, we dig deeper into deep neural networks using the Gluon API, with MXNet as the underlying deep learning framework. Gluon is packaged along with MXNet and is an abstraction layer over the Apache MXNet deep learning framework. The Gluon name comes from a subatomic particle: a gluon is an elementary particle that acts as an exchange particle. This book is for data scientists, machine learning engineers, and aspiring data scientists.

This chapter covers the following points:

  • Artificial Intelligence
  • Machine learning
  • Deep learning
  • Neural Network Architectures
  • Gluon API overview and environment setup

Artificial Intelligence(AI)

Artificial intelligence is where a machine can think, act, fail, learn, and react without human intervention. Artificial intelligence is the hype in the industry now, and there are tons of articles available: they teach us, make us dream about the future, and scare us as well, but above all AI is a revolutionary technology. The progress we have made in the last couple of years has been remarkable, thanks to the growth in computation power and the vast amount of data available. At the very highest level, AI is about creating machines capable of solving problems like a human. As humans, we learn through reasoning, intuition, cognitive thinking, and creativity. There are several definitions of AI floating around; my favorite one is “the science and engineering of making intelligent machines”.

The history of AI:-

During the Second World War, the Germans built the Enigma machine, used in military communications to send messages securely. Alan Turing and his team built a machine to decipher Enigma messages; cracking the Enigma code by hand was very challenging due to the sheer number of permutations and combinations. The question of whether machines can think and act like a human started much earlier than that. In the early days of AI, machines were used to solve problems that were difficult for humans, or mundane industry work. There are different aspects of human intelligence and AI; essentially, we want to mimic humans and build intelligent machines.

In 1956, American computer scientist John McCarthy organized the Dartmouth Conference, at which the term ‘Artificial Intelligence’ was first coined. Researchers Allen Newell and Herbert Simon were instrumental in promoting AI as a field of computer science that could transform the world. McCarthy, the father of AI, developed the LISP programming language, which became important in AI. In 1951, a machine known as the Ferranti Mark 1 successfully used an algorithm to master checkers. Subsequently, Newell and Simon developed the General Problem Solver algorithm to solve mathematical problems. It was also in the late 1960s that the first mobile decision-making robot capable of various actions was made; its name was Shakey, and it could create a map of its surroundings prior to moving. The first ‘intelligent’ humanoid robot was built in Japan in 1972.

In the early days of AI, researchers believed AI could solve problems by hard-coding rule-based systems such as decision trees. This approach, also known as Symbolic AI, was very successful at solving well-defined logical problems, but it failed at complex problems such as natural language understanding, image detection, scene understanding, object detection, and time-based forecasting. Over decades of well-funded global effort, researchers found it incredibly difficult to create intelligent machines for different reasons, including the unavailability of computing power and the lack of data.

In 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion, Garry Kasparov. AI technology continued its march, largely thanks to improvements in computer hardware, and people applied AI methods in narrow domains instead of general intelligence, which helped researchers solve some complex problems. Exponential gains in computer processing power and storage allowed companies to store vast quantities of data. Today AI touches almost every aspect of human life, from the military and entertainment to our cell phones and driverless cars, from real-time voice translation to a vacuum that knows where and how to clean our floor without us, from our own computers to the doctor’s office: autonomous (driverless) cars, facial recognition for authentication, and so on. So where is AI going in the future? Is it scary or not? No one can tell you for sure.

AI-powered machines are usually classified into two groups: general and narrow. Narrow AI machines can perform specific tasks very well, sometimes better than humans; the technology used for classifying images on Airbnb is an example of narrow AI. This is how AI, DL, and ML fit together.

Machine learning:-

Machine learning is a branch of computer science that deals with methods and techniques for implementing algorithms that learn. Machine learning is inferential learning from a descriptive data set. This is the era of data mining; data is the fuel of the 21st century. If you have data (fuel), then you can develop an AI system that electrifies your business. Generally, when programming, we have data and rules and we expect some result; this is the paradigm we follow as programmers. If we want to write a program to convert temperatures from Fahrenheit to Celsius, we need values in Fahrenheit and the conversion formula, and with their help we write a code snippet whose result is a temperature in Celsius.

Machine learning shifts this paradigm: it takes data and answers as input and returns the rules as a result. Consider the Fahrenheit-to-Celsius program in an ML context: we provide both Fahrenheit and Celsius values and ask the ML program to find the relationship between them, that is, to find the formula. This is just a simple example, but many more complex problems are addressed with the help of ML.

There are plenty of definitions and articles available on the internet that explain what machine learning is. When I queried Google, this is the very simple definition of machine learning I came across:

“Machine learning gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959). It is a subfield of computer science. The idea came from work in artificial intelligence. Machine learning explores the study and construction of algorithms which can learn and make predictions on data.”

A more engineering-oriented definition:

A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
 — Tom Mitchell, 1997

Likewise, in the example we discussed above, the program should identify the relationship between Fahrenheit and Celsius values from examples and act accordingly in the future, instead of being given an explicit formula for conversion. Machine learning is not only a science but also an art. In machine learning, data is the challenging part: besides the data for training the algorithm, we need a test data set to validate it, so in ML we need two data sets, one to train the algorithm and another for testing.

Types of Machine learning:

Less is More

1. Supervised Machine learning:

Supervised machine learning is the technique of inferring a rule from a labeled dataset. It means machine learning with some amount of human supervision: we have input data along with output labels. In supervised learning, the data set comes with the expected output, also known as the label. From the data, the machine learning algorithm learns which output corresponds to which input. Typical supervised learning addresses classification and regression problems, such as spam filtering or predicting house prices. To train these systems we need a huge amount of data.

Below are some Supervised Machine learning algorithms

  • Linear Regression
  • Logistic Regression
  • Support Vector Machines (SVMs)
  • Decision Trees and Random Forests
  • k-Nearest Neighbors
  • Neural networks

2. Unsupervised Machine Learning:

Unsupervised machine learning is the technique of inferring a rule and finding a meaningful pattern from a data set. In this type of machine learning, datasets consist of input data without labeled results. Unsupervised learning is learning without supervision, or learning without a teacher. To train an unsupervised algorithm, the given data are not annotated, meaning only input values are provided. This technique is useful for grouping data (clustering) and finding common patterns in the data.

Some unsupervised algorithms:

  • Clustering:
    k-Means
    Hierarchical Cluster Analysis (HCA)
    Expectation Maximization
  • Visualization and dimensionality reduction:
    Principal Component Analysis (PCA) — Kernel PCA
    Locally-Linear Embedding (LLE)
    t-distributed Stochastic Neighbor Embedding (t-SNE)
  • Association rule learning:
    Apriori
    Eclat

3. Self-supervised learning:

Self-supervised learning is a fairly recent machine learning technique. It is supervised learning, but instead of humans providing labeled data as input, the data set is labeled automatically. Self-supervised learning has the potential to solve problems that are not addressed by supervised learning. As mentioned earlier, in machine learning the data set is the challenging part, and providing a huge amount of labeled data is a very laborious task.

Self-supervised learning is autonomous supervised learning. It is a representation learning approach that eliminates human supervision for labeling data. Self-supervised learning is very relevant to how humans learn: we learn a few things in a supervised manner and a few in an unsupervised way, but we learn from very few examples and generalize exceptionally well.

4. Reinforcement Learning:

Reinforcement learning is another technique in machine learning. Have you ever visited a circus, where the ringmaster trains a tiger? The ringmaster rewards the tiger for positive behavior and punishes it for negative behavior; it is also the way we learn in academia. In reinforcement learning, an agent learns how to act in a particular environment, being rewarded for positive behavior and punished for negative behavior. Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is useful in gaming; it is goal-oriented learning where an agent learns how to behave in the environment by performing actions and accumulating the maximum reward to reach the goal.

Here is a very interesting analogy used by Yann LeCun to explain this.

“ Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. “

Deep Learning:

Deep learning mirrors the layered way human beings acquire knowledge. Over the past couple of years, deep learning has revolutionized many areas of research and industry, including autonomous driving, healthcare, reinforcement learning, generative modeling, NLP, robotics, and fintech. Deep learning is part of the broader family of machine learning and takes its inspiration from how the human brain works. Deep neural networks are structured hierarchically, and each level of the hierarchy represents a different level of abstraction. Deep learning has become popular now because of advancements in hardware and software.

The artificial neural network (ANN) is the core part of deep learning. The ANN is inspired by the neurons of the human brain: in a human brain there are millions of interconnected neurons, and their structure and hierarchy are very deep and complex. Deep neural networks take inspiration from how the human brain understands a scene, or how our visual cortex works to identify objects; the CNN (Convolutional Neural Network) is the best example of this. Before deep learning, object detection or detecting a human face was a laborious task: you needed to extract features and create a template, for example detecting the nose, the left eye, and the right eye, defining every single step needed to reach an outcome. With the help of deep learning, understanding a scene and detecting objects have become much easier. A deep neural network has many layers arranged hierarchically, understanding things at increasing levels of abstraction and finally combining the results.

Deep learning is a subset of machine learning that takes ML one step further in processing and understanding data and finding meaningful insights.

Our brain consists of a large network of interconnected neurons, which act as a roadway for information to be transmitted from point A to point B. To send different kinds of information from A to B, the brain activates a different set of neurons, and so essentially uses a different route to get from A to B. Biological neurons are interconnected and understand things by exchanging signals. A cell consists of a cell body, with dendrites acting as connecting wires for other neurons to connect to. In most cases, a neuron has one axon capable of actively transmitting electric signals to other connected cells. The connections between neurons are established using synapses located at the end of the axon; these synapses are responsible for a lot of the magic of computation and memory in the nervous system. The ANN model is modeled after the biological neural network: just as a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to send signals out to other neurons, an artificial neuron has a number of input channels, a processing stage, and one output that can fan out to multiple other artificial neurons.

A bit of history:

Deep learning has affected every industry, but in general it first addressed major problems such as speech recognition, image recognition, and object detection in images.

The term “deep learning” was first used in connection with artificial neural networks (ANNs) by Igor Aizenberg and colleagues in or around 2000. In deep learning, “deep” typically refers to the number of layers.

1960s: Shallow neural networks
1960–70s: Backpropagation emerges
1974–80: First AI Winter
1980s: Convolution emerges
1987–93: Second AI Winter
1990s: Unsupervised deep learning
1990s-2000s: Supervised deep learning back in vogue
2006-present: Modern deep learning

Neural Network architectures:

A neural network is designed to solve complex tasks; some tasks are very complex but not impossible, such as building a recommendation system based on shopping history. As programmers, we could write some sort of hard-coded rules to fulfill this requirement, but that is mundane work, so machine learning algorithms help us explore data and find meaningful patterns. A machine learning or AI system comes into the picture where there is more uncertainty, for example:

  1. It is hard to identify a fraudulent transaction in a digital money transfer, where the end user is not in front of the system but is virtual.
  2. It is very hard for a machine to detect a pedestrian.

Artificial neural networks are first-class models for predicting under this kind of uncertainty. An ANN takes its inspiration from the human brain; it is a simulation of the human brain. Neural network architectures are complex, very adaptive, and perform parallel computation.

Neural network research is motivated by two desires,

  1. To understand the human brain in a better way.
  2. To mimic human activity and intelligence in computers that can deal with complex problems.

Different neural network architectures address different domain-specific problems. Human intelligence is general intelligence; it is very hard to develop artificial general intelligence that addresses almost every problem. Neural network architectures consist of three major layers: the input layer, the hidden layers, and the output layer. The number of hidden layers defines the depth of the neural network architecture.

Below is a brief introduction to some ANN architectures.

  1. Perceptrons
  2. Hopfield Neural Network
  3. CNN
  4. Recurrent Neural Networks

Gluon API

Overview

The Gluon API is a high-level, simple, concise, and efficient deep learning API. Amazon and Microsoft research groups developed the Gluon API specification; it is the product of a joint effort by both leading tech companies to make AI accessible to any developer. Gluon is an open source deep learning interface, jointly developed by the companies to let developers “prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps”.

Gluon is an API, not another deep learning framework. It provides a concise and clear API abstraction layer that helps improve the speed, flexibility, and accessibility of deep learning technology for all developers, regardless of their deep learning framework of choice. Gluon offers an interface that allows developers to prototype, build, and train deep learning models.

Developers who are new to machine learning will find this interface more familiar to traditional code since machine learning models can be defined and manipulated just like any other data structure. Seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

Gluon is imperative for developing but symbolic for deploying.

Before we dive deeper into the Gluon API, we should know at least one of the underlying frameworks that Gluon relies upon. Gluon is an abstraction layer for deep learning frameworks such as MXNet.

Distinct Advantages:

  1. Friendly API Simple, Easy-to-Understand Code
  2. Flexible, Imperative Structure
  3. Build graphs on the fly
  4. High-performance operators for training

MxNet:

MXNet is an open source deep learning library backed by Amazon, founded by researchers from the University of Washington and Carnegie Mellon University. It is a portable, efficient, and scalable deep learning framework. It supports Python, JavaScript, Scala, Julia, and R. The best thing about MXNet is that it allows both imperative (define-by-run) and symbolic programming. It has a vibrant community backed by Amazon.

Installing Gluon on MacOS:

The Gluon specification has already been implemented in Apache MXNet, so we need to install Apache MXNet to set up the environment. It is easy to set up an environment for the Gluon API using different options such as Docker, pip, or a virtual environment. MXNet supports different languages and different OS platforms. I will show the installation for macOS here.

You can refer to this link for the installation on your respective platform. (https://mxnet.incubator.apache.org/versions/master/install/index.html?platform=MacOS&language=Python&processor=CPU)

By default, MXNet is installed with CPU support, but you can also install the GPU-enabled build.

Pip mode

$ pip install mxnet

MXNet offers MKL pip packages that will be much faster when running on Intel hardware.

Using Docker

Docker images with MXNet are available at Docker Hub(https://hub.docker.com/).

Step 1 Install Docker on your machine. For more detail (https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac)

Step 2 Pull the MXNet docker image.

$ docker pull mxnet/python

Verify your docker pull command:

$ docker images

I recommend using Python version 3.3 or greater and setting up the environment with a Jupyter notebook:

# I used miniconda and a virtual environment
# source activate gluons
# optional: update pip to the newest version
sudo pip install --upgrade pip
# install jupyter
pip install jupyter --user
# install the nightly built mxnet
pip install mxnet --pre --user
# by default MXNet comes with CPU support; install a GPU build if a GPU is available
pip install mxnet-cu75 --pre --user  # for CUDA 7.5
# for CUDA 8.0 use mxnet-cu80 --pre --user
# start the notebook and enjoy coding
jupyter notebook

Validate the installation:

To validate the installation, these are the simple steps.

For the pip installation, start a terminal and type:

$ python
>>> import mxnet as mx
>>> from mxnet import gluon

In the same way, you can validate the Docker setup by starting the Docker image and executing a bash command.

MXNet should work on any cloud provider's CPU-only instances. You can also set up Gluon and MXNet on any cloud platform; it is easy to set up on Amazon AWS.

AWS Deep learning AMI (Amazon Machine Images) — Preinstalled Conda environments for Python 2 or 3 with MXNet and MKL-DNN.

Also, MxNet supports different edge platforms such as Raspberry Pi and NVIDIA Jetson Devices.

The architecture of MxNet:

The diagram above shows the key modules of the MXNet framework and their relationships. Solid arrows indicate concrete dependencies and dotted lines indicate light dependencies. The lower modules (shown in blue) are the system modules, while the higher-level modules are user-facing: this is the actual API the programmer interacts with. The modules are:

KVStore:- key-value store interface for parameter synchronization

Data Loading:- Efficient distributed data loading and augmentation

NDArray:- Dynamic, asynchronous n-dimensional arrays

Symbolic Execution:- Static symbolic graph executor

Symbolic Construction:- provides a way to construct a computation graph

Operators:- Operators that define static forward and gradient calculation

Storage Allocator:- Allocates and recycles memory blocks

Runtime Dependency Engine:- Schedules and executes the operations

Resource Manager:- Manages global resources

Gluon Package

Gluon package comes with four key modules.

  1. Parameter:- Parameter is a basic component; a parameter can hold the weights of blocks. There are two standard APIs: Parameter, and ParameterDict to manage a set of parameters.
  2. Containers:- Containers are the blocks that help you build a neural network; they hold the parameters.
  3. Trainer:- Trainer helps you optimize parameters. It applies an optimizer to the parameters in the containers.
  4. Utilities:- Utilities contains small utils that help with certain operations, such as splitting and rescaling a dataset for data parallelism.

Gluon APIs:

Gluon API contains below APIs.

  1. Gluon Neural Network Layers API:- The Gluon neural network layers API provides the building blocks of a neural network. It contains APIs to directly add blocks to a neural network, such as Dense layers, Convolution layers, Activation layers, and Max Pooling layers.
  2. Gluon Recurrent Neural Network API:- This API provides building blocks for defining recurrent neural networks. It can help us define an RNN with LSTM cells.
  3. Gluon Loss API:- This API contains the different loss functions required while building different neural networks. It can help you calculate mean squared loss or mean absolute loss.
  4. Gluon Data API:- This is a very useful API for people who want to get their hands dirty but do not have a dataset. It contains dataset utilities and common public datasets.
  5. Gluon Model Zoo:- The Gluon model zoo contains pre-trained and pre-defined models that help us bootstrap our development.
  6. Gluon Contrib API:- This is for those who have mastered Gluon and want to contribute to the Gluon API. It is for the community that wants to try out new features and gather feedback.

Deep learning Programming style:

One of my favorite things about the Gluon API is that it offers multiple levels of abstraction, so you can choose the right one for your project. Gluon offers two styles for creating your neural network: the first is the symbolic (or declarative) style, and the second is the imperative style. These are the two deep learning programming styles. Each one has its own pros and cons, which is why almost all deep learning frameworks offer both styles of programming.

Imperative Programming:

Imperative programming means define-by-run, i.e., dynamic programming: the computation graph is constructed at run time. Imperative programming is flexible and straightforward; in this style we can take advantage of native language features such as iteration, conditionals, debuggers, and so on. The imperative style is nothing new: the way you write NumPy code is the imperative style of programming. Imperative-style programs perform operations directly. Most Python code has an imperative form, for example the following NumPy code, where the state of the program changes as each statement runs.

import numpy as np
a = np.ones(20)
b = np.ones(20) * 2
c = b * a
d = c + 1

In the above code snippet, when we issue the c = b * a statement, the actual operation is executed immediately.

PROS:

  1. Straightforward and flexible because execution follows the flow of the programming language.
  2. Takes advantage of native language features.

Cons:

  1. Optimization has to be done manually.
  2. Not as efficient in terms of memory usage and speed.

Symbolic Style of programming:

Symbolic programming, aka declarative programming, is the opposite of the imperative style. In this style, execution is performed only after the computational process is fully defined. You first define and then run: this produces a static computation graph, which is immutable and does not change at run time. Symbolic-style programs include a compilation step, either explicit or implicit, that converts the graph into a function that can actually be called later. In this style of programming, we define a function with placeholder values, then compile the function and evaluate it with the actual inputs. Below is a code snippet converting the above imperative code to symbolic code. Symbolic programming generally requires three steps:

#Step 1:- Define the computation graph (pseudocode: Variable, Constant and compile are placeholders).
a = Variable('A')
b = Variable('B')
c = b * a
d = c + Constant(1)
#Step 2:- Compile the computation process into an executable program.
f = compile(d)
#Step 3:- Provide the required inputs and call the compiled program for execution.
g = f(a=np.ones(20), b=np.ones(20)*2)

In this code snippet, c = b * a does not actually perform the operation; instead, it adds a node to the computation graph that represents this computation.
The following computation graph is generated for the operation d.
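
For reference, here is roughly how the same three steps look with MXNet's actual Symbol API; this is a minimal sketch in which Symbol.eval stands in for an explicit compile step:

import mxnet as mx

# Step 1: define the graph with placeholders
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
d = b * a + 1

# Steps 2 and 3: bind real inputs and execute the graph
out = d.eval(ctx=mx.cpu(),
             a=mx.nd.ones((20,)),
             b=mx.nd.ones((20,)) * 2)
print(out[0])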

PROS:

  1. Optimizations can be inferred automatically from the dependency graph.
  2. Memory reuse opportunities.
  3. More efficient and easier to port.

Cons:

  1. Less flexible

Hybrid Programming style:

Gluon offers a hybrid programming style, and this is one of its strong points, because from the description above you cannot conclude which programming style is best for deep learning.
Gluon's hybrid approach gives us the flexibility to harness the benefits of both imperative and symbolic programming. Users can use imperative programming to build and test a prototype on the fly, and when deploying or serving in production, the program can be converted to symbolic programming to achieve production-level computing performance.
This is possible because of the Gluon API's hybrid programming.

In hybrid programming, we build models using either the HybridBlock or the HybridSequential Gluon API classes. By default, Gluon runs them imperatively, just like the Block and Sequential classes used in imperative programming. When we call the hybridize function, Gluon converts the program's execution into the symbolic programming style.

Let us take a small example of Hybrid programming.

#imperative
import mxnet as mx
from mxnet import nd
a = mx.nd.zeros((120, 60))
b = mx.nd.zeros((120, 60))
c = a + b          # executed immediately
c += 1
print(c)
#Symbolic
import mxnet as mx
# declare the graph with a placeholder for the input
net = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=net, num_hidden=10)
net = mx.sym.SoftmaxOutput(data=net, name='softmax')
# the module wrapping the graph must be bound and initialized before execution
texec = mx.module.Module(net)
texec.bind(data_shapes=[('data', (120, 60))],
           label_shapes=[('softmax_label', (120,))])
texec.init_params()
texec.forward(mx.io.DataBatch(data=[c], label=[nd.zeros((120,))]), is_train=True)
texec.backward()
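
The snippet above only contrasts the two styles. A minimal sketch of Gluon's actual hybrid workflow, using HybridSequential and hybridize() (the layer sizes here are only illustrative), could look like this:

from mxnet import nd
from mxnet.gluon import nn

# build the network with HybridSequential instead of Sequential
net = nn.HybridSequential()
net.add(nn.Dense(256, activation='relu'),
        nn.Dense(10))
net.initialize()

x = nd.random.uniform(shape=(1, 512))
net(x)           # runs imperatively, easy to debug
net.hybridize()  # compile the computation into a symbolic graph
net(x)           # subsequent calls run the cached symbolic graph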

The NDArray API:

In this section, we will introduce the NDArray API. In MxNet, the NDArray API is the primary tool to store, transform, and manipulate data; it is the core data structure for all computation. NDArray is a multi-dimensional array similar to a Numpy array: it represents a multi-dimensional, fixed-size homogeneous array. Basically, NDArray provides an API for imperative tensor operations. mxnet.ndarray is similar to numpy.ndarray, but with some important differences.

Array creation:-

We can create an NDArray from a Python tuple or list with the NDArray array function.

import mxnet as mx 
from mxnet import nd
# create a 1D array with a python list 
x = mx.nd.array([4,3,9]) 
# create a 2D array with a nested python list 
z = mx.nd.array([[4,3,6], [5,1,8]]) 
#display the array
{'x.shape':x.shape, 'z.shape':z.shape}

We can also create NDArray using numpy.array API.

# import numpy and mxnet packages
import numpy as np
import mxnet as mx
from mxnet import nd
# create numpy array
d = np.arange(15).reshape(3,5)
# create a 2D array from a numpy.ndarray object
y = mx.nd.array(d)
# display array
{'y.shape':y.shape}

We can optionally specify the dtype when creating an NDArray; by default, float32 is used. We can also create placeholder NDArrays with the help of functions such as zeros, ones, etc. NDArray also offers essentially all the APIs required to manipulate data, such as slicing, indexing, shape, basic arithmetic, copies, reduce, etc.

# basic operations of NDArray
import mxnet as mx
import numpy as np
from mxnet import nd
# float32 is used by default
a = mx.nd.array([1,2,3])
# create a 16-bit float array
c = mx.nd.array([1.2, 2.3], dtype=np.float16)
(a.dtype, c.dtype)
# create an uninitialized array
d = mx.nd.empty((2,3))
# create an array of all zeros
e = mx.nd.zeros((2,3))
# create an array filled with 5
f = mx.nd.full((2,3), 5)
# we can also perform some basic operations
# elementwise plus
g = e + f
# elementwise minus
h = f - e
i = -e
# we can use sum or mean
j = mx.nd.sum(f)
# exponential
j.exp()
# matrix product with a transpose
nd.dot(f, e.T)
# indexing
f[1,2]
# slicing, for a more advanced way
f[:,1:2]

NDArray has some key advantages. First, NDArrays support asynchronous mathematical computation on CPU, GPU, and distributed cloud architectures. Second, they provide support for automatic differentiation. These properties make NDArray a vital choice for deep learning. As we saw, we can create vectors, matrices, and tensors and manipulate them with the help of NDArray.

We can convert an NDArray to a Numpy array if a scenario requires Numpy instead of NDArray; the conversion is easy.

Note:- converted array does not share memory.

# convert x into a numpy array z
z = x.asnumpy()
# display the type of z for verification
(type(z), z)
# convert the numpy array back into an NDArray
nd.array(z)

The Symbol API:

In the previous section, we learned how NDArray stores and manipulates data. In this section, we will explore the Symbol API, the basic interface for symbolic programming. The Symbol API follows a declarative approach: instead of executing the program step by step, you first define a computation graph, which contains placeholders for the inputs and the desired outputs. The Gluon API takes advantage of this approach under the hood once a network is hybridized. The computation graph is a composition of symbols, operators, and network layers. With the Symbol API, we can optimize the computation graph, and it uses a smaller memory footprint because memory from intermediate steps can be recycled. NDArray lets us write programs imperatively, while the Symbol API lets us write programs declaratively, and most operators supported by NDArray are also supported by the Symbol API. A symbol represents a multi-output symbolic expression.

We will build a simple symbolic example. With the Symbol API we declare placeholders using mx.sym.Variable, named a and b respectively, and compose an expression from them.

import mxnet as mx
a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = 3 * a + b
type(c)
# output 
mxnet.symbol.symbol.Symbol

Symbol API also supports a rich set of neural network API with the help of those we can define neural networks as well.
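
As a small, hedged sketch of those neural-network operators (the layer sizes here are only illustrative), a network can be composed symbolically like this:

import mxnet as mx

data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=128, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
out = mx.sym.SoftmaxOutput(data=act1, name='softmax')
# list the placeholders and parameters of the graph
print(out.list_arguments())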

First Gluon Example:

Create a simple neural network layer using the gluon nn package.

# import mxnet and the ndarray module
import mxnet as mx
from mxnet import nd
# import the gluon neural network package
from mxnet.gluon import nn
# define a layer; Dense is a subclass of Block
layer = nn.Dense(2)
layer
# initialise the weights, by default uniformly in [-0.07, 0.07]
layer.initialize(ctx=mx.cpu(0))
# random (3,4) matrix range from -1 to 1
x = nd.random.uniform(-1,1,shape=(3,4))
layer(x)
#  print weight data
layer.weight.data()
# collect the parameters
layer.collect_params()
# type of params collected from layer
type(layer.collect_params())

In this example, we saw how to define a simple layer using the Gluon API.

Summary:

In this chapter, we introduced some of the fundamental concepts such as artificial intelligence, machine learning, and deep learning, and the Gluon API along with MxNet.

We covered the different machine learning types and deep learning techniques, as well as recent research directions in machine learning such as self-supervised learning. Deep learning is achieved by adding more hidden layers, which became practical because of the availability of huge amounts of data and advancements in computation. With the help of the different deep learning frameworks and cloud computing, these techniques are now at any software engineer's fingertips.

In this chapter, we began our journey into deep learning using the Gluon API and introduced its different deep learning programming paradigms. The chapter ended with the installation of Gluon, environment setup, and a few small API examples. We are now ready with the Gluon API toolkit to conquer the deep learning world.

CNN (Convolutional Neural network) using Gluon

Introduction:

Convolutional Neural Networks are deep learning networks which have achieved excellent results in image recognition, image classification, object detection, face recognition, etc. CNNs are everywhere and are the most popular deep learning architecture. CNNs are mostly used to solve image data challenges, and in video analytics too. Any data that has spatial relationships is ripe for applying CNNs.

In the previous chapter, we covered basic machine learning techniques and algorithms to solve regression and classification problems. In this chapter, we will explore a deep learning architecture: the CNN (Convolutional Neural Network). CNNs are a biologically inspired variant of MLPs. CNN is also known as ConvNet, and we will use these terms interchangeably. In this chapter, we will explore the points below.

  • Introduction of CNN
  • CNN architecture
  • Gluon API for CNN
  • CNN implementation with gluon
  • Image segmentation using CNN

CNN Architecture:

CNNs are a regularised version of multilayer perceptrons. MLPs are fully connected neural networks, meaning each neuron in one layer has a connection to every neuron in the next layer. The design of CNNs was inspired by the vision processing of living organisms. Without conscious effort, we make predictions about everything we see and act upon them. When we see something, we label every object based on what we have learned in the past.

Hubel and Wiesel showed in the 1950s and 1960s how the cat's visual cortex works. The animal visual cortex is the most powerful visual processing system in existence. As we know, the visual cortex contains a complex arrangement of cells. These cells are sensitive to small sub-regions of the visual field, called receptive fields, and the sub-regions are tiled to cover the entire visual field. These cells act as local filters over the input space and are well suited to exploit the strong spatially local correlation present in natural images. This is just a high-level introduction to how the cortex works. CNNs are designed to recognize visual patterns directly from pixel images with minimal preprocessing.

Now let us make things simple and think about how our brain thinks: the human brain is a very powerful machine. Everyone works differently, and it's clear that we all have our own ways of learning and taking in new information. “A picture is worth a thousand words” is an English-language adage. It refers to the notion that a complex idea can be conveyed with a single picture, which conveys its meaning or essence more effectively than a description does. We see plenty of images every day; our brain processes and stores them. But what about a machine: how can a machine understand, process, and store meaningful insight from an image? In simple terms, each image is an arrangement of pixels in a particular order; if the order or colors change, the image changes as well. From this explanation, you can understand that images are represented and processed in a machine in the form of pixels. Before CNNs came along, image processing was very hard. Scientists around the world have been trying for more than 60 years to find different ways to make computers extract meaning from visual data (images, video), and the history of CV (Computer Vision) is deeply fascinating.

The most fascinating paper was published by two neurophysiologists, David Hubel and Torsten Wiesel, in 1959; as mentioned above, it was titled “Receptive fields of single neurons in the cat’s striate cortex”. The duo ran experiments on a cat: they placed electrodes into the primary visual cortex area of an anesthetized cat's brain and observed, or at least tried to observe, the neuronal activity in that region while showing the animal various images. Their first efforts were fruitless; they couldn't get the nerve cells to respond to anything. After a few months of research, they noticed by accident that one neuron fired as they were slipping a new slide into the projector. Hubel and Wiesel realized that what got the neuron excited was the movement of the line created by the shadow of the sharp edge of the glass slide.

[Image Source: https://commons.wikimedia.org/wiki/File:Human_visual_pathway.svg]

The researchers observed, through their experimentation, that there are simple and complex neurons in the primary visual cortex and that visual processing always starts with simple structures such as oriented edges. This is the simpler, more familiar explanation. Inventions do not happen overnight; it took years of evolutionary progress to reach the groundbreaking result.

After Hubel and Wiesel, nothing groundbreaking happened with their idea for a long time. In 1982, David Marr, a British neuroscientist, published another influential paper, “Vision: A computational investigation into the human representation and processing of visual information”. Marr gave us the next important insight: vision is hierarchical. He introduced a framework for vision where low-level algorithms that detect edges, curves, corners, etc., are used as stepping stones towards a high-level understanding of the image.

David Marr’s representational framework:

  • A Primal Sketch of an image, where edges, bars, boundaries, etc., are represented (inspired by Hubel and Wiesel’s research);
  • A 2½D sketch representation where surfaces, information about depth and discontinuities on an image are pieced together;
  • A 3D model that is hierarchically organized in terms of surface and volumetric primitives.

David's framework was very abstract and high-level, and no mathematical modeling was given that could be used in artificial learning; it was a hypothesis. At the same time, Japanese computer scientist Kunihiko Fukushima also developed a framework inspired by Hubel and Wiesel. His method is a self-organizing artificial network of simple and complex cells that could recognize patterns and be unaffected by position shifts. The network, the Neocognitron, included several convolutional layers with learnable receptive-field weights. Fukushima's Neocognitron was the first deep neural network and is the grandfather of today's convnets. A few years later, in 1989, the French scientist Yann LeCun applied a backpropagation-style learning algorithm to Fukushima's Neocognitron architecture. After a few more trials and errors, LeCun released LeNet-5, applied his architecture, and developed a commercial product for reading zip codes. Around 1999, scientists and researchers were still trying to do visual data analysis using Marr's proposed methods instead of feature-based object recognition.

This is just a brief overview of the important milestones that will help us understand how CNNs evolved. Let us talk about CNN architecture: like every artificial neural network architecture, it has an input layer, hidden layers, and an output layer. The hidden layers consist of a series of convolutional layers that convolve with a multiplication or other dot product. CNNs are a specialized kind of neural network for processing data that has a grid-like topology: time series data can be thought of as a one-dimensional grid (a vector) of samples taken at regular intervals, while image data can be thought of as a 2-D grid of pixels (a matrix). The name “convolutional neural network” indicates that the network employs a mathematical operation called convolution. Depending on whether we are looking at a black-and-white or a color image, we may have one or multiple numerical values corresponding to each pixel in the 2-D grid. CNN-based architectures now dominate the field of computer vision to such a level that hardly anyone these days would develop a commercial application or enter a competition or hackathon related to image recognition, object detection, or semantic segmentation without basing their approach on them. Many modern CNN networks owe their designs to inspirations from biology. CNNs have strong predictive performance and tend to be computationally efficient, because they are easy to parallelize and have far fewer inputs per neuron than a dense layer. If we used a fully connected neural network for image recognition, we would need a huge number of parameters and hidden layers: for an image of 28*28*3, each neuron in the first hidden layer would already need 2352 weights, which quickly leads to overfitting; that is why we do not use a fully connected neural network to process image data.

In a convolutional neural network, a neuron in a layer is connected only to a small region of the layer before it, instead of to all the neurons, as in a fully connected network.

The above figure shows the general architecture of CNNs. A CNN is a type of feed-forward artificial neural network in which the connectivity pattern between the neurons is inspired by the animal visual cortex. The basic idea is that some of the neurons in the cortex fire when exposed to horizontal edges, some fire when exposed to vertical edges, and some fire when exposed to diagonal edges; this is the motivation behind the connectivity pattern.

In general, CNN has four layers.

  1. Convolution layer
  2. Max Pooling layer
  3. ReLU layer
  4. Fully connected

The main problem with image data is that objects won't always appear the same in every image; there can be deformations. It is similar to how a child recognizes objects: we can show a child a black dog and tell him this is a dog, and the next day when some other black, four-legged pet comes to our house he may recognize it as a dog, when actually it is a goat. Similarly, we have to show many samples so a common pattern can be found to identify the objects. We have to show millions of pictures to an algorithm so it can understand the data and detect the object; with the help of these millions of records, the algorithm can generalize the inputs and make predictions for new observations.

Machines see in a different way than humans do; their world consists of only 0s and 1s. CNNs have a different architecture than regular artificial neural networks. In a regular fully connected neural network, we put the input through a series of hidden layers and reach a fully connected output layer that represents the predictions. CNNs follow a slightly different approach: all the layers are organized in 3 dimensions (width, height, and depth), neurons in one layer connect only to a small portion of the next layer rather than to all of its neurons, and the output is reduced to a single vector of probability scores organized along the depth dimension. The figure below illustrates a regular NN (neural network) vs a CNN.

As we said earlier, the output can be a single class or a probability of classes that best describes the image. Now, the hard part is understanding what each of these layers does. Let us understand this.

CNNs have two components

  1. The feature extraction part (the hidden layers): the hidden layers perform a series of convolution and pooling operations during which the features are detected. If you had a picture of a human face, this is the part where the network would recognize the two eyes, the nose, the lips, etc.
  2. The classification part (the fully connected output layer): as we said, the last classification layers are fully connected layers that serve as a classifier on top of the extracted features.

Convolution layer:

The convolution layer is the main building block of a CNN. As we said, convolution refers to the combination of two mathematical functions to produce a third function. Convolution is performed on the input data with the use of filters or kernels (the terms filter and kernel are used interchangeably). Filters are applied over the input data to produce a feature map: the filter slides over the input, and at each location a matrix multiplication is performed and the result is summed into the feature map.

Note that in the above example the image is 2-dimensional, with width and height (a black-and-white image). If the image is colored, it has one more dimension for the RGB color channels. For that reason, 2-D convolutions are usually used for black-and-white images, while 3-D convolutions are used for colored images. Let us start with a (5*5) input image with no padding and use a (3*3) convolution filter to get an output image. In the first step, the filter slides over the matrix, and each element of the filter is multiplied with the element in the corresponding location. Then you sum all the results, which gives one output value. You repeat this process by moving the filter by one column, and you get the second output. The step size as the filter slides across the image is called a stride; in this example, the stride is 1. The same operation is repeated to get the third output. A stride size greater than 1 will always downsize the image further; with a stride of 1 the output stays close to the input size (it still shrinks without padding). In the above operation we have shown the computation in 2-D, but in real-life applications convolutions are mostly performed on a 3-D matrix with dimensions for width, height, and depth, where depth corresponds to the color channels of the image (Red, Green, Blue).

We perform a number of convolutions on the input matrix, each operation using a different kernel (filter), and the results are stored in feature maps. All the feature maps are put together as the final output of the convolutional layer. CNNs use ReLU as the activation function, and the output of the convolution is passed through it. As mentioned earlier, the convolution filter slides over the input matrix. The stride is the size of the step the convolution filter moves each time, in a specified direction. In general, people use a stride value of 1, meaning the filter slides pixel by pixel.

The animation above shows a stride size of 1. When you increase the stride size, the filter slides over the input with a larger gap and thus there is less overlap between the cells. The size of the feature map is always smaller than the input matrix, and this shrinks our feature map. To prevent the feature map matrix from shrinking, we use padding: a layer of zero-value pixels is added to surround the input with zeros. Padding helps improve performance, makes sure the kernel and stride size fit the input, and keeps the spatial size constant after performing the convolution.
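
To make the sliding-window arithmetic concrete, here is a minimal Numpy sketch of the operation just described; the function name convolve2d and the example values are only illustrative:

import numpy as np

def convolve2d(image, kernel, stride=1, padding=0):
    """Naive 2-D convolution (cross-correlation, as used in CNNs)."""
    if padding > 0:
        image = np.pad(image, padding, mode='constant')  # surround the input with zeros
    kh, kw = kernel.shape
    out_h = (image.shape[0] - kh) // stride + 1
    out_w = (image.shape[1] - kw) // stride + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            feature_map[i, j] = np.sum(patch * kernel)  # multiply elementwise and sum
    return feature_map

image = np.arange(25).reshape(5, 5)   # 5x5 input image
kernel = np.ones((3, 3))              # 3x3 filter
print(convolve2d(image, kernel).shape)             # (3, 3): no padding shrinks the map
print(convolve2d(image, kernel, padding=1).shape)  # (5, 5): padding keeps the size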

Max Pooling layer:

After the convolution operation, the next operation is the pooling layer. Max pooling is a sample-based discretization process; if you look at the first diagram, after every convolution layer there is a max pooling layer. The max pooling layer helps control overfitting and shortens the training time. The pooling function continuously reduces the dimensionality, reducing the number of parameters and the number of computations in the network. Max pooling is done by applying a max filter to (usually) non-overlapping subregions of the initial representation. It reduces the computational cost by reducing the number of parameters to learn and provides basic translation invariance to the internal representation.

Let's say we have a 4×4 matrix representing our initial input, and a 2×2 filter that we'll run over the input. We'll use a stride of 2 (meaning the (dx, dy) for stepping over the input is (2, 2)), so the regions won't overlap. For each region covered by the filter, we take the max of that region and create a new output matrix where each element is the max of a region in the original input.

Max pooling takes the maximum value in each window. The window size needs to be specified beforehand. This decreases the feature map size while keeping the significant information.
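
A minimal Gluon sketch of exactly this 4×4 input, 2×2 window, stride-2 max pooling (the input values are only illustrative):

from mxnet import nd
from mxnet.gluon import nn

x = nd.array([[1, 3, 2, 1],
              [4, 6, 5, 2],
              [7, 8, 9, 3],
              [1, 2, 4, 5]]).reshape(1, 1, 4, 4)  # (batch, channel, height, width)

pool = nn.MaxPool2D(pool_size=2, strides=2)
print(pool(x))   # 2x2 output: the max of each non-overlapping 2x2 window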

ReLU layer:

The Rectified Linear Unit (ReLU) has become very popular in the last few years. ReLU is an activation function, just as different activation functions are used in other artificial neural networks; an activation function is also known as a transfer function. ReLU is the most used activation function in the world right now, since it is used in almost all convolutional neural networks and deep learning models.

The ReLU function is f(z) = max(0, z). As you can see, the ReLU is half rectified (from the bottom): f(z) is zero when z is less than zero, and f(z) is equal to z when z is greater than or equal to zero.

The range of ReLU is from 0 to infinity. ReLUs improve neural networks by speeding up training. ReLU is idempotent. ReLU is the function max(x, 0) applied to an input x, e.g., a matrix from a convolved image: it sets all negative values in the matrix x to zero while all other values are kept constant. ReLU is executed after the convolution and is therefore a nonlinear activation function, like tanh or sigmoid. Each activation function takes a single number and performs a certain fixed mathematical operation on it. In simple words, what the rectifier function does to an image is remove all the black (negative) elements from it, keeping only positive values: any positive value is returned unchanged, whereas an input value of 0 or a negative value is returned as 0. ReLU allows your model to account for non-linearities and interactions. In the Gluon API we can use the built-in ReLU implementation.

net.add(gluon.nn.Dense(64, activation="relu"))

We can use a simple sample code of the ReLU function.

# rectified linear function
def rectified(x):
  return max(0.0, x)

Fully connected layer:

The fully connected layer is a fully connected neural network layer; it is also referred to as the classification layer. After the convolutional, ReLU, and max-pooling layers, the classification part consists of a few fully connected layers. Fully connected layers can only accept 1-dimensional data, so to convert our 3-D data to 1-D we use a flatten operation; this essentially arranges our 3-D volume into a 1-D vector.

This layer returns the output as probabilistic values.
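
A minimal Gluon sketch of this classification head; the feature-map shape and class count below are only illustrative:

from mxnet import nd
from mxnet.gluon import nn

head = nn.Sequential()
head.add(nn.Flatten(),    # arrange the 3-D feature maps into a 1-D vector
         nn.Dense(10))    # fully connected classification layer, 10 classes
head.initialize()

x = nd.random.uniform(shape=(1, 50, 4, 4))  # (batch, channels, height, width)
print(head(x).shape)                        # (1, 10)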

Types of CNN Architectures:

In the section above, we explained the general CNN architecture, but there are different flavors of CNN based on different combinations of layers. Let us explore some useful and famous CNN architectural styles for solving complex problems. CNNs are designed to recognize visual patterns from pixel images with minimal preprocessing. The ImageNet project is a large visual database designed for object recognition research. The project runs an annual software contest, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where software programmers and researchers compete to correctly detect objects. In this section, we explore the CNN architectures of the top ILSVRC competitors.

Let us look at this picture; it will give you a broad overview of how the evolution happened.

1. LeNet-5 — LeCun et al

LeNet-5 is a 7-layer convolutional neural network introduced by LeCun et al in 1998. It was deployed in real-life financial banking projects to recognize handwritten digits on cheques, with images digitized as 32×32-pixel greyscale input images. The ability to process higher-resolution images requires larger and more convolutional layers, so the technique is constrained by the availability of computing resources; at that time the computational capacity was limited, and hence the technique wasn't scalable to large images.

2. AlexNet — Krizhevsky et al

AlexNet is a convolutional neural network by Krizhevsky et al from 2012. It significantly outperformed all prior competitors and won the ILSVRC challenge by reducing the top-5 error from 26% to 15.3%. The network was very similar to LeNet, but was much deeper, with more filters per layer, and had around 60 million parameters.

It consisted of 11×11, 5×5, and 3×3 convolutions, max pooling, dropout, data augmentation, ReLU activations, and SGD with momentum. A ReLU activation layer is attached after every convolutional and fully connected layer except the last softmax layer. The figure certainly looks a bit scary; this is because the network was split into two halves, each trained simultaneously on a different GPU. AlexNet was trained for 6 days on two Nvidia GeForce GTX 580 GPUs. AlexNet was designed by the SuperVision group, consisting of Alex Krizhevsky, Geoffrey Hinton, and Ilya Sutskever. A simpler picture:

AlexNet consists of 5 convolutional layers and 3 fully connected layers. These 8 layers, combined with two concepts that were new at the time, MaxPooling and ReLU activation, gave the model its edge.

3. ZFNet –

The ILSVRC 2013 winner was also a CNN, known as ZFNet. It achieved a top-5 error rate of 14.8%, which is already half of the previously mentioned non-neural error rate. It achieved this by tweaking the hyper-parameters of AlexNet while maintaining the same structure, with additional deep learning elements such as dropout, augmentation, and Stochastic Gradient Descent with momentum.

4. VGGNet — Simonyan et al

The runner-up of the 2014 ILSVRC challenge was VGGNet. Because of the simplicity of its uniform architecture, it appeals as a simpler form of deep convolutional neural network. VGGNet was developed by Simonyan and Zisserman, consists of 16 convolutional layers, and is very appealing because of its uniform architecture. The architecture is similar to AlexNet, with only 3×3 convolutions but lots of filters. VGGNet was trained on 4 GPUs for 2–3 weeks. The weight configuration of VGGNet is publicly available and has been used in many other applications and challenges as a baseline feature extractor. VGGNet consists of 138 million parameters, which can be a bit challenging to handle. As the weight configurations are publicly available, this network is one of the most-used choices for feature extraction from images.

VGGNet has 2 simple rules

  1. Each convolutional layer has the configuration kernel size = 3×3, stride = 1×1, padding = same. The only thing that differs is the number of filters.
  2. Each max pooling layer has the configuration window size = 2×2 and stride = 2×2. Thus, we halve the size of the image at every pooling layer.

5. GoogLeNet/Inception –

The winner of the 2014 ILSVRC competition was GoogLeNet (Inception v1), which achieved a top-5 error rate of 6.67%. GoogLeNet used an inception module, a novel concept with smaller convolutions that reduced the number of parameters to a mere 4 million. GoogLeNet came very close to human-level performance, which the organizers of the challenge were then forced to evaluate. GoogLeNet was inspired by LeNet but implemented a novel element nicknamed the inception module, and it used batch normalization, image distortions, and RMSprop.

There are two diagrams here that help to understand and visualize GoogLeNet.

6. ResNet — Kaiming He et al

The 2015 ILSVRC competition brought a top-5 error rate of 3.57%, which is lower than the human top-5 error. The ResNet (Residual Network) model was used by Kaiming He et al at the competition. The network introduced a novel approach called skip connections (also known as gated units or gated recurrent units). With this technique, they were able to train a neural network with 152 layers while still having lower complexity than VGGNet.

It achieved a top-5 error rate of 3.57%, which beats human-level performance on this dataset. ResNet has residual connections. The idea came out as a solution to an observation: deep neural networks perform worse as we keep adding layers. The observation brought about a hypothesis: direct mappings are hard to learn. So instead of learning the mapping between the output of a layer and its input, we learn the difference between them, i.e., the residual.

The Residual neural network uses 1×1 convolutions to increase and decrease the dimensionality of the number of channels.

CNN using Gluon:

As part of this example, we explore the MNIST dataset using a CNN. This is the best example to get our hands dirty with the Gluon API layers for building CNNs. There are four important settings we always have to consider while building any CNN (a small Gluon sketch of these settings follows the list).

  1. The kernel size
  2. The filter count (i.e how many filters do we want to use)
  3. Stride (how big the steps of the filter are)
  4. Padding
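
As a hedged sketch, here is how those four settings map onto Gluon's nn.Conv2D arguments; the specific values are only illustrative:

from mxnet.gluon import nn

conv = nn.Conv2D(channels=32,         # filter count
                 kernel_size=(3, 3),  # kernel size
                 strides=(1, 1),      # stride
                 padding=(1, 1),      # padding keeps the 2-D size unchanged here
                 activation='relu')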

Let us deep dive into MNIST using a CNN and recognize handwritten digits with the Gluon API.

To start with the example, we need the MNIST dataset and we need to import some Python and Gluon modules.

import mxnet as mx
import numpy as np
from mxnet import nd, gluon, autograd
from mxnet.gluon import nn
# Select a fixed random seed for reproducibility
mx.random.seed(42)
def data_xform(data):
    """Move channel axis to the beginning, cast to float32, and normalize to [0, 1]."""
    return nd.moveaxis(data, 2, 0).astype('float32') / 255
train_data = mx.gluon.data.vision.MNIST(train=True).transform_first(data_xform)
val_data = mx.gluon.data.vision.MNIST(train=False).transform_first(data_xform)

The above code downloads the MNIST dataset to the default location (typically .mxnet/datasets/mnist/ in the home directory) and creates Dataset objects: a training dataset (train_data) and a validation dataset (val_data); we need both for training and validation. We use the transform_first() method to move the channel axis of the images to the beginning ((28, 28, 1) → (1, 28, 28)), cast them to float32, and rescale them from [0, 255] to [0, 1]. The MNIST dataset is very small, which is why we load it in memory.

set the context

ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu(0)

Then we wrap the training and validation datasets in data loaders with a fixed batch size, shuffling the training set but not the validation set (a sketch of this step follows below).
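
The loaders referenced later as train_loader and val_loader are not shown above; a minimal sketch could look like the following, where the batch size of 256 is an assumption:

batch_size = 256  # assumed value, adjust to taste
train_loader = mx.gluon.data.DataLoader(train_data, shuffle=True, batch_size=batch_size)
val_loader = mx.gluon.data.DataLoader(val_data, shuffle=False, batch_size=batch_size)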

conv_layer = nn.Conv2D(kernel_size=(3, 3), channels=32, in_channels=16, activation='relu')
print(conv_layer.params)

The code above defines a convolutional layer; since we are working with 2-D image data, it is a 2-D convolution with a ReLU activation function. A CNN is a more structured weight representation: instead of connecting all inputs to all outputs, each output is connected only to a local patch of the input, which is the characteristic property of a convolution.

# define like a alias
metric = mx.metric.Accuracy()
loss_function = gluon.loss.SoftmaxCrossEntropyLoss()

We are using softmax cross-entropy as a loss function.

lenet = nn.HybridSequential(prefix='LeNet_')
with lenet.name_scope():
    lenet.add(
        nn.Conv2D(channels=20, kernel_size=(5, 5), activation='tanh'),
        nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
        nn.Conv2D(channels=50, kernel_size=(5, 5), activation='tanh'),
        nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2)),
        nn.Flatten(),
        nn.Dense(500, activation='tanh'),
        nn.Dense(10, activation=None),
    )

Filters can learn to detect small local structures like edges, whereas later layers become sensitive to more and more global structures. Since images often contain a rich set of such features, it is customary to have each convolution layer employ and learn many different filters in parallel, so as to detect many different image features at their respective scales; it is good to have more than one filter and to apply the filters in parallel. The code above defines a CNN architecture called LeNet. The LeNet architecture is a popular network known to work well on digit classification tasks. We use a version that differs slightly from the original in its use of tanh activations instead of sigmoid.

Likewise, the input can already have multiple channels. In the earlier conv_layer example, the convolution layer takes an input image with 16 channels and maps it to an image with 32 channels by convolving each of the input channels with a different set of 32 filters and then summing over the 16 input channels. Therefore, the total number of filter parameters in the convolution layer is channels * in_channels * prod(kernel_size), which amounts to 4608 in that example. Another characteristic feature of CNNs is the use of pooling, meaning summarizing patches to a single number. This step lowers the computational burden of training the network, but the main motivation for pooling is the assumption that it makes the network less sensitive to small translations, rotations, or deformations of the image. Popular pooling strategies are max-pooling and average-pooling, and they are usually performed after convolution.

lenet.initialize(mx.init.Xavier(), ctx=ctx)
lenet.summary(nd.zeros((1, 1, 28, 28), ctx=ctx))

The summary() method can be a great help; it requires the network parameters to be initialized and an input array to infer the sizes.

output:- 
--------------------------------------------------------------------------------
        Layer (type)                                Output Shape         Param #
================================================================================
               Input                              (1, 1, 28, 28)               0
        Activation-1                <Symbol eNet_conv0_tanh_fwd>               0
        Activation-2                             (1, 20, 24, 24)               0
            Conv2D-3                             (1, 20, 24, 24)             520
         MaxPool2D-4                             (1, 20, 12, 12)               0
        Activation-5                <Symbol eNet_conv1_tanh_fwd>               0
        Activation-6                               (1, 50, 8, 8)               0
            Conv2D-7                               (1, 50, 8, 8)           25050
         MaxPool2D-8                               (1, 50, 4, 4)               0
           Flatten-9                                    (1, 800)               0
       Activation-10               <Symbol eNet_dense0_tanh_fwd>               0
       Activation-11                                    (1, 500)               0
            Dense-12                                    (1, 500)          400500
            Dense-13                                     (1, 10)            5010
================================================================================
Parameters in forward computation graph, duplicate included
   Total params: 431080
   Trainable params: 431080
   Non-trainable params: 0
Shared params in forward computation graph: 0
Unique parameters in model: 431080

First conv + pooling layer in LeNet.

Now we train LeNet with similar hyperparameters as before, e.g., a learning rate of 0.04. Note that it is advisable to use a GPU if possible, since this model is significantly more computationally demanding to evaluate and train.

trainer = gluon.Trainer(
    params=lenet.collect_params(),
    optimizer='sgd',
    optimizer_params={'learning_rate': 0.04},
)
metric = mx.metric.Accuracy()
num_epochs = 10
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        inputs = inputs.as_in_context(ctx)
        labels = labels.as_in_context(ctx)
        with autograd.record():
            outputs = lenet(inputs)
            loss = loss_function(outputs, labels)
        loss.backward()
        metric.update(labels, outputs)
        trainer.step(batch_size=inputs.shape[0])
    name, acc = metric.get()
    print('After epoch {}: {} = {}'.format(epoch + 1, name, acc))
    metric.reset()
for inputs, labels in val_loader:
    inputs = inputs.as_in_context(ctx)
    labels = labels.as_in_context(ctx)
    metric.update(labels, lenet(inputs))
print('Validaton: {} = {}'.format(*metric.get()))
assert metric.get()[1] > 0.985

Let us visualize the network's mistakes: some wrong predictions on the training and validation sets.

def get_mislabeled(loader):
    """Return list of ``(input, pred_lbl, true_lbl)`` for mislabeled samples."""
    mislabeled = []
    for inputs, labels in loader:
        inputs = inputs.as_in_context(ctx)
        labels = labels.as_in_context(ctx)
        outputs = lenet(inputs)
        # Predicted label is the index where the output is maximal
        preds = nd.argmax(outputs, axis=1)
        for i, p, l in zip(inputs, preds, labels):
            p, l = int(p.asscalar()), int(l.asscalar())
            if p != l:
                mislabeled.append((i.asnumpy(), p, l))
    return mislabeled
import numpy as np
sample_size = 8
wrong_train = get_mislabeled(train_loader)
wrong_val = get_mislabeled(val_loader)
wrong_train_sample = [wrong_train[i] for i in np.random.randint(0, len(wrong_train), size=sample_size)]
wrong_val_sample = [wrong_val[i] for i in np.random.randint(0, len(wrong_val), size=sample_size)]
import matplotlib.pyplot as plt
fig, axs = plt.subplots(ncols=sample_size)
for ax, (img, pred, lbl) in zip(axs, wrong_train_sample):
    fig.set_size_inches(18, 4)
    fig.suptitle("Sample of wrong predictions in the training set", fontsize=20)
    ax.imshow(img[0], cmap="gray")
    ax.set_title("Predicted: {}\nActual: {}".format(pred, lbl))
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)
fig, axs = plt.subplots(ncols=sample_size)
for ax, (img, pred, lbl) in zip(axs, wrong_val_sample):
    fig.set_size_inches(18, 4)
    fig.suptitle("Sample of wrong predictions in the validation set", fontsize=20)
    ax.imshow(img[0], cmap="gray")
    ax.set_title("Predicted: {}\nActual: {}".format(pred, lbl))
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)