Featured

Micronaut Launch: the best way to get started.

Micronaut has launched a website that generates a Micronaut project for you, without installing the Micronaut CLI SDK.

We have covered Micronaut in a couple of earlier blogs; if you are new to it, you can check the blog post below:

Micronaut is very similar to the Spring framework. Micronaut took inspiration from Spring, and most of its APIs are closely aligned with Spring's, which is why adopting Micronaut is easy for Spring developers. Just as we have start.spring.io to create a Spring or Spring Boot project, Micronaut has launched its own website, aka Micronaut Launch.

https://micronaut.io/launch/

As we know, we can generate a Micronaut project using the CLI, but we can get the same benefit from the website as well.

On the site we have different options; a few are listed below.

  1. Application type
  2. Java version
  3. Base Package
  4. Name of application
  5. etc

1. Application type:-

Application type is where we specify which kind of application we want, such as Application (a web or other standard application), CLI application, Serverless function, gRPC application, or Messaging application. The application type helps organize the dependencies.
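If you prefer the CLI, each application type maps to its own create command. The commands below are a sketch based on the Micronaut 2.x CLI; run mn --help to confirm the exact set on your installation:

$ mn create-app demo.app                # standard (web) application
$ mn create-cli-app demo.cli            # CLI application
$ mn create-function-app demo.function  # serverless function
$ mn create-grpc-app demo.grpc          # gRPC application
$ mn create-messaging-app demo.msg      # messaging application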

2. Java Version:-

Java version is where we specify which JDK we want to develop our application with, e.g. Java 8, 11, or 14.

3. Base Package:-

Base package is where we specify the package of the application under which we want to organize our classes and interfaces.

e.g. com.techwasti.micronaut.demo

4. Name:-

Here we specify the name of the application.

e.g. HelloworldLaunch.

5. Micronaut Version:-

This is the Micronaut version our application should be compatible with; the latest one as I write this blog post is 2.0.0.

6. Language:-

Select the language in which you want to write your beautiful code; right now Micronaut supports Java, Kotlin, and Groovy.

7. Build Tool:-

Select the build tool, either Maven or Gradle.

8. Test Framework:-

Here we can choose the test framework from the list: JUnit, Spock, or KotlinTest.

9. Features:

When you click the Features button, a popup opens.

Features are organized into different groups such as cache, config, database, etc.

10. Diff:-

This is an interesting option: it shows the diff, i.e. the changes that the selected features make compared with an application generated without any features selected.

11. Preview:-

Another great option this site provides is a preview of your project based on your selections.

The final option is to generate the project; once you click it, you get a zip file. After extraction, you will see a structure like the one below.

Here we have a Dockerfile, a build file, and a .gitignore along with the source directory structure. Download and import this into your IDE of choice (Eclipse, IntelliJ) and happy coding.

That is it for now. Let me know your findings on this, if any.

Featured

Micronaut with Graal native image example.

In the last couple of articles we have seen how to create a simple Micronaut application and dockerize it. In this article, we are going to explore a Hello World Graal Micronaut application.

Here is the definition from Wikipedia. If you have come across this article, you are probably already familiar with at least one of these topics.

GraalVM is a Java VM and JDK based on HotSpot/OpenJDK, implemented in Java. It supports additional programming languages and execution modes, like an ahead-of-time compilation of Java applications for fast startup and low memory footprint. The first production-ready version, GraalVM 19.0, was released in May 2019.

Let us start coding and simultaneously enjoy the topic.

Create a micronaut application using CLI:

$ mn create-app helloworld-graal --features=graal-native-image

Graal support is not added by default; we have to pass the option --features=graal-native-image.

If you are using Java or Kotlin and IntelliJ IDEA make sure you have enabled annotation processing.

Now let us create a simple POJO class to hold a play name, to keep things simple.

import io.micronaut.core.annotation.Introspected;

@Introspected
public class Play{

    private String name;

    public Play(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

The @Introspected annotation is used to generate BeanIntrospection metadata at compile time. This information is used to render the POJO as JSON with Jackson without resorting to reflection.
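To see what that metadata buys you, here is a small sketch (not part of the generated project, names are illustrative) that reads the Play bean through Micronaut's BeanIntrospection API instead of reflection:

import io.micronaut.core.beans.BeanIntrospection;

public class IntrospectionDemo {

    public static void main(String[] args) {
        // Load the compile-time metadata generated by @Introspected
        BeanIntrospection<Play> introspection = BeanIntrospection.getIntrospection(Play.class);
        // Instantiate the bean and read its property without java.lang.reflect
        Play play = introspection.instantiate("Tee Phularani");
        String name = introspection.getRequiredProperty("name", String.class).get(play);
        System.out.println(name); // prints "Tee Phularani"
    }
}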

Now let us create a singleton service class that returns a play name at random.

(Note: the play names are Marathi plays by the famous Pu La Deshpande.)

import javax.inject.Singleton;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

@Singleton
public class PlayService {
// create list of plays
    private static final List<Play> PLAYS = Arrays.asList(
            new Play("Tujhe Ahe Tujpashi"),
            new Play("Sundar Mee Honar"),
            new Play("Tee Phularani"),
            new Play("Teen Paishacha Tamasha"),
            new Play("Ek Jhunj Varyashi")
    );
 // to choose random play from PLAYS list
    public Play randomPlay() {
        return PLAYS.get(new Random().nextInt(PLAYS.size()));
    }
}

Now we need a controller to serve a random play name from the service class.

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller
public class PlayController {

    private final PlayService playService;

    public PlayController(PlayService playService) {
        this.playService = playService;
    }

    @Get("/randomplay")
    public Play randomPlay() {
        return playService.randomPlay();
    }
}

We created the controller, injected the service using constructor injection, and mapped the GET method using @Get("/randomplay").

Now our application is ready; you can test it by executing the command below.

$ ./gradlew run

http://localhost:8080/randomplay

JSON output 

{
  "name": "Tee Phularani"
}

Let us create a Graal native image.

Micronaut supports Graal native images only for Java and Kotlin.

While creating the project we added --features=graal-native-image; this adds three important things.

  1. SVM (Substrate VM) and Graal dependencies in build.gradle:
compileOnly "org.graalvm.nativeimage:svm"
annotationProcessor "io.micronaut:micronaut-graal"

2. A Dockerfile which can be used to construct the native image by executing docker-build.sh.

3. A native-image.properties file in the resource directory.

Args = -H:IncludeResources=logback.xml|application.yml|bootstrap.yml \
       -H:Name=helloworld-graal \
       -H:Class=helloworld.graal.Application

This makes it very easy for a developer to create a native image inside Docker. Run the two commands below:

$ ./gradlew assemble
$ ./docker-build.sh

Once the image is ready, we can run a container to verify our understanding.

$ docker run -p 8080:8080 helloworld-graal

To test the application you can use curl with time:

$ time curl localhost:8080/randomplay

That is it for now. You can compare the timings of the native-image executable and the Docker container built with the native image.

Source code download or clone from github: https://github.com/maheshwarLigade/micronaut-examples/tree/master/helloworld-graal

Featured

Configuration as a Service: Spring Cloud Config – using kotlin.

Developing a microservice architecture with Java and Spring Boot is quite popular these days. In a microservice architecture we may have hundreds of services, and managing the configuration for each service and each profile is a tedious task. In this article, we will demonstrate the Spring Cloud Config server using Kotlin.

Spring Boot provided a much-needed spark to the Spring projects.

Spring Cloud Config provides server-side and client-side support for externalized configuration in a distributed system. With the Config Server, you have a central place to manage external properties for applications across all environments.

From the above diagram, you can see that in a distributed system managing configuration as a central service is a tedious task, and Spring Cloud Config provides a client-server mechanism to manage the configuration easily.

Let us go to the https://start.spring.io/

Normally, when we change configuration for a service we have to restart the service to apply the changes.

Let us create a git repo to manage our configuration.

We will create "springbootclient", a small Spring Boot microservice that reads a username from the Spring Cloud Config central configuration server, which in turn is backed by git.

We have created three different properties files, one for each environment (illustrative contents are shown after the list).

  1. springbootclient.properties
  2. springbootclient-dev.properties
  3. springbootclient-prod.properties
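For illustration only, each file can hold an environment-specific value of the property the client reads later in this article (app.adminusername); the actual values in the repository may differ:

# springbootclient.properties
app.adminusername=DefaultUser

# springbootclient-dev.properties
app.adminusername=DevUser

# springbootclient-prod.properties
app.adminusername=ProdUser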

https://github.com/maheshwarLigade/cloud-common-config-server

Our Spring Cloud Config properties are available here; you can clone this repository or use it directly.

Now that we have created the Spring Cloud Config server application using the Spring starter, let us download it and import the project into your favorite IDE or editor. The git repo is used to store our configuration, and the Spring Cloud Config server application serves those properties to the clients.

Basically, git is the datastore, the Spring Cloud Config server is the server application, and the multiple microservices are the clients that need configuration.

Now our git datastore is ready. In this repository, we have created configuration for one sample client application named springbootclient. In future microservice articles we will use the same Spring Cloud Config setup as the configuration server.

Let us go and check the code base for the client app.

This is the sample application.properties file for the config server:

server.port=8888
logging.level.org.springframework.cloud.config=DEBUG
spring.cloud.config.server.git.uri=https://github.com/maheshwarLigade/cloud-common-config-server.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.searchPaths=springbootclient

Sample Code for SpringCloudConfigServerexApplication.kt

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.config.server.EnableConfigServer

@SpringBootApplication
@EnableConfigServer
class SpringCloudConfigServerexApplication

fun main(args: Array<String>) {
   runApplication<SpringCloudConfigServerexApplication>(*args)
}

Now run the Spring Cloud Config server and check the URL below:

http://localhost:8888/springbootclient/dev/master
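The config server exposes the environment over REST; the response is shaped roughly like the JSON below (property-source names and values depend on your repository and are illustrative here):

{
  "name": "springbootclient",
  "profiles": ["dev"],
  "label": "master",
  "propertySources": [
    {
      "name": ".../springbootclient/springbootclient-dev.properties",
      "source": {
        "app.adminusername": "DevUser"
      }
    }
  ]
}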

Spring Boot Client App:

Let us create a small microservice that reads configuration from the Spring Cloud Config server and serves that property value over a REST endpoint.

Go to https://start.spring.io/ and create the Spring Boot client microservice using Kotlin.

Sample POM.xml dependencies.

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.module</groupId>
   <artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-config</artifactId>
</dependency>

Now check the SpringCloudClientAppApplication.kt code

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class SpringCloudClientAppApplication

fun main(args: Array<String>) {
    runApplication<SpringCloudClientAppApplication>(*args)
}

Now create a sample REST controller to serve REST requests. We want the "/whoami" endpoint to return which user it is, based on the active profile (dev, prod, etc.).

UserController.kt

import org.springframework.beans.factory.annotation.Value
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController


@RestController
class UserController {

    @Value("\${app.adminusername}")
    var username="Test"
//get request serving
    @GetMapping("/whoami")
    fun whoami() = "I am a  "+ username

}

Create a bootstrap.properties file where we specify the Spring Cloud Config server details, which git branch to use, and which profile (dev, local, prod, etc.) is active.

spring.application.name=springbootclient
spring.profiles.active=dev
spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.fail-fast=true
spring.cloud.config.label=master

All of these properties are self-explanatory.

Once you hit this URL http://localhost:9080/whoami

Output:- I am a DevUser

Github source link:

Config Server: https://github.com/maheshwarLigade/cloud-common-config-server

Codebase: https://github.com/maheshwarLigade/spring-cloud-config-kotlin-ex

More such Stories

Featured

Enable Spring security using kotlin!!

Spring Security is the de facto abstraction in the Spring framework world for adding an authentication and authorization layer to your application. There are plenty of examples out there; in this article, we will enable Spring Security using Kotlin.

In the last article, we developed a Spring Boot and MongoDB REST API using Kotlin. We will use the same example and add the Spring Security dependency.

https://github.com/maheshwarLigade/springboot-mongodb.restapi/tree/master

To illustrate the Spring Security example, add the Spring Security starter dependency.

implementation("org.springframework.boot:spring-boot-starter-security")

If you prefer the Maven build tool, use the dependency below.

<dependency> 
<groupId>org.springframework.boot</groupId> 
<artifactId>spring-boot-starter-security</artifactId> 
</dependency>

Our existing REST API example in Kotlin contains patient data and CRUD operations, so we will have two different roles: ADMIN and DOCTOR.

For simplicity, we will have only two roles and a single endpoint, "/patients", with five different operations. We will secure all our endpoints either using the @Secured annotation or a configuration class.

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.http.HttpMethod
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder
import org.springframework.security.crypto.password.PasswordEncoder

@Configuration
class SecurityConfig : WebSecurityConfigurerAdapter() {

    @Bean
    fun encoder(): PasswordEncoder {
        return BCryptPasswordEncoder()
    }

    override fun configure(auth: AuthenticationManagerBuilder) {
        auth.inMemoryAuthentication()
                .withUser("admin")
                .password(encoder().encode("pass"))
                .roles("DOCTOR", "ADMIN")
                .and()
                .withUser("doctor")
                .password(encoder().encode("pass"))
                .roles("DOCTOR")
    }

    @Throws(Exception::class)
    override fun configure(http: HttpSecurity) {
        http.httpBasic()
                .and()
                .authorizeRequests()
                .antMatchers(HttpMethod.GET, "/patients").hasRole("ADMIN")
                .antMatchers(HttpMethod.POST, "/patients/**").hasRole("ADMIN")
                .antMatchers(HttpMethod.PUT, "/patients/**").hasRole("ADMIN")
                .antMatchers(HttpMethod.DELETE, "/patients/**").hasRole("ADMIN")
                .antMatchers(HttpMethod.GET, "/patients/**").hasAnyRole("ADMIN", "DOCTOR")
                .and()
                .csrf().disable()
                .formLogin().disable()
    }

}

So we have five operations and two roles, ADMIN and DOCTOR.

If you don't want to encode the password, you can use a plain-text password with the {noop} prefix, e.g. .password("{noop}pass").

In the first step we declared two users; then we configured which endpoints we want to secure and for which HTTP methods.

In this example we have two GET endpoints: the first returns all patient records and the second returns a specific patient record. This is a good use case because:

  1. Getting all patients is something only ADMIN can access.
  2. Getting a particular patient is accessible to the DOCTOR who treats him/her and to ADMIN, who has access to everything.

csrf().disable() disables Spring Security's built-in cross-site request forgery (CSRF) protection.

formLogin().disable() disables the default login form.

Now everything is ready and we have secured our REST endpoints. If you hit the REST endpoints the same way as before, you will get a 401/403 HTTP status code. Try it using the curl command.

$ curl localhost:8090/patients
{
  "timestamp": "2020-04-12T05:37:16.718+0000",
  "status": 401,
  "error": "Unauthorized",
  "message": "Unauthorized",
  "path": "/patients"
}

$ curl localhost:8090/patients -u admin:pass

Use the request above and you will get a successful result.
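You can also verify the role split with the doctor user; the id below is just a placeholder for an existing patient record:

$ curl localhost:8090/patients -u doctor:pass        # 403 Forbidden: listing all patients is ADMIN only
$ curl localhost:8090/patients/<id> -u doctor:pass   # 200 OK: DOCTOR and ADMIN can read a single patient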

That is it for now. I know this is a very simple use case and example, but you can use the same structure and scale it to the next level using a database with different user roles and access control lists.

GitHub Code Repo:-

https://github.com/maheshwarLigade/springboot-mongodb.restapi/tree/master

Featured

Kafka Connect QuickStart.

Understand Kafka Connect.

Apache Kafka is a distributed streaming platform. If you are not familiar with Apache Kafka, you can refer to the articles below:

Kafka Installation guide

Kafka Idempotent Consumer

Kafka Idempotent Producer

Kafka Connect is an open-source component of Apache Kafka and another project in the Apache Kafka family. Some common use cases are consuming data from an RDBMS and sinking the data into Hadoop. The framework is developed around convention over configuration.

Kafka Connect is a framework for connecting Kafka with external systems such as databases, key-value stores, search indexes, and file systems.

https://kafka.apache.org/documentation/

Advantages of Using Kafka Connect: 

Below are some benefits of Kafka connect. 

  1. Data-Centric Pipeline
  2. Flexibility and Scalability
  3. Reusability and Extensibility
  4. Distributed and standalone modes
  5. Streaming/batch integration
  6. REST interface for better operations
  7. Automatic offset management

Kafka Connect is a tool suite for scalably and reliably streaming data between Kafka and other external systems such as databases, key-value stores, search indexes, and file systems, using so-called Connectors.

As you can see in the diagram, Kafka Connect generally involves three main components: a source connector, a sink connector, and a Kafka topic.

A source connector collects data from a source system. Source systems can be entire databases, tables, or any message brokers. A source connector could also collect metrics from application servers into Kafka topics, making the data available for stream processing with low latency.

A sink connector delivers data from Kafka topics into other systems, such as Elasticsearch, Hadoop, or any other database.

Prerequisite: 

  • Java 8, Maven, and basic knowledge of Kafka, with Kafka installed and running on your system.
  • Kafka version used for this guide: kafka_2.10-0.10.2.1.
  • Download Kafka from the Apache Kafka website.

Kafka Connect Example:

We will take a very simple example to demonstrate the power of Kafka Connect: the basic file source connector and file sink connector.

  1. Start Zookeeper 
  2. Start Kafka 

Create a kafka topic for our testing

$KAFKA_HOME/bin/kafka-topics.sh \
  --create \
  --zookeeper localhost:2181 \
  --replication-factor 1 \
  --partitions 1 \
  --topic my-connect-test

Kafka Connect currently supports two modes of execution: standalone and distributed.

In standalone mode, all work is performed in a single process. This standalone mode is for development.

$ bin/connect-standalone.sh config/connect-standalone.properties connector1.properties [connector2.properties ...]
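For reference, distributed mode is started with the worker configuration only, and connectors are then submitted over the Connect REST API (a sketch, not needed for the standalone example in this post):

$ bin/connect-distributed.sh config/connect-distributed.properties
$ curl -X POST -H "Content-Type: application/json" \
    --data '{"name": "local-file-source-connector", "config": {"connector.class": "FileStreamSource", "tasks.max": "1", "topic": "my-connect-test", "file": "test-connector.txt"}}' \
    http://localhost:8083/connectors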

Source Connector configurations

Provide the source configuration file; in this example it lives in the Kafka config directory.

For the source connector, the reference configuration is below:

name=local-file-source-connector
connector.class=FileStreamSource
tasks.max=1
topic=my-connect-test
file=test-connector.txt
  • name – Unique name for the connector. Attempting to register again with the same name will fail.
  • connector.class – The Java class for the connector
  • tasks.max – The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot achieve this level of parallelism.
  • key.converter – (optional) Override the default key converter set by the worker.
  • value.converter – (optional) Override the default value converter set by the worker.
  • topics – A comma-separated list of topics to use as input (used by sink connectors; the file source above uses the singular topic property instead)
  • topics.regex – A Java regular expression of topics to use as input (sink connectors only)

Sink Connector Configuration

name=local-file-sink-connector
connector.class=FileStreamSink
tasks.max=1
file=test-sink-conector.txt
topics=my-connect-test

For the sink connector, the reference configuration is shown above.

Now that the source and sink configurations are ready, let us set up the worker configuration. Most of the source and sink properties are the same because both are file connectors.

Worker Configuration

For the worker, the reference configuration is below:

bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=20000
plugin.path=/share/java
  • bootstrap.servers – List of Kafka servers used to bootstrap connections to Kafka
  • offset.flush.interval.ms – Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.
  • plugin.path – List of paths separated by commas (,) that contain plugins (connectors, converters, transformations).
  • key.converter – Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
  • value.converter – Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.

Below config property specific to standalone mode:

  • offset.storage.file.filename – File to store offset data in.

For more configuration details, check the official documentation.

Now everything is ready:

  1. kafka/config/connect-standalone.properties 
  2. kafka/config/local-file-source-connector.properties
  3. kafka/config/local-file-sink-connector.properties

The content of test-connector.txt:

{
  color: "red",
  value: "#f00"
}
{
  color: "green",
  value: "#0f0"
}
{
  color: "blue",
  value: "#00f"
}
{
  color: "cyan",
  value: "#0ff"
}
{
  color: "magenta",
  value: "#f0f"
}
{
  color: "yellow",
  value: "#ff0"
}
{
  color: "black",
  value: "#000"
}

You can add more content to the same file later and check that the connector is able to read it and sink the data into test-sink-conector.txt.

The source connector will automatically detect the changes and publish content over kafka.

Make sure to insert a newline at the end; otherwise the source connector won't pick up the last line.

Now let us run Kafka Connect and check it:

$KAFKA_HOME/bin/connect-standalone.sh config/connect-standalone.properties config/local-file-source-connector.properties config/local-file-sink-connector.properties

This is a simple implementation of Kafka Connect.
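To verify the pipeline end to end, you can inspect the sink file or read the topic with the console consumer:

$ cat test-sink-conector.txt
$ $KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic my-connect-test --from-beginning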

Discover more Kafka connectors here:- https://www.confluent.io/hub/

Reference documents:-

https://docs.confluent.io/current/connect/quickstart.html

https://kafka.apache.org/documentation/#connect

Featured

Tensorflow2.0 HelloWorld using google colab.

In this article, we use the most popular deep learning framework, TensorFlow, and walk through a basic hello world example. To follow along you do not need to set up a local environment on your machine.

Tensorflow.org

We are using Google Colab. If you are not aware of what it is, check out my article on it: Colab getting started!!
Train deep neural network free using google colaboratory.medium.com

Now visit https://colab.research.google.com/ and you will see the Colab start page.

Brief About Colab:

Once you have opened Colab and you are logged in with your Google account, you are ready to go.

Google Colab is available with zero configuration and free access to a GPU, and the best part is that notebooks are shareable. Google Colaboratory is a free service for developers to try TensorFlow on CPU and GPU over Google cloud instances. It is completely free and great for improving your Python programming skills; developers can log in with their Google account and connect to the service. Here developers can try deep learning applications using popular machine learning libraries such as Keras, TensorFlow, PyTorch, OpenCV, and others.

Sign in to Google Colab and create a new notebook for our HelloWorld example.

Go to File → New Notebook (Google sign-in is required).

Now the new notebook is ready. We want to use TF 2.0.0 for our example, and TensorFlow 2.0.0 has already been released as a production version, so let us install it. To install TensorFlow 2.0.0, run the following command:

!pip install tensorflow==2.0.0

After a successful installation, we can verify the installed version.

import tensorflow as tf
print(tf.__version__)

Helloworld example:

Now everything is ready and looking promising. We have installed TensorFlow and verified the version too. Now let us take a helicopter overview and create a hello world example.

To change the runtime: click Runtime → Change Runtime Type → a popup will open where you can choose the runtime and a hardware accelerator such as GPU or TPU.

There are a lot of changes between TF 1.0 and TF 2.0.0; TF 2.0.0 comes with ease of development and needs less code. TensorFlow 2.0.0 was developed to remove the issues and complexity of previous versions.

In TF 2.0, eager execution is enabled by default.

Eager execution evaluates operations immediately, without building a graph: operations return concrete values instead of constructing a computational graph to execute later.

We will first use the Hello World code from the TensorFlow 1.x version and observe the output.

#This code snippet is from tensorflow 1.X version
import tensorflow as tf

msg = tf.constant('Hello and welcome to Tensorflow world')

#session
sess = tf.Session()

#print the message
print(sess.run(msg))

In this example, we are using TensorFlow 1.x code to print the message, but Session has been removed in TF 2.0.0, so this will raise the exception:

AttributeError: module 'tensorflow' has no attribute 'Session'

We will now use the same code snippet with the Session removed:

import tensorflow as tf

msg = tf.constant('Hello and welcome to Tensorflow world')

#print the message
print(msg)

#print using tf.print()
tf.print(msg)

Here we have two print statements; observe the output of each:

  1. tf.Tensor(b’Hello and welcome to Tensorflow world’, shape=(), dtype=string) 
  2. Hello and welcome to Tensorflow world.

That is it for now; we will start exploring different TF APIs in the next article.

Code: 

The code is available on GitHub; you can import it directly into Colab and run it.

https://github.com/maheshwarLigade/GoogleColab/blob/master/HelloWorldTF2_0.ipynb

More Articles on Tensorflows:

https://medium.com/analytics-vidhya/optimization-techniques-tflite-5f6d9ae676d5

https://medium.com/analytics-vidhya/tensorflow-lite-converter-dl-example-febe804b8673

https://medium.com/techwasti/tensorflow-lite-machine-learning-at-the-edge-26e8421ae661

https://medium.com/techwasti/dynamic-computation-graphs-dcg-with-tensorflow-fold-33638b2d5754

https://medium.com/techwasti/tensorflow-lite-deployment-523eec79c017

Featured

Best resources to learn Go programming language!!

Golang, aka the Go programming language, is the fastest-growing and one of the most loved programming languages.

If you think you have not used Go directly or indirectly, then I think you are wrong. If you have heard of the Docker containerization technology, then you are indirectly using the Go language on a day-to-day basis.

Docker is written in the Google Go programming language.

What is GoLang?

Go is an open-source language officially released by Google in 2009. It was developed by Robert Griesemer, Ken Thompson, and Rob Pike. It is a multi-purpose programming language specially designed to build fast and scalable applications. It provides features like fast compilation, garbage collection, static typing, built-in concurrency, a standard library, and packages.

Let us take a tour of the best resources available to get started with this programming language.

1. Go Tour:-  

This is my favorite site to get started with and get your hands dirty. It is the official Go Tour website: https://tour.golang.org. The best thing about it is that the tour is also available offline, just by running go tool tour on your command line if you have already installed Go locally. It provides an interactive tutorial where you can run code snippets, and it gives you an overview of Go. The tour is organized into different sets of modules.

2. Go By Example:- 

Another effective way to start learning Go is by example. Go by Example is an interactive online tutorial for learning Go. Once you know the basics, go ahead and hit Go by Example (https://gobyexample.com). Start hacking on the examples and build up a working knowledge of Go.

3. Effective Go:-

This is another official resource for learning Go, also available for free. It is a very interesting page, https://golang.org/doc/effective_go.html, for exploring more of the language. I found it very useful, especially because it is not just a syntax reference document but a more complete description of all the Go features and constructs and how to use them effectively. This is where you will gain some level of expertise.

4. Golang Bootcamp:-

Golang Bootcamp is a mini book for starting to learn Go. How do you get started with Go? Hit this URL, http://www.golangbootcamp.com/book/, to explore the book. It will open a window for you to start learning Go effectively. The best thing about this mini-book is that it has a list of basic constructs and concepts, all linked to the Go playground.

5. Go-Playground:- 

Now you know the basics of the Go language and how to construct things. You do not need to install Go locally on your system to start: there is an online Go playground at https://play.golang.org/ to test your knowledge and constructs.

6. Go-Lang FAQ:-

The Go FAQ is a golden gate for understanding the core concepts and clearing up your biggest doubts. This is also an official page: https://golang.org/doc/faq.

7. Go-lang Bot:-

Golangbot is a fun and easy way to follow and learn Golang consistently and regularly. It can help you improve your coding and solve practical issues, and it covers everything from the basics of Golang to advanced tutorials. It is inclusive of all kinds of Golang learning material; here you will get a different learning experience.

Hit this URL, https://golangbot.com/, and work your way from hello world to complex programs and quizzes too.

8. Tutorials Point:-

Tutorials Point is also one of the best resources for getting familiar with Go. If you are an avid reader and learner, you probably already know Tutorials Point.

https://www.tutorialspoint.com/go/.

9. Go-Lang Tutorials:-

GoLang Tutorials offers some of the best free online classes for learning Go, well suited to professionals as well as beginners. It covers basic concepts, control flow, looping, interfaces, memory management, etc. The tutorials are organized into sections, and every section has examples: golangtutorials.blogspot.com

10. Reference Books:-

  1. Introducing Go by O’Reilly.
  2. Go in Action.
  3. Learning Functional Programming in Go.

Conclusion:-

These are my findings. Please let us know your resources for learning Go, how you started, and which other resources you think are better for getting started with Go.

Featured

Kafka Idempotent Producer!!

"Kafka idempotent producer" is just a term, but what exactly is meant by an idempotent producer?

Let us first try to understand what is meant by idempotent.

“Denoting an element of a set which is unchanged in value when multiplied or otherwise operated on by itself”. — Google dictionary 

Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application.

Now we know that, among the HTTP verbs, GET is an idempotent operation.

As seen in the picture above, a producer publishes data to the broker, and sometimes, due to a network error, you will see a duplicate message in Kafka. When the producer sends a message to a Kafka topic, a network error can introduce a duplicate message.

kafka producer

Good request flow: the producer publishes the message to Kafka, Kafka says "I got the message and committed it" by sending an ack, and the producer receives the ack.

Failure case: the producer publishes the message to Kafka, Kafka commits it and sends the ack, but due to a network error the producer never receives the ack, and here is the problem. The producer then retries because it has not received the ack from Kafka, publishes the same message again, and this creates duplicate data.

  1. If the producer resends the message it creates duplicate data.
  2. If the producer doesn’t resend the message, the message may be lost.

How to solve it?

Kafka provides "at least once" delivery semantics by default. This means that a message that is sent may be delivered one or more times. In Kafka ≥ 0.11, released in 2017, you can configure an "idempotent producer", which will not introduce duplicate data. To avoid processing a message multiple times, it must be persisted to the Kafka topic only once. During initialization, a unique ID is assigned to the producer, called the producer ID or PID.

In this flow, even after a network failure Kafka does not end up with duplicate data, because of the producer ID (PID), even though the producer publishes the message multiple times until it receives the ack.

using PID

To achieve this, as programmers we hardly have to do anything.

producer = Producer({'bootstrap.servers': 'localhost:9092',
                     'message.send.max.retries': 10000000,
                     'enable.idempotence': True})

Set enable.idempotence to true and the Kafka producer will take care of everything for you.

message.send.max.retries= Integer.MAX_VALUE #which is really huge number

Just consider how many times you want to retry. The early idempotent producer forced max.in.flight.requests.per.connection to 1, but in the latest releases it can be used with max.in.flight.requests.per.connection set to up to 5 and still keep its guarantees.

Idempotent delivery ensures that messages are delivered exactly once to a particular topic partition during the lifetime of a single producer.
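The configuration snippet above uses the Python confluent-kafka client; if you are on the Java client, an equivalent sketch looks like this (only the idempotence-related settings matter here):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Idempotence makes the broker de-duplicate retried sends using the PID
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // acks=all and a high retry count go hand in hand with the idempotent producer
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}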

Reference document:- 

Apache Kafka documentation: kafka.apache.org

Featured

Micronaut java full stack Microservice Framework!!

Micronaut is a modern, JVM-based, full-stack microservices framework designed for building modular, easily testable microservice applications.

Micronaut is the latest framework designed to make creating microservices quick and easy.

Micronaut is a JVM-based framework for building lightweight, modular applications. It is developed by OCI, the same company that created Grails; the creators of the Grails framework take inspiration from lessons learned over the years building real-world applications, from monoliths to microservices, using Spring, Spring Boot, and Grails.

Micronaut supports Java, Groovy or Kotlin.

Features of Micronaut:-

One of the most exciting features of Micronaut is its compile-time dependency injection mechanism. Spring Boot, by contrast, relies mostly on the reflection API and proxies, which operate at runtime, and that is why a Spring Boot application needs more startup time compared with, say, a Node application.

  1. First-class support for reactive HTTP clients and servers based on Netty.
  2. An efficient compile time dependency injection container. 
  3. Minimal startup time and lower memory usage.
  4. Cloud-native features to boost developer productivity.
  5. Very minimal learning curve because Micronaut code looks very similar to Spring Boot with Spring Cloud.

What’s wrong with Spring Boot?

Disclaimer 

There is nothing wrong with Spring Boot. I am a very big fan of Spring projects, including Spring, Spring Boot, Spring Data, etc. Spring Boot is a good and very elegant solution, and it makes the developer's job easier than anything: just add one dependency and magic happens for you.

When Spring puts things at your fingertips, it means Spring is doing a lot under the hood for you. Spring performs reflection, builds proxy classes, does injection and much more, and you pay a cost for this because Spring does these things at runtime. You pay in terms of memory, CPU cycles, and application bootstrap time.

Micronaut addresses some of these problems using AOT (ahead-of-time compilation) and GraalVM. Micronaut does the major work at compile time, which reduces the memory footprint and CPU utilization and leads to a shorter application bootstrap time. Currently Spring does not support GraalVM while Micronaut does, which is also a big difference.

GraalVM is basically a high-performance polyglot VM. GraalVM is a universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Groovy, Kotlin, Clojure, and LLVM-based languages such as C and C++.

Now we know what Micronaut is, why it exists, and a couple of its features. Let us set up Micronaut and create a simple Hello World application.

Setup Micronaut?

Installing or setting up Micronaut is very easy. Go to this page: https://micronaut.io/download.html

You can either download the binary or use SDKMAN to set up Micronaut on your favorite OS.

Using SDKMAN

Simply open a new terminal and start:

$ curl -s https://get.sdkman.io | bash

$ source "$HOME/.sdkman/bin/sdkman-init.sh"

$ sdk install micronaut

$ mn --version

Now that the installation is done, let us create a simple Hello World application:

mn create-app hello-world

By default Micronaut uses Gradle as the build tool; you can also specify Maven:

mn create-app hello-world --build maven

Using Homebrew

Before installing make sure you have the latest Homebrew installed.

$ brew update

$ brew install micronaut

Using Binary on windows

  1. Download the latest binary from https://micronaut.io/download.html
  2. Extract the binary to an appropriate location
  3. Create an environment variable MICRONAUT_HOME which points to the installation directory
  4. Update the PATH environment variable, append %MICRONAUT_HOME%\bin

Now enjoy the coding.

Let us write a HelloWorld controller:

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller("/hello")
public class HelloController {

    @Get(produces = MediaType.TEXT_PLAIN)
    public String index() {
        return "Hello World";
    }
}

Now enjoy the output:

$ curl http://localhost:8080/hello 

> Hello World

Conclusion:-

Spring Boot and Micronaut both have pros and cons. In my opinion, if you are developing a new greenfield application, start with Micronaut, but don't rewrite an existing Spring Boot application in Micronaut unless you are facing serious performance issues. If you are migrating from a monolith to cloud-native microservices, then Micronaut is a good option. Please let us know your thoughts on this.

Reference link:

This is the performance comparison between spring boot and micronaut.

https://docs.micronaut.io/latest/guide/index.html

Featured

Fast Inference: TFLite GPU Delegate!!

Running inference on edge devices, especially mobile devices, is very demanding. When you have a really big machine learning model, running inference with limited resources becomes a crucial task.

Many edge devices, especially mobile devices, have hardware accelerators such as a GPU. A TensorFlow Lite delegate is useful for optimizing our trained model and leveraging the benefits of hardware acceleration.

What is Tensorflow Lite Delegate?

A delegate's job, in general, is to delegate or transfer your work to someone else. TensorFlow Lite supports several hardware accelerators.

A TensorFlow Lite delegate is a way to delegate part or all of graph execution to another executor.

Why should you use delegates?

Running inference on compute-heavy deep learning models on edge devices is resource-demanding due to the mobile devices’ limited processing, memory, and power. Instead of relying on the device CPU, some devices have hardware accelerators, such as GPU or DSP(Digital Signal Processing), that allows for better performance and higher energy efficiency.

How do TFLite delegates work?

How TFLite Delegate work. tensorflow.org

Let us consider the graph on the left side. It has an input node where we receive the input for inference. The input goes through a convolution operation and then a mean operation, and the outputs of these two operations are used to compute SquaredDifference.

Let us assume we have a hardware accelerator that can perform the Conv2d and mean operations very fast and efficiently; the graph above will then look like this:

In this case, we will delegate the Conv2d and mean operations to a specialized hardware accelerator using a TFLite delegate.

The TFLite GPU delegate will offload these operations to the GPU if one is available.

TFLite allows us to provide delegates for specific operations, in which case the graph is split into multiple subgraphs, with each supported subgraph handled by a delegate. Every subgraph handled by a delegate is replaced by a node that evaluates that subgraph when it is invoked. Depending on the model, the final graph can end up with a single node, meaning the whole graph was delegated, or with many nodes handling the subgraphs. In general, you don't want multiple subgraphs handled by the delegate, since each switch between the delegate and the main graph adds overhead for passing results from the subgraph to the main graph.

It’s not always safe to share memory.

How to add a delegate?

  1. Define a kernel node that is responsible for evaluating the delegate subgraph.
  2. Create an instance of TfLiteDelegate, which will register the kernel and claim the nodes that the delegate can execute.

Android:

Tensorflow has provided a demo app for android:

In your application, add the TensorFlow Lite GPU AAR (a sketch of the dependencies follows), import the org.tensorflow.lite.gpu.GpuDelegate class, and use the addDelegate function to register the GPU delegate with the interpreter, as shown in the Java snippet below.
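The dependency block referred to in the original as "the AAR as above" is not reproduced in this post; a typical build.gradle dependencies block looks roughly like this (the artifact versions are placeholders you should pin to a current release):

dependencies {
    // core TensorFlow Lite runtime AAR
    implementation 'org.tensorflow:tensorflow-lite:x.y.z'
    // GPU delegate AAR
    implementation 'org.tensorflow:tensorflow-lite-gpu:x.y.z'
}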

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Initialize interpreter with GPU delegate
GpuDelegate delegate = new GpuDelegate();
Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
Interpreter interpreter = new Interpreter(model, options);

// Run inference
while (true) {
  writeToInput(input);
  interpreter.run(input, output);
  readFromOutput(output);
}

// Clean up
delegate.close();

iOS:

Include the GPU delegate header and call the Interpreter::ModifyGraphWithDelegate function to register the GPU delegate to the interpreter:

#import "tensorflow/lite/delegates/gpu/metal_delegate.h"

// Initialize interpreter with GPU delegate
std::unique_ptr<Interpreter> interpreter;
InterpreterBuilder(*model, resolver)(&interpreter);
auto* delegate = NewGpuDelegate(nullptr);  // default config
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;

// Run inference
while (true) {
  WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
  if (interpreter->Invoke() != kTfLiteOk) return false;
  ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
}

// Clean up
interpreter = nullptr;
DeleteGpuDelegate(delegate);

Note:-

Some operations that are trivial on the CPU may have a high cost for the GPU.

Reference Link:

https://www.tensorflow.org/lite/performance/gpu

For more such stories

Featured

Optimization techniques – TFLite!!

One of the most popular optimization techniques is called quantization.


Running a machine learning model and making inferences on mobile or embedded devices comes with challenges, such as a limited amount of memory, power, and data storage, so optimizing the ML models we deploy on edge devices is crucial.

It is critical to deploy optimized machine learning models on mobile and embedded devices so that they run efficiently. There are several optimization techniques, and one of them is quantization. In the last article, we saw how to use the TFLite converter to optimize a model for edge devices without any modification to the weights and activation types.


What is Quantization?

Quantization is generally used in mathematics and digital signal processing. Below is the wiki definition.

Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes.

Quantization refers to the process of reducing the number of bits that represent a number. In the context of deep learning, the dominant numerical format used for research and deployment has so far been 32-bit floating point, or FP32. Quantization converts FP32 weights and output activations into the nearest 8-bit integer values, and sometimes 4/2/1-bit values as well.

Quantization optimizes the model by quantizing the weights and activation types. TFLite uses quantization to speed up inference on edge devices, and the TFLite converter is the answer to whether we can run a deep learning model at lower precision. Now that you know what quantization is, let us dive deeper.

Quantization dramatically reduces both the memory requirement and computational cost of using neural networks.

Quantizing a deep learning model uses techniques that allow reduced-precision representations of weights and, optionally, activations for both storage and computation.

TFLite provides several levels of support for quantization:

  1. Post-training quantization
  2. Quantization aware training.

Below is a table that shows the benefits of model quantization for some CNN models. 

Benefits of model quantization for select CNN models. tensorflow.org

Post-training quantization:

As the name implies, this is a post-training technique, applied after your model has been trained. Post-training quantization quantizes weights and activation types. It can reduce the model size and also improve CPU and hardware-accelerator latency. There are different optimization options, such as weight quantization and full integer quantization, and we can choose based on our requirements.

tensorflow.org provides a decision tree that can help us make this decision.

tensorflow.org

Weight Quantization:

The simplest form of post-training quantization quantizes only the weights from floating point to 8-bit precision. This option is available in the TFLite converter. At inference time, weights are converted from 8-bit precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency. If you want to improve latency further, use hybrid operators.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()

At the time of conversion, set the optimizations flag to optimize for model size.

This optimization provides latencies close to fully fixed-point inference, but the outputs are still stored using floating point.

Full integer quantization:

We can get further latency improvements, reductions in peak memory usage, and access to integer-only hardware accelerators by making sure all model math is quantized. For full integer quantization, you need to measure the dynamic range of activations and inputs by supplying a representative dataset, created using an input data generator.

import tensorflow as tf

def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    yield [input]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()

The result of full integer quantization should be fully quantized; any ops that do not have a quantized implementation are left in floating point. Full integer-only execution gives a model with even lower latency, a smaller size, and compatibility with integer-only accelerators.

You can enforce full integer quantization for all ops and use integer input and output by adding the following lines before you convert.

The converter throws an error if it encounters an operation it cannot currently quantize.

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

Float 16 Quantization example:

float16 is the IEEE standard for 16-bit floating-point numbers. We can reduce the size of a floating-point model by quantizing the weights to float16. This technique reduces the model size by half with minimal loss of accuracy compared to other techniques. A model quantized this way will "dequantize" the weight values to float32 when running on the CPU.

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.lite.constants.FLOAT16]
tflite_quant_model = converter.convert()

We have now seen the different post-training quantization techniques. Float16 quantization may not be a good choice if you need maximum performance; full integer quantization to fixed-point math would be better in that case. Weight quantization is the most basic form. Since weights are quantized post-training, there can be an accuracy loss, particularly for smaller networks.

Tensorflow Lite model accuracy

Quantization aware Training:

There can be an accuracy loss with post-training quantization; to avoid this, if you don't want to compromise model accuracy, do quantization-aware training. As we have learned, post-training quantization happens after the model has been trained; quantization-aware training overcomes its drawbacks. This technique ensures that the forward pass matches the precision for both training and inference. In TensorFlow's flow for this technique, while constructing the graph you insert fake quantization nodes in each layer to simulate the effect of quantization in the forward and backward passes and to learn the ranges for each layer separately during training.

There are two aspects of this technique

  • Operator fusion at inference time is accurately modeled at training time.
  • Quantization effects at inference are modeled at training time.
tf.quantization.quantize(
    input,
    min_range,
    max_range,
    T,
    mode='MIN_COMBINED',
    round_mode='HALF_AWAY_FROM_ZERO',
    name=None)

out[i] = (in[i] - min_range) * range(T) / (max_range - min_range)
if T == qint8: out[i] -= (range(T) + 1) / 2.0

num_discrete_values = 1 << (# of bits in T)
range_adjust = num_discrete_values / (num_discrete_values - 1)
range = (range_max - range_min) * range_adjust
range_scale = num_discrete_values / range
quantized = round(input * range_scale) - round(range_min * range_scale) + numeric_limits<T>::min()
quantized = max(quantized, numeric_limits<T>::min())
quantized = min(quantized, numeric_limits<T>::max())

Check the complete example here:

References:-

https://www.tensorflow.org/lite/convert/quantization

https://github.com/tensorflow/tensorflow/tree/r1.13/tensorflow/contrib/quantize

Featured

Tensorflow Lite Model Deployment!

Here you go: Introduction Story of Tensorflow Lite

In the article above, we introduced TensorFlow Lite: what it is, what its purpose is, and what it is not.

In this article, we will dig deeper into the steps involved in TensorFlow Lite model deployment.

The diagram above shows the deployment flow of a TensorFlow Lite model on edge devices.

Let us go through the steps from the top of the diagram.

At a very high level, the diagram breaks down into two pieces of functionality: the first step is the converter, and the second is the interpreter, which runs inference with the model.

  1. Train Model:- 

Train your model using TensorFlow. We can train our model using any high-level TensorFlow API such as Keras, use a low-level API, or start from a legacy TensorFlow model. You can develop your own model or use a TensorFlow built-in model.

If you have a model from another framework, you can convert it to TensorFlow using ONNX and use that. Once the model is ready, you have to save it. We can save our model in different formats depending on the API, such as HDF5, SavedModel, or FrozenGraphDef.

2. Convert Model:- 

In this step, we are actually using the Tensorflow Lite converter to convert the TensorFlow model into the TensorFlow lite flatbuffer format.

FlatBuffers is a data serialization format optimized for performance; the TensorFlow Lite FlatBuffer is also known as a TF Lite model. The TensorFlow Lite converter takes a TensorFlow model and generates a TensorFlow Lite FlatBuffer file (.tflite). The converter supports SavedModel directories, tf.keras models, and concrete functions. After this step our TFLite model is ready.

You can convert a model using the Python API or the command-line tool. The CLI supports only very basic models.

Python API example:- 

# export_dir is the path where your TF model is saved
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

CLI example 

bazel run //tensorflow/lite/python:tflite_convert -- \
  --saved_model_dir=/tmp/mobilenet_saved_model \
  --output_file=/tmp/mobilenet.tflite

3. Deploy Model:-

Now our model is ready and we have the ‘.tflite’ file. We can deploy it to IoT devices, embedded devices, or mobile devices.

4. Run the inference with the model:-

To perform inference with a TensorFlow Lite model, you must run it through an interpreter. A TensorFlow Lite model is served on a device using the interpreter, which provides a wide range of interfaces and supports a wide range of devices. The TensorFlow Lite interpreter is designed to be lean and fast. We run models locally on these devices using the interpreter: the model gets loaded onto an embedded, Android, or iOS device, and once deployed we take inferences from it.

Inference generally goes through the steps below.

a. Loading a model:- You must load the .tflite model file into memory.

b. Transforming data:- Raw input data generally does not match the input format expected by the model. You need to transform the data.

c. Running inference:- Execute inference over transformed data.

d. Interpreting output:- When you receive results from the model inference, you must interpret the tensors in a meaningful way that’s useful in your application.
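
As a rough sketch of these four steps with the Python API, assuming a model.tflite file and a single input tensor (shapes and dtypes will differ for your model):

import numpy as np
import tensorflow as tf

# a. load the model into memory
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# b. transform raw data into the expected shape and dtype
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# c. run inference
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

# d. interpret the output tensor in an application-specific way
output_data = interpreter.get_tensor(output_details[0]["index"])
print(output_data)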

PreTrained Models

https://www.tensorflow.org/lite/models

https://www.tensorflow.org/lite/guide/roadmap

Some Examples

https://www.tensorflow.org/lite/examples

Featured

Tensorflow Lite- machine learning at the edge!!

Tensorflow created a buzz in AI and deep learning forum and TensorFlow is the most popular framework in the deep learning community. 

tensorflow.org

Introduction:- 

As we know, training deep learning models needs compute power, and this is the age of computation. We are now moving towards edge computing alongside cloud computing. Edge computing is the need of today's world because of innovation in the IoT domain, and because compliance and data protection laws push companies to do computation on the edge side; running the model in the cloud and sending the result back to a client device is becoming the legacy approach.

TensorFlow, the most popular deep learning framework, comes with a lightweight version for edge computation. Nowadays mobile devices have good processing power, but edge devices have less.

Train deep learning model in less than 100KB.

The official definition of Tensorflow Lite:

“TensorFlow Lite is an open-source deep learning framework for on-device inference.”

Deploy machine learning models on mobile and IoT devices.

TensorFlow Lite is a package of tools to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and a small binary size.

Tensorflow Lite is providing machine learning at the edge devices.

Edge computing means compute at local.

Deep Dive:-

This diagram illustrates the standard flow for deploying the model using TensorFlow Lite.

Deploying model using TensorFlow Lite at the edge devices

TensorFlow Lite is not a separate deep learning framework; it provides a set of tools that help developers run TensorFlow models (or other deep learning models) on mobile, embedded, and IoT devices.

Steps:-

  1. Choose a model or develop your own model.
  2. Convert the Model
  3. Deploy the Model
  4. Run the inference with the Model
  5. Optimize the Model and repeat the above steps.

Tensorflow Lite consists of two main components

  1. Converter:- Tensorflow Lite Converter converts the TensorFlow model into the TensorFlow lite model.
  2. Interpreter:- It supports a set of core operators that are optimized for on-device applications and has a small binary size. It is basically for running inference with the model.

Why Edge Computing?

Edge computing is at its best when used alongside cloud computing. Cloud computing is hugely popular nowadays, but there are certain requirements where edge computation beats it. Why is edge computation important, and what advantages will you derive from it?

  1. Privacy:- No data needs to leave the device; everything stays local.
  2. Latency:- There’s no back and forth request to a server.
  3. Connectivity:- An Internet connection is not required.
  4. Power Consumption:- Connecting to a network requires power.

TensorFlow Lite is the one-stop solution to convert your deep learning model, deploy it efficiently, and enjoy inferencing. TensorFlow Lite supports both mobile devices and microcontrollers.

Featured

Installation of Apache Kafka on Mac(OSx).

Installation and environment setup for Kafka on mac os.

To install Apache Kafka on Mac or any system, the only prerequisite is Java (Java 8). First, we will look at the installation steps for Java and then we will proceed to set up Apache Kafka.

Install Java:-

Open the Java SE Downloads page on www.oracle.com in a browser.

  1. Click on JDK, check the “Accept License Agreement” and download .dmg file for installation on Mac.
  2. Install the JDK on your system.
  3. Setup the Java path in your bash_profile.

You may verify the Java installation, by running the following command over a terminal.

java -version

This will show you the Java version (the required version is Java 8).

Install Apache Kafka using binary:-

  1. Download the latest Apache Kafka from https://kafka.apache.org/downloads under Binary downloads.

2. Click on any of the binary downloads, or choose a specific Scala version if you have a dependency on Scala in your development.

3. Go with the recommended mirror site and download.

4. Extract the downloaded file. Navigate to root of Apache Kafka folder and open a Terminal.

Now Kafka is ready to start.

Install Apache Kafka using Brew:-

We will use Homebrew with Cask; I am assuming you don't have Cask installed. Let us execute the below commands.

https://gist.github.com/maheshwarLigade/f626eb8b280017e92c9f6d449da6e400

$ brew tap caskroom/cask

# install JDK 8 if you have already please skip this step

$ brew cask install java8

# now install Kafka along with zookeeper service

$ brew install kafka

There are other ways, but these are the two approaches most people follow.

Up & Running:-

Let us start ZooKeeper and Kafka locally. To start Kafka, you must first start ZooKeeper. If you go through the extracted Kafka contents, the “bin” folder has all the binaries and the “config” folder has all the configuration files to start ZooKeeper and Kafka. To start with, we need server.properties and zookeeper.properties.

By default, both Kafka and ZooKeeper write their data under /tmp.

You can change this default path to something different if you want, from zookeeper.properties (and server.properties for Kafka).

start zookeeper

$ zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties

Once ZooKeeper is up and running, go and start the Kafka server.

$ kafka-server-start /usr/local/etc/kafka/server.properties

The above configuration file paths may change based on your installation.
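
As a quick smoke test, assuming the Homebrew command wrappers are on your PATH (with the binary download, use the corresponding .sh scripts in the bin folder) and using a throwaway topic name of "test", you could create a topic, produce a message, and consume it:

$ kafka-topics --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test

$ kafka-console-producer --broker-list localhost:9092 --topic test

$ kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning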

Featured

Introduction of AWS DynamoDB!!

DynamoDB is a NoSQL document database service that is fully managed. Unlike traditional databases, NoSQL databases are schema-less. Schema-less simply means that the database doesn’t contain a fixed (or rigid) data structure.

Advantages:- 

  • DynamoDB is found under the Database section on the AWS Management Console.
  • DynamoDB can handle more than 10 trillion requests per day.
  • DynamoDB is serverless as there are no servers to provision, patch, or manage.
  • DynamoDB supports key-value and document data models.
  • DynamoDB synchronously replicates data across three Availability Zones in an AWS Region.
  • DynamoDB supports GET/PUT operations using a primary key.

Fast and flexible NoSQL database service for any scale

Features:- 

  1. Performance at scale
  2. High Availability and Durability
  3. No servers to manage
  4. Enterprise-ready

DynamoDB is a NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.

Applications best suit:- 

  1. Serverless Web Apps
  2. Mobile app backend
  3. Microservices

Let us get our hands dirty

  • Create a table
  • Add data to a table
  • Query data in a table
  • Cleanup or delete table

Go to Services and select DynamoDB service 

→ On the DynamoDB Console, click “Create table”.

→Enter UsersInfo as the Table name. 

→Enter Name for the Partition key and ensure String is selected.

→For now, keep everything default and click on Create button.

Now the table is ready, Add data into it

→Click on “Create Item”.

 →Enter the value, e.g. your good name.

 → Then click on the + button, select “Insert”, select the datatype as String, provide the field name “EmailId”, enter a value, and click on the “Save” button.

Now data is ready you can add more records for your testimonials.

Query record in a table:-

 → Select the “Query” from drop-down

→ Where it says “Enter value”, in the row next to the Name partition key, type the name you saved, i.e. Mahesh in this case; it may be different for you.

 →Click on “Start search” button.

→You should see your search results

Once everything is done, clean up: delete the resources.

 → Click on the Delete table button.

 → Ensure Delete all CloudWatch alarms for this table is selected and click Delete.

If you want to create a backup check “Create a backup before deleting this table”.
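
If you prefer the command line, roughly equivalent AWS CLI calls might look like the sketch below (assuming the AWS CLI is configured for your account; the table and attribute names follow the walkthrough above, and the item values are just examples):

$ aws dynamodb create-table --table-name UsersInfo \
    --attribute-definitions AttributeName=Name,AttributeType=S \
    --key-schema AttributeName=Name,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

$ aws dynamodb put-item --table-name UsersInfo \
    --item '{"Name": {"S": "Mahesh"}, "EmailId": {"S": "mahesh@example.com"}}'

# "Name" is a DynamoDB reserved word, so the query uses an expression attribute name
$ aws dynamodb query --table-name UsersInfo \
    --key-condition-expression "#n = :n" \
    --expression-attribute-names '{"#n": "Name"}' \
    --expression-attribute-values '{":n": {"S": "Mahesh"}}'

$ aws dynamodb delete-table --table-name UsersInfo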

Featured

FireStore now supports IN Queries & array-contains!!

Firestore with new in and array-contains-any queries.

This is good news for Cloud Firestore developers; a pain point they were facing was being unable to use the ‘in’ operator.

Cloud Firestore is a NoSQL database built for global apps.

Cloud Firestore is a NoSQL document database that lets you easily store, sync, and query data for your mobile and web apps — at a global scale. Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud Platform.

The importance of ‘in’ is like a pinch of salt in day to day life.

In Query:

With the in query, you can query a specific field for multiple values (up to 10) in a single query. You do this by passing a list containing all the values you want to search in, and Cloud Firestore will match any document whose field equals one of those values.

in queries are the best way to run simple OR queries in Cloud Firestore. For instance, if the database for your E-commerce app had a customer_orders collection and you wanted to find which orders had a “Ready to ship”, “Out for delivery” or “Completed” status, this is now something you can do with a single query, like so:

collection("orders")
  .where(
    "status",
    "in",
    ["Ready to ship", "Out For Delivery", "Completed"]
  )

one more example:-

citiesRef.where('country', 'in', ['India', 'Japan','CostaRica']);

array-contains-any query:

Firestore launched another feature similar to the in query, the array-contains-any query. This feature allows you to perform array-contains queries against multiple values at the same time.

Say your e-commerce site has plenty of products, each with an array of the categories it belongs to, and you want to fire a query to fetch products in a category such as “Appliances” or “Electronics”:

collection("products").where(
  "category",
  "array-contains-any",
  ["Appliances", "Electronics"]
)

one more example: 

citiesRef.where('regions', 'array-contains-any',
['west_coast', 'east_coast']);

Note:- These queries are also supported in the Firebase console, which gives you the ability to try them out on your dataset before you start modifying your client code.

Remember:

  • As we mentioned earlier, you’re currently limited to a maximum of 10 different values in your queries.
  • You can have only one of these types of operations in a single query. You can combine these with most other query operations, however.
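
For example, combining one of these operators with an equality filter is allowed (a sketch against the same hypothetical citiesRef collection used above; such a combination may require a composite index), while using two in / array-contains-any clauses in one query is not:

citiesRef
  .where('country', '==', 'USA')
  .where('regions', 'array-contains-any', ['west_coast', 'east_coast']);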

References:-

https://firebase.googleblog.com/2019/11/cloud-firestore-now-supports-in-queries.html

https://firebase.google.com/docs/firestore/query-data/queries

Featured

Compare Strings in kotlin

String comparison in Kotlin; equality of strings in Kotlin. In this tutorial, we will discuss how to compare strings in Kotlin.

1. Using “==” Operator:-

As we are aware, every programming language has an equal-to operator (==) to compare two things. Kotlin also allows == as a comparison operator. Let’s start with the “==” operator. This operator can be used to check if the strings are structurally equal. It’s the equivalent of using the equals method in Java. According to the documentation of Equality in Kotlin, the == operator is used for Structural Equality.

str1==str2  is implicitly translated to str1?.equals(str2) ?: (str2 === null)  by Kotlin language.

// main method to compare two strings using the == operator.
fun main(args: Array<String>) {
    var a: String = "kotlin is very easy"
    var b: String = "kotlin is very" + " easy"
    if(a==b){
        println("Strings '$a' and '$b' are equal.")
    } else {
        println("Strings '$a' and '$b' are not equal.")
    }
    // change the content
    b = "Kotlin runs on JVM:)"
    if(a==b){
        println("Strings '$a' and '$b' are equal.")
    } else {
        println("Strings '$a' and '$b' are not equal.")
    }
}

In the above example, we have defined the strings using string literals (“kotlin is very easy”), so the content of both variables is the same; that’s why they are equal.

Kotlin also has a referential equality operator, ===, which checks whether two variables point to the same object; that is the equivalent of using == in Java.

When we initialize string values from the same string literal, they point to the same object. However, if we build a string separately, the variable will point to a separate object, although == still reports them as equal because the contents match. Let us take one more example.

fun main(args: Array<String>) {
    val a: String = "kotlin"
    val b: String = "kotlin"
    val c = buildString { append("kotlin") }
    if(a==b){
        println("Strings '$a' and '$b' are equal.")
    } else {
        println("Strings '$a' and '$b' are not equal.")
    }
   
    if(a==c){
        println("Strings '$a' and '$c' are equal.")
    } else {
        println("Strings '$a' and '$c' are not equal.")
    }
}

2. Using equals():-

In this section, we will explore the equals() method to compare two strings in Kotlin. This method compares strings in a case-sensitive manner: “kotlin” and “KoTlin” are different.

val a = "JVM"
val b = "jvm"

a.equals(b) // this will return false.

The equals method returns the same result as the “ == ” operator.

If you want a case-insensitive comparison, just pass true as the second argument.

For case-insensitive string comparison in Kotlin, we use the equals method and pass true as the second argument.

a.equals(b, true) // this will return true.

3. Using compareTo:-

Kotlin provides a compareTo() extension function on String to compare strings. Like the equals method, compareTo can be used to compare the order of two strings.

Syntax:-

fun String.compareTo(     
 other: String,      
 ignoreCase: Boolean = false 
 ): Int

The compareTo method also comes with an optional ignoreCase argument, like the equals method, but compareTo returns an int value, not a boolean.

Return value         Description
0                    The two strings are equal.
negative integer     The string is less than the other string.
positive integer     The string is greater than the other string.

a.compareTo(b, true) // case-insensitive
a.compareTo(b)       // case-sensitive
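
A small runnable sketch tying this together (the values are chosen only for illustration):

fun main() {
    val a = "JVM"
    val b = "jvm"

    // 0 means equal, a negative value means a < b, a positive value means a > b
    println(a.compareTo(b))        // case-sensitive: non-zero because 'J' != 'j'
    println(a.compareTo(b, true))  // case-insensitive: 0, the strings are considered equal
}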

4. Conclusion:-

In this tutorial, we discussed comparing strings in Kotlin using the equality operator “==”, the equals() method, and compareTo(). Comparing strings in Kotlin is easy and straightforward.

Featured

Kubernetes 5 Free Learning resources.

If you don’t know docker read here.

Kubernetes(k8) is an orchestration platform to manage containers.

Kubernetes is the buzzword in the market because of the boom of containerization and microservices. We can have a microservices architecture, with its pros and cons, and plenty of containers, but the question is how to manage those containers, and the answer is K8s.

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.

The definition is from the k8 official website. 

In this article, we will list 5 free resources which will help you learn k8.

Kubernetes is a container orchestration software

  1. Learn K8 basics:- Learn Kubernetes Basics is the official documentation by the developers of Kubernetes. This tutorial is a step-by-step guide to help you understand the basics of the k8 cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts and includes an interactive online tutorial. I think this is the source of truth to start learning k8.
  2. Learning Path kubernetes:- This course takes us from Kubernetes basics to advanced networking and workloads. This tutorial series is by IBM and is a very good resource to deep dive into k8. If you’re new to Kubernetes and container orchestration and want to begin learning about it, this learning path covers everything from basic prerequisites to more advanced skills needed for containerization. It will give you a brief idea about containers all the way up to advanced k8 concepts. After completion of this course, you will be able to understand the basics of containers, build containerized applications and deploy them onto Kubernetes, understand the advantages of a deployment that uses Helm with Kubernetes, deploy various microservices with Kubernetes, understand basic networking for applications that are running in Kubernetes, and much more.
  3. A Tutorial Introduction to Kubernetes:- A Tutorial Introduction to Kubernetes is provided by Ulaş Türkmen on his blog. In this tutorial series, you will learn how to use Kubernetes using Minikube, how to configure kubectl, understanding nodes and namespaces, how to use the dashboard, deploying various container images in order to demonstrate Kubernetes feature, running service, etc.
  4. Coursera K8:- Architecting with Google Kubernetes Engine Specialization is the course name. This course is designed and developed by Google. In this k8 course you will learn how to implement solutions using Google Kubernetes Engine, or GKE, including building, scheduling, load balancing, and monitoring workloads, as well as providing for the discovery of services, managing role-based access control and security, and providing persistent storage to these applications.
  5. Fundamentals of Containers, Kubernetes, and Red Hat OpenShift:- This course will provide you with an introduction to container and container orchestration technology using Docker, Kubernetes, and Red Hat OpenShift Container Platform. You will learn how to containerize applications and services, test them using Docker, and deploy them on a Kubernetes cluster using Red Hat OpenShift. Additionally, you will build and deploy an application from source code using the Source-to-Image facility of Red Hat OpenShift. After the completion of this course, you will be able to create containerized services, manage containers and container images, create custom container images and deploy containerized applications on Red Hat OpenShift.

Kubernetes — Explained Like You’re Five

Summary:- Modern applications are increasingly built using containers, microservices packaged with their dependencies and configurations. K8s is open-source orchestration software for deploying and managing those containers at scale. With K8s, you can build, deploy, deliver and scale containerized apps faster and smoother.

This is just a list of sources from which we can start our journey and get our hands dirty. There is a more complete list of concepts available on the Kubernetes website. I suggest you give them a quick look if you want to get a better grasp of who does what and what goes where.

This is it, for now, please let me know if you know more resources which are very simple and effective.

Featured

Colab getting started!!

Train deep neural network free using google colaboratory.

GPU and TPU compute for free? Are you kidding?

Google Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.

With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. If you don’t have money to procure a GPU and want to train a neural network or get your hands dirty with zero investment, then this is for you. Colab started as a Google internal research tool for data science.

You can use GPU as a backend for free for 12 hours at a time.

It supports Python 2.7 and 3.6, but not R or Scala yet.

Many people want to train a machine learning or deep learning model, but playing with this requires GPU computation and huge resources, which blocks many people from trying these things out and getting their hands dirty.

Google Colab is nothing but a cloud-hosted Jupyter notebook.

Colaboratory is a free Jupyter notebook environment provided by Google where you can use free GPUs and TPUs, which solves all these issues. The best thing about Colab is the TPUs (Tensor Processing Units), special hardware designed by Google to process tensors.

Let’s Start:- 

 To start with this you should know jupyter notebook and should have a google account. 

http://colab.research.google.com/

Click on the above link to access Google Colaboratory. This is not just a static page but an interactive environment that lets you write and execute code in Python and other languages. You can create a new Jupyter notebook via File → New Python 3 notebook (or New Python 2 notebook).

We will create one Python 3 notebook; Colab creates it for us and saves it on Google Drive.

Colab is an ideal way to start everything from improving your Python coding skills to working with deep learning frameworks like PyTorch, Keras, and TensorFlow, and you can install any Python package required for your Python coding, from simple scikit-learn and NumPy to TensorFlow.

You can create notebooks in Colab, upload existing notebooks, store notebooks, share notebooks with anyone, mount your Google Drive and use whatever you’ve got stored in there, import most of your directories, upload notebooks directly from GitHub, upload Kaggle files, download your notebooks, and do whatever you’re doing with your local Jupyter notebook.

On the top right you can choose to connect to hosted runtime or connect to local runtime

Set up GPU or TPU:-

It’s as simple and straightforward as going to the “Runtime” dropdown menu, selecting “Change runtime type” and selecting GPU/TPU in the hardware accelerator drop-down menu!

Now you can start coding and start executing your code !!
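
To quickly verify that the accelerator is visible, a minimal check (assuming a TensorFlow runtime; the exact device string may differ) is:

import tensorflow as tf

# prints something like '/device:GPU:0' when a GPU runtime is attached, or an empty string otherwise
print(tf.test.gpu_device_name())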

How to install a framework or libraries?

It’s as simple as writing import statement in python!.

!pip install fastai

use normal pip install command to install different packages like TensorFlow or PyTorch and start playing with it.

For more details and information

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=gJr_9dXGpJ05

https://colab.research.google.com/notebooks/welcome.ipynb#scrollTo=-Rh3-Vt9Nev9

Micronaut Kafka Consumer Producer example.

Micronaut Kafka consumer and producer example.

Micronaut is a java framework and it’s been popular to develop microservice-based applications because of lower memory footprint and fast startup.

In this article, we will see how to write down simple Kafka consumers and producers using the micronaut framework. 

You can read my articles on micronaut framework on https://www.techwasti.com/

Start generating project using https://micronaut.io/launch/

You can create a project either using the launch site or using CLI tool.

$ mn create-app techwastikafkaexample --features kafka

Micronaut version for this demo is 2.0.0 and Java 8.

For generating a project with the Kafka profile, the CLI provides a powerful option such as:

$ mn create-app techwasti-kafka-service --profile kafka

Prerequisites:– 

  1. Java programming
  2. Kafka
  3. Micronaut

I am assuming you know about these; if you don’t, then learn them first.

Micronaut features dedicated support for defining both Kafka Producer and Consumer instances.

Kafka Producer:-

We will create one simple Producer using annotation.

package com.techwasti.kafkaex;

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.Topic;

@KafkaClient
public interface GreetMessageClient {

    // greet is a kafka topic
    @Topic("greet")
    void sendGreetMessage(@KafkaKey String day, String message);

    void sendGreetMessage(@Topic String topic, @KafkaKey String day, String message);
}

The @KafkaClient annotation is used to mark this as a Kafka client.

The @Topic annotation indicates on which topic the message should get published.

The @KafkaKey annotation marks the message key.

In the above code snippet we have defined two methods:

  1. The first method accepts two arguments, key and value, and the topic name is given via the @Topic annotation.
  2. In the second method, instead of annotating the topic name, we accept the topic name as an argument.

If you omit the @KafkaKey then it’s null.

As we are aware of the beauty of micronaut framework, it will produce an implementation of the above client interface. We can retrieve this instance either by looking up the bean from ApplicationContext or by injecting the bean using @Inject. 

GreetMessageClient client = applicationContext.getBean(GreetMessageClient.class);
client.sendGreetMessage("Thursday", "Good morning");
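
Alternatively, a minimal sketch of injecting the client into a bean and sending a message on startup (the GreetMessageSender class and the startup listener are illustrative additions, not part of the original example):

package com.techwasti.kafkaex;

import io.micronaut.runtime.event.annotation.EventListener;
import io.micronaut.runtime.server.event.ServerStartupEvent;

import javax.inject.Inject;
import javax.inject.Singleton;

@Singleton
public class GreetMessageSender {

    @Inject
    GreetMessageClient client;

    // send a greeting once the server has started
    @EventListener
    public void onStartup(ServerStartupEvent event) {
        client.sendGreetMessage("Thursday", "Good morning");
    }
}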

Now our producer is ready and we sent a successful message as well.

Let us create Kafka Consumer.

Kafka Consumer:-

As we have seen a couple of annotations to create a producer and produce a message over a topic. The same way we have @KafkaListener annotation to create a kafka consumer.

package com.techwasti.kafkaex;

import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.OffsetReset;
import io.micronaut.configuration.kafka.annotation.Topic;

@KafkaListener(offsetReset = OffsetReset.EARLIEST)
public class GreetMessageConsumer {

    @Topic("greet")
    public void receive(@KafkaKey String day, String message) {
        System.out.println("Got Message for the  - " + day + " and Message is  " + message);
    }
}

@KafkaListener is used to indicate that this is a Kafka consumer. It reads messages from the topic “greet”, and because offsetReset is EARLIEST it will start reading messages from the beginning.

The receive method has two arguments: one is the key and the other is the message.

This is a simple example of having kafka consumers and producers. 

Advanced Options for producer and consumer:

@Header: To add a header to kafka message. 

We may want to add a header when we produce a message; let us say we want to add an authentication token while publishing a message over Kafka. In this case:

e.g.

@Header(name = "JWT-Token", value = "${my.authentication.token}")

Also, you can pass the header as a method argument the same as a topic name. 

@Body: to explicitly indicate the message body.

Generally, the value sent by the producer is resolved using the @Body annotation, but if we haven’t specified it, then the first non-annotated argument is resolved as the message body.

e.g 

@Topic("greet")
void sendGreetMessage(@KafkaKey String day, String message);

or

@Topic("greet")
void sendGreetMessage(@KafkaKey String day, @Body String message);

For more such things please visit micronaut documentation.

Reference:

https://micronaut-projects.github.io/micronaut-kafka/latest/guide/

Spring Boot Neo4j Reactive CRUD.

This article is about the spring data for neo4j database. Neo4j is a popular graph database.

neo4j.com

There is an existing Spring Data Neo4j module, which supports only the imperative style and is currently in maintenance mode.

Prerequisites:-

If you have reached this article, that means you have at least heard about Neo4j and Spring Boot. Below are the prerequisites:

  1. Neo4j (https://neo4j.com/graphacademy/online-training/introduction-to-neo4j-40/)
  2. Installation of Neo4j on local or use Neo4j sandbox.
  3. Knowledge with spring data and spring boot.
  4. For this example, we are using JDK 11.

If you don’t know anything about the above things then I will recommend you should start exploring these things and come back.

In this example, I am using Neo4j sandbox environment: https://neo4j.com/sandbox/

Advantages of using SDN-Rx:

  1. It supports both imperative and reactive development.
  2. Built-in OGM(Object graph mapping) and very lightweight.
  3. Support immutable entities for both Java and kotlin.

Maven/Gradle Dependencies:-

Right now the Spring Data Neo4j Reactive starter is not yet part of the official Spring repositories, so we have to add it manually; it won’t be available on the Spring Initializr website.

## maven dependency
<dependency>
    <groupId>org.neo4j.springframework.data</groupId>
    <artifactId>spring-data-neo4j-rx-spring-boot-starter</artifactId>
    <version>1.1.1</version>
</dependency>
## gradle 
dependencies {
    compile 'org.neo4j.springframework.data:spring-data-neo4j-rx-spring-boot-starter:1.1.1'
}

Prepare Database:-

For this article, we are using the Neo4j standard movie graph database because it is small and it is available in your sandbox as well as locally.

use this command to start:

:play movies

Execute the command; the guide deck is interactive, so execution is seamless. The movie database contains data such as movie name, release date, crew, director, and ratings given by different individuals or rating companies. The minimal schema relation could look like this:

(:Person {name})-[:ACTED_IN {roles}]->(:Movie {title,released})
movie DB schema

Create Project:

The best way to start with the spring boot project is start.spring.io. Create a spring boot project.

Do not choose Spring Data Neo4j here, as it will show the legacy generation of Spring Data Neo4j that has only imperative support.

Once your project is ready then add the spring data neo4j Rx dependency in your POM or build.gradle.

Configurations:

You can put here your database-specific configurations.

org.neo4j.driver.uri=neo4j://localhost:7687
org.neo4j.driver.authentication.username=neo4j
org.neo4j.driver.authentication.password=password
spring.data.neo4j.repositories.type=reactive

Domain Entity:

All our configurations are done now let us begin and define the domain entity object. As we stated we are using a movie database so we have to create Movie as a domain entity with few properties.

Entities are nodes.

package com.techwasti.entity;

import org.neo4j.springframework.data.core.schema.Id;
import org.neo4j.springframework.data.core.schema.Node;
import org.neo4j.springframework.data.core.schema.Property;
import org.neo4j.springframework.data.core.schema.Relationship;

import java.util.HashSet;
import java.util.Set;

import static org.neo4j.springframework.data.core.schema.Relationship.Direction.INCOMING;

@Node("Movie")
public class Movie {

    @Id
    private final String mtitle;

    @Property("tagline")
    private final String tagline;

    @Relationship(type = "ACTED_IN", direction = INCOMING)
    private Set<Person> actors = new HashSet<>();

    @Relationship(type = "DIRECTED", direction = INCOMING)
    private Set<Person> directors = new HashSet<>();

    public Movie(String title, String tagline) {
        this.mtitle = title;
        this.tagline = tagline;
    }

    public String getTitle() {
        return mtitle;
    }

    public String getTagline() {
        return tagline;
    }
    
    public Set<Person> getActors() {
        return actors;
    }

    public void setActors(Set<Person> actors) {
        this.actors = actors;
    }

    public Set<Person> getDirectors() {
        return directors;
    }

    public void setDirectors(Set<Person> directors) {
        this.directors = directors;
    }
}

In the movie entity, we defined a movie name, tagline, actors, and directors.

@Node annotation marks the given class is the managed node. @Id annotation to have a unique property and then we defined different relationships using @Relationship annotation. In the same way, we have a Person entity that contains two fields.

package com.techwasti.entity;

import org.neo4j.springframework.data.core.schema.Id;
import org.neo4j.springframework.data.core.schema.Node;

@Node("Person")
public class Person {

    @Id
    private final String name;

    private final Integer born;

    public Person(Integer born, String name) {
        this.born = born;
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public Integer getBorn() {
        return born;
    }
}

In these entities, we just defined one-way relation to have demonstrated things simple but you can also define an entity in such a way to fulfill two-way relationships.

Let us create a repository class then.

package com.techwasti.dao;

import com.techwasti.entity.Movie;
import reactor.core.publisher.Mono;
import org.neo4j.springframework.data.repository.ReactiveNeo4jRepository;

public interface MovieRepository extends ReactiveNeo4jRepository<Movie, String> {
    Mono<Movie> findOneByTitle(String title);
}

This is to demonstrate the reactive programming style so we used here ReactiveNeo4jRepository which is reactive repository implementation.
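
To try the endpoints listed below, a minimal WebFlux controller sketch backed by this repository (the class name and mappings are my own, and it assumes the spring-boot-starter-webflux dependency is on the classpath) could look like this:

package com.techwasti.web;

import com.techwasti.dao.MovieRepository;
import com.techwasti.entity.Movie;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
public class MovieController {

    private final MovieRepository movieRepository;

    public MovieController(MovieRepository movieRepository) {
        this.movieRepository = movieRepository;
    }

    // GET /movies returns all movies as a reactive stream
    @GetMapping("/movies")
    public Flux<Movie> getMovies() {
        return movieRepository.findAll();
    }

    // DELETE /movies/{title} deletes one movie by its id (the title)
    @DeleteMapping("/movies/{title}")
    public Mono<Void> deleteMovie(@PathVariable String title) {
        return movieRepository.deleteById(title);
    }
}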

You can hit below endpoints to see the output:

GET http://localhost:8080/movies

DELETE http://localhost:8080/movies/The Matrix

This is it for now.

References:-

https://neo4j.com/developer/spring-data-neo4j-rx/
https://neo4j.com/developer/spring-data-neo4j/

https://spring.io/guides/gs/accessing-data-neo4j/

Spring boot Cloud Native Buildpacks and Layered jars.

In May 2020, Spring Boot 2.3 was released with some interesting features. There are many, but in this article we will talk about support for building OCI images using Cloud Native Buildpacks.

Cloud-Native BuildPacks?

These days cloud migration and in that cloud-native application development is becoming a trend.

Cloud-Native Buildpacks
transform your application source code to images that can run on any cloud.

The Cloud Native Buildpacks definition from https://buildpacks.io/:

The Cloud Native Buildpacks project was initiated by Pivotal and Heroku in January 2018 and joined the Cloud Native Sandbox in October 2018. The project aims to unify the buildpack ecosystems with a platform-to-buildpack contract that is well-defined and that incorporates learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku.

credit goes to https://buildpacks.io/

Cloud-Native Buildpacks embrace modern container standards, such as the OCI(Open container initiative) image format. They take advantage of the latest capabilities of these standards, such as cross-repository blob mounting and image layer “rebasing” on Docker API v2 registries.

All the above information is from the Buildpacks website. Buildpacks in fewer words: they transform your beautiful source code into runnable container images.

The Paketo Java buildpack is used by default to create an image.

Prerequisites for this example:

  1. Java
  2. Docker
  3. Any IDE if you want.

Note:- For this demo, I am using spring-boot:2.3.1 version, JDK-8 and maven.

and Always for spring start with https://start.spring.io/

I have imported the project into the VSCode. If you want to learn about Spring tools for Visual Studio Code, please go through this link: https://www.techwasti.com/spring-tools-4-for-visual-studio-code/

Create One REST Controller:-

As part of this article, our focus is on buildpacks, not on complex coding.

We have a simple controller this will return the current date.

package com.techwasti.spring.buildpackex.springboot23ocibuildpackex;

import java.util.Date;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CurrentDateController{


    @GetMapping("/gettodaysdate")
     public String getTodaysDate(){
        return new Date().toString();
     }
}

Output: Sun Jul 19 09:00:36 IST 2020

Build the image for our app:

As mentioned above and as per the spring boot documentation, we will build an image. Package the source code and build the image as per OCI standard using the maven task.

$ mvn spring-boot:build-image

When you fire this command, everything is taken care of by the Spring Boot build-image task.

In your log you will see similar logs:

Successfully built image ‘docker.io/library/spring-boot23-oci-buildpack-ex:0.0.1-SNAPSHOT’

We can validate our docker image using the below command.

$ docker images| grep spring

Output
spring-boot23-oci-buildpack-ex 0.0.1-SNAPSHOT ddabb93c2218 40 ago 231MB

Now our image is ready, let us run the image and create a container 

$ docker run -d -p 8080:8080 --name springbuildpackex spring-boot23-oci-buildpack-ex:0.0.1-SNAPSHOT

once a container ready verify using 

$ docker ps

Hit REST API endpoint http://localhost:8080/gettodaysdate

You can hit actuator endpoints 

http://localhost:8080/actuator/metrics

http://localhost:8080/actuator/info

Since Spring Boot 2.3.0.RC1, the Paketo Java buildpack is used by default to create images.

You can check your local Docker: when you fire the below command you can see that the Paketo images were downloaded.

$ docker images| grep gcr
gcr.io/paketo-buildpacks/run       base-cnb                c8c8215efa6f        8 days ago          71.1MB
gcr.io/paketo-buildpacks/builder base-platform-api-0.3 e49209451fa6 40 years ago 696MB

Customize build pack configuration:

Now we have seen that by default the name of the image is based on the artifactId and the tag is the version of our Maven project. The image name here is spring-boot23-oci-buildpack-ex:0.0.1-SNAPSHOT.

docker.io/library/${project.artifactId}:${project.version}

In your real-life project, you would like to push the OCI image to a specific Docker image registry, which may be internal to your organization. Here I am using Docker Hub, the public central registry. You can configure parameters such as the name of the Docker image in pom.xml:

<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<image>
<name>docker.io/maheshwarligade/spring-boot23-oci-buildpack-ex</name>
</image>
</configuration>
</plugin>

With these custom tags and names, I can push this image to the docker hub.

docker push docker.io/maheshwarligade/spring-boot23-oci-buildpack-ex

Using the command line as well:

$ mvn spring-boot:build-image -Dspring-boot.build-image.imageName=enterprise.com/library/domain/sspring-boot23-oci-buildpack-ex

We can also configure build packs builder to build the image using below configuration param

<configuration>            
<image>
<builder>cloudfoundry/cnb</builder>
</image>
</configuration>

Proxy Configuration:

If a proxy is configured between the Docker daemon the builder runs in and the network locations that buildpacks download artifacts from, you will need to configure the builder to use the proxy. When using the default builder, this can be accomplished by setting the HTTPS_PROXY and/or HTTP_PROXY environment variables:

<configuration>      
<image>
<env>
<HTTP_PROXY>http://proxy.example.com</HTTP_PROXY>
<HTTPS_PROXY>https://proxy.example.com</HTTPS_PROXY>
</env>
</image>
</configuration>

Layered Jars:

Above, we used a buildpack to create the image, but you might not want to use a buildpack to build an image; perhaps you want to use a Dockerfile-based tool that is already used within your organization to create the application image. Spring wanted to make it easier to create optimized Docker images that can be built with a regular Dockerfile, so Spring has added support for layered jars.

The basic approach is to create a Docker image using the Spring Boot application fat jar: add the jar to the Dockerfile and add a command to execute it.

The jar is organized into three main parts:

  • Classes used to bootstrap jar loading
  • Your application classes in BOOT-INF/classes
  • Dependencies in BOOT-INF/lib

Since this format is unique to Spring Boot, Spring Boot 2.3.0.M1 provides a new layout type called LAYERED_JAR.

As we know, a Docker image is layered, and when we rebuild the image for development purposes it should rebuild only the layers where changes happened instead of rebuilding the fat jar layer again and again. The layered jar type is designed to separate code based on how likely it is to change between application builds. Library code is less likely to change between builds, so it is placed in its own layers to allow tooling to reuse the layers from the cache. Application code is more likely to change between builds, so it is isolated in a separate layer. The default layers are:

  • dependencies (for regularly released dependencies)
  • snapshot-dependencies (for snapshot dependencies)
  • resources (for static resources)
  • application (for application classes and resources)

Configure the layered-jar layout in the build file:

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<layout>LAYERED_JAR</layout>
</configuration>
</plugin>
</plugins>
</build>

Build jar for application

$ mvn clean package

We can inspect the layered jar using jarmode:

$ java -Djarmode=layertools -jar target/spring-boot23-oci-buildpack-ex-0.0.1-SNAPSHOT.jar list

Output

dependencies
snapshot-dependencies
resources
application

Based on this, we can craft a Dockerfile similar to the one below:

FROM adoptopenjdk:11-jre-hotspot as builder
WORKDIR application
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract

FROM adoptopenjdk:11-jre-hotspot
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/resources/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
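
To build and run an image from this Dockerfile (a sketch; the image name is just an example):

$ docker build -t spring-boot23-layered-ex .
$ docker run -p 8080:8080 spring-boot23-layered-ex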

Summary:

Here we have seen multiple ways to create an image of our Spring Boot application. With buildpacks, Dockerfiles, and existing plugins such as Jib, there is no single conclusion about which is the best way. Each approach has pros and cons, and we have to use these tools based on our ease and simplification.

Source Code: https://github.com/maheshwarLigade/spring-boot23-oci-buildpack-ex

Dockerise Micronaut application.

Micronaut is a Java framework to develop cloud-native microservice applications easily and seamlessly. If you don't know about Micronaut, please go through the below two articles.

In this article, we are exploring a micronaut framework and How to dockerize it. 

Let us create a small micronaut REST service application and try to dockerize it.

Micronaut provides a CLI option to create an application easily.

$ mn create-app helloworld

This will scaffold a new Gradle project. If you prefer Maven, add a --build maven parameter. If you want to create a new Groovy or Kotlin project, add a --lang parameter.

$ mn create-app --lang groovy helloworld-groovy
$ mn create-app --lang kotlin helloworld-kotlin

These options depend on you and which language you are comfortable with.

Once the project is ready we can import that in your favorite editor. I am using IntelliJ.

We are using already created Hello world app, source code is available at below location you can clone

https://github.com/maheshwarLigade/micronaut-examples/tree/master/helloworld

By default, the Micronaut CLI creates a Dockerfile for you; you can locate it in the root directory of your project, <appname>/Dockerfile.

e.g. helloworld/Dockerfile

The default content of the Dockerfile:

FROM adoptopenjdk/openjdk13-openj9:jdk-13.0.2_8_openj9-0.18.0-alpine-slim
COPY build/libs/helloworld-*-all.jar helloworld.jar
EXPOSE 8080
CMD ["java", "-Dcom.sun.management.jmxremote", "-Xmx128m", "-XX:+IdleTuningGcOnIdle", "-Xtune:virtualized", "-jar", "helloworld.jar"]

If you are familiar with docker then fine if not you can explore below article to understand docker.

https://www.techwasti.com/demystify-docker-container-technology-9a8e1ec3968b/

Micronaut creates the Dockerfile with an alpine-slim base image, and the JDK image used here is unofficial.

This repo provides Unofficial AdoptOpenJDK Docker Images,

Reference:- https://hub.docker.com/r/adoptopenjdk/openjdk13-openj9

The COPY line copies the generated jar (helloworld.jar) file, EXPOSE declares the default port 8080, and the last line launches the jar file.

For this example, I am using Gradle as a build tool

$ cd helloworld
$ ./gradlew run

To test whether code is working fine or not. (curl http://localhost:8080/hello)

Now build a Docker image from the Dockerfile; for that, fire the below command.

To run the application with IntelliJ IDEA, you need to enable annotation processing:

  1. open Settings → Build → Execution → Deployment → Compiler →Annotation Processors
  2. Set the checkbox Enable annotation processing

As we know micronaut CLI generates a Dockerfile by default, making it easy to package your application for a container environment such as Kubernetes.

$ docker build . -t hello-world-ex

Fire the above command to create a Docker image; -t sets the tag (name) for this image. Now our image is ready; to make a container from it, fire the below command.

$ docker run --rm -p 8080:8080 hello-world-ex

As we have exposed port 8080 in the Dockerfile, we map it to the same port on the host.

to verify the docker image fire below command.

$ curl http://localhost:8080/hello

In this article, we have seen dockerizing micronaut apps. We have created helloworld application and created a docker image using the existing Docker file. You can edit the docker file and optimize it as per your requirement. 

Spring Boot Firebase CRUD

In this article, we show How to build a CRUD application using Firebase and Spring boot.

Create a Firebase project in the Firebase console:

https://console.firebase.google.com/

Hit the https://console.firebase.google.com and sign up for an account.

Click the “Add Project” button from the project overview page.

Type “Firebase DB for Spring Boot” in the “Project name” field.

Click the “CREATE PROJECT” button.

Now we have created a project on Firebase, now let us add firebase to our spring boot app.

Add Firebase to your web app:

You can find your Realtime Database URL in the Database tab (DEVELOP → Database → Realtime Database → Start in test Mode ) in the Firebase console. It will be in the form of https://<databaseName>.firebaseio.com.

Create the Firebase database in test mode. This is not suitable for production development, but for this article we will use test mode, which is publicly accessible.

Your Database URL should look like this https://<Projectname XYZ>.firebaseio.com/

Our data is ready but still, we need a service account 

Go and click on Project settings → Service Accounts → Choose Language as Java. to copy code snippet

and Download JSON file as well by clicking on “Generate new private key”

We will also grab the admin SDK configuration snippet for java.

Then go to https://start.spring.io/ and create a project, Once the project added then open the pom.xml file and add below dependency.

<dependency>
    <groupId>com.google.firebase</groupId>
    <artifactId>firebase-admin</artifactId>
    <version>6.11.0</version>
 </dependency>

Now everything is ready to Let us initialize Firebase Database.

import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;
import java.io.FileInputStream;

@Service
public class FBInitialize {

    @PostConstruct
    public void initialize() {
        try {
            FileInputStream serviceAccount =
                    new FileInputStream("./serviceaccount.json");

            FirebaseOptions options = new FirebaseOptions.Builder()
                    .setCredentials(GoogleCredentials.fromStream(serviceAccount))
                    .setDatabaseUrl("https://chatapp-e6e15.firebaseio.com")
                    .build();

            FirebaseApp.initializeApp(options);
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

I am using the existing Firebase Database.

@Service and @PostConstruct are the two annotations used here (@Service from Spring and @PostConstruct from javax.annotation).

First-line reads the configurations from the JSON file and then initializes the connection for the specified database. 

Now firebase connection is initialized then let us create CRUD operations.

Create a POJO class as a Patient

public class Patient {

    private String name;

    private int age;

    private String city;


    public Patient() {
        // no-arg constructor required by Firestore deserialization (document.toObject)
    }

    public Patient(String name, int age, String city) {
        this.name = name;
        this.age = age;
        this.city = city;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }
}

Create Service class

import com.google.api.core.ApiFuture;
import com.google.cloud.firestore.DocumentReference;
import com.google.cloud.firestore.DocumentSnapshot;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.WriteResult;
import com.google.firebase.cloud.FirestoreClient;
import org.springframework.stereotype.Service;

import java.util.concurrent.ExecutionException;

//CRUD operations
@Service
public class PatientService {

    public static final String COL_NAME="users";

    public String savePatientDetails(Patient patient) throws InterruptedException, ExecutionException {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        ApiFuture<WriteResult> collectionsApiFuture = dbFirestore.collection(COL_NAME).document(patient.getName()).set(patient);
        return collectionsApiFuture.get().getUpdateTime().toString();
    }

    public Patient getPatientDetails(String name) throws InterruptedException, ExecutionException {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        DocumentReference documentReference = dbFirestore.collection(COL_NAME).document(name);
        ApiFuture<DocumentSnapshot> future = documentReference.get();

        DocumentSnapshot document = future.get();

        Patient patient = null;

        if(document.exists()) {
            patient = document.toObject(Patient.class);
            return patient;
        }else {
            return null;
        }
    }

    public String updatePatientDetails(Patient person) throws InterruptedException, ExecutionException {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        ApiFuture<WriteResult> collectionsApiFuture = dbFirestore.collection(COL_NAME).document(person.getName()).set(person);
        return collectionsApiFuture.get().getUpdateTime().toString();
    }

    public String deletePatient(String name) {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        ApiFuture<WriteResult> writeResult = dbFirestore.collection(COL_NAME).document(name).delete();
        return "Document with Patient ID "+name+" has been deleted";
    }

}

Now we are ready with the CRUD operations; let us develop the REST controller which will help us interact with this service layer.

Note:- You have to enable Cloud FireStore API.

Now we just need to create Controller which can handle REST request.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.concurrent.ExecutionException;

@RestController
public class PatientController {

    @Autowired
    PatientService patientService;

    @GetMapping("/getPatientDetails")
    public Patient getPatient(@RequestParam String name ) throws InterruptedException, ExecutionException{
        return patientService.getPatientDetails(name);
    }

    @PostMapping("/createPatient")
    public String createPatient(@RequestBody Patient patient ) throws InterruptedException, ExecutionException {
        return patientService.savePatientDetails(patient);
    }

    @PutMapping("/updatePatient")
    public String updatePatient(@RequestBody Patient patient  ) throws InterruptedException, ExecutionException {
        return patientService.updatePatientDetails(patient);
    }

    @DeleteMapping("/deletePatient")
    public String deletePatient(@RequestParam String name){
        return patientService.deletePatient(name);
    }
}
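
Once the application is running (on the default port 8080), you can exercise the endpoints with curl; the payload values below are just examples:

$ curl -X POST http://localhost:8080/createPatient -H "Content-Type: application/json" -d '{"name":"Mahesh","age":30,"city":"Pune"}'

$ curl "http://localhost:8080/getPatientDetails?name=Mahesh"

$ curl -X PUT http://localhost:8080/updatePatient -H "Content-Type: application/json" -d '{"name":"Mahesh","age":31,"city":"Pune"}'

$ curl -X DELETE "http://localhost:8080/deletePatient?name=Mahesh"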

Now the coding is done; try it yourself and let me know.

Installation of Micronaut on Mac(OSX) & Linux.

Micronaut is a full-stack framework to develop cloud-native, microservice-architecture-based applications using Java, Kotlin, or Groovy.

Let us check the steps required to install micronaut on OSx.

For a simple and effortless start on Mac OSX or Linux, you can use SDKMAN! (The Software Development Kit Manager) to download and configure any Micronaut version of your choice.

INSTALLING WITH SDKMAN:

This tool makes installing Micronaut easy on any Unix-based platform such as Linux or OSx.

Open a terminal and install SDKMAN,

$  curl -s https://get.sdkman.io | bash

Follow the on-screen instructions to complete installation.

Then fire below command after installation of SDKMAN to configure SDKMAN.

 $ source "$HOME/.sdkman/bin/sdkman-init.sh"

Once the above two steps are done, go and set up Micronaut using SDKMAN:

$ sdk install micronaut

After installation is complete it can be validated with below command.

$ mn --version
installation and validation of micronaut

this is the simple steps with SDKMAN.

INSTALLING WITH HomeBrew:

Before installation using homebrew you should update homebrew version

$ brew update

In order to install Micronaut, run following command:

$ brew install micronaut

After installation is complete it can be validated with below command.

$ mn --version

Installing with MacPorts:

Before installing, it is recommended to sync the latest Portfiles so that there shouldn’t be any issues:

$ sudo port sync

To install Micronaut, run following command

$ sudo port install micronaut

After installation is complete it can be validated with below command.

$ mn --version

Above are the three different ways we can set up the Micronaut framework on macOS and Linux-based OSes.

Spring Tools 4 for Visual Studio Code.

Visual Studio Code is the most popular, open-source, and lightweight editor in the market. Spring Boot is another popular and powerful framework in the Java ecosystem. It gained popularity because of its simplicity and the way it bootstraps development. Spring Boot is very handy for developing microservice-based applications and also supports cloud-native development.

This article is for those who want to leverage their VS code editor to develop spring framework application. Spring comes up with tools to support the development of spring framework based application in VSCode.

Spring really provides flexibility to the developer: as a Spring developer you don’t need any special editor, IDE, OS, or tool suite.

Spring is nature’s way of saying, ‘Let’s Party!

As per the above quote, the Spring framework is really telling the developer: don’t worry, let’s party, I will take care of everything.

If you have hit this article that means you are familiar with Visual Studio code editor. You can download visual studio code if you haven’t by using the below link:

https://code.visualstudio.com/

Configure the Spring boot with Visual Studio Code

After installing the VS Code editor on your local system, consider the below points to configure Spring Boot with VS Code.

I will assume you have done below configurations

  1. VS code installation.
  2. Java extension for VS Code.
  3. Kotlin extension if you want to develop a spring boot app using kotlin.

If everything above is done, go and open VS Code, go to Extensions, and search for “spring”. You will be able to see something like the below image in your VS Code editor.

Click and install “Spring Boot Extension Pack” Once the installation is done reload the VS code.

Spring Boot Extension Pack is a collection of extensions for developing and deploying Spring Boot applications.

  1. Spring boot tools.
  2. Spring Initializr Java Support.
  3. Spring Boot Dashboard.

Spring Boot Tools:

VSCode extension and Language Server providing support for working with Spring Boot application.properties, application.yml and .java files.

Spring Initializr Java Support:

Spring Initializr is a lightweight extension to quickly generate a Spring Boot project in Visual Studio Code (VS Code). It helps you to customize your projects with configurations and manage Spring Boot dependencies.

Spring Boot Dashboard:

Spring Boot Dashboard is a lightweight extension in Visual Studio Code (VS Code). With an explorer in the side bar, you can view and manage all available Spring Boot projects in your workspace. It also supports the features to quickly start, stop or debug a Spring Boot project.

Feature List

  • View Spring Boot apps in workspace
  • Start / Stop a Spring Boot app
  • Debug a Spring Boot app
  • Open a Spring Boot app in the browser
  • Generate a Maven/Gradle Spring Boot project
  • Customize configurations for a new project (language, group id, artifact id, boot version, and dependencies)
  • Search for dependencies
  • Quickstart with last settings
  • Edit Spring Boot dependencies of an existing Spring Boot project

Extention pack contains:

  1. IDE Java tooling for developing and troubleshooting Spring Boot applications.
  2. It provides support for editing Cloud Foundry deployment manifest .yml files for Spring Boot application deployment.
  3. The Concourse CI Pipeline Editor provides support for setting up Concourse build pipeline for the Spring Boot application.
  4. It provides support for generating quickstart Spring Boot Java projects with Spring Initiailizr API.
  5. It provides an explorer in the sidebar where you can view all of a workspace’s spring boot projects conveniently in one place.

This is it for now up to installation.

Want to create an application using vscode, check the below video. 

Spring Boot in VS Code

Use of spring initializer 

  • Launch VS Code
  • Press Ctrl + Shift + P to open the command palette.
  • Type Spring Initializr to start generating a Maven or Gradle project.
  • Follow the wizard.
  • Right-click inside the pom.xml file and choose Edit starters for dependency refactoring. (Gradle project is not supported yet, PR is welcome for it.)

Shortcuts may change based on the Operating System.

More such Stories