Spring Boot Cloud Native Buildpacks and Layered Jars

In May 2020, Spring Boot 2.3 was released with some interesting features. There are many, but in this article we will talk about support for building OCI images using Cloud Native Buildpacks.

What are Cloud-Native Buildpacks?

These days cloud migration, and with it cloud-native application development, is becoming a trend.

Cloud-Native Buildpacks
transform your application source code to images that can run on any cloud.

The Cloud Native Buildpacks definition from https://buildpacks.io/:

The Cloud Native Buildpacks project was initiated by Pivotal and Heroku in January 2018 and joined the Cloud Native Sandbox in October 2018. The project aims to unify the buildpack ecosystems with a platform-to-buildpack contract that is well-defined and that incorporates learnings from maintaining production-grade buildpacks for years at both Pivotal and Heroku.

credit goes to https://buildpacks.io/

Cloud-Native Buildpacks embrace modern container standards, such as the OCI (Open Container Initiative) image format. They take advantage of the latest capabilities of these standards, such as cross-repository blob mounting and image layer “rebasing” on Docker API v2 registries.

All the above information is from the Buildpacks website. In fewer words: a buildpack transforms your source code into runnable container images.

The Paketo Java buildpack is used by default to create the image.

Prerequisites for this example:

  1. Java
  2. Docker
  3. Any IDE if you want.

Note: For this demo, I am using Spring Boot 2.3.1, JDK 8 and Maven.

As always for Spring, start with https://start.spring.io/.

I have imported the project into the VSCode. If you want to learn about Spring tools for Visual Studio Code, please go through this link: https://www.techwasti.com/spring-tools-4-for-visual-studio-code/

Create a REST controller:

As part of this article, our focus is on buildpacks, not on complex coding.

We have a simple controller that returns the current date.

package com.techwasti.spring.buildpackex.springboot23ocibuildpackex;

import java.util.Date;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CurrentDateController{


    @GetMapping("/gettodaysdate")
     public String getTodaysDate(){
        return new Date().toString();
     }
}

Output: Sun Jul 19 09:00:36 IST 2020

Build the image for our app:

As mentioned above, and as per the Spring Boot documentation, we will package the source code and build an OCI-compliant image using the following Maven goal.

$ mvn spring-boot:build-image

When you fire this command, everything is taken care of by the Spring Boot build-image task.

After a successful build, you will see something similar to this in your log:

Successfully built image ‘docker.io/library/spring-boot23-oci-buildpack-ex:0.0.1-SNAPSHOT’

We can validate our docker image using the below command.

$ docker images| grep spring

Output
spring-boot23-oci-buildpack-ex 0.0.1-SNAPSHOT ddabb93c2218 40 ago 231MB

Now our image is ready; let us run it and create a container.

$ docker run -d -p 8080:8080 --name springbuildpackex spring-boot23-oci-buildpack-ex:0.0.1-SNAPSHOT

Once the container is ready, verify it using:

$ docker ps

Hit REST API endpoint http://localhost:8080/gettodaysdate
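For example, a quick hedged check from the terminal (the output format matches the controller above; your date will differ):

$ curl http://localhost:8080/gettodaysdate
Sun Jul 19 09:00:36 IST 2020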

You can hit actuator endpoints 

http://localhost:8080/actuator/metrics

http://localhost:8080/actuator/info

Since Spring Boot 2.3.0.RC1, the Paketo Java buildpack has been used by default to create images.

You can check your local Docker: when you fire the below command you can see that the Paketo images were downloaded.

$ docker images| grep gcr
gcr.io/paketo-buildpacks/run       base-cnb                c8c8215efa6f        8 days ago          71.1MB
gcr.io/paketo-buildpacks/builder base-platform-api-0.3 e49209451fa6 40 years ago 696MB

Customize buildpack configuration:

We have seen that, by default, the name of the image is based on the artifactId and the tag is the version of our Maven project. Here the image name is spring-boot23-oci-buildpack-ex:0.0.1-SNAPSHOT, following the pattern:

docker.io/library/${project.artifactId}:${project.version}

In a real-life project, you would like to push the OCI image to a specific Docker registry, internal to your organization. Here I am using Docker Hub, which is a public central registry. You can configure parameters such as the image name in pom.xml:

<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<image>
<name>docker.io/maheshwarligade/spring-boot23-oci-buildpack-ex</name>
</image>
</configuration>
</plugin>

With these custom tags and names, I can push this image to Docker Hub.

docker push docker.io/maheshwarligade/spring-boot23-oci-buildpack-ex

Using the command line as well:

$ mvn spring-boot:build-image -Dspring-boot.build-image.imageName=enterprise.com/library/domain/spring-boot23-oci-buildpack-ex

We can also configure the buildpacks builder used to build the image with the below configuration parameter:

<configuration>            
<image>
<builder>cloudfoundry/cnb</builder>
</image>
</configuration>

Proxy Configuration:

If any proxy is configured between the Docker daemon the builder runs in and the network locations that buildpacks download artifacts from, you will need to configure the builder to use the proxy. When using the default builder, this can be accomplished by setting the HTTPS_PROXY and/or HTTP_PROXY environment variables:

<configuration>      
<image>
<env>
<HTTP_PROXY>http://proxy.example.com</HTTP_PROXY>
<HTTPS_PROXY>https://proxy.example.com</HTTPS_PROXY>
</env>
</image>
</configuration>

Layered Jars:

Above, we used a buildpack to create the image, but you might not want to use a buildpack; perhaps you want to use a Dockerfile-based tool that is already used within your organization to create application images. Spring wanted to make it easier to create optimized Docker images that can be built with a regular Dockerfile, so Spring has added support for layered jars.

Traditionally, we create a Docker image by copying the Spring Boot fat jar into the Dockerfile and adding a command to execute it.

The jar is organized into three main parts:

  • Classes used to bootstrap jar loading
  • Your application classes in BOOT-INF/classes
  • Dependencies in BOOT-INF/lib

Since this format is unique to Spring Boot, Spring Boot 2.3.0.M1 introduced a new layout type called LAYERED_JAR.

As we know, a Docker image is made of layers, and when we rebuild the image during development it should only rebuild the layers where changes have happened instead of rebuilding the fat-jar layer again and again. The layered jar type is designed to separate code based on how likely it is to change between application builds. Library code is less likely to change between builds, so it is placed in its own layers to allow tooling to re-use the layers from the cache. Application code is more likely to change between builds, so it is isolated in a separate layer. The default layers are:

  • dependencies (for regularly released dependencies)
  • snapshot-dependencies (for snapshot dependencies)
  • resources (for static resources)
  • application (for application classes and resources)

Configure the layered-jar layout in pom.xml:

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<layout>LAYERED_JAR</layout>
</configuration>
</plugin>
</plugins>
</build>

Build the jar for the application:

$ mvn clean package

We can inspect the layers of the jar using jarmode:

$ java -Djarmode=layertools -jar target/spring-boot23-oci-buildpack-ex-0.0.1-SNAPSHOT.jar list

Output

dependencies
snapshot-dependencies
resources
application
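You can also unpack those layers locally with the extract command, which the Dockerfile below relies on (a hedged example; it writes one directory per layer into the current working directory):

$ java -Djarmode=layertools -jar target/spring-boot23-oci-buildpack-ex-0.0.1-SNAPSHOT.jar extract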

Based on this, we can craft a Dockerfile similar to the one below:

FROM adoptopenjdk:11-jre-hotspot as builder
WORKDIR application
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract

FROM adoptopenjdk:11-jre-hotspot
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/resources/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
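With that Dockerfile in place, building and running the image is the usual Docker workflow (a hedged sketch; the tag spring-boot23-layered-ex is just an illustrative name):

$ docker build -t spring-boot23-layered-ex .
$ docker run -d -p 8080:8080 spring-boot23-layered-ex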

Summary:

Here we have seen multiple ways to create an image of our Spring Boot application: buildpacks, Dockerfiles, and existing plugins such as Jib. There is no single best way; each approach has pros and cons, and we have to choose based on convenience and simplicity.

Source Code: https://github.com/maheshwarLigade/spring-boot23-oci-buildpack-ex


Micronaut with Graal native image example.

In the last couple of articles, we have seen how to create a simple Micronaut application and how to dockerize it. In this article, we are going to explore a HelloWorld Graal Micronaut application.

Here is the definition from Wikipedia. If you have come across this article, that means you are familiar with at least one of these topics.

GraalVM is a Java VM and JDK based on HotSpot/OpenJDK, implemented in Java. It supports additional programming languages and execution modes, like an ahead-of-time compilation of Java applications for fast startup and low memory footprint. The first production-ready version, GraalVM 19.0, was released in May 2019.

Let us start coding and simultaneously enjoy the topic.

Create a micronaut application using CLI:

$ mn create-app helloworld-graal --features=graal-native-image

Graal support is not added by default; we have to use the option --features=graal-native-image.

If you are using Java or Kotlin and IntelliJ IDEA make sure you have enabled annotation processing.

Now let us create one simple POJO class to hold a play name, to keep things simple.

import io.micronaut.core.annotation.Introspected;

@Introspected
public class Play{

    private String name;

    public Play(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

The @Introspected annotation is used to generate BeanIntrospection metadata at compilation time. This information is used to render the POJO as JSON using Jackson without reflection.

Now let us create a singleton service class that returns a play name randomly.

(Note: The play names are Marathi plays by the famous Pu La Deshpande.)

import javax.inject.Singleton;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

@Singleton
public class PlayService {
// create list of plays
    private static final List<Play> PLAYS = Arrays.asList(
            new Play("Tujhe Ahe Tujpashi"),
            new Play("Sundar Mee Honar"),
            new Play("Tee Phularani"),
            new Play("Teen Paishacha Tamasha"),
            new Play("Ek Jhunj Varyashi")
    );
 // to choose random play from PLAYS list
    public Play randomPlay() {
        return PLAYS.get(new Random().nextInt(PLAYS.size()));
    }
}

Now we need a controller to serve a random play name from the service class.

import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller
public class PlayController {

    private final PlayService playService;

    public PlayController(PlayService playService) {
        this.playService = playService;
    }

    @Get("/randomplay")
    public Play randomPlay() {
        return playService.randomPlay();
    }
}

We created the controller, injected the service using constructor injection, and mapped the GET method using @Get("/randomplay").

Now our application is ready; you can test it by executing the below command.

$ ./gradlew run

http://localhost:8080/randomplay

JSON output 

{
  "name": "Tee Phularani"
}

Let us create a Graal native image.

GraalVM native images are only supported for Micronaut applications written in Java or Kotlin.

While creating the project we added --features=graal-native-image; this adds three important things.

  1. The svm (Substrate VM) and Graal dependencies in build.gradle:
compileOnly "org.graalvm.nativeimage:svm"
annotationProcessor "io.micronaut:micronaut-graal"

2. A Dockerfile which can be used to construct the native image by executing docker-build.sh.

3. A native-image.properties file in the resource directory.

Args = -H:IncludeResources=logback.xml|application.yml|bootstrap.yml \
       -H:Name=helloworld-graal \
       -H:Class=helloworld.graal.Application

This makes it very easy for a developer to create a native image inside Docker. Fire the below two commands:

$ ./gradlew assemble
$ ./docker-build.sh

Once the image is ready, we can create a container to verify our understanding.

$ docker run -p 8080:8080 helloworld-graal

To test the application you can use curl with time:

$ time curl localhost:8080/randomplay

That's it for now. You can check the time difference between running the native image executable and running the native image inside Docker.

Source code download or clone from github: https://github.com/maheshwarLigade/micronaut-examples/tree/master/helloworld-graal

Dockerise Micronaut application.

Micronaut is a Java framework to develop cloud-native microservices applications easily and seamlessly. If you don't know about Micronaut, please go through the previous two articles.

In this article, we are exploring the Micronaut framework and how to dockerize an application.

Let us create a small micronaut REST service application and try to dockerize it.

Micronaut provides a CLI option to create an application easily.

$ mn create-app helloworld

This will scaffold a new Gradle project. If you prefer Maven, add a --build maven parameter. If you want to create a new Groovy or Kotlin project, add a --lang parameter.

$ mn create-app --lang groovy helloworld-groovy
$ mn create-app --lang kotlin helloworld-kotlin

Which option you choose depends on which language you are comfortable with.

Once the project is ready, we can import it into your favorite editor. I am using IntelliJ.

We are using the already created hello world app; the source code is available at the below location, which you can clone:

https://github.com/maheshwarLigade/micronaut-examples/tree/master/helloworld

By default, the Micronaut CLI creates a Dockerfile for you, which you can find in the root directory of your project: <appname>/Dockerfile,

e.g. helloworld/Dockerfile

The default content of the Dockerfile:

FROM adoptopenjdk/openjdk13-openj9:jdk-13.0.2_8_openj9-0.18.0-alpine-slim
COPY build/libs/helloworld-*-all.jar helloworld.jar
EXPOSE 8080
CMD ["java", "-Dcom.sun.management.jmxremote", "-Xmx128m", "-XX:+IdleTuningGcOnIdle", "-Xtune:virtualized", "-jar", "helloworld.jar"]

If you are familiar with Docker, great; if not, you can explore the below article to understand Docker:

https://www.techwasti.com/demystify-docker-container-technology-9a8e1ec3968b/

Micronaut creates the Dockerfile with an alpine-slim base image, and the JDK image used here is unofficial; the repository below provides unofficial AdoptOpenJDK Docker images.

Reference:- https://hub.docker.com/r/adoptopenjdk/openjdk13-openj9

The second line copies the generated jar (helloworld.jar), the third line exposes the default port 8080, and the last line launches the jar.

For this example, I am using Gradle as the build tool:

$ cd helloworld
$ ./gradlew run

To test whether the code is working fine: curl http://localhost:8080/hello

Now let us build a Docker image from the Dockerfile.

To run the application with IntelliJ IDEA, you need to enable annotation processing:

  1. open Settings → Build → Execution → Deployment → Compiler →Annotation Processors
  2. Set the checkbox Enable annotation processing

As we know micronaut CLI generates a Dockerfile by default, making it easy to package your application for a container environment such as Kubernetes.

$ docker build . -t hello-world-ex

Fire the above command to create a Docker image; -t hello-world-ex tags the image. Now our image is ready; to make a container from it, fire the below command.

$ docker run --rm -p 8080:8080 hello-world-ex

As we have exposed port 8080 in the Dockerfile, we map it to a host port here.

To verify the running container, fire the below command:

$ curl http://localhost:8080/hello

In this article, we have seen how to dockerize Micronaut apps. We created a hello world application and built a Docker image using the generated Dockerfile. You can edit the Dockerfile and optimize it as per your requirements.

Spring Boot Firebase CRUD

In this article, we show how to build a CRUD application using Firebase and Spring Boot.

Create a Firebase project in the Firebase console:

https://console.firebase.google.com/

Hit the https://console.firebase.google.com and sign up for an account.

Click the “Add Project” button from the project overview page.

Type “Firebase DB for Spring Boot” in the “Project name” field.

Click the “CREATE PROJECT” button.

Now that we have created a project on Firebase, let us add Firebase to our Spring Boot app.

Add Firebase to your web app:

You can find your Realtime Database URL in the Database tab (DEVELOP → Database → Realtime Database → Start in test Mode ) in the Firebase console. It will be in the form of https://<databaseName>.firebaseio.com.

Create the database in test mode; this is not suitable for production, but for this article we will use test mode, which is publicly accessible.

Your Database URL should look like this https://<Projectname XYZ>.firebaseio.com/

Our database is ready, but we still need a service account.

Go to Project settings → Service Accounts → choose Java as the language to copy the code snippet,

and download the JSON file as well by clicking on “Generate new private key”.

We will also grab the Admin SDK configuration snippet for Java.

Then go to https://start.spring.io/ and create a project. Once the project is imported, open the pom.xml file and add the below dependency:

<dependency>
    <groupId>com.google.firebase</groupId>
    <artifactId>firebase-admin</artifactId>
    <version>6.11.0</version>
 </dependency>

Now everything is ready; let us initialize the Firebase database.

import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import org.springframework.stereotype.Service;

import javax.annotation.PostConstruct;
import java.io.FileInputStream;

@Service
public class FBInitialize {

    @PostConstruct
    public void initialize() {
        try {
            FileInputStream serviceAccount =
                    new FileInputStream("./serviceaccount.json");

            FirebaseOptions options = new FirebaseOptions.Builder()
                    .setCredentials(GoogleCredentials.fromStream(serviceAccount))
                    .setDatabaseUrl("https://chatapp-e6e15.firebaseio.com")
                    .build();

            FirebaseApp.initializeApp(options);
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

I am using the existing Firebase Database.

@Service and @PostConstruct are the two annotations used here.

The code first reads the credentials from the JSON file and then initializes the connection for the specified database.

Now that the Firebase connection is initialized, let us create the CRUD operations.

Create a POJO class, Patient:

public class Patient {

    private String name;

    private int age;

    private String city;


    public Patient(String name, int age, String city) {
        this.name = name;
        this.age = age;
        this.city = city;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public String getCity() {
        return city;
    }

    public void setCity(String city) {
        this.city = city;
    }
}

Create the service class:

import com.google.api.core.ApiFuture;
import com.google.cloud.firestore.DocumentReference;
import com.google.cloud.firestore.DocumentSnapshot;
import com.google.cloud.firestore.Firestore;
import com.google.cloud.firestore.WriteResult;
import com.google.firebase.cloud.FirestoreClient;
import org.springframework.stereotype.Service;

import java.util.concurrent.ExecutionException;

//CRUD operations
@Service
public class PatientService {

    public static final String COL_NAME="users";

    public String savePatientDetails(Patient patient) throws InterruptedException, ExecutionException {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        ApiFuture<WriteResult> collectionsApiFuture = dbFirestore.collection(COL_NAME).document(patient.getName()).set(patient);
        return collectionsApiFuture.get().getUpdateTime().toString();
    }

    public Patient getPatientDetails(String name) throws InterruptedException, ExecutionException {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        DocumentReference documentReference = dbFirestore.collection(COL_NAME).document(name);
        ApiFuture<DocumentSnapshot> future = documentReference.get();

        DocumentSnapshot document = future.get();

        Patient patient = null;

        if(document.exists()) {
            patient = document.toObject(Patient.class);
            return patient;
        }else {
            return null;
        }
    }

    public String updatePatientDetails(Patient person) throws InterruptedException, ExecutionException {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        ApiFuture<WriteResult> collectionsApiFuture = dbFirestore.collection(COL_NAME).document(person.getName()).set(person);
        return collectionsApiFuture.get().getUpdateTime().toString();
    }

    public String deletePatient(String name) {
        Firestore dbFirestore = FirestoreClient.getFirestore();
        ApiFuture<WriteResult> writeResult = dbFirestore.collection(COL_NAME).document(name).delete();
        return "Document with Patient ID "+name+" has been deleted";
    }

}

Now we are ready with the CRUD operations; let us develop the REST controller which will help us interact with this service layer.

Note: You have to enable the Cloud Firestore API.

Now we just need to create a controller which can handle REST requests.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.concurrent.ExecutionException;

@RestController
public class PatientController {

    @Autowired
    PatientService patientService;

    @GetMapping("/getPatientDetails")
    public Patient getPatient(@RequestParam String name ) throws InterruptedException, ExecutionException{
        return patientService.getPatientDetails(name);
    }

    @PostMapping("/createPatient")
    public String createPatient(@RequestBody Patient patient ) throws InterruptedException, ExecutionException {
        return patientService.savePatientDetails(patient);
    }

    @PutMapping("/updatePatient")
    public String updatePatient(@RequestBody Patient patient  ) throws InterruptedException, ExecutionException {
        return patientService.updatePatientDetails(patient);
    }

    @DeleteMapping("/deletePatient")
    public String deletePatient(@RequestParam String name){
        return patientService.deletePatient(name);
    }
}
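To try the endpoints quickly, here are some hedged sample calls (assuming the app runs on the default port 8080; the field values are illustrative):

$ curl -X POST http://localhost:8080/createPatient -H "Content-Type: application/json" -d '{"name":"John","age":30,"city":"Pune"}'
$ curl "http://localhost:8080/getPatientDetails?name=John"
$ curl -X PUT http://localhost:8080/updatePatient -H "Content-Type: application/json" -d '{"name":"John","age":31,"city":"Pune"}'
$ curl -X DELETE "http://localhost:8080/deletePatient?name=John"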

Now the coding is done; try it yourself and let us know.

Installation of Micronaut on Mac(OSX) & Linux.

Micronaut is a full-stack framework to develop cloud-native, microservice-architecture-based applications using Java, Kotlin or Groovy.

Let us check the steps required to install Micronaut on macOS and Linux.

For a simple and effortless start on macOS or Linux, you can use SDKMAN! (the Software Development Kit Manager) to download and configure any Micronaut version of your choice.

INSTALLING WITH SDKMAN:

This tool makes it easy to install Micronaut on any Unix-based platform such as Linux or macOS.

Open a terminal and install SDKMAN,

$  curl -s https://get.sdkman.io | bash

Follow the on-screen instructions to complete installation.

Then, after SDKMAN is installed, fire the below command to initialize it:

 $ source "$HOME/.sdkman/bin/sdkman-init.sh"

Once the above two steps are done, go ahead and set up Micronaut using SDKMAN:

$ sdk install micronaut

After the installation is complete, it can be validated with the below command.

$ mn --version
installation and validation of micronaut

These are the simple steps with SDKMAN.

INSTALLING WITH HOMEBREW:

Before installing with Homebrew, you should update Homebrew:

$ brew update

In order to install Micronaut, run the following command:

$ brew install micronaut

After the installation is complete, it can be validated with the below command.

$ mn --version

Installing with MacPorts:

Before installing, it is recommended to sync the latest Portfiles so that there are no issues:

$ sudo port sync

To install Micronaut, run the following command:

$ sudo port install micronaut

After the installation is complete, it can be validated with the below command.

$ mn --version

Above are the three different ways we can set up the Micronaut framework on macOS and Linux-based operating systems.

Configuration as a Service: Spring Cloud Config using Kotlin

Developing a microservice architecture with Java and Spring Boot is quite popular these days. In a microservice architecture we have hundreds of services, and managing configuration for each service and each profile is quite a tedious task. In this article, we will demonstrate the Spring Cloud Config server using Kotlin.

Spring Boot provided a much-needed spark to the Spring projects.

Spring Cloud Config provides server-side and client-side support for externalized configuration in a distributed system. With the Config Server, you have a central place to manage external properties for applications across all environments.

You can easily see that in a distributed system, managing configuration as a central service is a tedious task, and Spring Cloud Config provides a client-server mechanism to manage the configuration easily.

Let us go to https://start.spring.io/ and create the config server project.

Normally, when we change the configuration of a service, we have to restart the service to apply the changes.

Let us create one git repo to manage our configuration.

We will create “springbootclient”, a small Spring Boot microservice that reads the username from the central Spring Cloud Config server, whose backing store is git here.

We have created three different properties files, one for each of our environments:

  1. springbootclient.properties
  2. springbootclient-dev.properties
  3. springbootclient-prod.properties

https://github.com/maheshwarLigade/cloud-common-config-server

Our Spring Cloud Config properties are available here; you can clone or use this repository directly too.
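As a hedged illustration of what those files contain (the client later reads app.adminusername; the dev value matches the output shown at the end of this article, and the prod value is just an example):

# springbootclient-dev.properties (illustrative)
app.adminusername=DevUser

# springbootclient-prod.properties (illustrative)
app.adminusername=ProdUser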

Now that we have created the Spring Cloud Config server application using the Spring starter, let us download and import that project into your favorite IDE or editor. The git repo is used to store our configuration, and the Spring Cloud Config server application serves those properties to the clients.

Basically, git is the datastore, the Spring Cloud Config server is the server application, and the multiple microservices are the clients which need configuration.

Now our git datastore is ready. In this repository, we have also created configuration for one sample client application named springbootclient. In future microservice articles we will utilize the same Spring Cloud Config setup as the configuration server.

Let us go and check the code base for the config server app.

This is the sample application.properties file:

server.port=8888
logging.level.org.springframework.cloud.config=DEBUG
spring.cloud.config.server.git.uri=https://github.com/maheshwarLigade/cloud-common-config-server.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.searchPaths=springbootclient

Sample Code for SpringCloudConfigServerexApplication.kt

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.config.server.EnableConfigServer

@SpringBootApplication
@EnableConfigServer
class SpringCloudConfigServerexApplication

fun main(args: Array<String>) {
   runApplication<SpringCloudConfigServerexApplication>(*args)
}

Now run the Spring Cloud Config server and check the below URL:

http://localhost:8888/springbootclient/dev/master
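The config server should respond with an environment payload roughly like the following (a trimmed, illustrative response; the exact property-source names will differ):

{
  "name": "springbootclient",
  "profiles": ["dev"],
  "label": "master",
  "propertySources": [
    {
      "name": ".../springbootclient/springbootclient-dev.properties",
      "source": {
        "app.adminusername": "DevUser"
      }
    }
  ]
}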

Spring Boot Client App:

Let us create one small microservice which will read configuration from the Spring Cloud Config server and serve that property value over a REST endpoint.

Go to https://start.spring.io/ and create the Spring Boot client microservice using Kotlin.

Sample pom.xml dependencies:

<dependency>
   <groupId>org.springframework.boot</groupId>
   <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
   <groupId>com.fasterxml.jackson.module</groupId>
   <artifactId>jackson-module-kotlin</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-reflect</artifactId>
</dependency>
<dependency>
   <groupId>org.jetbrains.kotlin</groupId>
   <artifactId>kotlin-stdlib-jdk8</artifactId>
</dependency>
<dependency>
   <groupId>org.springframework.cloud</groupId>
   <artifactId>spring-cloud-starter-config</artifactId>
</dependency>

Now check the SpringCloudClientAppApplication.kt code

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

@SpringBootApplication
class SpringCloudClientAppApplication

fun main(args: Array<String>) {
    runApplication<SpringCloudClientAppApplication>(*args)
}

Now create one sample REST controller to serve requests. We want to check that the “/whoami” endpoint returns the user based on the active profile: dev, prod, etc.

UserController.kt

import org.springframework.beans.factory.annotation.Value
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController


@RestController
class UserController {

    @Value("\${app.adminusername}")
    var username="Test"
//get request serving
    @GetMapping("/whoami")
    fun whoami() = "I am a  "+ username

}

Create a bootstrap.properties file where we specify the Spring Cloud Config server details: the URI, the git branch (label), and the active profile (dev, local, prod, etc.).

spring.application.name=springbootclient
spring.profiles.active=dev
spring.cloud.config.uri=http://localhost:8888
spring.cloud.config.fail-fast=true
spring.cloud.config.label=master

All the properties are self-explanatory.

Once you hit this URL http://localhost:9080/whoami

Output:- I am a DevUser

Github source link:

Config Server: https://github.com/maheshwarLigade/cloud-common-config-server

Codebase: https://github.com/maheshwarLigade/spring-cloud-config-kotlin-ex


Spring Boot, MongoDB REST API using Kotlin.

As part of this article, our focus is to develop a simple REST API using Spring Boot and MongoDB.

To get started, use the Spring Initializr tool: https://start.spring.io/

In this example, I am using Gradle as the build tool and MongoDB as the database.

Download and import the project into your favorite editor; I prefer IntelliJ.

You can either install MongoDB locally or use a hosted MongoDB solution such as https://mlab.com/.

I am using mlab.com for this example.

Let us provide the MongoDB connection details in application.properties:

# for now I am keeping localhost
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=mongo-rest-api-kotlin-demo

Let us create an entity class, Patient:

import org.bson.types.ObjectId
import org.springframework.data.annotation.Id
import org.springframework.data.mongodb.core.mapping.Document
import java.time.LocalDateTime

@Document
data class Patient (
        @Id
        val id: ObjectId = ObjectId.get(),
        val name: String,
        val description: String,
        val createdDate: LocalDateTime = LocalDateTime.now(),
        val modifiedDate: LocalDateTime = LocalDateTime.now()
)

The @Document annotation, rather than @Entity, is used here for marking a class whose objects we'd like to persist to MongoDB.

@Id is used for marking the field used for identification purposes.

Also, we have provided some default values for the created date and modified date.

Let us create a repository interface.

import org.bson.types.ObjectId
import org.springframework.data.mongodb.repository.MongoRepository

interface PatientRepository : MongoRepository<Patient, String> {
    fun findOneById(id: ObjectId): Patient
    override fun deleteAll()

}

The repository interface is ready to use; we don't have to write an implementation for it. This feature is provided by Spring Data. Also, the MongoRepository interface provides all the basic methods for CRUD operations. For now, we only add findOneById.

Now our backend is ready; let us write the REST controller which will serve our requests.

@RestController
@RequestMapping("/patients")
class PatientController(
        private val patientsRepository: PatientRepository
) {

    @GetMapping
    fun getAllPatients(): ResponseEntity<List<Patient>> {
        val patients = patientsRepository.findAll()
        return ResponseEntity.ok(patients)
    }

    @GetMapping("/{id}")
    fun getOnePatient(@PathVariable("id") id: String): ResponseEntity<Patient> {
        val patient = patientsRepository.findOneById(ObjectId(id))
        return ResponseEntity.ok(patient)
    }
}

Now our basic controller is ready; let us write some test cases.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(SpringExtension::class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class PatientControllerIntTest @Autowired constructor(
        private val patientRepository: PatientRepository,
        private val restTemplate: TestRestTemplate
) {
    private val defaultPatientId = ObjectId.get()

    @LocalServerPort
    protected var port: Int = 0

    @BeforeEach
    fun setUp() {
        patientRepository.deleteAll()
    }


    private fun getRootUrl(): String? = "http://localhost:$port/patients"

    private fun saveOnePatient() = patientRepository.save(Patient(defaultPatientId, "Name", "Description"))

    @Test
    fun `should return all patients`() {
        saveOnePatient()

        val response = restTemplate.getForEntity(
                getRootUrl(),
                List::class.java
        )

        assertEquals(200, response.statusCode.value())
        assertNotNull(response.body)
        assertEquals(1, response.body?.size)
    }

    @Test
    fun `should return single patient by id`() {
        saveOnePatient()

        val response = restTemplate.getForEntity(
                getRootUrl() + "/$defaultPatientId",
                Patient::class.java
        )

        assertEquals(200, response.statusCode.value())
        assertNotNull(response.body)
        assertEquals(defaultPatientId, response.body?.id)
    }
}

Here we are using Spring Boot Test to do integration testing; SpringBootTest.WebEnvironment.RANDOM_PORT is used so the application starts on a random port.

Note:

https://kotlinlang.org/docs/reference/coding-conventions.html#naming-rules

Please consider the naming conventions while writing test cases in Kotlin.

In the JVM world, similar conventions are well known in the Groovy and Scala worlds.

Always start with simple steps first: we write the GET operations first and try to fetch all patient details.

Run the application and hit the http://localhost:8090/patients endpoint.
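As a quick, hedged check (the values are illustrative, and exactly how ObjectId and the dates serialize depends on your Jackson setup):

$ curl http://localhost:8090/patients
[
  {
    "id": "...",
    "name": "Name",
    "description": "Description",
    "createdDate": "2020-07-19T09:00:36",
    "modifiedDate": "2020-07-19T09:00:36"
  }
]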

Let us create a POST request.

Create one simple request object that will help us create the entity in the Mongo world.

class PatientRequest(
        val name: String,
        val description: String
)

Here we pass the patient name and a description of the treatment.

Now go to the REST controller and handle the POST request:

@PostMapping
fun createPatient(@RequestBody request: PatientRequest): ResponseEntity<Patient> {
    val patient = patientsRepository.save(Patient(
            name = request.name,
            description = request.description
    ))
    return ResponseEntity(patient, HttpStatus.CREATED)
}

Let us create a PUT method to handle amendments to a document:

@PutMapping("/{id}")
fun updatePatient(@RequestBody request: PatientRequest, @PathVariable("id") id: String): ResponseEntity<Patient> {
    val patient = patientsRepository.findOneById(ObjectId(id))
    val updatedPatient = patientsRepository.save(Patient(
            id = patient.id,
            name = request.name,
            description = request.description,
            createdDate = patient.createdDate,
            modifiedDate = LocalDateTime.now()
    ))
    return ResponseEntity.ok(updatedPatient)
}

Test method for the update operation:

@Test
fun `should update existing patient`() {
    saveOnePatient()
    val patientRequest = preparePatientRequest()

    val updateResponse = restTemplate.exchange(
            getRootUrl() + "/$defaultPatientId",
            HttpMethod.PUT,
            HttpEntity(patientRequest, HttpHeaders()),
            Patient::class.java
    )
    val updatedPatient = patientRepository.findOneById(defaultPatientId)

    assertEquals(200, updateResponse.statusCode.value())
    assertEquals(defaultPatientId, updatedPatient.id)
    assertEquals(patientRequest.description, updatedPatient.description)
    assertEquals(patientRequest.name, updatedPatient.name)
}

Now our update operation is ready.

Let us delete records using the DELETE operation.

As the deleted document won’t be included in the response, the 204 code will be returned.

@DeleteMapping("/{id}")
fun deletePatient(@PathVariable("id") id: String): ResponseEntity<Unit> {
    patientsRepository.deleteById(id)
    return ResponseEntity.noContent().build()
}

The test method for delete is straightforward:

@Test
fun `should delete existing patient`() {
    saveOnePatient()

    val delete = restTemplate.exchange(
            getRootUrl() + "/$defaultPatientId",
            HttpMethod.DELETE,
            HttpEntity(null, HttpHeaders()),
            ResponseEntity::class.java
    )

    assertEquals(204, delete.statusCode.value())
    assertThrows(EmptyResultDataAccessException::class.java) { patientRepository.findOneById(defaultPatientId) }
}

Now all our CRUD operations are ready; run the application.
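To exercise the remaining endpoints from the command line, hedged examples (replace <id> with the id returned by the GET call):

$ curl -X POST http://localhost:8090/patients -H "Content-Type: application/json" -d '{"name":"Name","description":"Description"}'
$ curl -X PUT http://localhost:8090/patients/<id> -H "Content-Type: application/json" -d '{"name":"New name","description":"New description"}'
$ curl -X DELETE http://localhost:8090/patients/<id>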

That's it for now; the code is available on GitHub:

https://github.com/maheshwarLigade/springboot-mongodb.restapi/tree/master

TensorFlow 2.0 HelloWorld using Google Colab.

In this article, we use the most popular deep learning framework, TensorFlow, and we will walk through a basic hello world example. To follow this example, you do not need to set up a local environment on your machine.

(Image credit: tensorflow.org)

We are using Google Colab. If you are not aware of what it is, check out my article on getting started with Colab: Train deep neural network free using google colaboratory (medium.com).

Now visit https://colab.research.google.com/ and you will see the Colab welcome page.

Brief About Colab:

Once you open Colab, and if you are already logged in to your Gmail account, Google Colab is available with zero configuration, gives free access to a GPU, and the best part is that notebooks are sharable. Google Colaboratory is a free service for developers to try TensorFlow on CPU and GPU over cloud instances of Google. The service is totally free for improving Python programming skills; developers can log in with their Google Gmail account and connect to it. Here developers can try deep learning applications using popular machine learning libraries such as Keras, TensorFlow, PyTorch, OpenCV and others.

Sign in to google colab and create a new notebook for our HelloWorld example.

Go to File → New Notebook (Google sign-in is required).

Now the new notebook is ready. We want to use TF 2.0.0 for our example; TensorFlow 2.0.0 has already been released as a production version, so let us install it first. To install TensorFlow 2.0.0, run the following command.

!pip install tensorflow==2.0.0

After a successful installation, we can verify the installed version.

import tensorflow as tf
print(tf.__version__)

Helloworld example:

Now everything is ready and looking promising. We have installed TensorFlow and verified the version too. Now let us take a helicopter overview and create a hello world example.

To change the runtime: click Runtime → Change Runtime Type → a popup will open where you can choose the runtime and a hardware accelerator such as GPU or TPU.

There are a lot of changes between TF 1.x and TF 2.0.0; TF 2.0.0 comes with ease of development and needs less coding. TensorFlow 2.0.0 was developed to remove the issues and complexity of previous versions.

In TF 2.0, eager execution is enabled by default.

Eager execution evaluates operations immediately, without building a graph: operations return concrete values instead of constructing a computational graph to run later.

We will first use the hello world code from the TensorFlow 1.x version and observe the output.

#This code snippet is from tensorflow 1.X version
import tensorflow as tf

msg = tf.constant('Hello and welcome to Tensorflow world')

#session
sess = tf.Session()

#print the message
print(sess.run(msg))

In this example, we are using TensorFlow 1.x code to print the message, but Session has been removed in TF 2.0.0, so this will cause an exception:

AttributeError: module 'tensorflow' has no attribute 'Session'

We will use the same code snippet, removing the Session:

import tensorflow as tf

msg = tf.constant('Hello and welcome to Tensorflow world')

#print the message
print(msg)

#print using tf.print()
tf.print(msg)

Here we have two print statements; observe the output of both:

  1. tf.Tensor(b’Hello and welcome to Tensorflow world’, shape=(), dtype=string) 
  2. Hello and welcome to Tensorflow world.

This is it for now; we will start exploring different TF APIs in the next article.

Code: 

The code is available on GitHub; you can directly import it into Colab and run it:

https://github.com/maheshwarLigade/GoogleColab/blob/master/HelloWorldTF2_0.ipynb

More articles on TensorFlow:

https://medium.com/analytics-vidhya/optimization-techniques-tflite-5f6d9ae676d5

https://medium.com/analytics-vidhya/tensorflow-lite-converter-dl-example-febe804b8673

https://medium.com/techwasti/tensorflow-lite-machine-learning-at-the-edge-26e8421ae661

https://medium.com/techwasti/dynamic-computation-graphs-dcg-with-tensorflow-fold-33638b2d5754

https://medium.com/techwasti/tensorflow-lite-deployment-523eec79c017

Spring Cloud function Helloworld on AWS and local!!

In this article, we'll learn how to use Spring Cloud Function to write a serverless function.

Serverless is a modern industry buzzword, and the reason behind that is the boost in cloud computing.

credit goes to gitconnected

As part of this article, we'll build and run a simple Spring Cloud Function locally and then deploy it to the AWS cloud platform.

Spring Cloud Function is a project with the following high-level goals as per spring cloud official website:

  • Promote the implementation of business logic via functions.
  • Decouple the development lifecycle of business logic from any specific runtime target so that the same code can run as a web endpoint, a stream processor, or a task.
  • Support a uniform programming model across serverless providers, as well as the ability to run standalone (locally or in a PaaS).
  • Enable Spring Boot features (auto-configuration, dependency injection, metrics) on serverless providers.

If you understand the above goals, you will be able to relate them to any serverless technology from any cloud provider. The whole point of serverless is concentrating on business logic, not on infrastructure and other concerns.

Features:

Spring Cloud Function provides the below features:

  • Choice of programming styles — reactive, imperative or hybrid.
  • Function composition and adaptation (e.g., composing imperative functions with reactive).
  • Support for reactive function with multiple inputs and outputs allowing merging, joining and other complex streaming operations to be handled by functions.
  • Transparent type conversion of inputs and outputs.
  • Packaging functions for deployments, specific to the target platform (e.g., Project Riff, AWS Lambda and more)
  • Adapters to expose a function to the outside world as HTTP endpoints etc.
  • Deploying a JAR file containing such an application context with an isolated classloader, so that you can pack them together in a single JVM.
  • Compiling strings which are Java function bodies into bytecode, and then turning them into @Beans that can be wrapped as above.
  • Adapters for AWS Lambda, Microsoft Azure, Apache OpenWhisk and possibly other “serverless” service providers.
(source: Pivotal)

Let us deep dive and do some coding.

We will take two simple examples:

  • Convert a String to lower case, using a String method
  • And a HelloWorld greeting message using a dedicated class.

For this example I am using Maven; you can use Gradle as the build tool as well.

<!-- add the spring cloud function dependency -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-function-web</artifactId>
<version>1.0.1.RELEASE</version>
</dependency>

First Spring Cloud Function:

We will expose a @Bean of type Function using Spring Cloud Function. As part of this code, we expose toLowerCase functionality as a Spring Cloud Function.

import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class CloudFunctionApp {

    public static void main(String[] args) {
        SpringApplication.run(CloudFunctionApp.class, args);
    }

    // the bean name (lowerCaseString) becomes the HTTP endpoint
    @Bean
    public Function<String, String> lowerCaseString() {
        return value -> value.toLowerCase();
    }
}

Test this functionality locally using the curl command:

curl localhost:8087/lowerCaseString -H "Content-Type: text/plain" -d "Spring cloud FUNCTION"

The endpoint is the name of the bean; here it is lowerCaseString.
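If everything is wired up, the response is simply the lowercased input (an illustrative check):

spring cloud function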

Spring Cloud Function in Packages:

Above, we exposed a @Bean method. Apart from this, we could also write our functions as classes that implement the functional interface Function<T, R>. We can specify the packages to scan for relevant beans in the application.properties file.

package com.techwasti.spring.cloudfunction.functions;

import java.util.function.Function;

public class Hello implements Function<String, String> {

    @Override
    public String apply(String s) {
        return "Hello " + s + ", and welcome to Server less world!!!";
    }
}

Also, as mentioned above, add the below line to the application.properties file:

spring.cloud.function.scan.packages=com.techwasti.spring.cloudfunction.functions

We can also test this one locally:

curl localhost:8080/hello -H "Content-Type: text/plain" -d "Maheshwar"

The endpoint is the name of the class that implements the functional interface.
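And the expected response, based on the Hello function shown above:

Hello Maheshwar, and welcome to Server less world!!!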

Spring Cloud Function on AWS:

The best thing about the Spring ecosystem is seamless adoption and customization. The same applies to Spring Cloud Function: what makes it so powerful is that we can build Spring-enabled functions that are cloud-agnostic. Spring Cloud Function provides an abstraction API and adapters so that we can build and test functions locally and deploy them to any cloud provider such as AWS, Azure or Google Cloud Platform without changing any of the business logic.

AWS is a popular one so for this exercise we choose AWS. 

Remember the Spring Cloud Function Maven dependency we used above; we need the same one here:

<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-function-web</artifactId>
<version>1.0.1.RELEASE</version>
</dependency>
<!-- to handle AWS Lambda we need the below dependencies -->
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-events</artifactId>
<version>2.0.2</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.1.0</version>
<scope>provided</scope>
</dependency>

We are going to upload the artifact generated by the Maven build to AWS Lambda, so we need to build a shaded artifact, which means it has all the dependencies burst out as individual class files instead of nested jars.

We also add the spring-boot-thin-layout dependency, which helps us reduce the size of the artifact by excluding some dependencies that are not needed:

<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-deploy-plugin</artifactId>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<dependencies>
<dependency>
<groupId>org.springframework.boot.experimental</groupId>
<artifactId>spring-boot-thin-layout</artifactId>
<version>1.0.10.RELEASE</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
<shadedArtifactAttached>true</shadedArtifactAttached>
<shadedClassifierName>aws</shadedClassifierName>
</configuration>
</plugin>
</plugins>
</build>

If we are shipping our Spring Cloud Function, such as the lowercase string converter, to AWS, we need a handler:

public class MyHelloWorldHandlers extends SpringBootRequestHandler<String, String> {

}

The Spring Cloud Function AWS adapter also ships with SpringBootStreamHandler and FunctionInvokingS3EventHandler.

This is just an empty class, but it plays a very important role: it acts as the entry point and defines the input and output types.
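To produce the artifact to upload, a hedged build step (the exact jar name depends on your artifactId and version; the aws classifier comes from the shade plugin configuration above):

$ mvn clean package
# upload the shaded artifact, e.g. target/<artifactId>-<version>-aws.jar, to AWS Lambda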

How does AWS Lambda know which Spring Cloud Function to invoke?

In our example we have more than one Spring Cloud Function, so how does AWS know which one to invoke?

Even if we have more than one Spring Cloud Function in our application, AWS can invoke only one of them.

We have to specify the cloud function name in an environment variable called FUNCTION_NAME on the AWS console.

Upload function over AWS and Test:

Now we are ready to upload the Spring Cloud Function to AWS.

On the AWS Lambda console page, in the Function code section, we can select a Java 8 runtime and simply click Upload. As I mentioned, we need to specify the handler as well, i.e. MyHelloWorldHandlers.

Then specify the FUNCTION_NAME environment variable as lowerCaseString.

It's time for us to test the Lambda by creating a test event, supplying a sample string such as “Spring CLOUD function”, saving it, and then clicking the Test button. In a similar way we can test the “hello” Spring Cloud Function by changing the value of the FUNCTION_NAME parameter.

Spring Cloud Function is a powerful tool for decoupling the business logic from any specific runtime environment.

References:

Introducing Spring Cloud Function (spring.io)

Spring Cloud Function project documentation (spring.io)

Micronaut: a Java full-stack microservice framework!!

Micronaut is a modern, JVM-based, full-stack microservices framework designed for building modular, easily testable microservice applications.

Micronaut is the latest framework designed to make creating microservices quick and easy.

Micronaut is a JVM-based framework for building lightweight, modular applications. It is developed by OCI, the same company that created Grails, and takes inspiration from lessons learned over the years building real-world applications, from monoliths to microservices, using Spring, Spring Boot, and Grails.

Micronaut supports Java, Groovy or Kotlin.

Features of Micronaut:

One of the most exciting features of Micronaut is its compile-time dependency injection mechanism. As you may know, Spring Boot relies mostly on the reflection API and proxies, which act at run time, and that is why a Spring Boot application needs more startup time compared to, say, a Node application.

  1. First-class support for reactive HTTP clients and servers based on Netty.
  2. An efficient compile time dependency injection container. 
  3. Minimal startup time and lower memory usage.
  4. Cloud-native features to boost developer productivity.
  5. Very minimal learning curve because Micronaut code looks very similar to Spring Boot with Spring Cloud.

What’s wrong with Spring Boot?

Disclaimer 

There is nothing wrong with Spring Boot. I am a very big fan of the Spring projects, including Spring, Spring Boot, Spring Data, etc. Spring Boot is a good and very elegant solution that makes a developer's job easier than anything: just add one dependency and the magic happens for you.

When Spring puts things at your fingertips, it means Spring is doing many things under the hood for you. Spring does reflection, proxying, injection and much more, and you have to pay a cost for this because Spring does these things at runtime. You pay in terms of memory, CPU cycles and application bootstrap time.

Micronaut addresses some of these problems using AOT (ahead-of-time) compilation and GraalVM. Micronaut does the major work at compile time, which reduces memory footprint and CPU utilization, and this leads to a reduced application bootstrap time. At the time of writing, Spring does not support GraalVM native images while Micronaut does, which is also a big difference.

GraalVM is basically a high-performance polyglot VM. GraalVM is a universal virtual machine for running applications written in JavaScript, Python, Ruby, R, JVM-based languages like Java, Scala, Groovy, Kotlin, Clojure, and LLVM-based languages such as C and C++.

Now we know what Micronaut is, why it exists, and a couple of its features. Let us set up Micronaut and create a simple hello world application.

Setup Micronaut?

Installing or setting up Micronaut is a very easy task. Go to this page: https://micronaut.io/download.html

You can either download the binary or use SDKMAN to set up Micronaut on your favorite OS.

Using SDKMAN

Simply open a new terminal and start:

$ curl -s https://get.sdkman.io | bash

$ source "$HOME/.sdkman/bin/sdkman-init.sh"

$ sdk install micronaut

$ mn --version

Now the installation is done; let us create a simple hello world application:

mn create-app hello-world

By default Micronaut uses Gradle as the build tool; you can also specify Maven:

mn create-app hello-world --build maven

Using Homebrew

Before installing make sure you have the latest Homebrew installed.

$ brew update

$ brew install micronaut

Using Binary on windows

  1. Download the latest binary from https://micronaut.io/download.html
  2. Extract the binary to appropriate location
  3. Create an environment variable MICRONAUT_HOME which points to the installation directory
  4. Update the PATH environment variable, append %MICRONAUT_HOME%\bin

Now enjoy the coding.

Let us write a HelloWorld controller:

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller("/hello")
public class HelloController {

    @Get(produces = MediaType.TEXT_PLAIN)
    public String index() {
        return "Hello World";
    }
}

Now enjoy the output:

$ curl http://localhost:8080/hello 

> Hello World

Conclusion:-

Spring Boot and Micronaut both have their pros and cons. As per my understanding, if you are developing a new greenfield application, start with Micronaut, but don't rewrite an existing Spring Boot application in Micronaut unless you are facing serious performance issues. If you are migrating from a monolith to cloud-native microservices, then Micronaut is a good option. Please let us know your thoughts on this.

Reference link:

This is the performance comparison between spring boot and micronaut.

https://docs.micronaut.io/latest/guide/index.html