Fast Inference: TFLite GPU Delegate!!

Running inference on edge devices, especially mobile devices, is very demanding. When you have a really big machine learning model, performing inference with limited resources is a crucial task.

Many edge devices, especially mobile devices, have hardware accelerators such as a GPU. A TensorFlow Lite delegate is useful to optimize our trained model and leverage the benefits of hardware acceleration.

What is a TensorFlow Lite Delegate?

A delegate's job, in general, is to take over or transfer your work to someone else. TensorFlow Lite supports several hardware accelerators.

A TensorFlow Lite delegate is a way to delegate part or all of graph execution to another executor.

Why should you use delegates?

Running inference on compute-heavy deep learning models on edge devices is resource-demanding due to mobile devices' limited processing, memory, and power. Instead of relying on the device CPU, some devices have hardware accelerators, such as a GPU or DSP (Digital Signal Processor), that allow for better performance and higher energy efficiency.

How does a TFLite delegate work?


Let us consider the graph on the left side. It has an input node where we receive input for inference. The input goes through a convolution operation and then a mean operation, and the outputs of these two operations are used to compute SquaredDifference.

Let us assume we have a hardware accelerator that can perform the conv2d and mean operations very fast and efficiently; the above graph will then look like this:

In this case, we will delegate these two operations, conv2d and mean, to a specialized hardware accelerator using a TFLite delegate.

The TFLite GPU delegate will offload supported operations to the GPU, if one is available.

TFLite allows us to provide delegates for specific operations, in which case the graph is split into multiple subgraphs, where each subgraph is handled by a delegate. Every subgraph that is handled by a delegate is replaced with a node that evaluates the subgraph when it is invoked. Depending on the model, the final graph can end up with a single node, meaning the whole graph was delegated, or with many nodes handling the subgraphs. In general, you don't want multiple subgraphs handled by the delegate, since each switch between the delegate and the main graph adds overhead for passing results from the subgraph to the main graph.
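The partitioning described above can be sketched in a few lines of plain Java. This is a toy illustration, not the TFLite implementation: consecutive ops supported by the delegate are grouped into a delegated subgraph, and everything else stays on the CPU.

```java
import java.util.*;

public class DelegatePartition {
    // Toy model: a linear list of ops; the "delegate" claims the ops it supports.
    public static List<List<String>> partition(List<String> ops, Set<String> supported) {
        List<List<String>> subgraphs = new ArrayList<>();
        List<String> current = new ArrayList<>();
        Boolean onDelegate = null;
        for (String op : ops) {
            boolean delegated = supported.contains(op);
            // Start a new subgraph whenever execution switches between delegate and CPU
            if (onDelegate == null || delegated != onDelegate) {
                current = new ArrayList<>();
                subgraphs.add(current);
                onDelegate = delegated;
            }
            current.add(op);
        }
        return subgraphs;
    }

    public static void main(String[] args) {
        List<String> graph = List.of("conv2d", "mean", "squared_difference");
        Set<String> gpuOps = Set.of("conv2d", "mean");
        // conv2d and mean form one delegated subgraph; squared_difference stays on CPU
        System.out.println(partition(graph, gpuOps));
        // prints [[conv2d, mean], [squared_difference]]
    }
}
```

With the example graph, one switch back to the CPU is needed for the final op, which is exactly the hand-off overhead the paragraph above warns about.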

Sharing memory between the delegate and the CPU is not always safe, which is part of that overhead.

How to add a delegate?

  1. Define a kernel node that is responsible for evaluating the delegate subgraph.
  2. Create an instance of TfLiteDelegate, which will register the kernel and claim the nodes that the delegate can execute.


TensorFlow provides a demo app for Android:

In your application, add the AAR as above, import the org.tensorflow.lite.gpu.GpuDelegate module, and use the addDelegate function to register the GPU delegate with the interpreter:

import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;

// Initialize interpreter with GPU delegate
GpuDelegate delegate = new GpuDelegate();
Interpreter.Options options = (new Interpreter.Options()).addDelegate(delegate);
Interpreter interpreter = new Interpreter(model, options);

// Run inference
while (true) {
  writeToInput(input);
  interpreter.run(input, output);
  readFromOutput(output);
}

// Clean up
delegate.close();

Include the GPU delegate header and call the Interpreter::ModifyGraphWithDelegate function to register the GPU delegate to the interpreter:

#import "tensorflow/lite/delegates/gpu/metal_delegate.h"

// Initialize interpreter with GPU delegate
std::unique_ptr<Interpreter> interpreter;
InterpreterBuilder(*model, resolver)(&interpreter);
auto* delegate = NewGpuDelegate(nullptr);  // default config
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) return false;

// Run inference
while (true) {
  WriteToInputTensor(interpreter->typed_input_tensor<float>(0));
  if (interpreter->Invoke() != kTfLiteOk) return false;
  ReadFromOutputTensor(interpreter->typed_output_tensor<float>(0));
}

// Clean up
interpreter = nullptr;
DeleteGpuDelegate(delegate);


Some operations that are trivial on the CPU may have a high cost for the GPU.


Tensorflow Lite- machine learning at the edge!!

TensorFlow created a buzz in the AI and deep learning world, and it is the most popular framework in the deep learning community.


As we know, training deep learning models needs compute power, and this is the age of computation. We are now moving toward edge computing alongside cloud computing. Edge computing is the need of today's world because of innovation in the IoT domain, and because compliance and data-protection laws are forcing companies to do computation on the edge side. The old pattern of running the model in the cloud and sending the result back to the client device is now legacy.

As TensorFlow is the most popular deep learning framework, it comes with a lightweight version for edge computation. Nowadays mobile devices have good processing power, but other edge devices have far less.

Deploy deep learning models in less than 100KB.

The official definition of Tensorflow Lite:

“TensorFlow Lite is an open-source deep learning framework for on-device inference.”

Deploy machine learning models on mobile and IoT devices.

TensorFlow Lite is a set of tools that helps developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and a small binary size.

TensorFlow Lite brings machine learning to edge devices.

Edge computing means computing locally.

Deep Dive:-

This diagram illustrates the standard flow for deploying the model using TensorFlow Lite.

Deploying a model using TensorFlow Lite on edge devices

TensorFlow Lite is not a separate deep learning framework; it provides a set of tools that help developers run TensorFlow models on mobile, embedded, and IoT devices.


  1. Choose a model or develop your own.
  2. Convert the model.
  3. Deploy the model.
  4. Run inference with the model.
  5. Optimize the model and repeat the above steps.

TensorFlow Lite consists of two main components:

  1. Converter:- The TensorFlow Lite Converter converts a TensorFlow model into the TensorFlow Lite format.
  2. Interpreter:- It supports a set of core operators that are optimized for on-device applications and has a small binary size. It is used to run inference with the model.

Why Edge Computing?

Edge computing is best used alongside cloud computing. Cloud computing has become hugely popular, but there are certain requirements where edge computation beats it. Why is edge computation important, and what advantages do you derive from it?

  1. Privacy:- No data needs to leave the device; everything stays local.
  2. Latency:- There's no back-and-forth request to a server.
  3. Connectivity:- No internet connection is required.
  4. Power Consumption:- Connecting to a network requires power.

TensorFlow Lite is the one-stop solution to convert your deep learning model, deploy it efficiently, and run inference. TensorFlow Lite supports both mobile devices and microcontrollers.

FireStore now supports IN Queries & array-contains!!

Firestore with new in and array-contains-any queries.

This is good news for Cloud Firestore developers; a pain point they faced was being unable to use the 'in' operator.


Cloud Firestore is a NoSQL database built for global apps.

Cloud Firestore is a NoSQL document database that lets you easily store, sync, and query data for your mobile and web apps — at a global scale. Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud Platform.

The importance of 'in' is like a pinch of salt in day-to-day life.

In Query:

With the in query, you can query a specific field for multiple values (up to 10) in a single query. You do this by passing a list containing all the values you want to search for, and Cloud Firestore will match any document whose field equals one of those values.

in queries are the best way to run simple OR queries in Cloud Firestore. For instance, if the database for your e-commerce app had a customer_orders collection and you wanted to find which orders had a "Ready to ship", "Out for delivery", or "Completed" status, this is now something you can do with a single query, like so:

customerOrdersRef.where('status', 'in',
  ['Ready to ship', 'Out for delivery', 'Completed']);

one more example:-

citiesRef.where('country', 'in', ['India', 'Japan','CostaRica']);
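Conceptually, an in query keeps any document whose field equals one of the supplied values. Here is a plain-Java sketch of those semantics, using hypothetical order data rather than the Firestore SDK:

```java
import java.util.*;
import java.util.stream.*;

public class InQueryDemo {
    // 'in' semantics: keep documents whose field equals one of the supplied values
    public static List<String> whereIn(Map<String, String> docs, Collection<String> values) {
        return docs.entrySet().stream()
            .filter(e -> values.contains(e.getValue()))
            .map(Map.Entry::getKey)
            .sorted()
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical customer_orders documents: order id -> status field
        Map<String, String> orders = Map.of(
            "order1", "Ready to ship",
            "order2", "Pending",
            "order3", "Completed");
        System.out.println(whereIn(orders,
            List.of("Ready to ship", "Out for delivery", "Completed")));
        // prints [order1, order3]
    }
}
```

The server does the equivalent of this filter for you, which is what makes in a handy stand-in for a simple OR query.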

array-contains-any query:

Firestore launched another feature similar to the in query, the array-contains-any query. This feature allows you to perform array-contains queries against multiple values at the same time.

If your e-commerce site has plenty of products, each with an array of the categories it belongs to, and you want to fire a query to fetch products with a category such as "Appliances" or "Electronics", you can use array-contains-any.


one more example: 

citiesRef.where('regions', 'array-contains-any',
['west_coast', 'east_coast']);
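array-contains-any matches a document when its array field shares at least one element with the supplied values. A plain-Java sketch of that behavior, with hypothetical product data (not the Firestore SDK):

```java
import java.util.*;
import java.util.stream.*;

public class ArrayContainsAnyDemo {
    // array-contains-any semantics: keep documents whose array field
    // shares at least one element with the supplied values
    public static List<String> whereArrayContainsAny(
            Map<String, List<String>> docs, Collection<String> values) {
        return docs.entrySet().stream()
            .filter(e -> e.getValue().stream().anyMatch(values::contains))
            .map(Map.Entry::getKey)
            .sorted()
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical products: id -> categories array
        Map<String, List<String>> products = Map.of(
            "tv", List.of("Electronics", "Appliances"),
            "sofa", List.of("Furniture"),
            "fridge", List.of("Appliances"));
        System.out.println(whereArrayContainsAny(products,
            List.of("Appliances", "Electronics")));
        // prints [fridge, tv]
    }
}
```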

Note:- These queries are also supported in the Firebase console, which gives you the ability to try them out on your dataset before you start modifying your client code.


  • As we mentioned earlier, you’re currently limited to a maximum of 10 different values in your queries.
  • You can have only one of these types of operations in a single query. You can combine these with most other query operations, however.


JUnit5: Parameterized Tests

As we studied in JUnit 5 part 1 and part 2, JUnit 5 is very impressive in its extension model and architectural style, as are the JUnit 5 assumptions. Another great aspect of JUnit 5 is its compatibility with lambda expressions. In this section let us start looking at JUnit 5 parameterized tests.

The term parameter is often used to refer to the variable as found in the function definition, while argument refers to the actual input passed.

Maven Dependency:
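The examples in this section need the junit-jupiter-params module. A typical Maven entry, using the 5.3.1 version that appears later in this series (adjust to your version), would be:

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-params</artifactId>
    <version>5.3.1</version>
    <scope>test</scope>
</dependency>
```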


Gradle Dependency:
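The equivalent Gradle entry for the junit-jupiter-params module, again assuming the 5.3.1 version used later in this series, would be:

```groovy
testImplementation 'org.junit.jupiter:junit-jupiter-params:5.3.1'
```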


The basic difference is in how you annotate the method: a parameterized test is declared with @ParameterizedTest instead of @Test. There are scenarios where we want to pass values dynamically as method arguments and unit test them; for this type of scenario, parameterized tests are useful.

The code below looks incomplete. Where will the value of word come from? How would JUnit know which arguments the parameter word should take? Indeed, the Jupiter engine does not execute the test and instead throws a PreconditionViolationException.

@ParameterizedTest
void parameterizedTest(String word) {
}

// Configuration error: You must provide at least
// one argument for this @ParameterizedTest

Let us start correcting the above exception.

@ParameterizedTest
@ValueSource(strings = {"JUnit5 ParamTest", "Welcome"})
void withValueSource(String word) {
}

Now the above code will execute successfully. This is just a simple use case, but in a real-life project you need more tools; for that you should know the @ValueSource annotation in detail.


The @ValueSource annotation provides a source of arguments to the parameterized test method. The source can be a single value, an array of values, a null source, a CSV file, etc. As we have seen in the example above, with @ValueSource we can pass an array of literal values to the test method.

public class Strings {
    public static boolean isEmptyString(String str) {
        return str == null || str.trim().isEmpty();
    }
}

// A test case for the above method could be:
@ParameterizedTest
@ValueSource(strings = {"", "  "})
void isEmptyStringReturnTrueForNullOrBlankStrings(String str) {
    assertTrue(Strings.isEmptyString(str));
}
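To see which inputs the method actually treats as empty, here is the same logic exercised in a plain main method (StringsDemo is just a stand-alone copy for illustration, not part of the test suite):

```java
public class StringsDemo {
    public static boolean isEmptyString(String str) {
        return str == null || str.trim().isEmpty();
    }

    public static void main(String[] args) {
        // The kinds of inputs the value sources in this section supply
        System.out.println(isEmptyString(""));          // prints true
        System.out.println(isEmptyString("  "));        // prints true
        System.out.println(isEmptyString(null));        // prints true
        System.out.println(isEmptyString("Non Empty")); // prints false
    }
}
```

Note that null is the one interesting case @ValueSource cannot supply, which is where @NullSource below comes in.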

Limitations of @ValueSource:

1. It only supports the following data types:

  • short (with the shorts attribute)
  • byte (with the bytes attribute)
  • int (with the ints attribute)
  • long (with the longs attribute)
  • float (with the floats attribute)
  • double (with the doubles attribute)
  • char (with the chars attribute)
  • java.lang.String (with the strings attribute)
  • java.lang.Class (with the classes attribute)

2. We can pass only one argument to the test method each time.

3. We cannot pass null as an argument to the test method.

@NullSource and @EmptySource:

We can pass a single null value to a parameterized test method using @NullSource; it cannot be used for primitive data types.

@EmptySource passes a single empty argument; you can use it for collection types and arrays too.

In order to pass both null and empty values, we can use the composed @NullAndEmptySource annotation:

@ParameterizedTest
@NullAndEmptySource
@ValueSource(strings = {" ", "\t", "\n"})
void isEmptyStringReturnTrueForAllTypesOfBlankStrings(String input) {
    assertTrue(Strings.isEmptyString(input));
}


The name speaks for itself: if we want to test different values from an enumeration, we can use @EnumSource.

@ParameterizedTest
@EnumSource(WeekDay.class)
void getValueForADay_IsAlwaysBetweenOneAndSeven(WeekDay day) {
    int dayNumber = day.getValue();
    assertTrue(dayNumber >= 1 && dayNumber <= 7);
}

We can filter out a few days using the names attribute of @EnumSource. The annotation also has an option to select the enum constant mode; you can include constants or exclude them using EnumSource.Mode.EXCLUDE.

We can pass both string literals and a regular expression to the names attribute.
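Here is a plain-Java sketch of which constants each filter would supply. WeekDay here is a stand-in enum, and this mimics, rather than uses, the @EnumSource machinery:

```java
import java.util.*;
import java.util.stream.*;

public class EnumFilterDemo {
    // Stand-in enum; the article's WeekDay enum is assumed to look like this
    public enum WeekDay { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, SUNDAY }

    // Constants that @EnumSource(names = ..., mode = EXCLUDE) would supply
    public static List<WeekDay> excludeByName(Set<String> names) {
        return Arrays.stream(WeekDay.values())
            .filter(d -> !names.contains(d.name()))
            .collect(Collectors.toList());
    }

    // Constants a regular expression in names would keep
    public static List<WeekDay> matchByRegex(String regex) {
        return Arrays.stream(WeekDay.values())
            .filter(d -> d.name().matches(regex))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(excludeByName(Set.of("SATURDAY", "SUNDAY")));
        // prints [MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY]
        System.out.println(matchByRegex(".*DAY").size()); // prints 7
    }
}
```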



We need argument sources capable of passing multiple arguments. As we know, @ValueSource and @EnumSource allow only one argument each time. In a real-life project we want to read raw input values, manipulate them, and unit test them; for that purpose there is @CsvSource.

@ParameterizedTest
@CsvSource(value = {"juniT:junit", "MaN:man", "Java:java"}, delimiter = ':')
void toLowerCaseValue(String input, String expected) {
    String actualValue = input.toLowerCase();
    assertEquals(expected, actualValue);
}

In the above example we have key-value pairs with a colon as the delimiter. We can also pass a CSV file as a resource:

// CSV file
@ParameterizedTest
@CsvFileSource(resources = "/testdata.csv", numLinesToSkip = 1)
void toUpperCaseValueCSVFile(String input, String expected) {
    String actualValue = input.toUpperCase();
    assertEquals(expected, actualValue);
}

The resources attribute points to CSV files on the classpath; we can pass multiple CSV files too. Let us take a few more examples.

@ParameterizedTest
@CsvSource({
    "2019-09-21, 2018-09-21",
    "null, 2018-08-15",
    "2017-04-01, null"
})
void shouldCreateValidDateRange(LocalDate startDate, LocalDate endDate) {
    new DateRange(startDate, endDate);
}

@ParameterizedTest
@CsvSource({
    "2019-09-21, 2017-09-21",
    "null, null"
})
void shouldNotCreateInvalidDateRange(LocalDate startDate, LocalDate endDate) {
    assertThrows(IllegalArgumentException.class, () -> new DateRange(startDate, endDate));
}

When you execute the above program, you will end up getting an exception:

org.junit.jupiter.api.extension.ParameterResolutionException: Error converting parameter at index 0: Failed to convert String “null” to type java.time.LocalDate

The null value isn't accepted by @ValueSource or @CsvSource.

Method Source:

@ValueSource and @EnumSource are pretty simple and share one limitation: they don't support complex types. @MethodSource allows providing a complex argument source. The @MethodSource annotation takes the name of a method as an argument; the name needs to match an existing method that returns a Stream of arguments.

@ParameterizedTest
@MethodSource("wordsWithLength")
void withMethodSource(String word, int length) { }

private static Stream<Arguments> wordsWithLength() {
    return Stream.of(
        Arguments.of("JavaTesting", 11),
        Arguments.of("JUnit 5", 7));
}

When we don't provide a name to @MethodSource, JUnit will search for a source method with the same name as the parameterized test method.

@ParameterizedTest
@MethodSource
void wordsWithLength(String word, int length) { }

Custom Argument Provider:

So far we have covered the built-in argument providers, but in a few scenarios these may not work for you; JUnit then lets you supply a custom argument provider. You can create your own argument source by implementing the ArgumentsProvider interface.

public interface ArgumentsProvider {
    Stream<? extends Arguments> provideArguments(
        ExtensionContext context) throws Exception;
}


For this, let us just test with a custom empty-string provider.

class EmptyStringsArgumentProvider implements ArgumentsProvider {

    @Override
    public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
        return Stream.of(
            Arguments.of("   "),
            Arguments.of((String) null));
    }
}

We can use this custom argument provider via the @ArgumentsSource annotation.

@ParameterizedTest
@ArgumentsSource(EmptyStringsArgumentProvider.class)
void isEmptyStringsArgProvider(String input) {
}


As part of this article, we have discussed parameterized test cases, the built-in argument providers, and custom argument providers built with the ArgumentsProvider interface and @ArgumentsSource.

There are different source providers, from primitives to CSV files and method sources. That's it for now.

Junit5 Assumptions

Assumptions are used to run tests only if certain conditions are met. This is typically used for external conditions that are required for the test to execute properly, but which are not directly related to whatever is being unit tested.

If the assumeTrue() condition is true, the test runs; otherwise the test is aborted.

If the assumeFalse() condition is false, the test runs; otherwise the test is aborted.

The assumingThat() method is much more flexible; it allows part of the code within a test to run conditionally.

When an assumption fails, a TestAbortedException is thrown and the test execution is aborted.

@Test
void trueAssumption() {
    assumeTrue(6 > 2);
    assertEquals(6 + 2, 8);
}

@Test
void falseAssumption() {
    assumeFalse(4 < 1);
    assertEquals(4 + 2, 6);
}

@Test
void assumptionThat() {
    String str = "a simple string";
    assumingThat(
        str.equals("a simple string"),
        () -> assertEquals(3 + 2, 1)
    );
}

Junit5 tutorial: Part2

Before exploring this part, please read part 1 first.

JUnit 5 for beginners: in this tutorial let us get our hands dirty and gain practical experience. Whether you use Maven or Gradle as your build tool, you can add the dependency as shown below.

Maven dependency for Junit 5.0:

Add below dependency in pom.xml.
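The original snippet is missing here; mirroring the Gradle coordinates shown later in this article, the equivalent pom.xml entries would be:

```xml
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-api</artifactId>
    <version>5.3.1</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.junit.jupiter</groupId>
    <artifactId>junit-jupiter-engine</artifactId>
    <version>5.3.1</version>
    <scope>test</scope>
</dependency>
```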


Junit 5.0 with Gradle:

Add the below dependency in the build.gradle file. We start by supplying the unit test platform to the build tool; here we specify JUnit as the test platform.

test {
    useJUnitPlatform()
}

After the above step, we need to provide the JUnit 5 dependencies; here is the difference between JUnit 4 and JUnit 5. As discussed in the previous article, JUnit 5 is modular, so we have three different modules, each with a different purpose.

testImplementation 'org.junit.jupiter:junit-jupiter-api:5.3.1'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.3.1'

In JUnit 5, though, the API is separated from the runtime, meaning two dependencies have to be provided, via testImplementation and testRuntimeOnly respectively.

The API lives in junit-jupiter-api. The runtime is junit-jupiter-engine for JUnit 5, and junit-vintage-engine for JUnit 3 or 4.

1. JUnit Jupiter:

In the first article, we explored the difference between JUnit 4 and JUnit 5. There are new annotations introduced as part of this module. The JUnit 5 annotations that are new in comparison to JUnit 4 are:

@Tag — Mark tags to test method or test classes for filtering tests.

@ExtendWith — Register custom extensions.

@Nested — Used to create nested test classes.

@TestFactory — Denotes that a method is a test factory for dynamic tests.

@BeforeEach — The method annotated with this annotation will be run before each test method in the test class. (Similar to Junit4 @Before )

@AfterEach — The method annotated with this annotation will be executed after each test method. (Similar to Junit4 @After )

@BeforeAll — The method annotated with this annotation will be executed before all test methods in the current class. (Similar to Junit4 @BeforeClass )

@AfterAll — The method annotated with this annotation will be executed after all test methods in the current class. (Similar to Junit4 @AfterClass )

@Disabled — This is used to disable a test class or method (similar to JUnit 4 @Ignore).

@DisplayName — This annotation defines a custom display name for a test class or a test method.

2. JUnit Vintage:

This module introduces no new annotations; its purpose is to support running JUnit 3 and JUnit 4 based tests on the JUnit 5 platform.

Let us deep dive and do some coding:

@DisplayName and @Disabled:

As you can see from the code snippet below, with @DisplayName I have given two flavors of one method. This way you can provide your own custom display name to identify tests easily.

@Test
@DisplayName("Happy Scenario")
void testSingleSuccessTest() {
    System.out.println("Happy Scenario");
}

@Test
@DisplayName("Failure scenario")
void testFailScenario() {
    System.out.println("Failure scenario");
}

To disable test cases whose implementation is not yet complete, or to skip them for some other reason:

@Test
@Disabled("Under construction")
void testSomething() {
}

@BeforeAll and @BeforeEach :

@BeforeAll is like a setup method: it gets invoked once before all test methods in the test class, while @BeforeEach gets invoked before each test method in the class.

The method annotated with @BeforeAll must be static, and it runs once before any test method is run.

@BeforeAll
static void setup() {
    System.out.println("@BeforeAll: executes once before all test methods in this class");
    System.out.println("This is like a setup for the test methods");
}

@BeforeEach
void init() {
    System.out.println("@BeforeEach: executes before each test method in this class");
    System.out.println("Initialisation before each test method");
}

@AfterEach and @AfterAll:

@AfterEach gets invoked after each test method in the class, and @AfterAll gets invoked after all test methods have run. @AfterAll is like a finalization task.

The method annotated with @AfterAll must be static, and it runs once after all test methods have been run.

@AfterEach
void tearDown() {
    System.out.println("@AfterEach: executed after each test method.");
}

@AfterAll
static void finish() {
    System.out.println("@AfterAll: executed after all test methods.");
}

Assertions and Assumptions:

Assertions and assumptions are the basis of unit testing. JUnit 5 takes full advantage of Java 8 features, such as lambdas, to make assertions simple and effective.


JUnit 5 assertions are part of the org.junit.jupiter.api.Assertions API and have improved significantly; since Java 8 is the base of JUnit 5, you can leverage all of its features, primarily lambda expressions. Assertions help in validating the expected output against the actual output of a test case.

@Test
void testLambdaExpression() {
    assertTrue(Stream.of(4, 5, 9)
        .mapToInt(i -> i)
        .sum() > 17, () -> "Sum should be greater than 17");
}

Because the failure message is supplied as a lambda expression, it is evaluated lazily, only when the assertion actually fails.

All JUnit Jupiter assertions are static methods.

@Test
void testCase() {
    Assertions.assertNotEquals(3, Calculator.add(2, 2));

    Assertions.assertNotEquals(4, Calculator.add(2, 2), "Calculator.add(2, 2) test failed");

    Supplier<String> messageSupplier = () -> "Calculator.add(2, 2) test failed";
    Assertions.assertNotEquals(4, Calculator.add(2, 2), messageSupplier);
}

With assertAll(), all assertions in the group are executed, and any failed assertions within the group are reported together with a MultipleFailuresError.
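A minimal sketch of what assertAll() does under the hood (this is an illustration, not JUnit's real implementation): run every assertion, collect all failures, and report them together at the end.

```java
import java.util.*;

public class AssertAllDemo {
    // Simplified assertAll(): every assertion runs, even after one fails,
    // and all failures are reported together in a single error.
    public static void assertAll(Runnable... assertions) {
        List<Throwable> failures = new ArrayList<>();
        for (Runnable assertion : assertions) {
            try {
                assertion.run();
            } catch (AssertionError e) {
                failures.add(e);
            }
        }
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " assertion(s) failed: " + failures);
        }
    }

    public static void main(String[] args) {
        try {
            assertAll(
                () -> { if (2 + 2 != 4) throw new AssertionError("math is broken"); },
                () -> { throw new AssertionError("first failure"); },
                () -> { throw new AssertionError("second failure"); });
        } catch (AssertionError e) {
            // Reports that 2 assertions failed, with both messages
            System.out.println(e.getMessage());
        }
    }
}
```

Contrast this with sequential assertions, where the first failure stops the test and hides the rest.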

Junit5 tutorial: Part-1

Junit5 tutorials for beginner.

JUnit is Java's most popular unit testing library, and it has recently released a new version, 5. JUnit 5 is a combination of several modules from three different sub-projects.

JUnit 5 = JUnit Platform + JUnit Jupiter + JUnit Vintage

JUnit is an open-source unit testing framework for Java. It is useful for Java developers to write and run unit tests. Erich Gamma and Kent Beck initially developed it.

JUnit 5 is an evolution of JUnit 4 that further improves the testing experience. JUnit 5 is a major version release; it aims to adopt Java 8 styles of coding and to be more robust and flexible than previous releases.

JUnit 5 is a complete rewrite of JUnit 4.

Advantages:- Below are a few advancements in this version over the older one.

  1. In JUnit 4, the entire framework was contained in a single jar library; in JUnit 5, we get more granularity and can import only what is necessary.
  2. JUnit 5 makes good use of Java 8 styles of programming and features.
  3. JUnit 5 allows multiple runners to work simultaneously.
  4. The best thing about JUnit 5 is backward compatibility with JUnit 4.

Note:- JUnit 5 requires Java 8 (or higher) at runtime.

Moving from Junit 4 to Junit5:-

In this section let us explore the motivation behind the Junit5.

  1. JUnit 4 was developed over a decade ago; the context has changed a bit since then, and so has the programming style.
  2. JUnit 4 does not take advantage of JDK 8 and its new functional programming paradigm.
  3. JUnit 4 is not modular: a single jar is the dependency for everything.
  4. Test discovery and execution are tightly coupled in JUnit 4.
  5. Most importantly, nowadays developers want not only unit testing but also integration testing and system testing.

These are a few of the reasons JUnit 5 was rewritten from scratch using Java 8, introducing some new features.

You can still execute JUnit 3 and JUnit 4 unit test cases using the Vintage module in JUnit 5.


JUnit 5 has a modular architecture; the main three components are Platform, Jupiter, and Vintage.

Let us understand these three modules.

  1. Platform:- Platform, which serves as a foundation for launching testing frameworks on the JVM. It also provides an API to launch tests from either the console, IDEs, or build tools.
  2. Jupiter:- Jupiter is the combination of the new programming model and extension model for writing tests and extensions in JUnit 5. The name has been chosen from the 5th planet of our Solar System, which is also the largest one.
  3. Vintage:- Vintage, in general, means something from the past. Vintage provides a test engine for running JUnit 3 and JUnit 4 based tests on the platform, ensuring the necessary backward compatibility.

As part of this tutorial, we understood what JUnit 5 is and what is new in it. In the next tutorial, we will explore more and work through some examples.

Enabling proguard for Android.


How to de-obfuscate a stack trace is covered here.

Enabling ProGuard in Android Studio is a really easy task, but I encounter this question frequently on StackOverflow, and that motivated me to write this simple article.

How to enable ProGuard obfuscation in Android Studio?

The question is here on StackOverflow. I have already provided my answer there, but now I want to explore a little: what is ProGuard, and how do you enable it?

What is this proguard…..?

Ref – Wikipedia.

ProGuard is an open source command-line tool that shrinks, optimizes and obfuscates Java code. It is able to optimize bytecode as well as detect and remove unused instructions. ProGuard is open source software.

Proguard was developed by Eric P.F. Lafortune.

As per the definition, ProGuard helps you not only with obfuscation: it also optimizes, shrinks, and removes unused instructions.

I asked many developers why they want to apply ProGuard to their project, and I came across only one answer: "we use it only for security purposes." ProGuard not only provides security for your code but also offers many more features. Let us understand what ProGuard is and how to use it.

Features of Proguard.

  1. ProGuard also optimizes the bytecode, removes unused code instructions, and obfuscates the remaining classes, fields, and methods with short names.
  2. The obfuscated code makes your APK difficult to reverse engineer, which is especially valuable when your app uses security-sensitive features, such as licensing verification.

Here we will look at how to enable ProGuard in Android Studio rather than exploring how ProGuard works.

To enable ProGuard in Android Studio.

Below is a sample of how to enable the default ProGuard configuration in Android Studio.

  1. Go to the build.gradle file of the app.
  2. Enable ProGuard with minifyEnabled true.
  3. Enable shrinkResources true to reduce the APK size by shrinking resources.
  4. Use proguardFiles getDefaultProguardFile('proguard-android.txt') to enable the default configuration. If you want to use your own ProGuard file, add it as shown below.
buildTypes {
    release {
        debuggable false
        minifyEnabled true
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), ''
    }

    debug {
        debuggable true
        minifyEnabled true
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), ''
    }
}
The link contains ProGuard settings for Android and other library settings also:

This is a simple attempt to explain enabling ProGuard in Android Studio.

I have been following this path and would love to hear from you.

Amazon Corretto!! Another JDK



No-cost, multiplatform, production-ready distribution of OpenJDK.

Amazon Corretto is a no-cost distribution of OpenJDK. From the title and subtitle, you get the idea that this is an OpenJDK distribution from Amazon.

It's really great news for Java developers. Amazon released a blog post with the title below, which makes the case for this distribution:

Amazon Corretto, a No-Cost Distribution of OpenJDK with Long-Term Support.

“Amazon has a long and deep history with Java. I’m thrilled to see the work of our internal mission-critical Java team being made available to the rest of the world” — James Gosling

Amazon Corretto is a no-cost, multiplatform, production-ready OpenJDK distribution. It comes with long-term support, including performance enhancements and security fixes. Amazon uses Corretto internally in production on thousands of services, which means it is fully tested.

Just FYI: Corretto is an Italian word whose English meaning is "correct".

Corretto is certified as compatible with the Java SE standard and is used internally at Amazon for many production services. With Corretto, you can develop and run Java applications on operating systems such as Amazon Linux 2, Windows, and macOS. AWS released the free OpenJDK distribution Amazon Corretto with long-term support to ensure that cloud users get stable support and secure operation of their Java workloads. To ensure compatibility, Arun Gupta, AWS's chief open source technologist, said that every time Amazon Corretto is released, the development team runs the TCK (Technology Compatibility Kit) to ensure that the distribution is compatible with the Java SE platform.

@arungupta said that Amazon's internal production workloads also rely heavily on Amazon Corretto's JDK to meet high-performance and large-scale demand. Amazon Corretto supports multiple heterogeneous environments, including the cloud, local data centers, and developer workstations. In addition, to expand its reach, the platforms supported by Amazon Corretto at this stage include Amazon Linux 2, Windows, macOS, and Docker images. The GA version of Amazon Corretto is expected in the first quarter of 2019 and will also be compatible with Ubuntu and Red Hat Enterprise Linux.

The JDK is now available for free download, and AWS promises that free security updates for Amazon Corretto 8 will be available at least until June 2023, while free updates for Amazon Corretto 11 will continue until August 2024.

This is just a developer preview release; if you are a developer, go ahead and get your hands dirty with it.


  1. Backed by Amazon.
  2. Production ready
  3. Multiplatform support: Linux, Windows, macOS, and Docker containers too.
  4. No Cost.

You can find the source code for Corretto at

Official Documentation and download preview link

How to install on macOS:

You need macOS version 10.10 or later, and you must have administrator privileges to install and uninstall Amazon Corretto 8.

  1. Download amazon-corretto-jdk-8u192-macosx-x64.pkg.
  2. Double-click the downloaded file to start the installation wizard. Follow the steps in the wizard.
  3. Once the wizard completes, the Corretto 8 Preview will be installed in /Library/Java/JavaVirtualMachines/.

To get the complete installation path, run the following command in a terminal:

/usr/libexec/java_home --verbose

4. Set the JAVA_HOME variable.

export JAVA_HOME=/Library/Java/JavaVirtualMachines/amazon-corretto-8.jdk/Contents/Home
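To make the variable survive across terminal sessions, you can append the export to your shell profile. A minimal sketch, assuming the default install path from the wizard above (adjust it if your version differs):

```shell
# Append the JAVA_HOME export to ~/.bash_profile so new shells pick it up.
# The path below is the default Corretto 8 install location on macOS.
PROFILE="$HOME/.bash_profile"
echo 'export JAVA_HOME=/Library/Java/JavaVirtualMachines/amazon-corretto-8.jdk/Contents/Home' >> "$PROFILE"
echo 'export PATH="$JAVA_HOME/bin:$PATH"' >> "$PROFILE"
```

Open a new terminal (or `source ~/.bash_profile`) and `java -version` should report the Corretto build.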

And enjoy coding.

For Docker:

Build a Docker image with Amazon Corretto 8:

docker build -t amazon-corretto-jdk-8 .
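The build command assumes a Dockerfile in the current directory (the `.` build context). The Corretto download page provides one; as a hypothetical minimal alternative, you could base the image on Amazon Linux 2 and install the Corretto package from its repositories (package name `java-1.8.0-amazon-corretto-devel` is an assumption about your repo setup):

```shell
# Sketch: create a build context with a minimal Dockerfile that
# installs Corretto 8 (JDK, including javac) on Amazon Linux 2.
mkdir -p corretto-docker && cd corretto-docker
cat > Dockerfile <<'EOF'
FROM amazonlinux:2
# -devel provides the full JDK (javac), not just the JRE.
RUN yum install -y java-1.8.0-amazon-corretto-devel && yum clean all
EOF
```

Then run the `docker build` command shown above from inside this directory.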

Your Docker image is ready, named amazon-corretto-jdk-8. Run it using the command below:

docker run -it amazon-corretto-jdk-8

If you want to develop a Java application using Amazon Corretto as the parent image, follow the steps below.

Let us create a Hello World Java app with Amazon Corretto.

  1. Create a Dockerfile with the following content.

FROM amazon-corretto-jdk-8
RUN echo $' \
public class Hello { \
    public static void main(String[] args) { \
        System.out.println("Welcome to Amazon Corretto 8!"); \
    } \
}' > Hello.java
RUN javac Hello.java
CMD ["java", "Hello"]

2. Build the image.

docker build -t hello-app .

3. Run the image

docker run hello-app

4. Output

Welcome to Amazon Corretto 8!
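If you want to sanity-check the program outside Docker first, here is the same class written out as a normal source file, a minimal sketch with the greeting pulled into a helper method (the helper is my addition, not part of the Dockerfile version):

```java
// Hello.java — the same program the Dockerfile above writes inline.
public class Hello {
    // Helper so the greeting text can be reused or checked separately.
    static String greeting() {
        return "Welcome to Amazon Corretto 8!";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

Compile with `javac Hello.java` and run with `java Hello`.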

If you enjoyed this article, please don’t forget to Clap.

For more stories.

Let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.

Proguard for Android Kotlin

Something interesting for #AndroidDev

If you are new to Proguard, read these two articles first; they will help you enable Proguard for any Android project.

  1. Enabling Proguard for Android
  2. How to de-obfuscate a stack trace

Google has announced official Kotlin support for Android development. Kotlin is a really cool language; if you haven't tried it yet, try it now.

Here is a link to a sample project: AndroidWithKotlin.

Now in this post, we will look at how we can use Proguard in a Kotlin Android project.

Before we dive deep, I recommend you read my first article, Enabling Proguard for Android, then move on.

Enabling Proguard is the same process for any kind of Android project.

If you enable Proguard for an Android project and you are using some of the Kotlin extension libraries, you will come across some issues.

For example, jackson-module-kotlin provides deserialization of Kotlin classes and data classes, which is a really cool feature.

The fix is here:

This rule will help you keep your annotation classes, and it won't warn about reflection classes.

-dontwarn kotlin.**
-dontwarn kotlin.reflect.jvm.internal.**
-keep class kotlin.reflect.jvm.internal.** { *; }

If you have an issue with Kotlin Metadata, especially in the case of the Jackson Kotlin module:

-keep class kotlin.Metadata { *; }
-keepclassmembers public class com.mypackage.** {
    public synthetic <methods>;
}
-keepclassmembers class kotlin.Metadata {
    public <methods>;
}
For enums (the synthetic $WhenMappings classes generated for when expressions over enums):

-keepclassmembers class **$WhenMappings {
    <fields>;
}
The consolidated rule for a Kotlin Android project:

-keep class kotlin.** { *; }
-keep class kotlin.Metadata { *; }
-dontwarn kotlin.**
-keepclassmembers class **$WhenMappings {
    <fields>;
}
-keepclassmembers class kotlin.Metadata {
    public <methods>;
}
-assumenosideeffects class kotlin.jvm.internal.Intrinsics {
    static void checkParameterIsNotNull(java.lang.Object, java.lang.String);
}
The rules above will be sufficient for a Kotlin Android project. But if you are using other libraries in your project, you will have to add their specific configuration based on the errors and warnings you see.
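To see what the -assumenosideeffects rule actually removes: for every public function with a non-null parameter, the Kotlin compiler emits a call to Intrinsics.checkParameterIsNotNull at the start of the method. A rough Java sketch of the generated bytecode — the Greeter class and the stand-in helper are illustrative, not the real kotlin.jvm.internal.Intrinsics:

```java
// Rough sketch of what the Kotlin compiler generates for
// `fun greet(name: String): String`. With the -assumenosideeffects
// rule above, ProGuard's optimizer strips the null-check call.
public class Greeter {
    public static String greet(String name) {
        checkParameterIsNotNull(name, "name"); // removed in release builds
        return "Hello, " + name;
    }

    // Stand-in for kotlin.jvm.internal.Intrinsics.checkParameterIsNotNull.
    static void checkParameterIsNotNull(Object value, String paramName) {
        if (value == null) {
            throw new IllegalArgumentException(
                    "Parameter specified as non-null is null: " + paramName);
        }
    }
}
```

Removing these checks shrinks the release build slightly and skips redundant work, at the cost of losing the eager null diagnostics in obfuscated builds.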

For Moshi:

-keep class kotlin.reflect.jvm.internal.impl.builtins.BuiltInsLoaderImpl

This will keep only the no-arg constructor of the service defined in META-INF/services/kotlin.reflect.jvm.internal.impl.builtins.BuiltInsLoader.

Thanks to Jake Wharton for this config.

To learn more about Kotlin,


I have been following this path and would love to hear more from you.

For more stories.

Let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.