Spring Boot, MongoDB REST API using Kotlin.

In this article, our focus is to develop a simple REST API using Spring Boot and MongoDB.

The starting point is the Spring Initializr tool: https://start.spring.io/

In this example, I am using Gradle as the build tool and MongoDB as the database.

Download and import the project into your favorite editor; I prefer IntelliJ.

You can either install MongoDB locally or use a hosted MongoDB solution such as https://mlab.com/.

I am using mlab.com for this example.

Let us provide the MongoDB connection details in application.properties:

# for now I kept localhost
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=mongo-rest-api-kotlin-demo
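
If you are using a hosted instance such as mlab instead of localhost, the same details can be supplied as a single connection URI. This is a sketch with placeholder credentials, not real values:

spring.data.mongodb.uri=mongodb://<user>:<password>@<mlab-host>:<port>/mongo-rest-api-kotlin-demo

Note that spring.data.mongodb.uri replaces the separate host, port, and database properties; use one style or the other.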

Let us create an entity class named Patient.

import java.time.LocalDateTime
import org.bson.types.ObjectId
import org.springframework.data.annotation.Id
import org.springframework.data.mongodb.core.mapping.Document

@Document
data class Patient(
        @Id
        val id: ObjectId = ObjectId.get(),
        val name: String,
        val description: String,
        val createdDate: LocalDateTime = LocalDateTime.now(),
        val modifiedDate: LocalDateTime = LocalDateTime.now()
)

The @Document annotation, rather than @Entity, marks a class whose objects we'd like to persist to MongoDB.

@Id marks the field used for identification purposes.

Also, we have provided some default values for the created date and modified date.

Let us create a repository interface.

import org.bson.types.ObjectId
import org.springframework.data.mongodb.repository.MongoRepository

interface PatientRepository : MongoRepository<Patient, String> {
    fun findOneById(id: ObjectId): Patient
    override fun deleteAll()
}

The repository interface is ready to use; we don't have to write an implementation for it. This feature is provided by Spring Data MongoDB. The MongoRepository interface provides all the basic methods for CRUD operations, and query methods such as findOneById are derived automatically from their names. For now, we will only add findOneById.
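
The same name-based derivation would give us other finders for free. For illustration, a hypothetical findByName (not part of this example's final code) would query by the name field with no implementation written by us:

interface PatientRepository : MongoRepository<Patient, String> {
    fun findOneById(id: ObjectId): Patient
    fun findByName(name: String): List<Patient> // hypothetical derived query: matches documents by the name field
    override fun deleteAll()
}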

Now that the persistence layer is ready, let us write the REST controller that will serve our requests.

import org.bson.types.ObjectId
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

@RestController
@RequestMapping("/patients")
class PatientController(
        private val patientsRepository: PatientRepository
) {

    @GetMapping
    fun getAllPatients(): ResponseEntity<List<Patient>> {
        val patients = patientsRepository.findAll()
        return ResponseEntity.ok(patients)
    }

    @GetMapping("/{id}")
    fun getOnePatient(@PathVariable("id") id: String): ResponseEntity<Patient> {
        val patient = patientsRepository.findOneById(ObjectId(id))
        return ResponseEntity.ok(patient)
    }
}

Now our basic controller is ready; let us write some test cases.

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ExtendWith(SpringExtension::class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class PatientControllerIntTest @Autowired constructor(
        private val patientRepository: PatientRepository,
        private val restTemplate: TestRestTemplate
) {
    private val defaultPatientId = ObjectId.get()

    @LocalServerPort
    protected var port: Int = 0

    @BeforeEach
    fun setUp() {
        patientRepository.deleteAll()
    }


    private fun getRootUrl(): String = "http://localhost:$port/patients"

    private fun saveOnePatient() = patientRepository.save(Patient(defaultPatientId, "Name", "Description"))

    @Test
    fun `should return all patients`() {
        saveOnePatient()

        val response = restTemplate.getForEntity(
                getRootUrl(),
                List::class.java
        )

        assertEquals(200, response.statusCode.value())
        assertNotNull(response.body)
        assertEquals(1, response.body?.size)
    }

    @Test
    fun `should return single patient by id`() {
        saveOnePatient()

        val response = restTemplate.getForEntity(
                getRootUrl() + "/$defaultPatientId",
                Patient::class.java
        )

        assertEquals(200, response.statusCode.value())
        assertNotNull(response.body)
        assertEquals(defaultPatientId, response.body?.id)
    }
}

Here we use Spring Boot Test for integration testing; SpringBootTest.WebEnvironment.RANDOM_PORT starts the application on a random free port for each test run.

Note:

https://kotlinlang.org/docs/reference/coding-conventions.html#naming-rules

Please follow these naming conventions while writing test cases in Kotlin.

In the JVM world, similar conventions are well known in Groovy and Scala.

Always start with simple steps: we wrote the GET operations first, fetching all patient details.

Run the application and hit the http://localhost:8090/patients endpoint.

Let us create a POST request.

Create one simple request object that will help us create the entity in the Mongo world.

class PatientRequest(
        val name: String,
        val description: String
)

Here we pass the patient's name and a description of the treatment.

Now go to the REST Controller and handle a POST request.

@PostMapping
fun createPatient(@RequestBody request: PatientRequest): ResponseEntity<Patient> {
    val patient = patientsRepository.save(Patient(
            name = request.name,
            description = request.description
    ))
    return ResponseEntity(patient, HttpStatus.CREATED)
}
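
To exercise this endpoint, we can add a test in the same style as the earlier ones. This is a sketch that reuses the getRootUrl() helper from the test class; the request values are illustrative:

@Test
fun `should create new patient`() {
    val patientRequest = PatientRequest("Name", "Description")

    val response = restTemplate.postForEntity(
            getRootUrl(),
            patientRequest,
            Patient::class.java
    )

    assertEquals(201, response.statusCode.value())
    assertEquals(patientRequest.name, response.body?.name)
}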

Let us create a PUT method to handle amendments to a document.

@PutMapping("/{id}")
fun updatePatient(@RequestBody request: PatientRequest, @PathVariable("id") id: String): ResponseEntity<Patient> {
    val patient = patientsRepository.findOneById(ObjectId(id))
    val updatedPatient = patientsRepository.save(Patient(
            id = patient.id,
            name = request.name,
            description = request.description,
            createdDate = patient.createdDate,
            modifiedDate = LocalDateTime.now()
    ))
    return ResponseEntity.ok(updatedPatient)
}

Test method for the update operation:

private fun preparePatientRequest() = PatientRequest("Updated Name", "Updated Description") // helper used by the test; values are illustrative

@Test
fun `should update existing patient`() {
    saveOnePatient()
    val patientRequest = preparePatientRequest()

    val updateResponse = restTemplate.exchange(
            getRootUrl() + "/$defaultPatientId",
            HttpMethod.PUT,
            HttpEntity(patientRequest, HttpHeaders()),
            Patient::class.java
    )
    val updatedPatient = patientRepository.findOneById(defaultPatientId)

    assertEquals(200, updateResponse.statusCode.value())
    assertEquals(defaultPatientId, updatedPatient.id)
    assertEquals(patientRequest.description, updatedPatient.description)
    assertEquals(patientRequest.name, updatedPatient.name)
}

Now our update operation is ready.

Let us delete records using the DELETE operation.

Since the deleted document won't be included in the response, a 204 (No Content) status code is returned.

@DeleteMapping("/{id}")
fun deletePatient(@PathVariable("id") id: String): ResponseEntity<Unit> {
    patientsRepository.deleteById(id)
    return ResponseEntity.noContent().build()
}

The test method for the delete operation is straightforward:

@Test
fun `should delete existing patient`() {
    saveOnePatient()

    val delete = restTemplate.exchange(
            getRootUrl() + "/$defaultPatientId",
            HttpMethod.DELETE,
            HttpEntity(null, HttpHeaders()),
            ResponseEntity::class.java
    )

    assertEquals(204, delete.statusCode.value())
    assertThrows(EmptyResultDataAccessException::class.java) { patientRepository.findOneById(defaultPatientId) }
}

Now all our CRUD operations are ready; run the application.

That is it for now. The code is available on GitHub:

https://github.com/maheshwarLigade/springboot-mongodb.restapi/tree/master

Idempotent Kafka Consumer

In the last couple of articles, we have seen how Kafka guarantees message processing; as part of this, we saw idempotent Kafka producers. Idempotence is the property I appreciate maybe the most in data engineering. I know most of you have a good understanding of the word idempotent.

There are two questions people always ask in the data industry (data is the new oil):

How do we guarantee all messages are processed?

How do we avoid or handle duplicate messages?

In the last article, we saw how producers ensure "at least once" delivery semantics, and how an idempotent producer avoids delivering duplicate messages.

simple message data flow.

Let us jump into the consumer part. Kafka provides a consumer API to pull data from Kafka. When we consume or pull data from Kafka, we need to specify a consumer group: a group of consumers that coordinate to read data from a set of topic partitions. To save its progress in reading data from Kafka, a consumer needs to save the offset of the next message it will read in each topic partition it is assigned to. Consumers are free to store their offsets wherever they want, but by default, and for all Kafka Streams applications, they are stored back in Kafka itself in an internal topic called __consumer_offsets. To use this mechanism, consumers either enable automatic periodic commits of offsets back to Kafka by setting the configuration flag enable.auto.commit to true, or make an explicit call to commit the offsets.
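
As a minimal sketch of the explicit-commit variant, the following Kotlin consumer disables auto-commit and commits only after a batch has been processed; the broker address, group id, and topic name are placeholders:

import java.time.Duration
import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer

fun main() {
    val props = Properties().apply {
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
        put(ConsumerConfig.GROUP_ID_CONFIG, "demo-consumer-group")     // placeholder group
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer::class.java.name)
        put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")         // we commit explicitly
    }
    KafkaConsumer<String, String>(props).use { consumer ->
        consumer.subscribe(listOf("demo-topic")) // placeholder topic
        while (true) {
            val records = consumer.poll(Duration.ofMillis(500))
            records.forEach { record -> println("processing ${record.value()}") }
            consumer.commitSync() // commit only after the batch is processed: at-least-once
        }
    }
}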

Idempotency guarantees consistent data and no duplication.

Delivery Semantics of Kafka Consumer:

There are different semantics; let us go through them one by one.

  1. At least once: In these semantics, offsets are committed after the message is processed. If some exception occurs while processing, the message is not committed, so the consumer will be able to read the message again from Kafka. This can lead to duplicate message processing; make sure that processing a message twice won't impact your system.

2. At most once: In these semantics, offsets are committed as soon as the message batch is received. If the processing goes wrong, the message is lost (we are unable to read it again) because the offset has already been committed. For example:

  1. The producer published the message successfully.
  2. The consumer received the message batch and committed the offsets.
  3. While processing the messages an exception occurred, or the machine shut down, so we lost those messages: we are unable to read them again because the offsets were already committed.

3. No guarantee: In these semantics, there is no guarantee for a message, which means a given message may be processed once, multiple times, or not at all. A simple scenario where you end up with these semantics is a consumer with enable.auto.commit set to true (this is the default) that, for each batch of messages, asynchronously processes and saves the desired results into a database. The frequency of these commits is determined by the configuration parameter auto.commit.interval.ms.

4. Exactly once: With Kafka 0.11 (2017), Confluent introduced exactly-once semantics. This can be achieved for Kafka-to-Kafka workflows using the Kafka Streams API. These semantics ensure your message processing is idempotent.

By default, a consumer is at least once: when we don't configure anything regarding offset commits, the default is auto-commit of offsets.

Idempotent Consumer:

The Kafka Streams API will help us achieve idempotent Kafka consumers. Kafka Streams is a library for performing stream transformations on data from Kafka. The exactly-once semantics feature was added to Kafka Streams in the 0.11.0 Kafka release. Enabling exactly-once is a simple configuration change: set the streaming configuration parameter processing.guarantee to exactly_once (the default is at_least_once). A general workload for a Kafka Streams application is to read data from one or more partitions, perform some data transformations, update a state, then write some results to an output topic. When exactly-once semantics are enabled, Kafka Streams atomically updates consumer offsets, local state stores, state store changelog topics, and production to output topics altogether. If any one of these steps fails, all of the changes are rolled back.
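
As a configuration sketch (the application id and broker address are placeholders), enabling exactly-once in a Kafka Streams application looks like this:

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

// Minimal Kafka Streams configuration sketch with exactly-once enabled.
val streamsProps = Properties().apply {
    put(StreamsConfig.APPLICATION_ID_CONFIG, "patient-stream-demo")  // placeholder application id
    put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")    // placeholder broker
    put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE) // default is at_least_once
}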

There is another way to make the consumer idempotent if you are not using the Kafka Streams API.

Suppose you are pulling data from Kafka, doing some processing, and then persisting each packet into a database. You can guarantee data consistency by using a unique key, i.e. an "id", so that writing the same packet twice has no effect.

There are two strategies to generate a unique id:

  1. Kafka generic id:- You can take the help of Kafka to generate a unique id by concatenating simple strings, like below (see the persistence sketch after this list).

String id = record.topic() + "-" + record.partition() + "-" + record.offset();

2. Application-specific unique id logic:- This depends on your application and can change based on the domain and functionality you are dealing with.

e.g.

  1. If you are feeding Twitter stream data, use the tweet-specific id.
  2. For a financial domain application, use a unique transaction id.
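
Putting strategy 1 together with an idempotent write, a sketch might look like the following; ProcessedMessage and ProcessedMessageRepository are hypothetical stand-ins for your entity and persistence API:

import org.apache.kafka.clients.consumer.ConsumerRecord

data class ProcessedMessage(val id: String, val payload: String) // hypothetical entity

interface ProcessedMessageRepository { // hypothetical persistence API (e.g. a Spring Data repository)
    fun existsById(id: String): Boolean
    fun save(message: ProcessedMessage): ProcessedMessage
}

fun persistOnce(record: ConsumerRecord<String, String>, repository: ProcessedMessageRepository) {
    // Kafka-generic unique id: stable across redeliveries of the same record
    val id = "${record.topic()}-${record.partition()}-${record.offset()}"
    if (!repository.existsById(id)) { // skip records we have already processed
        repository.save(ProcessedMessage(id = id, payload = record.value()))
    }
}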

CI and CD with GitHub Actions!!

GitHub Actions make it easy to automate all your software workflows, now with world-class CI/CD.


GitHub Actions is a tool to run workflows on any GitHub event.

Today is the era of DevOps and continuous integration and continuous deployment. Every organization wants to be agile and to develop features, build, and deploy on a daily or hourly basis. To do this, every enterprise uses its own set of tools: watch a version control system such as Git, generate a build, execute unit test cases, then functional and integration test cases, then, based on a threshold, do some monkey testing in a simulated environment, deploy to a lower environment, and finally promote to production.

The above is the general flow for any enterprise software. Nowadays everyone wants to fail fast, which has led to different tools such as Jenkins, Chef, Git, SonarQube, and many more. You have to choose the tools based on the coding language and deployment target; if you are using Docker or containers, there are different tools again.

To make things simple yet powerful and efficient, GitHub came up with GitHub Actions. GitHub Actions features a powerful execution environment integrated into every step of your workflow. You can discover, create, and share actions to perform any job you'd like, and combine them to customize your workflow.

Build, test, and deploy your code right from GitHub. Make code reviews, branch management, and issue triaging work the way you want.


The best thing is that you can do all of this in your GitHub repository itself.

There are different sets of actions, such as assign reviewers, revert commit, merge, package, publish, etc. For more details, visit the GitHub Marketplace, which lists actions to improve your workflow: https://github.com/marketplace

Some important points:-

  1. GitHub Actions support Node.js, Python, Java, Ruby, PHP, Go, Rust, .NET, and more.
  2. Save time with matrix workflows that simultaneously test across multiple operating systems and versions of your runtime.
  3. Run directly on a VM or inside a container. Use your own VMs, in the cloud or on-prem, with self-hosted runners. Hosted runners for every major OS.
  4. It’s one click to copy a link that highlights a specific line number to share a CI/CD failure. You will get live logs.
  5. Built-in secret store.
  6. Multi-container testing.
  7. Community-powered workflows.
  8. Write and Reuse the workflows.
  9. Built-in github package registry.
  10. Simple, pay-as-you-go pricing.

Pricing:- 

Free for open-source projects.

https://github.com/features/actions

Actions allow you to easily test multiple versions of your project in parallel.

Getting started:-

Let us create one sample repository on GitHub. Go to that repository, and in the top section below your repository name you will see an "Actions" menu; click on it.

You will then see the Actions page, with a list of predefined workflow templates.

Choose the one suitable for your requirements and click the "Set up this workflow" button.

It will redirect you to the actual workflow page, where we define the workflow in YAML.

Define the workflow and, if required, add workflow tools from the marketplace, then commit the code. If you want to preview your flow, click the "Preview" button next to the file editor. Once you commit, the workflow appears in the "Actions" menu, where we can see its status and also define new workflows. Different workflows may be based on the environment, like dev, QA, UAT, or PROD.

Check the status and enjoy coding.

Sample yml:-

name: Node CI   # workflow name
on: [push]      # trigger the workflow on every push

jobs:
  test:
    name: Test on node ${{ matrix.node_version }} and ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        node_version: [8, 10, 12]
        os: [ubuntu-latest, windows-latest, macos-latest]

    steps:
    - uses: actions/checkout@v1

    - name: Use Node.js ${{ matrix.node_version }}
      uses: actions/setup-node@v1
      with:
        version: ${{ matrix.node_version }}

    - name: npm install, build and test
      run: |
        npm install
        npm run build --if-present
        npm test

Documentation:-

Automating your workflow with GitHub Actions (help.github.com): GitHub Actions features a powerful execution environment integrated into every step of your workflow.


Kubernetes: 5 Free Learning Resources.

If you don't know Docker, read here.

Kubernetes (K8s) is an orchestration platform to manage containers.

Kubernetes is a buzzword in the market because of the boom in containerization and microservices. We can have a microservices architecture, with its pros and cons, and plenty of containers, but the question is how to manage those containers, and the answer is K8s.

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

It was originally designed by Google, and is now maintained by the Cloud Native Computing Foundation. It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”.

This definition is from the official Kubernetes website.

In this article, we list five free resources that will help you learn K8s.

Kubernetes is container orchestration software.

  1. Learn K8s basics:- Learn Kubernetes Basics is the official documentation by the developers of Kubernetes. This tutorial helps you understand, step by step, the K8s cluster orchestration system. Each module contains some background information on major Kubernetes features and concepts and includes an interactive online tutorial. I think this is the source of truth to start learning K8s.
  2. Learning Path: Kubernetes:- This course takes us from Kubernetes basics to advanced networking and workloads. The tutorial series is by IBM and is a very good resource for a deep dive into K8s. If you're new to Kubernetes and container orchestration and want to begin learning about it, this learning path covers everything from basic prerequisites to the more advanced skills needed for containerization; it gives you a brief idea of everything from containers up to advanced K8s concepts. After completing this course, you will be able to understand the basics of containers, build containerized applications and deploy them onto Kubernetes, understand the advantages of a deployment that uses Helm with Kubernetes, deploy various microservices with Kubernetes, understand basic networking for applications running in Kubernetes, and much more.
  3. A Tutorial Introduction to Kubernetes:- A Tutorial Introduction to Kubernetes is provided by Ulaş Türkmen on his blog. In this tutorial series, you will learn how to use Kubernetes with Minikube, how to configure kubectl, how to understand nodes and namespaces, how to use the dashboard, how to deploy various container images to demonstrate Kubernetes features, how to run services, etc.
  4. Coursera K8s:- Architecting with Google Kubernetes Engine Specialization is the course name; it is designed and developed by Google. In this K8s course you will learn how to implement solutions using Google Kubernetes Engine (GKE), including building, scheduling, load balancing, and monitoring workloads, as well as providing for the discovery of services, managing role-based access control and security, and providing persistent storage to these applications.
  5. Fundamentals of Containers, Kubernetes, and Red Hat OpenShift:- This course provides an introduction to container and container orchestration technology using Docker, Kubernetes, and the Red Hat OpenShift Container Platform. You will learn how to containerize applications and services, test them using Docker, and deploy them on a Kubernetes cluster using Red Hat OpenShift. Additionally, you will build and deploy an application from source code using the Source-to-Image facility of Red Hat OpenShift. After completing this course, you will be able to create containerized services, manage containers and container images, create custom container images, and deploy containerized applications on Red Hat OpenShift.

Kubernetes — Explained Like You’re Five

Summary:- Modern applications are increasingly built using containers: microservices packaged with their dependencies and configurations. K8s is open-source orchestration software for deploying and managing those containers at scale. With K8s, you can build, deploy, deliver, and scale containerized apps faster and more smoothly.

This is just a list of the sources from which we can start our journey and get our hands dirty. There is a more complete list of concepts available on the Kubernetes website. I suggest you give them a quick look if you want to get a better grasp of who does what and what goes where.

That is it for now; please let me know if you know of more resources that are simple and effective.

Git workflow.

Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. Git is a SCM (source code management) in software development.

Git was created by Linus Torvalds in 2005 for development of the Linux kernel, with other kernel developers contributing to its initial development. Its maintainer since 2005 has been Junio Hamano.

Three main workflows of Git

When using Git, there are three main workflows that can be adopted:

  1. Centralized workflow
  2. Feature Branch Workflow
  3. Gitflow workflow

1. Centralized workflow:-

The most popular workflow among developers and the entry stage of every project.

The idea is quite simple. There is only one central master repository. Each developer clones the repository, works locally on the code, makes commits with changes, and pushes them to the central master repository for other developers to pull and use in their work.

2. Feature Branch workflow:-

In the feature branch workflow, users create one branch per feature. This is useful when you want to experiment. Once development of the feature is complete, the user merges the code into the final branch. Branches are independent "tracks" of developing a project. For each new feature, a new branch should be created, where the new feature is developed and tested. Once the feature is completed and ready to go live, the branch can be merged into the master branch; a typical command sequence is sketched below.
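
A command sequence for this workflow might look like the following sketch; the branch name is illustrative:

git checkout -b feature/patient-search   # create and switch to the feature branch
# ...develop and test the feature...
git add .
git commit -m "Add patient search"
git push -u origin feature/patient-search

# once the feature is reviewed, merge it into master
git checkout master
git merge feature/patient-search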

3. Gitflow workflow:-

In this type, a user takes advantage of Git branching. It is somewhat similar to the feature branch workflow, but there are some differences: the user creates branches based on the software development lifecycle. There are separate branches for features; once all the features are ready, those feature branches are merged into the development branch, and once everything is verified and tested, that branch is merged into a release or SIT branch. We can also create branches based on the product version.

The above is a simple example; it is not mandatory to follow a workflow as-is.

You can customize the workflow based on project requirements, team size, and the team's expertise with Git and its fine-grained structure. For my personal projects, I use the centralized workflow. For large enterprise projects, and in CI and CD implementation scenarios, people prefer the Gitflow workflow.

So which approach do you choose, and why? Let me know in the comments.


Microservice architecture for a legacy code base.

Refactoring legacy code is an art.

There are no prerequisites for understanding this article, but going through the article Manage Big Ball of Mud (below) could be advantageous.

As new features and new functionality are introduced, the complexity of an application can increase dramatically, making the code base harder to maintain and extend. The application becomes a Big Ball of Mud. Teams struggle to maintain such complex applications, and some suggest replacing the complete application with new technology, new hosting, or a new architecture pattern. Replacing the complete solution is really hard.

Microservices and serverless architecture have been big buzzwords in the industry. Starting greenfield development is a fairly easy task; what about a legacy one?

Strangulation

Michiel Rook’s blog Gradually convert Monolith into micro-service app

Martin Fowler describes the Strangler Application:

One of the natural wonders of this area are the huge strangler vines. They seed in the upper branches of a fig tree and gradually work their way down the tree until they root in the soil. Over many years they grow into fantastic and beautiful shapes, meanwhile strangling and killing the tree that was their host.

It is a long journey to convert your monolithic application into a microservice or nanoservice architecture, or any other architectural pattern that suits your application context.

Solution

In a 2004 article on his website, Martin Fowler defined the Strangler Application pattern as a way of handling the release of refactored code in a large web application. The fundamental strategy is EventInterception, which can be used to gradually move functionality to the strangler and to enable AssetCapture.

The Strangler Application is based on an analogy to a vine that strangles the tree it is wrapped around. The idea is that you use the structure of an application — the fact that large apps are built out of individual URIs that map functionally to different dimensions of a business domain — to divide the application into different functional domains, and replace those domains with a new microservices-based implementation one domain at a time. This creates two separate applications that live side by side in the same URI space. Over time, the newly refactored application "strangles" or replaces the original application until finally you can shut off the monolithic application. The Strangler Application steps are:

  1. Choose a particular piece of functionality that stands alone within the application.
  2. Modify or refactor it and rebuild it as a service.
  3. When deploying, use a proxy to route traffic, so both the legacy and the new code can serve users; or create a simple facade to intercept and filter the requests going to the backend legacy system (see the sketch after this list).
  4. Repeat the above steps until the application is fully migrated.
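
As a minimal sketch of step 3, a facade in the same Kotlin/Spring style as the earlier examples could route migrated resources to the new service and everything else to the legacy application; the base URLs and the migrated set are assumptions for illustration:

import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.client.RestTemplate

@RestController
class StranglerFacade {

    private val http = RestTemplate()
    private val migrated = setOf("patients") // resources already moved to the new service

    @GetMapping("/{resource}")
    fun route(@PathVariable resource: String): String {
        // route migrated resources to the new service, everything else to the legacy app
        val base = if (resource in migrated) "http://new-service" else "http://legacy-app"
        return http.getForObject("$base/$resource", String::class.java) ?: ""
    }
}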

The great thing about applying this pattern is that it creates incremental value in a much faster timeframe than if you tried a "big bang" migration, in which you update all the code of your application before you release any of the new functionality. It also gives you a gradual approach for adopting microservices: if you find that the approach doesn't work in your environment, you have a simple way to change direction.

The Strangler pattern addresses the following problems:

  1. Legacy code or big ball of mud.
  2. Complicated and complex architecture.
  3. Monolithic design.
  4. Fragmented business rules.
  5. Painful deployment process.

There are certain aspects specific to each application. If it is really old code, we need to first refactor it into some functional-level separation and then apply the Strangler pattern.

There are two sides: either complain about things and be part of the problem, or take it up, find a solution, and be part of the solution.

For more ref:-

https://docs.microsoft.com/en-us/azure/architecture/patterns/strangler

This is just a simple idea. Apply it and let me know.


Manage Big Ball of Mud.

Everyone wants to develop a project from scratch; no one wants to manage the big ball of mud. This article is for those people who want to

"Leave the campground cleaner than you found it."

What is a Big Ball of Mud? (Ref: Wikipedia)

A big ball of mud is a software system that lacks a perceivable architecture: haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, a spaghetti-code jungle.

Ball vs Blocks

Let us observe the picture to understand the difference between managing the big ball and managing the blocks. Carrying a big ball is tedious compared to carrying blocks. When a ball is small, we can easily carry and maintain it; when its size increases, it becomes very difficult to manage. If we try to arrange big balls into some structure, it is quite difficult, while if we try to arrange blocks and structure them, it is easy and maintainable.

How do these form?

A big ball of mud is an architectural disaster. Such systems have usually been developed over a long period of time, with different individuals working on various pieces, often by people with no formal training in software architecture or programming design patterns.

There are many reasons:

  1. No formal training of software architecture.
  2. No formal knowledge of design patterns.
  3. Financial or Time pressure.
  4. Throwaway code.
  5. Inherent Complexity.
  6. Change of requirements.
  7. Change of developers.
  8. Piecemeal Growth.

How to manage BBOM?

Sometimes the best solution is simply to rewrite the application, catering to the new requirements. But this is the worst-case scenario.

The clumsy solution is to stop new development and refactor the whole system step by step. To do this:

  1. Write down the test cases.
  2. Refactor code.
  3. Redesign & Re-architect the whole system.

To overcome the BBOM anti-pattern, you have to follow the steps below:

  1. Code review.
  2. Code refactoring over a period of time.
  3. Follow best practices.
  4. Use design patterns and architectural patterns.
  5. If you have time constraints while developing, at least design the code in a modular way (single responsibility principle), so you can easily rearrange it later.
  6. Use TDD (Test-Driven Development).

A big ball of mud isn't just the absence of architecture; it is its own architectural pattern, with merits and trade-offs.

I am not saying this will never happen; it happens many times, to many people, including me.

This article gives you a brief idea of how to manage and overcome this issue.

Where there is a will, there is a way.

I have followed this path and would love to hear from you folks:

How are you going to manage your big ball of mud?


Architecture for IoT applications.

Architecture that’s built to heal.

If you are not aware of software architecture, go through the tutorial Software Architecture (below). In this article, we go through the software architecture, not the hardware architecture and electronic device connectivity.

What is IoT?

Before going into what IoT is, let me tell one story that we all know: the story of the blind men and the elephant, which originated in the Indian subcontinent. It is a story of a group of blind men (or men in the dark) who touch an elephant to learn what it is like. Each one feels a different part, but only one part, such as the side or the tusk. They then compare notes and learn that they are in complete disagreement. We did the same thing with IoT.

The internet of things (IoT) is the internetworking of physical devices, vehicles (also referred to as “connected devices” and “smart devices”), buildings and other items — embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data.

— wikipedia

If you look over the definition, you might say there is nothing new: it is just a stack of technologies you are already familiar with. So someone says it is sensor programming, someone says embedded programming, big data, machine learning, map-reduce, etc. And then one seeing man comes along and says what it really is (this is the elephant): IoT.

The term "The Internet of Things" was coined by Kevin Ashton in a presentation to Procter & Gamble in 1999. Ashton is a co-founder of MIT's Auto-ID Lab, and he pioneered RFID use in supply-chain management.

Now that you are aware of what IoT and software architecture are, let us combine both.

How do we define an IoT application architecture?

In defining the architecture, I will not go into the electronics and hardware; I will keep to the software stack and how we need to arrange the software technologies as part of our dream IoT product.

These are just the blocks as I have arranged them. Let us discuss them briefly, from bottom to top.

  1. The bottom and core part of an IoT application is the sensors and electronic devices that connect to things and grab data from them.
  2. The sensor collects the data, but we need to convert it into an understandable format and connect those sensor devices using some protocol, which we configure here in layer two; we also filter the data, i.e. put some threshold on it for taking a smart decision.
  3. Network connectivity: connect your device with wireless connectivity or a wired internet connection. The connectivity changes based on context and domain.
  4. We can call this layer the security layer, application abstraction layer, or data abstraction layer; here we apply security to our product. This layer's position can change based on the domain and on how we want to apply abstraction in our application.
  5. At this stage we persist our data and logic, using the data for taking smart decisions or for reporting purposes. This is the important layer, where our actual product and business logic come into the picture.
  6. This is the presentation or decision-taking layer. Based on the requirements, we can display reports or apply machine learning or some custom logic, take a smart decision, and send a signal back to the sensors.
  7. This is where our God exists: the user, for whom we are designing this whole product. The user interacts with this layer. This is the UI layer.

This is a brief description of how we can arrange the blocks to design our IoT product.

IoT is simply the connectivity of applications, devices, and data.

In a more technical way, we can understand the same architecture as follows.

There are a number of IoT device vendors and service providers; every one has its own SDK and different protocols, and there are a number of ways to connect a device to a network.

To be more technical, I have designed a simple IoT architecture using the Java stack; this is the architecture we follow for our use case.

On the left side are our sensors, the core part of our IoT application.

Defining architecture is an art. There are a number of perspectives that need to be considered while designing an architecture; this is a simple and general perspective for designing an IoT application architecture. For more depth, see Internet of Things for Architects.

This is just an idea. Dig deeper, find more, and let me know as well.


Software Architecture: What, Why & How?

What is architecture?

Architecture is arranging the blocks in a modular, structured manner. Architecture is art, and the architect is the artist.

Everything requires architecture; it is not rocket science. Everyone is an architect in day-to-day life. In our houses we arrange our materials, books, and kitchen tools; the shopkeeper arranges the shop. Anything arranged in a modular and structured manner is architecture.

Architecture is an art, and in art we have patterns and styles. The styles and patterns change based on the context, domain, and problem.

Wikipedia defines software architecture as "the set of structures needed to reason about the system, which comprise software elements, relations among them, and properties of both."

A good software architecture describes the applied patterns and the layers or tiers used to define a clear separation of concerns for your business.

Why is it required?

Less Is More. The WhatsApp Architecture Facebook Bought For $19 Billion.

As stated by Microsoft, "The goal of architecture is to identify the requirements that affect the structure of the application. Good architecture reduces the business risks associated with building a technical solution."

Architecture must be like a switch: plug and play. Business is like ivy: it grows, and you just need to manage it. Good architecture is easy to understand and cheap to modify.

The success of the business depends on the architecture.

Benefits of architecture.

We are always curious about benefits; without benefits there is no business.

Below are the benefits you will get if you follow architecture styles and patterns.

  1. High productivity.
  2. Better maintainability.
  3. High adaptability.
  4. Makes it easier to reason about and manage change.
  5. Secure and scalable.
  6. Delivers higher quality at lower cost.

Architecture defines a set of rules and constraints that are specific to the system or project. Architecture enables the quality attributes of the system; we can say it defines the quality of every action.

How do we design a good architecture?

To become an expert we need to practice. Practice for perfection.

There are some important principles to consider while designing an architecture:

  1. Common sense (what is it?): a basic ability to perceive, understand, and judge things.
  2. The system should be built to change instead of built to last.
  3. Learn from your past experience and current technology trends.
  4. There's more than one way to do it (useful for finding the optimal solution).
  5. Understand the end user's context and business domain.
  6. Follow design patterns and styles.
  7. Follow coding best practices.
  8. Understand the business modules and sub-modules; consider components and layers (tiers) to abstract them, and identify the key interfaces.
  9. Use an iterative approach while designing the architecture.

Software architecture and software design are two different things; don't mix them. Software architecture is the skeleton, while software design is the meat.

Software architecture is about the higher level, while software design is about the components, classes, or modules.

Software architecture patterns are, for example, the MV* patterns; software design patterns are, for example, DAO and Factory.

Any software architecture has two key components:

  1. Architecture patterns:- These define the implementation strategies of the components.
  2. Architecture style:- This actually defines the components and connectors.

"Life is better when things are made for good."

Software architecture categories:

  1. Communication
  2. Deployment
  3. Domain
  4. Structure

There is a lot more to explore in software architecture. I would like to hear your suggestions and inputs on this post.

For more stories, let's connect on Stack Overflow, LinkedIn, Facebook & Twitter.