Cloud-native app development

Sanna Diana Tomren
12 min read · Nov 11, 2020

In today’s application development, it is common to encounter terms like microservices, containers, virtual machines (VMs), orchestration, and, last but not least, cloud computing. In collaboration with Giridhar Srinivasan, I will give an introduction to these topics.

Microservices

In the development of an app for your company, your product, or service, decisions need to be made on the application’s foundation — its architecture.

Microservices, also known as microservice architecture, is one of many application architectures. This architectural style breaks an application into a series of modules, each responsible for a unique function, yet gives the end-user a cohesive experience.

The microservice architecture enables the rapid, frequent, and reliable delivery of large, complex applications. It also gives the development team greater flexibility in choosing its technology stack. Microservices do involve many moving parts: a large number of components and APIs that require effort and careful planning. It is not uncommon to see microservice architecture combined with an API-first strategy. Because this approach combines different technologies, components, and “moving parts,” it is important to apply automation strategically so that communication, monitoring, security, testing, and deployment processes run smoothly. A common practice for breaking an application into independently deployable functions is to host the different modules in containers.
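As an illustration, here is a minimal sketch of one such module: a standalone “product catalog” service (the name, data, and port are hypothetical) that owns a single function and exposes it over a small JSON API, using only Python’s standard library. In a real microservice system, each such service would run in its own process or container.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical data owned exclusively by this one service.
CATALOG = {"1": {"name": "keyboard", "price": 49.0}}

class CatalogHandler(BaseHTTPRequestHandler):
    """Handles GET /products/<id> and returns the product as JSON."""

    def do_GET(self):
        product = CATALOG.get(self.path.rstrip("/").split("/")[-1])
        body = json.dumps(product if product else {"error": "not found"}).encode()
        self.send_response(200 if product else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def serve(port):
    """Start the service on a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), CatalogHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve(8081)
    with urllib.request.urlopen("http://127.0.0.1:8081/products/1") as resp:
        print(json.loads(resp.read()))  # {'name': 'keyboard', 'price': 49.0}
    server.shutdown()
```

The point is the decomposition, not the HTTP details: other modules (say, an ordering service) would talk to this one only through its API, never through its data.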

Computers to host applications

An application and its components need a computing environment to run in: hardware (the physical components that make up a computer) and an operating system (the software interface between users and the computer’s hardware). It is common to supply these resources, hardware (HW) and operating systems (OS), through virtual environments, i.e. virtualization, using so-called containers and/or virtual machines.

Illustration of hardware in combination with an OS and an application

Virtual machines (VMs) and Containers

VMs are an abstraction over the hardware (a virtual computing environment for the app to be hosted in), while containers are an abstraction over the operating system (OS). VMs give you a “share” of the underlying hardware itself, while containers give you a “share” of the underlying OS.

To illustrate this see the model below:

Abstraction and illustration of container and VM environment

Virtual machines (VMs)

Virtual machine definition: virtual machines are software computers that provide the same functionality as physical computers. Like physical computers, they run applications and an operating system. However, virtual machines are computer files that run on a physical computer and behave like a physical computer. In other words, a computer in a computer.

Why have a computer in a computer?

4 Reasons to use VMs

1. Improved isolation and security. Virtual machines offer strong isolation, which is key to avoiding cross-contamination between projects. Developing inside a VM allows a developer to keep the environment dedicated to just that project’s information. It can also prevent different installed versions of, for example, FPGA development software from conflicting with each other. By sandboxing the VM, any sensitive corporate or personal information residing on the host computer is kept secure.

2. Flexible configurations. Virtual machines lend themselves well to testing different configurations and setups. Developers can use VM snapshots to try various scenarios and then quickly and easily restore the environment. This allows developers and software testers to identify configuration problems before end users run into them.

3. Simulation options. Virtual machines make great simulators for what software engineering calls multitier architecture (often referred to as n-tier or multilayered architecture): a client-server architecture in which presentation, application processing, and data management functions are physically separated.

4. Ease of distribution. When the software or field-programmable gate array (FPGA) code is ready to deploy and distribute, VMs can be packaged up and delivered. A flash drive or solid-state drive can contain an entire development environment, and end users can simply download and run the VM. There are no complex installation instructions, no configuration headaches, no on-site engineers necessary, and all the settings, system variables, and environment variables are the same. The hardware abstraction also allows for simpler troubleshooting when it comes to support. An increasing number of server-based products, like TWiki, are being released as packaged VMs, which eliminates the need for compiling or OS configuration. Migration tools are available to convert between various VM formats, and tools also exist to deploy a virtual machine to a physical machine, albeit a non-trivial task.

This combination of factors makes virtualization an undeniable boon for developers and testers.

Illustrating the relationship

In the illustration below, the hardware sits at the bottom and is abstracted by the VMs with the help of virtualization software such as Hyper-V or VMware. This software “carves out” a share of the hardware according to the desired configuration, so that a virtual machine can be installed on top of that share. As the illustration indicates, it is quite possible to have Linux- and Windows-based virtual machines coexisting while “unaware” of each other’s existence.

A virtual machine contains a copy of the whole operating system; therefore, when you need a new VM of, say, Windows or Linux, the entire OS has to be copied and installed on the partitioned hardware.

A different virtualization concept is containers.

Containers

“A container is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another.”

Illustrating relationship

Containers are a higher-level abstraction (compared to VMs) over the underlying operating system. In other words, a container is a lightweight platform for the application, containing only the components and dependencies necessary for the application(s) to run. Container runtimes help “carve out” a slice of the underlying operating system and hardware as a standalone platform for your applications.

Containers are expected to be lightweight, portable, and often short-lived and ephemeral. This means that containers can run on any machine, be it a laptop, a server, or a VM, as long as that machine is equipped with a container runtime.

It is important to note that containers depend on the underlying OS; therefore, we cannot run Linux-based containers directly on Windows-based machines. VMs, on the other hand, rely on the underlying virtualization runtime and hardware, so a Windows-based VM can comfortably run alongside a Linux-based VM on the same hardware.

Why do we use containers in software development?

Containers solve a critical issue in application development. When developers write code, they work in their own local development environment. Problems arise when they are ready to move that code to production: code that worked perfectly on their machine doesn’t work in production. The reasons are varied: a different operating system, different dependencies, different libraries.

Containers solve this portability issue by separating code from the underlying infrastructure it runs on. Developers can package up their application, including all of the binaries and libraries it needs to run correctly, into a small container image. In production, that container can run on any computer that has a containerization platform.

Let us try to take a closer look at how containers look and work.

Lifecycle of containers

Let’s look at the illustration above. A container image is built by “layering”: an application executable, built and tested by the developer, sits on top of the required layers of OS binaries and libraries. The built container image is an independent, self-contained artifact that can now be deployed and executed in any environment that has a container runtime installed. The Open Container Initiative is a community effort to govern and standardize container runtimes and images, an important initiative for containers to truly “build once and run anywhere”.
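The layering described above can be sketched as a Dockerfile; each instruction adds one layer on top of a base OS layer. The base image, file names, and command here are illustrative assumptions, not a specific project’s setup:

```dockerfile
# Base layer: a minimal OS userland plus a language runtime.
FROM python:3.12-slim

# Dependency layer: libraries the application needs.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application layer: the developer's built and tested code.
COPY app.py .

# What the container runs when started.
CMD ["python", "app.py"]
```

Building and running it would look like `docker build -t catalog-service .` followed by `docker run -p 8081:8081 catalog-service` (the image name is hypothetical). Because each layer is cached, rebuilding after a code change only rebuilds the application layer, not the base or dependency layers.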

As applications grow to span multiple containers deployed across multiple servers, operating them becomes more complicated.

How do you coordinate and schedule the many containers used for the different parts of a microservice architecture? How do all the different containers in your app talk to each other and coordinate? How do you scale to many container instances when more computing power is needed? This is where Kubernetes can help.

Kubernetes (k8s), container orchestration

Kubernetes is open-source container orchestration software. It provides an API to control how and where containers run. It lets you run your Docker containers (Docker being one of many container platforms) and workloads, and helps you tackle some of the operating complexities of scaling multiple containers deployed across multiple servers.

Kubernetes lets you orchestrate a cluster of virtual machines and schedule containers to run on those virtual machines based on their available compute resources and the resource requirements of each container.

Containers are grouped into pods, the basic operational unit of Kubernetes. These containers and pods can be scaled to your desired state, and you can manage their lifecycle to keep your apps up and running. Containers should only be scheduled together in a single pod if they are tightly coupled and need to share resources such as disk.
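As a sketch of how this looks in practice, here is a minimal Kubernetes Deployment manifest that asks the cluster to keep three replicas of a container running; the names, image, port, and resource figures are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-service            # hypothetical service name
spec:
  replicas: 3                      # desired state: three identical pods
  selector:
    matchLabels:
      app: catalog
  template:                        # pod template: one container per pod here
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example.com/catalog-service:1.0   # hypothetical image
        ports:
        - containerPort: 8081
        resources:
          requests:                # used by the scheduler to place the pod
            cpu: "100m"
            memory: "128Mi"
```

Applying this with `kubectl apply -f deployment.yaml` tells Kubernetes to reconcile the cluster toward the declared state; scaling up later is then a one-liner such as `kubectl scale deployment catalog-service --replicas=5`.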

kubernetes.io

The Cloud journey

Why should we know all of this and how can it help you and your company's app development today?

Traditionally, app, software, and infrastructure development and hosting have been done on physical hardware: mainframes and servers in server rooms and local networks, stationed in or close to the office building. This approach requires a high initial cost and an IT department that knows how to operate, improve, and develop each component, not least specific, up-to-date competency in securing those resources against failure and against unintended or intended attacks. From an investment perspective, the expected return on investment (ROI) is further down the road, as illustrated below, due to all the equipment, training, resources, and competency needed to kick it off.

Nowadays it is more common to talk about Cloud Computing. Cloud computing is a term used to describe the use of hardware, software, and applications delivered via a network (usually through the Internet).

Why Cloud?

If we take a look at the Application, Software, and infrastructure cycle again, we see a significant change.

By using cloud infrastructure, you don’t have to spend huge amounts of money on purchasing and maintaining equipment, resources, and competency. This drastically reduces CapEx costs. You don’t have to invest in hardware, facilities, utilities, or building out one or many large data centers to grow your business nationally or internationally.

Other cloud benefits

Security. One of the major concerns of every business, regardless of size and industry, is the security of its data. Data breaches and other cybercrimes can devastate a company’s revenue, customer loyalty, and brand positioning. Cloud providers offer many advanced security features that help ensure data is securely stored and handled.

Scalability. Different companies have different needs: a large enterprise of 5,000+ employees won’t have the same IT requirements as a start-up, and a military will have different needs than a hairdresser. The cloud is a great solution here because it lets an enterprise quickly and efficiently scale its IT capacity up or down according to business demand. You pay only for the resources you actually use, without having to invest in physical infrastructure, operators, and developers first.

Mobility. Cloud computing allows mobile access to corporate data via smartphones and other devices, which is a great way to ensure that no one is ever left out of the loop. Staff with busy schedules, or who live a long way from the corporate office, can use this to stay instantly up to date with clients and coworkers. Resources in the cloud can be easily stored, retrieved, recovered, or processed with just a couple of clicks. Users can access their work on the go, 24/7, from any device of their choice, in any corner of the world, as long as they stay connected to the internet. On top of that, all upgrades and updates are done automatically, off-site, by the service providers. This saves time and team effort in maintaining the systems, greatly reducing the IT staff’s workload.

Disaster recovery. Data loss is a major concern for all organizations, along with data security. Storing your data in the cloud helps ensure that it is always available, even if your equipment, like laptops or PCs, is damaged. Cloud-based services provide quick data recovery for all kinds of emergency scenarios, from natural disasters to power outages. Cloud infrastructure can also help with loss prevention. If you rely on the traditional on-premises approach, all your data is stored locally on office computers. Despite your best efforts, computers can malfunction for various reasons, from malware and viruses to age-related hardware deterioration to simple user error. But if you upload your data to the cloud, it remains accessible from any computer with an internet connection, even if something happens to your work computer.

Control. Having control over sensitive data is vital to any company. You never know what can happen if a document gets into the wrong hands, even if those are just the hands of an untrained employee. The cloud enables full visibility into and control over your data: you can easily decide which users have what level of access to which data. This gives you control, and it also streamlines work, since staff easily know which documents are assigned to them. It also increases and eases collaboration, since different people can work on one version of a document and there is no need to have copies of the same document in circulation.

Cloud computing adoption is on the rise every year, and it doesn’t take long to see why. Enterprises recognize cloud computing benefits and see how they impact their production, collaboration, security, and revenue.

Thank you!

There are many roads to Rome, and the same goes for app development. Microservice architecture, combined with an API-first strategy, hosted in the cloud through the use of VMs and/or containers, and orchestrated with Kubernetes, is just one of many paths. We hope this article gives you a better understanding of microservices, containers, virtual machines (VMs), orchestration, and cloud computing, and that this sneak peek intrigues you to explore these topics further and see where you want to apply them in your own application development.
