Containerisation means that you can deploy only those services that you need, and only when you need them. Masstech CTO James Whitebread explains how this results in substantial operational savings.
Today all companies are looking to increase the efficiency of their technology stack as they seek to drive down costs in the face of difficult economic conditions. Businesses implementing cloud technologies are aware of the challenge of ensuring that server instances are only switched on and scaled up when in use. The operational ideal is to avoid 24/7 platform operation, using server instances that can be deactivated outside business hours or when they aren't required.
This is where containerisation and orchestration through services such as Kubernetes come in.
What is a container?
A container is rather like a virtual machine, something that many of us have become familiar with over the past decade or so. In fact, containers have been around for a similar amount of time, but they only started to grow in popularity after the introduction of Docker at PyCon in 2013.
A container is essentially a cut-down method of virtualisation. Rather than running a full operating system and sitting on top of a full hypervisor that virtualises the underlying hardware, a container virtualises the operating system, so that it contains just the application and any dependencies it may have. This makes a container lean, highly portable and simple to deploy. A container leverages the resources and capabilities of the host operating system (these days the most pervasive is Linux, though Windows is also supported).
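As a sketch of what "just the application and its dependencies" looks like in practice, here is a hypothetical Dockerfile for a small Python service (the base image, file names and entry point are illustrative assumptions, not a Masstech example):

```dockerfile
# Start from a minimal base image rather than a full OS install
FROM python:3.11-slim

WORKDIR /app

# Install only the application's declared dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself
COPY app.py .

# The container runs a single process: the application
CMD ["python", "app.py"]
```

Everything else, from the kernel to device drivers, comes from the host operating system, which is why the resulting image stays small and portable.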
How does a containerised environment work?
A container runs on top of an underlying host operating system which, as we move further into microservices-based architectures, is usually running a significant number of similar containers. By design these are small, isolated pieces of code delivering discrete features. The host operating system provides access to the underlying hardware: the processing, disk devices and memory. This will feel familiar to anyone who has worked with hypervisor-based virtual server environments. In this case, however, we layer on the container engine, with each container carrying its own cut-down operating system layer suited to the chosen containerisation technology. Finally, layered on the hardware, host operating system and container engine, is the application itself, consisting of a variety of binaries and code libraries. In this way, each application has its own isolated environment, or container.
What are the advantages of containerisation?
Here are a few:
More agile deployment and management
A container is a self-contained application, ready to scale and be deployed as needed
Lower overheads / greater efficiency
Containers consume fewer resources, given the cut-down OS and lean interaction with the host OS
Multi-cloud ready
Containers provide a self-contained environment, needing only an orchestration engine to operate. They are essentially cloud- and data centre-neutral, and can be deployed anywhere.
Resilience and availability
The underlying orchestration engine (Kubernetes, for example) can manage the resilience and availability of services by stopping, restarting and launching new containers if others fail
– Orchestrated workloads are scalable, deployed as and when they need to run
– Containers are isolated from each other, and even from the host system, by design
– Orchestration services (e.g. Kubernetes) deploy containers as required, making it easier to scale on demand
– Containers can run on bare metal instances as well as on the virtualised infrastructure provided by the leading cloud vendors
– Simple install and upgrade processes through the orchestration service
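Several of these points can be seen in a minimal Kubernetes Deployment manifest. The sketch below is a hypothetical example (the names and image are placeholders, not a Masstech service): the orchestrator keeps the declared number of replicas running, restarting containers if they fail, and the replica count can be changed on demand.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service          # placeholder name
spec:
  replicas: 2                    # Kubernetes keeps this many copies running
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: example/service:1.0   # placeholder image
        resources:
          requests:
            memory: "64Mi"       # lean per-container footprint
            cpu: "100m"
```

Scaling then becomes a single declarative change (for example, `kubectl scale deployment example-service --replicas=5`), and an upgrade can be rolled out simply by updating the image tag in the manifest.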
How might containerisation impact your deployment model for the better?
We’ve covered the benefits of the technology. But what does this mean from a technology and commercial perspective?
– There will likely be a move from virtual server and instance-based infrastructure to container-based infrastructure, to provide the benefits outlined above.
– Many more services (as a result of remote working requirements) will continue to move into the cloud, with those containerised services being operated in the cloud.
– As services such as Kumulate become containerised, this opens the door to a more on-demand service. Companies will run minimally scaled platforms, scaling them up as demand rises, and only for those periods.
– This pay-for-consumption model will allow businesses to realise the additional benefits they have been working towards by moving to the cloud, but which are more difficult to achieve with traditional instance-based approaches.
When will Masstech support containerised deployment?
Masstech is working hard to support a containerised approach. We recognise the benefits for those of our customers who want to utilise a hybrid-cloud or multi-cloud architecture: one that allows on-demand scalability with a minimal, always-on, cost-effective footprint. In 2021 we're accelerating our containerised approach.
If you would like to know more, or even to be an early adoption partner, please let us know.