Speed. Flexibility. Scalability. Maintainability. Resiliency. If these are some of the most desirable attributes in your organization's cloud environment, you should be using containers if you aren't already. Every cloud vendor already offers container capabilities; you just need to understand how to use them.
What is a Container?
A container is a standard unit of software that packages code and its dependencies so an application runs quickly and reliably across computing environments. Containers can be moved around easily, providing flexibility and mobility within the cloud, especially when they are combined with virtualized, scaled hardware. This allows you to spin up your container software in seconds instead of minutes. The container doesn't even need to know about the underlying operating system, just the container runtime, boosting scalability in and out of the cloud.
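As a rough illustration of how quickly a container can come up, here is a minimal sketch using the Docker SDK for Python. It assumes a local Docker daemon and the `docker` package are available; the image and command are just examples, not part of any particular product.

```python
# Minimal sketch: start a container programmatically and time it.
# Assumes Docker is installed and the Python "docker" package is available
# (pip install docker); the alpine image and echo command are illustrative.
import time
import docker

client = docker.from_env()          # connect to the local Docker daemon

start = time.time()
output = client.containers.run("alpine:latest", ["echo", "hello from a container"])
elapsed = time.time() - start

print(output.decode().strip())      # -> hello from a container
print(f"Container ran and exited in {elapsed:.2f} seconds")
```

The first run may take longer while the image is pulled; after that, start-up is a matter of seconds rather than the minutes a full virtual machine typically needs.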
Containers are designed to do one small, specific task. For instance, a large application might go to a database and retrieve information, gather input from a user, and run some analysis - steps that in a traditional application end up being sequential. In a container framework, because each piece is designed for a different job, the pieces are broken up to act independently, as the sketch below illustrates.
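The sketch below is purely illustrative (all names and data are hypothetical): the same three steps written as independent functions, each of which could be packaged and deployed as its own container instead of running in sequence inside one monolithic program.

```python
# Illustrative only: three steps that could each become a separate container.
# All names and data here are hypothetical stand-ins.

def fetch_records(query: str) -> list[dict]:
    """Database task: look up information (stubbed here)."""
    return [{"id": 1, "value": 42}]

def collect_input(prompt: str) -> str:
    """User-input task: gather input from a user (stubbed here)."""
    return "example query"

def analyze(records: list[dict]) -> float:
    """Analysis task: do some analysis over the records."""
    return sum(r["value"] for r in records) / len(records)

if __name__ == "__main__":
    # In a monolith these calls run in sequence in one process; in a
    # container framework each function would sit behind its own service
    # boundary and scale independently.
    print(analyze(fetch_records(collect_input("query?"))))
```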
What is the difference between stateless and stateful containers?
Originally, containers were designed to be stateless, meaning that when code is executed, it simply gives you the results. It does not save or store information about the transaction, but merely provides an output to your input. With stateful containers, you can store information about a service and what it did. Of course, this means containers must have external storage to hold this information, which is where Compass comes in. Compass backs up and stores metadata about a transaction. In a stateless transaction there may be very little to protect or back up, but most apps eventually become stateful in some form, and even containers at rest must be backed up, creating a need for persistent storage.
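Here is a small sketch of the distinction (names and paths are hypothetical, and it does not show Compass itself): a stateless function simply maps input to output, while a stateful one records each transaction on external, persistent storage because the container's own filesystem is ephemeral.

```python
# Sketch of stateless vs. stateful behavior; names and paths are hypothetical.
import json
import os

def convert_stateless(amount: float, rate: float) -> float:
    # Stateless: the output depends only on the input; nothing is remembered,
    # so there is little or nothing to protect or back up.
    return amount * rate

# A stateful container keeps transaction data, but that data must live on
# external, persistent storage (for example, a mounted volume) where it can
# also be backed up.
STATE_DIR = os.environ.get("STATE_DIR", "/data")   # typically a mounted volume

def convert_stateful(amount: float, rate: float) -> float:
    result = amount * rate
    with open(os.path.join(STATE_DIR, "transactions.log"), "a") as log:
        log.write(json.dumps({"amount": amount, "rate": rate, "result": result}) + "\n")
    return result
```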
How are containers used?
Container infrastructures can be as varied as the environments running them, but there are currently four main infrastructure models:
- Legacy infrastructure relies on a physical host, also known as a physical server, bare machine, or even bare metal, depending on the source. The infrastructure runs a host operating system, with the entire application running from the host. The app itself can be as simple as a database.
- By layering a hypervisor (such as VMware) on top of the core host operating system, the hardware can be abstracted from the guest operating systems. This model requires software such as vSphere to manage all the guest operating systems across multiple physical servers. The added layer of hardware virtualization - the guest operating system - makes the application think it is running on hardware at the same level as a host operating system on a physical server. By virtualizing the hardware, you can share a single server among many applications, and each application will think it is running on its own hardware. Cobalt Iron can run at the hypervisor level or at the guest operating system level for different solutions.
- A Container Service removes the need for a guest operating system. All of the containers, binaries, libraries, and apps run on a container runtime but share the host operating system. This makes a container much lighter than a virtual machine, and it takes a fraction of the time to boot up and build out when needed. A typical container runtime is Docker, and Kubernetes or Docker Swarm is commonly used to manage containers across multiple physical servers to spread workloads. Layered on top are the individual containers; at the app level, each container holds the app itself and its bins/libs.
- In the most typical but complex environment, a business will not consume a full physical host for a full container runtime, but will instead run VMware between the host operating system and the guest operating systems. This allows multiple guest hosts at the hypervisor level, with VMware abstracting the infrastructure. Docker runs on top of those guests to abstract the operating system from the application, and each container holds the apps and their bins/libs. Cloud services usually include load balancing and autoscaling features across multiple environments, and the lighter weight of the container enhances the speed of response to these features. You don't want to wait five or ten minutes for a virtual machine to boot up when you need a simple burst of processing. Typically, a load balancer spreads requests across containers, and/or response-time-driven scaling helps increase processing efficiency.
This model allows you to bring up another container service across existing, running infrastructure more quickly, and it allows you to move the development organization away from the operation of infrastructure and operating systems. Developers don't have to tie their work so closely to the guest/host operating system and can manage infrastructure through code more easily, as the sketch below illustrates.
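As one possible illustration of managing infrastructure through code, the sketch below scales a containerized service without touching the underlying guest or host operating systems. It assumes a reachable Kubernetes cluster, a local kubeconfig, and the official `kubernetes` Python client; the deployment name "web" and the namespace are hypothetical.

```python
# Sketch: scale a container service up through code rather than by
# provisioning new guest operating systems.
# Assumes a Kubernetes cluster, a kubeconfig, and the "kubernetes" Python
# client (pip install kubernetes). The deployment "web" is hypothetical.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Bring up more copies of an existing container service in seconds.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
print("Requested 5 replicas of the 'web' deployment")
```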
But are these models really that different?
In virtual real estate, it's all about location, location, location. While these models may seem similar at first glance, there are distinct differences in where virtualization takes place. In the model on the left (below), the physical hardware is virtualized at the hypervisor level for each guest operating system. The guest operating systems are considered "heavy," even though the two stacks look similar in weight.
In the model on the right, the host operating system itself is virtualized for each container. Where the virtualization takes place dramatically enhances speed across your environment, assisted at an even more granular level by microservices.
What is a Microservice?
A common misconception is that microservices are equivalent to containers, but they're not.
Let's look at containers running on Docker. Typically an app runs in a container, but with microservices, little pieces of an app can sit at different levels of infrastructure within your environment. Consider a database that provides a service to your organization; maybe it supports ten applications. You can build each piece as a service with a basic function small enough to be used by multiple applications. Pull these smaller functions together to create a new application, and that, in a nutshell, is microservices architecture - a development approach to applications.
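To make that concrete, here is a minimal, hypothetical microservice: one small function exposed over HTTP so that several applications can reuse it. It assumes Flask is installed; the route, port, and data are illustrative stand-ins for a real shared service.

```python
# A minimal, hypothetical microservice exposing one small function over HTTP.
# Assumes Flask is installed (pip install flask); route, port, and data are
# illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

CUSTOMERS = {"1001": {"name": "Acme Corp", "tier": "gold"}}   # stand-in for a real database

@app.route("/customers/<customer_id>")
def get_customer(customer_id: str):
    # One small, well-defined function; other applications compose it into
    # larger workflows instead of reimplementing the lookup themselves.
    record = CUSTOMERS.get(customer_id)
    return (jsonify(record), 200) if record else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    # A microservice like this typically runs in its own container and
    # scales independently of the applications that call it.
    app.run(host="0.0.0.0", port=8080)
```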
Containers combined with microservices provide technical flexibility, rapid deployment, resiliency, maintainability, and reusability across multiple environments. Leveraging these tools can make your data easier to manipulate and infinitely more manageable.