Containers

[Image: a ship loaded with shipping containers]

Over the last few weeks we’ve looked at cloud computing and some of the technologies that support it, such as virtual machines and hypervisors. This week we’ll have a quick look at containers.

What is a Container?

A container is an executable software unit containing an application that has been packaged together with the libraries and other dependencies it needs to run. Because the packaging follows a standard format, the container can be run anywhere, whether on a mainframe, a desktop computer, or in the cloud.

Containers are small, fast and portable because, unlike virtual machines, they do not include a guest OS in every container instance. Instead, containers share the host OS kernel and use the resources allocated to them by the host OS. Applications running inside a container can only see the container’s contents and the devices that the host OS has assigned to the container.

The host operating system uses a form of OS virtualization that isolates the running containers from each other. This OS virtualization also controls the resources that these processes can access, including memory, disk space and a share of the CPU execution time. All the containers share the same running instance of the host OS.

Many containers can run on a single physical computer, with each container allocated only a subset of that computer’s resources. A container may hold a number of applications, which can run separately or concurrently, and can even interact with one another.

Containers have been around for many years. The first conceptual container was chroot, developed between 1979 and 1982 on the UNIX operating system. FreeBSD expanded chroot for use in virtualization and introduced the jail command in 2000. IBM released Workload Partitions on its AIX operating system in 2007. The start of the modern container era can be traced to the release of Docker in 2013.

There are many container implementations and products, including:

  • Containers (LXC, Docker, Podman).
  • Zones (Solaris containers).
  • Virtual private servers (OpenVZ).
  • Partitions (AIX Workload Partitions).
  • Virtual environments (VEs).
  • Virtual kernels (DragonFly BSD).
  • Jails (FreeBSD jail or chroot jail).

Virtual Machines vs Containers

Traditional virtual machines use a hypervisor to virtualize physical hardware. Each VM contains a virtual copy of the hardware, a copy of the guest OS, and the application with its associated libraries and dependencies.

With OS-level virtualization a physical computer is virtualized at the operating system level. This allows a single physical computer to run multiple isolated virtualized servers.

These virtual servers look like physical computers from the point of view of programs running in them. An application running on a normal operating system can see all the resources (CPU, memory, disks, files, directories, connected devices, network, etc.) of that computer. However, applications running inside a container can only see the contents of the container and the devices assigned to that container.

Linux containers are implemented using standard Linux kernel features that provide virtualization, isolation and resource management. These include implementations of the standard chroot mechanism, cgroups and Linux namespaces. cgroups (“control groups”) is a Linux kernel feature that limits and isolates the resource usage of a collection of processes. Namespaces partition the kernel resources so that different sets of processes see different sets of resources.
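
To make the namespace idea concrete, here is a minimal sketch in Go (the language Docker itself is written in). It is not how a real container runtime is built, just an illustration under some assumptions: it needs a Linux machine and root privileges, and it simply starts a shell in its own UTS, PID and mount namespaces using clone flags exposed through Go’s syscall package.

    // namespaces.go: a minimal sketch of namespace isolation, not a real runtime.
    // Assumes Linux and root privileges; the CLONE_* constants are real clone(2)
    // flags exposed through Go's syscall package.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Start a shell in new UTS, PID and mount namespaces.
        // Inside it, `hostname demo` will not affect the host, and the shell
        // gets a fresh PID space (visible in ps once /proc is remounted).
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | // own hostname
                syscall.CLONE_NEWPID | // own process ID space
                syscall.CLONE_NEWNS, // own mount table
        }
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "error:", err)
            os.Exit(1)
        }
    }

A real container runtime would combine this with cgroups (for example, on cgroup v2, by creating a directory under /sys/fs/cgroup, writing a limit such as memory.max and adding the process to it) and with a chroot-style switch to the container’s own root filesystem.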

Docker and Kubernetes

Docker and Kubernetes are the two technologies most often mentioned when talking about containers. So what are they?

Docker and Kubernetes are complementary technologies: Docker builds and runs individual containers, while Kubernetes manages (orchestrates) many containers across many machines.

Docker is an open-source containerization platform. Docker provides the tools that allow developers to quickly and easily package applications into small, isolated containers. One of the reasons behind Docker’s success is its portability: Docker containers can run on any computer, whether it’s a desktop, a mainframe or somewhere in the cloud. An application typically consists of many processes, and the usual approach is to run a single process (or service) per container. The application can then keep running across many containers while the part of it in a separate container is being updated or fixed.
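
As a rough sketch of that workflow, the Go program below drives the Docker command line: it writes out a tiny Dockerfile, builds an image from it, and runs the resulting container. It assumes the docker CLI and daemon are installed; the image name container-demo and the alpine base image are only examples.

    // dockerdemo.go: build and run a throwaway container via the docker CLI.
    // Assumes docker is installed and the daemon is running; names are examples.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
    )

    // A minimal Dockerfile: the "package" is a base image plus one command.
    const dockerfile = "FROM alpine\nCMD [\"echo\", \"Hello from inside a container\"]\n"

    func run(name string, args ...string) {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("%s %v failed: %v", name, args, err)
        }
    }

    func main() {
        // Build context: a temporary directory holding only the Dockerfile.
        dir, err := os.MkdirTemp("", "container-demo")
        if err != nil {
            log.Fatal(err)
        }
        defer os.RemoveAll(dir)

        if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
            log.Fatal(err)
        }

        run("docker", "build", "-t", "container-demo", dir) // package the app into an image
        run("docker", "run", "--rm", "container-demo")      // run it as an isolated container
    }

The same image could then be pushed to a registry and run unchanged on any other machine that has Docker, which is exactly the portability described above.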

Kubernetes is an open-source container orchestration platform. Like a conductor leading an orchestra and keeping the musicians together, Kubernetes schedules and automates the deployment and management of containerized applications. The containers run together on a cluster of machines. A Kubernetes cluster includes a master node (the control plane) that schedules work onto the other machines in the cluster, the worker nodes. The master node determines where to host containers and how to group them together. Kubernetes also provides load balancing, self-healing and automated rollouts and rollbacks.
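
As a small sketch of what that looks like in practice, the Go program below shells out to kubectl, assuming kubectl is installed and configured to talk to a working cluster; the deployment name hello and the nginx image are only examples. It declares a deployment, asks for three replicas, and then lists the pods so you can see which worker nodes the control plane scheduled them onto.

    // k8sdemo.go: ask a Kubernetes cluster to run and scale some containers.
    // Assumes kubectl is installed and pointing at a working cluster.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func kubectl(args ...string) {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("kubectl %v failed: %v", args, err)
        }
    }

    func main() {
        // Declare the desired state: a deployment running the nginx image.
        kubectl("create", "deployment", "hello", "--image=nginx")

        // Ask for three replicas; the control plane decides which worker
        // nodes the pods (and their containers) are scheduled onto.
        kubectl("scale", "deployment", "hello", "--replicas=3")

        // Show each pod and the node it landed on.
        kubectl("get", "pods", "-o", "wide")
    }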

Google released Kubernetes as an open source project in 2014. It is now managed by an open source software foundation called the Cloud Native Computing Foundation (CNCF).

What’s Next?

In the next few posts we’ll have a closer look at Docker and Kubernetes, and then move on to microservices.

Don’t forget to share your comments and experiences.

Stay safe, and I’ll see you next week!
