What are containers?

A container is an executable package of software that includes code and all its dependencies—runtime, system tools, system libraries, configuration files, settings, etc.—so the application runs quickly and reliably from one computing environment to another. Container images become containers at runtime. Containerizing an application and its dependencies abstracts away differences in OS distributions and underlying infrastructure. This solves the problem of how to get software to run reliably when moved from one computing environment to another, such as from a developer’s laptop to a test environment, from a staging environment into production, or from a physical machine in a data center to a virtual machine in a private or public cloud.
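To make that portability concrete, here is a minimal Go sketch (illustrative only, not tied to any particular image or product) that reports which operating environment the running process sees. Launched directly on a host, it prints the host’s Linux distribution; launched inside a container built from a different base image, it prints the image’s distribution instead, because the filesystem, libraries, and settings travel with the image rather than coming from the host.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// Print where this process thinks it is running. Inside a container the
// answers come from the image (and the container's own hostname), not from
// the machine the container happens to be scheduled on.
func main() {
	hostname, _ := os.Hostname()
	fmt.Println("hostname:", hostname)
	fmt.Println("os/arch: ", runtime.GOOS+"/"+runtime.GOARCH)

	// /etc/os-release identifies the Linux distribution this process sees;
	// in a container it is the file shipped in the image, not the host's.
	if data, err := os.ReadFile("/etc/os-release"); err == nil {
		fmt.Print(string(data))
	} else {
		fmt.Println("could not read /etc/os-release:", err)
	}
}
```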


Containers vs. virtualization

Containers differ from virtualization technology, where the package that gets passed around is a virtual machine, which includes an entire operating system as well as the application. A physical server running three virtual machines would have a hypervisor and three separate operating systems running on top of it. By contrast, a server running three containerized applications with Docker runs a single operating system, and each container shares the operating system kernel with the other containers. Shared parts of the operating system are read-only, while each container has its own mount (i.e., a way to access the container) for writing. So, containers are more lightweight and use fewer resources than virtual machines.
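The shared kernel is easiest to see at the level of the Linux primitives containers are built on. The following Go sketch is Linux-only, needs root, and loosely follows the familiar “container from scratch” exercise rather than showing how Docker itself is implemented: it launches an ordinary process inside new UTS (hostname) and PID namespaces. Inside, the process can take its own hostname and gets a fresh PID numbering, yet running `uname -r` there still reports the host kernel, because no second operating system has been booted.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

// Usage (Linux, as root):  go run . run /bin/sh
// The shell starts in its own UTS and PID namespaces but on the host kernel.
func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: run <command> [args...]")
		return
	}
	switch os.Args[1] {
	case "run": // invoked by the user
		parent()
	case "child": // re-exec'd by parent() inside the new namespaces
		child()
	}
}

func parent() {
	// Re-exec this same binary as "child" with new UTS and PID namespaces.
	cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func child() {
	// A new hostname inside the namespace; `uname -r` here still shows the
	// host's kernel version, since the kernel is shared, not virtualized.
	must(syscall.Sethostname([]byte("sketch-container")))
	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func must(err error) {
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Real container runtimes layer control groups, network namespaces, and the image’s read-only filesystem with a per-container writable layer on top of these same primitives, but each container remains a process on the shared host kernel rather than a guest operating system.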


What other benefits do containers offer?

  • Size: A container may be only tens of megabytes in size, whereas a virtual machine with its own entire operating system may be several gigabytes in size. Because of this, a single server can host a far greater number of containers than virtual machines.
  • Resource efficiency: Virtual machines may take several minutes to boot up their operating systems and begin running the applications they host, while containerized applications can be started almost instantly. That means containers can be instantiated in a “just in time” fashion when they are needed and can disappear when they are no longer required, freeing up resources on their hosts.
  • Modularity: Rather than run an entire complex application inside a single container, the application can be split into modules (such as the database, the application front end, and so on). This is the so-called microservices approach. Applications built in this way are easier to manage because each module is relatively simple, and changes can be made to modules without having to rebuild the entire application. Because containers are so lightweight, individual modules (or microservices) can be instantiated only when they are needed and are available almost immediately.
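As a minimal sketch of that split (the service roles, ports, and the BACKEND_URL variable are assumptions made for illustration, not part of any specific product), the Go program below runs as either a “backend” or a “frontend” module depending on its first argument. Each role would normally be built into its own image and run in its own container; the frontend locates the backend through configuration, so either module can be updated, restarted, or scaled without rebuilding the other.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	role := "frontend"
	if len(os.Args) > 1 {
		role = os.Args[1]
	}
	switch role {
	case "backend":
		// Backend module: answers one request and nothing else.
		http.HandleFunc("/greeting", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello from the backend module")
		})
		log.Fatal(http.ListenAndServe(":8081", nil))
	default:
		// Frontend module: finds the backend via configuration (in a
		// containerized deployment BACKEND_URL would name the backend
		// container) and relays its answer.
		backend := os.Getenv("BACKEND_URL")
		if backend == "" {
			backend = "http://localhost:8081"
		}
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			resp, err := http.Get(backend + "/greeting")
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadGateway)
				return
			}
			defer resp.Body.Close()
			io.Copy(w, resp.Body)
		})
		log.Fatal(http.ListenAndServe(":8080", nil))
	}
}
```

Running `go run . backend` in one terminal and `go run . frontend` in another, then requesting http://localhost:8080, exercises the two modules independently, which is the property that makes per-module containers easy to manage and scale.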


Suggested Reading and Related Topics

  • Microservices: Learn about microservices architectures and their benefits.
  • Serverless computing: Understand this cloud computing architecture in which the cloud provider runs the servers and dynamically manages the allocation of machine resources.
  • Horizontal Scaling and Vertical Scaling: Learn about the different scaling strategies for improving application performance and meeting growing demands.