Your software development process simplified.

Illustration from Iron

One of the downsides of ever-evolving technology is that it's hard to keep up with. Especially when we're in the middle of a project and not all of the team members use the same operating system. Not to mention the implications when that software is meant to be widely shared. That makes environment and compatibility issues almost inevitable. Thankfully, there's a thing called Docker.


Docker is a service platform that lets us build and deploy software efficiently inside virtualized containers. You're probably wondering what containers have to do with all of this. It's actually pretty straightforward.

Containers, in the physical sense, let you store and move things from one place to another. Docker containers, on the other hand, let you package software with all the parts it needs (libraries and other dependencies) so that it can be safely shipped and run in different environments.

Because of this, developers can rest easy knowing that the software will run on systems other than the one they used to write and test the code. That's a recipe for simpler workflows and easier maintenance. Sounds cool, right?

How Docker Works

These Docker containers run like miniature computers inside your system, where each one virtualizes the operating system of a server. A container typically has a single responsibility, and containers can be scaled and networked together to build a working application.

Docker Images

A container needs an image to exist. What is an image? A Docker image is like a snapshot: a read-only template with the instructions for creating a container. It contains the application code, libraries, tools, and other dependencies needed for an application to run.

Docker Example

A Dockerfile is a text file containing all the commands a user could run on the command line to assemble an image. So the first thing we'll do is create one.
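The original snippet did not survive extraction, so below is a minimal Dockerfile sketch for a Python application that matches the instructions explained next. The working directory, install command, user, and start command are assumptions for illustration, not the post's actual file.

```dockerfile
# Create a layer from the official python:3 image
FROM python:3

# Send Python output straight to the terminal without buffering
ENV PYTHONUNBUFFERED 1

# Directory for all following instructions (name is illustrative)
WORKDIR /app

# Copy the project files into the container's filesystem
COPY . /app

# Install dependencies in a new layer and commit the result
# (assumes a requirements.txt at the project root)
RUN pip install -r requirements.txt

# Run the image as a non-root user (user name is illustrative)
USER nobody

# Default command for the executing container (assumed entry point)
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```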

Each instruction has a different function. In the snippet above we see:

  • FROM: Creates a layer from the python:3 Docker image
  • ENV: Sets the environment variable PYTHONUNBUFFERED to the value 1
  • WORKDIR: Sets the working directory for the instructions that follow
  • COPY: Copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>
  • RUN: Executes commands in a new layer on top of the base image and commits the results
  • USER: Sets the user name to use when running the image
  • CMD: Provides defaults for an executing container

After creating the Dockerfile, the next step is to build an image from it. You can specify a repository and tag at which to save the new image if the build succeeds.

docker build -t <repository> .

Let's say I want to build the image as justika-umkm-legal with the default latest tag.

docker build -t justika-umkm-legal .

After successfully building the image, we can view the image we’ve created by typing:

docker images
Output for ‘docker images’

You can see image info such as the repository name, tag, image ID, creation time, and size. There it is! We've built a Docker image of our own application, ready to be executed by a container.
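To actually start a container from that image, a plausible next step (not shown in the original post) is docker run. The port mapping below assumes the application listens on port 8000 inside the container:

```
docker run -p 8000:8000 justika-umkm-legal
```

The -p flag publishes the container's port to the host, so the app becomes reachable at localhost:8000.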

Docker Container vs Virtual Machine

Docker uses virtualization to deliver containers, but unlike virtual machines, containers use fewer resources. This is because they share the host operating system directly instead of each running a full guest OS. As a result, you can run many containers on a machine that might only support a few virtual machines.

Docker Orchestration

Container orchestration is the automation of managing, scaling, and maintaining containerized software. As an example, here is a case from my ongoing software development project, as shown in the diagram below:

My team project’s tech stack

Suppose each element of my project (frontend, backend, database) has been containerized and is later scaled up. To keep track of everything and keep the software well maintained, we can't rely on manual labor alone. This is where Docker orchestration comes in: an automation layer that helps you sustain your scaled application in case of container failures and accidents.

The tools responsible for orchestration are called orchestrators. Two well-known orchestrators are Kubernetes and Docker Swarm.
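As a rough sketch of how a multi-container stack like the one above could be declared, here is a hypothetical docker-compose.yml for a frontend/backend/database setup. All service names, images, ports, and credentials are assumptions for illustration; a file in this format can also be deployed to a Docker Swarm cluster with docker stack deploy.

```
version: "3.8"

services:
  frontend:
    image: my-frontend          # hypothetical frontend image
    ports:
      - "80:80"
    depends_on:
      - backend

  backend:
    image: justika-umkm-legal   # the image we built earlier
    ports:
      - "8000:8000"
    depends_on:
      - db

  db:
    image: postgres:13          # assumed database engine
    environment:
      POSTGRES_PASSWORD: example  # placeholder credential

  # Under Swarm, each service can declare a deploy section
  # (e.g. replicas) so the orchestrator keeps that many
  # containers running and replaces any that fail.
```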

An aspiring UI/UX Designer, also a Junior @ CSUI