Why is it so popular, and when should you use it?
What is Docker?
Docker is a platform for building, running, and shipping applications in a consistent manner. So if you are a developer and your application works perfectly on your development machine, it can run and function the same way on other machines.
If you have been working as a software developer, you have probably faced situations where your application works on your development machine but doesn’t work somewhere else.
There can be several reasons for that:
- One or more files are missing from the deployment.
- A software version mismatch.
- Different configuration settings.
And this is where Docker comes to the rescue.
With Docker, we can easily package our application with everything it needs and run it anywhere on any machine with Docker.
The Docker client
- The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
The Docker daemon
- The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
Docker registries
- A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
- When you use the docker run command, the required images are pulled from your configured registry. When you use the docker push command, your image is pushed to your configured registry.
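As a quick sketch, pulling and pushing images looks like the following; the username and tag here are placeholders, not names from this article:

```shell
# Pull the official hello-world image from Docker Hub (the default registry)
docker pull hello-world

# Tag a local image under your own repository name (placeholder names)
docker tag hello-world myusername/hello-world:1.0

# Push the tagged image to your configured registry
docker push myusername/hello-world:1.0
```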
Who is Docker for?
Docker is a tool that is designed to benefit both developers and system administrators, making it a part of many DevOps (developers + operations) toolchains. For developers, it means that they can focus on writing code without worrying about the system that it will ultimately be running on. It also allows them to get a head start by using one of the thousands of programs already designed to run in a Docker container as a part of their application. For operations staff, Docker gives flexibility and potentially reduces the number of systems needed because of its small footprint and lower overhead.
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. This section is a brief overview of some of those objects.
As we discussed earlier, with Docker we can package our entire application with everything it needs, such as libraries, configuration files, and other files, and we can easily run that package in a different environment inside Docker.
This isolated environment for running an application is called a container. We can run multiple applications in isolation.
Containers are very lightweight because they don’t need a full operating system. In fact, all containers on a single machine share the OS of the host (Windows, Linux, etc.).
A Dockerfile is a plain text file that includes the instructions Docker uses to package an application into an image (more on that in a moment). Each Docker container starts with a Dockerfile. A Dockerfile specifies the operating system that will underlie the container, along with the languages, environment variables, file locations, network ports, and other components it needs — and, of course, what the container will actually be doing once we run it.
Once you have your Dockerfile written, you invoke the Docker build utility to create an image based on that Dockerfile. Whereas the Dockerfile is the set of instructions that tells build how to make the image, a Docker image is a portable file containing the specifications for which software components the container will run and how.
Because a Dockerfile will probably include instructions about grabbing some software packages from online repositories, you should take care to explicitly specify the proper versions, or else your Dockerfile might produce inconsistent images depending on when it’s invoked. But once an image is created, it’s static.
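To make the version-pinning point concrete, here is a minimal, hypothetical Dockerfile for a Node application; the base-image version, file names, and port are placeholder assumptions, not taken from this article:

```dockerfile
# Pin an explicit base-image version so builds stay reproducible
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the application files
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "index.js"]
```

Pinning node:18-alpine (rather than just node) is what keeps repeated builds from silently picking up a newer base image.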
A Docker image typically contains:
- A cut-down OS
- A runtime environment (e.g. Node)
- Application Files
- Third-Party Libraries
- Environment Variables and so on.
Containers vs Virtual Machines
One of the questions that often comes up is how containers differ from virtual machines. Let’s look at the differences.
Virtual Machine (VM)
As its name implies, a virtual machine is an abstraction of a machine (physical hardware). We can run several virtual machines on one physical machine using a tool called a hypervisor. A hypervisor is software used to create and manage virtual machines. Some examples of hypervisors are VirtualBox, VMware, and Hyper-V (Windows only). All the virtual machines run on the same physical machine, but in isolated environments.
Problems with Virtual Machine
- Each VM needs a full-blown Operating System.
- Slow to start, because the entire OS has to be loaded just like starting your computer.
- Resource intensive, because each VM takes a slice of the actual physical hardware resources, like CPU, memory, and disk space.
Containers give us the same kind of isolation, so we can run multiple applications in isolation.
But containers are lightweight since they don’t use a separate OS for each container like VMs do; instead, all containers on a single machine share the kernel of the host’s operating system. That means we only need to license, patch, and monitor a single OS.
Also, because the OS is already running on the host, a container can start up quickly, usually in under a second.
Containers don’t need a dedicated slice of the hardware resources on the host, so we don’t have to allocate a specific number of CPU cores or a fixed amount of memory or disk space to them.
So on a single host, we can run tens or even hundreds of containers side by side.
So these are the differences between containers and virtual machines.
Docker Installation on Windows
1. Go to the website https://docs.docker.com/docker-for-windows/install/ and download the Docker Desktop installer.
- Note: A 64-bit processor and 4 GB of system RAM are the hardware prerequisites required to successfully run Docker on Windows 10.
2. Then, double-click on the Docker Desktop Installer.exe to run the installer.
- Note: If the installer (Docker Desktop Installer.exe) hasn’t been downloaded yet, you can get it from Docker Hub and run it whenever required.
3. Once you start the installation process, make sure the Enable Hyper-V Windows Features option is selected on the Configuration page.
4. Then, follow the installation wizard, allow the installer when prompted, and wait until the process is done.
5. After completion of the installation process, click Close and restart.
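Once Docker Desktop is running again, you can verify the installation from a terminal (PowerShell or Command Prompt); hello-world is a tiny test image that Docker publishes for exactly this purpose:

```shell
# Print the installed Docker client version
docker --version

# Pull and run a minimal test container to confirm the daemon works
docker run hello-world
```

If the hello-world container prints its greeting message, both the client and the daemon are working.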
Now let’s talk about development workflow when using docker.
So to start off, let’s take an application (no matter what kind of application it is or how it’s built). We take that application and dockerize it, which means we make a small change so that it can be run by Docker. But how?
By simply adding a Dockerfile to the application. As mentioned earlier, a Dockerfile is a plain text file that includes the instructions Docker uses to package the application into an image. This image typically contains everything we need to run the application.
Base images are downloaded from a container registry, a repository for storing container images. The most common one is Docker Hub.
Once we have the image we tell docker to start the container using that image.
When you run docker build . in the same directory as the Dockerfile, the Docker daemon will build the image and package the application so you can use it. Then you can run docker run <image-name> to start a new container.
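In practice, it is common to name the image with the -t flag so you can refer to it later; my-app below is just a placeholder tag:

```shell
# Build an image from the Dockerfile in the current directory and tag it
docker build -t my-app .

# Start a new container from that image
docker run my-app

# List the currently running containers
docker ps
```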
So then our application gets loaded inside that container, and this is how we run our application locally on our development machine.
So that's it for now. I hope you all got an idea of what Docker is; let’s put this knowledge into practice in the upcoming lessons.
Thank you very much.