Docker is a container manager, which means it can create and execute containers that provide specific runtime environments for your software. In contrast with virtual machine managers such as VirtualBox, Docker uses resource isolation features of the Linux kernel to run independent "containers" within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
The following table gives a direct comparison between virtual machines and containers:
Virtual Machines (VMs) | Containers
---|---
Represents hardware-level virtualization | Represents operating system virtualization
Heavyweight | Lightweight
Slow provisioning | Real-time provisioning and scalability
Limited performance | Native performance
Fully isolated and hence more secure | Process-level isolation and hence less secure
A computer with Docker installed can run multiple containers at the same time. Docker containers can easily be shipped to a remote location and started there without setting up the entire application from scratch.
In other words, Docker is a tool that avoids the usual headaches of conflicts, dependencies, and inconsistent environments. This is an important problem for distributed applications, where several nodes must be installed or upgraded with the same configuration.
An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
A container is a runtime instance of an image—what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
# archlinux
pacman -S docker
sudo systemctl start docker.service
Docker has a public repository of runtime environments (i.e. Docker images) called Docker Hub. This repository allows Docker to download and start a specific runtime environment for a specific piece of software (e.g. MongoDB or MySQL) without any installation procedure.
If you're running Docker on Linux, you need to run all the following commands as root, or you can add your user to the docker group and re-login:
sudo usermod -aG docker $(whoami)
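After re-logging in, you can check that the change took effect; docker should appear among your groups (the exact list depends on your system):

```shell
# List all groups the current user belongs to;
# "docker" should be among them after re-login.
id -nG "$(whoami)"
```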
You can learn about the Docker environment using
docker info
First of all, search for Docker container images on Docker Hub. For example, the command below searches for all images matching "debian" and lists them:
docker search debian
Now download the Debian image to your local system using the following command:
docker pull debian
The following docker run command creates a new container from the base debian image. -t gives us a terminal, and -i allows us to interact with it. We'll rely on the default command in the Debian base image's Dockerfile, bash, to drop us into a shell.
docker run -ti debian
Install nginx in the debian container:

apt-get update
apt-get install nginx
Let’s see what containers we have at the moment:
docker ps -a
Save the changes to an image named nginx-template:

# get CONTAINER ID
docker ps
# save image
docker commit 10388fa5cf2b nginx-template
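If you prefer not to copy the container ID by hand, the commit step can be scripted. This is a sketch, not part of the walkthrough above: it assumes the container you just left is the most recently created one (docker ps -lq prints its ID) and skips when no Docker daemon is reachable.

```shell
# Commit the most recently created container as nginx-template.
if docker info >/dev/null 2>&1; then
  CID=$(docker ps -lq)   # -l: latest created container, -q: print only its ID
  if [ -n "$CID" ]; then
    docker commit "$CID" nginx-template
  else
    echo "no containers found; skipping"
  fi
else
  echo "no docker daemon; skipping"
fi
```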
List images
docker images
Run image
docker run -ti nginx-template nginx -v
Docker 1.13 introduced a few new commands that help visualise how much disk space the Docker daemon's data is taking and allow for easily cleaning up "unneeded" excess:
docker system df
presents a summary of the space currently used by different docker objects
docker system df -v
expands on the previous with more details
docker system prune
will delete ALL unused data (i.e., in order: stopped containers, volumes without containers, and images with no containers)
docker container prune
will delete all stopped containers
docker image prune
will delete all images without associated containers

Shared folder
If you close the running container, all the data related to that container will be lost. This is normal in Docker world.
For example, in our container, nginx writes all data to a path specified in /etc/nginx/nginx.conf.
To persist the data in a folder shared with the host machine that runs Docker, you can use the -v option of the run command. For example:
docker run -d -v /host/path/dir:/tmp nginx-template
This maps /host/path/dir on your host machine to /tmp in the container. Whenever data is written to that path inside the container, it automatically becomes accessible on the host machine. Even if you close the container, the data stays in the path on the host machine.
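To see the mapping in action, here is a small sketch; the temporary host directory and the echoed file are illustrative, and the block skips when no Docker daemon is reachable:

```shell
if docker info >/dev/null 2>&1; then
  HOSTDIR=$(mktemp -d)   # stand-in for /host/path/dir
  # Write a file to /tmp inside the container...
  docker run --rm -v "$HOSTDIR":/tmp debian sh -c 'echo hello > /tmp/greeting'
  # ...and read it back from the host side.
  cat "$HOSTDIR/greeting"
else
  echo "no docker daemon; skipping"
fi
```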
Port exposing
Let's say that nginx is running inside the container:
service nginx start
You can expose the nginx port (80) outside the container; with the following command it becomes reachable from outside the container on port 8000.
docker run -ti -p 8000:80 nginx-template
-p specifies the port we are exposing, in the format -p local-machine-port:internal-container-port. In this case we are mapping port 80 in the container to port 8000 on the server.
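You can check the mapping from the host with curl. This sketch assumes the nginx-template image from the earlier steps exists; the container name nginx-test is my own choice, and the block skips when no Docker daemon is reachable:

```shell
if docker info >/dev/null 2>&1; then
  # Run nginx in the foreground inside a background container,
  # mapping host port 8000 to container port 80.
  docker run -d --name nginx-test -p 8000:80 nginx-template nginx -g 'daemon off;'
  sleep 2
  # Request the default page through the mapped port.
  curl -sI http://localhost:8000/ | head -n 1
  docker rm -f nginx-test >/dev/null
else
  echo "no docker daemon; skipping"
fi
```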
Changing Docker's default storage disk
The following steps guide you through changing Docker's default /var/lib/docker storage location to another directory. There are various reasons why you may want to change Docker's default directory, the most obvious being that you ran out of disk space. The following guide should work for both Ubuntu and Debian Linux, or any other systemd-based system. Make sure to follow this guide in the exact order of execution.
sudo systemctl stop docker.service
Let's get started by modifying systemd's Docker startup script. Open the file /lib/systemd/system/docker.service with your favorite text editor and replace the following line, where /new/path/docker is the location of your new chosen Docker directory:
# FROM:
ExecStart=/usr/bin/docker daemon -H fd://
# TO:
ExecStart=/usr/bin/docker daemon -g /new/path/docker -H fd://
Reload the systemd daemon and start Docker again:

sudo systemctl daemon-reload
sudo systemctl start docker.service
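Once Docker is back up, you can confirm the daemon actually uses the new location; docker info reports it in the Docker Root Dir field:

```shell
# Print the storage location the running daemon uses
# (skips when no daemon is reachable).
if docker info >/dev/null 2>&1; then
  docker info 2>/dev/null | grep 'Docker Root Dir'
else
  echo "no docker daemon; skipping"
fi
```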
Writing dockerfile
To build a Docker image you need to create a Dockerfile: a plain text file with instructions and arguments. Here is a description of the instructions we're going to use:
FROM - set base image
RUN - execute command in container
ENV - set environment variable
WORKDIR - set working directory
VOLUME - create mount-point for a volume
CMD - set executable for container

Let's create an image that fetches the contents of a website with curl and stores it in a text file. We need to pass the website URL via the environment variable SITE_URL. The resulting file will be placed in a directory mounted as a volume.
FROM debian:latest
RUN apt-get update
RUN apt-get install --no-install-recommends --no-install-suggests -y curl
ENV SITE_URL https://google.com/
WORKDIR /data
VOLUME /data
CMD sh -c "curl -L $SITE_URL > /data/results"
The Dockerfile is ready; it's time to build the actual image.
Execute the following command to build an image:
docker build . -t test-curl
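With the image built, it can be run with /data mounted to a host directory, and SITE_URL can be overridden per run with -e. A sketch (the temporary output directory and the example.com URL are illustrative; the block skips when no Docker daemon is reachable):

```shell
if docker info >/dev/null 2>&1; then
  OUT=$(mktemp -d)
  # Mount the /data volume to $OUT and override SITE_URL for this run.
  docker run --rm -v "$OUT":/data -e SITE_URL=https://example.com/ test-curl
  ls "$OUT"   # the fetched page ends up in $OUT/results
else
  echo "no docker daemon; skipping"
fi
```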