## Installing Docker on Linux

Update the system:

```bash
sudo apt update
```

Install the packages that allow apt to work over HTTPS:

```bash
sudo apt install apt-transport-https ca-certificates curl software-properties-common
```

Add the official Docker repository GPG key to the system:

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
```

Add the Docker repository to the apt sources. The example below is for Ubuntu:

```bash
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu [ubuntu version name] stable"
```

For Ubuntu 20.04 the version name is __focal__, for Ubuntu 18.04 it is __bionic__.

Check which repository Docker will be installed from:

```bash
apt-cache policy docker-ce
```

Upgrade the system:

```bash
sudo apt upgrade
```

Install Docker:

```bash
sudo apt install docker-ce
```

Check the status of the Docker service:

```bash
sudo systemctl status docker
```

Docker creates its own group in the system, so by default every Docker command has to be run with sudo. To avoid this, the user can be added to the docker group.

To check the user's current groups:

```bash
id -nG
```

To add the user to the docker group:

```bash
sudo usermod -aG docker ${USER}
```

## Installing Docker on Windows

The Docker for Windows installer can be obtained from Docker Hub. Depending on what is used to run Linux containers, the user may need to enable some system components such as the `Hyper-V` and `Containers` features. Docker for Windows can also run Windows containers.

## Using Docker

To run an application in Docker, you first need to download its image or build the image from a Dockerfile.

To download an image from Docker Hub:

```bash
docker pull [image-name]
```

If the user has not logged in before, they will be asked to do so. This can also be done manually:

```bash
docker login
```

After the image has been downloaded, all images on the system can be listed:

```bash
docker images
```

Or:

```bash
docker image ls
```

To build a Docker image from a Dockerfile (if the image is built successfully, it will show up in the `docker images` output):

```bash
docker build --tag [image-name:[image-tag]] [path-to-docker-file]
```

Now, to run the application, we need to create a container from the image and start it. This can be done with one command:

```bash
docker run -p [port-on-host-machine]:[application-port-inside-container] [image-name or image-ID]
docker run -p 8000:80 wordpress
```

To detach the app from the terminal, use the __-d__ parameter:

```bash
docker run -d -p 8000:80 wordpress
```

The __docker run__ command is an equivalent of __docker container create__ followed by __docker container start__, so the same can be done in two steps. Create a container from an image:

```bash
docker container create -p [port-on-host]:[app-port-inside-container] [image-name]
```

It will be in the stopped state. Then start it. It runs in detached mode by default, so there is no need to specify the -d key as with the docker run command; in fact, docker container create has no -d key at all.

```bash
docker container start [container-ID or container-name]
```

Running containers can be listed with:

```bash
docker ps
```

Or:

```bash
docker container ls
```

To execute a command in a running container, use the __docker exec__ command:

```bash
docker exec [options] [container-name or container-ID] [command]
```

Usually it is used to connect to the container's shell. For a Linux container this means calling /bin/bash:

```bash
docker exec -it [container-name or container-ID] /bin/bash
```

The container's shell works just like a real system shell.
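__docker exec__ is not limited to opening a shell; any single command can be run the same way. A small sketch, assuming a running container named `wordpress-app` (the name is hypothetical, the image is the `wordpress` one used above):

```bash
# List the files the application serves (the official wordpress image keeps them in /var/www/html)
docker exec wordpress-app ls -la /var/www/html

# Print the environment variables the container was started with
docker exec wordpress-app env
```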
To stop an application:

```bash
docker container stop [container-ID or container-name]
```

After a container has been stopped, it is no longer listed by __docker ps__ or __docker container ls__. To see stopped containers, use the __-a__ key:

```bash
docker ps -a
```

Or:

```bash
docker container ls -a
```

Stopped containers can be removed:

```bash
docker container rm [container-ID or container-name]
```

### Save And Load Docker Images

In the previous example the image was downloaded from Docker Hub, but an image can also be transferred from another machine. First, list all images:

```bash
docker images
```

Now save the image into a tar archive:

```bash
docker save -o [archivename.tar] [image-ID or image-name]
```

This tar archive can now be uploaded to another machine and imported there. To add the image to Docker's image list, use the load command:

```bash
docker load -i [archivename.tar]
```

### Important!

If the image is transferred from Windows to Linux or vice versa, it is important to use the __-o__ key for saving and the __-i__ key for loading instead of redirecting the output with __>>__.

### Making data persistent

When Docker runs a container from an image, all data is stored in the writable layer of the container. If the container is removed, the data is lost. There are several options for saving data and using it in Docker containers later.

We can save all the changes that happened to a container into an image with the help of the __commit__ command. First, list the running containers:

```bash
docker ps
```

Now use the __commit__ command:

```bash
docker commit [container-name or container-ID] [image-name[:image-tag]]
```

Now, when the images are listed, there will be an image with the saved state.

There is also another way of making data persistent: it can be achieved with the help of __volumes__.

### Volumes

To get the list of volumes on the system:

```bash
docker volume ls
```

To create a volume:

```bash
docker volume create [volume-name]
```

To inspect a volume:

```bash
docker volume inspect [volume-name]
```

Now that we have a volume, we can attach it to a container. Run an image and use the __-v__ parameter to specify the volume or mount a folder:

```bash
docker run -v [volume-name]:[folder-inside-container-filesystem] -p [port-on-host-system]:[app-port-inside-container] [image name]
```

We can also take a folder on the host system and mount it to a folder inside the container:

```bash
docker run -v [full-path-to-folder-on-host-system]:[folder-inside-container-filesystem] -p [port-on-host-system]:[app-port-inside-container] [image name]
```

Now, when the app inside the container makes changes to the folder mounted to the volume or to the folder on the host system, those changes will persist, and the app can restart from the state it was stopped in.

To remove a volume:

```bash
docker volume rm [volume-name]
```

To remove all volumes that are not used by containers:

```bash
docker volume prune
```

### Backup and Restore Data in Volumes

There are also several options for getting data out of a volume and putting it back. The first option is to get a shell on the running container. For a Linux container:

```bash
docker exec -it [container-ID or container-name] /bin/bash
```

Then move to the folder that is mounted to the volume (or to the physical folder on the host machine) and contains the application data:

```bash
cd [folder-with-app-data]
```

Put all the data into a tar archive:

```bash
tar -cvf [some-folder-in-container/archivename.tar] [folder-you-want-to-backup]
```

Now we can leave the shell running in the container and use the __docker cp__ command, or launch another instance of terminal/cmd/powershell and use docker cp from there.
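For instance, assuming a hypothetical container named `webapp` that keeps its data in `/var/www/html` (names and paths are illustrative only), the archive could be created like this:

```bash
# Open a shell in the running container
docker exec -it webapp /bin/bash

# Inside the container: pack the application data into /tmp
cd /var/www
tar -cvf /tmp/app-backup.tar html
exit
```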
To specify a path to a folder inside the container, use the form [container-name]:[path-to-folder]:

```bash
docker cp [from-where-to-get] [where-to-put]
docker cp [container-name]:[path-to-tar-archive] [path-to-folder-on-host-machine]
```

Now that we have the backup data on the host system, we can put it into another container. Let's assume that we have successfully transferred the Docker image and the backup tar archive to another machine. First, create a new volume:

```bash
docker volume create [volume-name]
```

Create a new container from the image:

```bash
docker run -d -p [port-on-host-system]:[port-inside-container] -v [volume-name]:[path-to-folder-where-app-stores-data] [image-name]
```

From another instance of cmd/powershell/terminal run:

```bash
docker cp [archivename.tar] [container-name]:[path-to-folder-inside-container]
```

Connect to the container's shell:

```bash
docker exec -it [container-ID or container-name] /bin/bash
```

Now untar the archive into the folder where the application in the container stores its data:

```bash
tar -xvf [archivename.tar] -C [path-to-folder-where-app-stores-data]
```

The data from the backup tar archive is now in the folder that the application uses to store its data. This folder is mounted to the volume, so even if the container is removed, the data will persist.

### Another way to back up a volume

For example, there is an image which contains an application, and the volume `vol1` contains the application data. Give the container a name, for example `test1`, and run it:

```bash
docker run -d --name test1 -p [port-on-host]:[app-port-inside-container] -v vol1:[path-to-folder-inside-container] [image-name]
```

Now we can start another container and back up the data from the test1 container to the host system. Sometimes it is better to use another image for the second container: the commands we need may not be available in the container that runs the application. In that case we can use a similar image, for example the `ubuntu` image. First, pull it from Docker Hub:

```bash
docker pull ubuntu:latest
```

Then run the ubuntu image:

```bash
docker run --rm --volumes-from test1 -v [folder-on-host-system]:[some-new-backup-folder-inside-container] ubuntu tar -cvf [some-new-backup-folder-inside-container/archivename.tar] [folder-you-want-to-backup]
```

After running this command we will have the archive on the host system.

To restore the data, we again run a container from the image with the application and use a second container to put the data into the first one. Create a new volume for this:

```bash
docker volume create restore-volume
```

Run a new container with the application:

```bash
docker run -d --name test2 -p [port-on-host]:[app-port-inside-container] -v restore-volume:[path-to-folder-inside-container] [image-name]
```

Create a second container from the ubuntu image and restore the data from the host system into restore-volume, which is mounted to the test2 container:

```bash
docker run --rm --volumes-from test2 -v [folder-on-host-system]:[some-new-backup-folder-inside-container] ubuntu tar -xvf [archive.tar] -C [app-data-folder]
```

After executing this command, the application data will be in restore-volume. The user may need to use the `--strip-components` key with the tar command in order to get the required folder hierarchy.

## Writing Dockerfile

A Dockerfile defines the instructions for building a Docker image. The file has to be named `Dockerfile`, without an extension. There are several commands that are used to write it:

- FROM - defines the base layer. For example `FROM ubuntu:18.04`: Ubuntu 18.04 is used as the base layer of the Docker image. All the layers created by the following commands are added on top of the base layer.
  There can also be multiple FROM commands in a Dockerfile, resulting in a multi-stage build.
- AS - used in multi-stage builds to name a stage: `FROM ubuntu:18.04 AS first-build`.
- LABEL - sets metainformation (for example who created and who maintains the image).
- ENV - sets environment variables inside the container. They are used __when the container runs__.
- RUN - runs a command and creates a new layer in the Docker image __when it is built__. Usually used to install additional packages inside the container, so that everything the user needs is already there once the image is built. __Creates a new layer__.
- COPY - copies files and folders from the local system into the container. __Creates a new layer__.
- ADD - similar to COPY. It also copies files and folders into the container, but ADD can additionally untar archives into the container and fetch remote resources (for example from the web). __Creates a new layer__.
- CMD - specifies the command with arguments that will run __when the container is launched__. __The arguments can be changed__. There can be only one CMD command in a Dockerfile. When the user creates a container from the image with `docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]`, the user can specify another command and replace what was written in the CMD section.
- WORKDIR - sets the working directory for the following commands. It is better to use absolute paths to folders.
- ARG - sets variables that are used by Docker __when the image is built__. They can also be set by the user when `docker build` runs, with the `--build-arg` key. They are not accessible when the container runs.
- ENTRYPOINT - specifies the command with arguments that will be called when the container runs. __The command itself cannot be changed__. When the user creates a container from the image with `docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]` and ENTRYPOINT was specified, it is not replaced: whatever the user passes on the command line is appended to the ENTRYPOINT command as its arguments (see the sketch below).
- EXPOSE - gives information about which ports are intended to be open when the container runs. The user actually publishes a port with the __-p__ key of `docker run`.
- VOLUME - specifies a volume that is mounted to some folder inside the container, or a physical folder on the host system that is mounted to some folder inside the container.

Every layer created during the build of a Docker image is a file that describes what changed relative to the layers below it. The mutable (read/write) layer is added last, when the container runs; all other layers are read only.
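A minimal sketch of how CMD and ENTRYPOINT interact (the image name `greet-image` and the file contents are hypothetical, not part of the example further below):

```bash
# Dockerfile (sketch)
FROM ubuntu:18.04
ENTRYPOINT ["echo", "Hello,"]   # fixed part of the command, not replaced by docker run arguments
CMD ["world"]                   # default argument, can be overridden

# Building and running it:
#   docker build --tag greet-image .
#   docker run greet-image           ->  prints "Hello, world"
#   docker run greet-image Docker    ->  prints "Hello, Docker"
```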
[Image Name 1]: https://secretnotes.space/articleimage?id=58

![Image Name 1]

>Image from official documentation

Example of a Dockerfile:

```bash
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
WORKDIR /app
EXPOSE 80

RUN apt-get update ; apt-get install -y git build-essential gcc make yasm autoconf automake cmake libtool checkinstall libmp3lame-dev pkg-config libunwind-dev zlib1g-dev libssl-dev
RUN apt-get update \
    && apt-get clean \
    && apt-get install -y --no-install-recommends libc6-dev libgdiplus wget software-properties-common
RUN wget https://www.ffmpeg.org/releases/ffmpeg-4.0.2.tar.gz
RUN tar -xzf ffmpeg-4.0.2.tar.gz; rm -r ffmpeg-4.0.2.tar.gz
RUN cd ./ffmpeg-4.0.2; ./configure --enable-gpl --enable-libmp3lame --enable-decoder=mjpeg,png --enable-encoder=png --enable-openssl --enable-nonfree
RUN cd ./ffmpeg-4.0.2; make
RUN cd ./ffmpeg-4.0.2; make install
WORKDIR /app
RUN mkdir Files

FROM mcr.microsoft.com/dotnet/core/sdk:2.2-stretch AS build
WORKDIR /src
COPY ["WebApp/WebApp.csproj", "WebApp/"]
RUN dotnet restore "WebApp/WebApp.csproj"
COPY . .
WORKDIR "/src/WebApp"
RUN dotnet build "WebApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApp.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApp.dll"]
```

## Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. The configuration is written in a `.yml` file. Docker Compose usually comes with Docker, but sometimes it may require manual installation.

Example of a `.yml` file:

```yml
#A docker-compose file has to start with the version
version: '3.5'

#The two applications in our docker-compose file have to work in one network,
#so this network needs to be defined here and later specified in each service.
networks:
  localdev:
    name: localdev

#Applications are defined in the services section.
#In this case we have a web app and a db server, so 2 services are defined here.
services:
  main-project: #The first of the two services: the web app
    build: ./MyApp #The app can be built from a Dockerfile, or a docker image can be used instead
    container_name: myapp-container #The user can give the container a name
    restart: always #Whenever the Docker service is restarted, containers using the always policy are restarted regardless of whether they were running or not. This service depends on db-server, so there may be an error on boot.
    ports:
      - "8000:80" #Port publishing, just as when a docker image is run
    depends_on:
      - db-server #Dependency between services. It is needed to start the services in the right order
    volumes:
      - myapp-vol:/app/MediaFiles #Mount a volume to the folder in the container
    networks:
      - localdev #Specifying the previously defined network
  db-server: #The second of the two services: the db server
    image: mcr.microsoft.com/mssql/server:2017-latest #An image instead of building from a Dockerfile
    container_name: db-server
    environment: #Environment variables. Some software requires variables to be set
      - ACCEPT_EULA=Y
      - MSSQL_SA_PASSWORD=Somepass-22
      - MSSQL_TCP_PORT=1433
      - MSSQL_PID=Express
    ports:
      - "5556:1433"
    volumes:
      - db-vol:/var/opt/mssql/data/
    networks:
      - localdev #The same network as the web app

#Volumes are defined here. They can be listed with the docker volume ls command
volumes:
  myapp-vol:
  db-vol:
```

The name of this file has to be `docker-compose.yml`. When writing it, it is necessary to use spaces for indentation (2 or 4 per level) instead of tabs (\t).
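Since YAML is sensitive to indentation, it can be helpful to validate the file before building; Compose has a built-in command for that:

```bash
# Validate docker-compose.yml and print the fully resolved configuration
docker-compose config
```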
When the docker-compose.yml is ready, go to that folder in a terminal/cmd/powershell and build it:

```bash
docker-compose build
```

If it is built successfully, the user can bring it up:

```bash
docker-compose up
```

To shut it down, the user needs to call the following command from the same folder where the configuration file is located:

```bash
docker-compose down
```
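In day-to-day use the stack is usually started detached and inspected through Compose itself; a short sketch using standard Compose options (the service name comes from the example file above):

```bash
# Start all services in the background
docker-compose up -d

# Show the state of the services defined in docker-compose.yml
docker-compose ps

# Follow the logs of a single service, e.g. main-project
docker-compose logs -f main-project

# Stop and remove the containers; add -v to also remove the named volumes
docker-compose down
```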