Docker

General Docker notes, concepts, commands, etc.


Last Updated: August 01, 2018 by Pepe Sandoval



Want to show support?

If you find the information in this page useful and want to show your support, you can make a donation

Use PayPal

This will help me to create more stuff and fix the existing content... or probably your money will be used to buy beer


Disclaimer

  • The notes and commands documented here are just personal notes I use for quick reference. This means they can be wrong, unclear, or confusing, so they are NOT a reliable NOR an official source of information. I strongly suggest you use the official Docker documentation and never rely completely on what's written here
  • Please review the disclaimer of use and contents in our Terms and Conditions page for information about the contents in this page

Docker

Docker Setup - Linux

  1. Install Docker for Ubuntu
  2. Add your user to the docker group: sudo usermod -aG docker $USER. Make sure to log out and log back in so your group membership is re-evaluated
  3. (Optional) Install docker-machine: base=https://github.com/docker/machine/releases/download/v0.16.0 && curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/tmp/docker-machine && sudo install /tmp/docker-machine /usr/local/bin/docker-machine
    • Only needed if you want to manage virtual machines using this host
    • You will also need to install a hypervisor like VirtualBox. For example, you would get the VirtualBox .deb and install it: sudo apt-get install libqt5opengl5 libqt5printsupport5 && sudo dpkg -i virtualbox-5.2_5.2.22-126460~Ubuntu~bionic_amd64.deb
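After logging back in, you can verify that the group change took effect before running any docker commands. A minimal sketch (it assumes the group is named docker, which is what the setup step above creates):

```shell
# List the groups of the current user; "docker" should appear after re-login
id -nG

# Print a status line depending on membership in the "docker" group
if id -nG | grep -qw docker; then
  echo "in docker group"
else
  echo "not in docker group"
fi
```

If it prints "not in docker group" even after re-login, a full logout (or a reboot) is usually what's missing, since group membership is only read when the session starts.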

Docker Notes

  • Docker is available in two editions:

    • Community Edition (Docker CE): ideal for individual developers and small teams looking to get started with Docker
    • Enterprise Edition (Docker EE): designed for enterprise development and IT teams who build and ship apps at production scale
  • Docker keeps a container running as long as the process it started inside the container is still running. If a single command is passed, the process exits as soon as the command finishes, which means the container stops. However, Docker doesn't delete resources by default, so the container still exists in the Exited state

  • A container is launched by running an image. An image is an executable package that includes everything needed to run an application (the code, a runtime, libraries, environment variables, and configuration files, etc.)

  • A container is a runtime instance of an image

    • A container runs natively on Linux and shares the kernel of the host machine with other containers. It runs a discrete process, taking no more memory than any other executable, whereas a virtual machine (VM) runs a full-blown "guest" operating system with virtual access to host resources through a hypervisor.
    • So Linux containers require the Docker host to be running a Linux kernel and Windows containers need to run on a Docker host with a Windows kernel
  • In a distributed application, different pieces of the app are called services, e.g. service for storing application data in a database, a service for the front-end, etc.

  • A single container running in a service is called a task

  • With a bind mount, a file or directory on the host machine is mounted into a container running on the same host. Usually used for development

  • A swarm is a group of machines that are running Docker and joined into a cluster (a group of machines/resources)
    • The swarm is abstracted since you can run Docker commands as usual, but now they are executed on a cluster by a swarm manager.
    • The swarm manager is the only machine in a swarm that can execute Docker commands; it can also authorize other machines to join the swarm as workers
    • Machines in a swarm are called nodes and can be physical or virtual
    • The strategy the swarm uses to run containers, depending on the number of machines/resources in the swarm, must be defined in the docker-compose.yml file
    • Docker usually runs in single-host mode but can be switched into swarm mode (with docker swarm init), which enables the use of swarms and makes the current machine a swarm manager, then other machines join the swarm (with docker swarm join)
  • A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application

Dockerfile

  • The Dockerfile is the file that defines a Docker image; it defines what goes on in the environment inside your container.
  • In the Dockerfile you define what files need to be included in the environment
  • Usually created inside the project folder and included as part of the repository
  • You need to run the docker build command in the folder where the Dockerfile is located

  • Dockerfile python3-flask example:

    ## To Build use: docker build -t pythonflasktest .
    ## To run use: docker run -p 4000:80 pythonflasktest
    
    # Use an official Python runtime as a parent image
    FROM python:3
    
    # Set the working directory to /app
    WORKDIR /app
    
    # Copy the current directory contents into the container at /app
    COPY . /app
    
    # Install any needed packages specified in requirements.txt
    RUN pip3 install --trusted-host pypi.python.org -r requirements.txt
    
    # Make port 80 available to the world outside this container
    EXPOSE 80
    
    # Define environment variable
    ENV FLASK_APP application.py
    ENV FLASK_DEBUG 1
    
    # Run app.py when the container launches
    CMD ["flask", "run", "--host", "0.0.0.0", "--port", "80"]
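The Dockerfile above installs packages from a requirements.txt that must sit next to it in the build context. For this Flask example it can be as small as the following (the pinned version is only illustrative):

```
# requirements.txt -- the only package the example app needs
Flask==1.0.2
```

Pinning a version keeps the image build reproducible; leaving it unpinned would install whatever is latest at build time.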

docker-compose.yml

  • The docker-compose.yml file is a YAML file used to define the services of an app and how it should scale
  • It defines how Docker containers should behave in production.

  • docker-compose.yml python3-flask with replicas load balancer example:

    version: "3"
    services:
      web:
        # replace username/repo:tag with your name and image details
        image: sanpepe/get-started:part2
        deploy:
          replicas: 3
          resources:
            limits:
              cpus: "0.1"
              memory: 50M
          restart_policy:
            condition: on-failure
        ports:
          - "4000:80"
        networks:
          - webnet
    networks:
      webnet:
  • Simple app specified in a separate Dockerfile that needs a Mongo DB example:

    • Dockerfile:

      FROM node:10.13-alpine
      WORKDIR /usr/src/app
      COPY package*.json ./
      RUN npm install
      COPY . .
      EXPOSE 3000
      CMD node app.js
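Since this Dockerfile copies the whole context with COPY . ., it's common to add a .dockerignore next to it so node_modules and other local files don't end up in the image (an illustrative sketch; the entries are typical for a Node project, not taken from this repo):

```
# .dockerignore -- paths excluded from the build context
node_modules
npm-debug.log
.git
```

This keeps the build context small and lets RUN npm install produce the dependencies inside the image instead of copying host ones in.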
    • docker-compose.yml file:

      version: "2.1"
      services:
        app:
          build: .
          ports:
            - 3001:3000
        mongo:
          image: mongo
          ports:
            - "27017:27017"
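Compose doesn't guarantee startup order by default; since the app connects to Mongo, a depends_on entry is commonly added under the app service. A sketch (note depends_on only orders container startup, so the app should still retry its DB connection):

```yaml
app:
  build: .
  ports:
    - 3001:3000
  depends_on:
    - mongo   # start the mongo container before app
```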

Docker Commands Reference

  • Check version and system-wide info: docker version, docker info
  • Check Running containers: docker ps

Generic WebApp Basic Setup: Build Image & Run Container

  1. Test Docker and create a Dockerfile; make sure to use the EXPOSE command to expose a port
    • docker run hello-world
  2. Create a Docker image using the current directory, which contains the Dockerfile
    • docker build -t <image_name_here> .
    • docker build -t <your_username>/<repository_name_here>:<tag_name_here> .
    • docker image build --tag sanpepe/linux_tweet_app:1.0 .
  3. Run the container and make sure to publish the container port onto a host port (traffic coming into the Docker host on port host_port will be directed to port container_port in the container)
    • docker run -p <host_port>:<container_port> <image_name_here>
    • docker run -p 4000:80 mytestimage
    • Run in background/detached(-d or --detach): docker run -d -p 4000:80 mytestimage

Images: list, remove...

  • List images downloaded
    • docker image ls
    • docker image ls -a -q
  • Remove image
    • docker image rm <image id>
  • Remove all images
    • docker image rm $(docker image ls -a -q)

Containers: check, list, exec, remove...

  • List all containers in any state
    • docker container ls --all
  • Remove container and force remove (ungracefully shut down the container and permanently remove it from the Docker host)
    • docker rm <docker_container_id>
    • docker container rm --force <docker_container_id>
  • Remove all containers
    • docker container rm $(docker container ls -a -q)
  • Stop the specified container (gracefully stop the container and leave it on the host)
    • docker container stop <docker_container_id>
  • Force shut down the specified container
    • docker container kill <docker_container_id>
  • Check container info:
    • docker container logs <docker_container_id/name>
    • docker container top <docker_container_id/name>
  • Start a container and run a command
    • docker container run <image_name_here> <command_here>
    • docker container run ubuntu hostname
  • Run interactive and remove after completion
    • docker container run --interactive --tty --rm ubuntu bash
  • Execute/run command inside container
    • docker exec -it <docker_container_id/name> <command_here>
    • docker exec -it mydb mysql --user=root --password=$MYSQL_ROOT_PASSWORD --version
  • Connect to a new shell process inside a running container
    • docker exec -it mydb bash
    • docker exec -it <docker_container_id/name> bash

Services

  • docker swarm init

  • deploy

    • docker stack deploy -c docker-compose.yml <app_name>
    • docker stack deploy -c docker-compose.yml getstartedlab
  • List services

    • docker service ls
  • List service processes / List tasks

    • docker service ps <app_name>_<service_name>
    • docker service ps getstartedlab_web
  • docker stack rm getstartedlab

  • docker swarm leave --force

docker-machine

  • Execute bash command on other machine:

    • docker-machine ssh <machine_name> "<bash_command_here>"
  • Copy files to other machine:

    • docker-machine scp <file> <machine_name>:<path>

Share image to registry (Push-Pull Image)

  1. Set up an account in a registry, e.g. Docker Hub

  2. Log in to the registry: docker login

  3. Tag image:

    • docker tag <local_image_name_here> <your_username>/<repository_name_here>:<tag_name_here>
    • docker tag mytestimage sanpepe/mytestrepo:test1
  4. Publish/upload the image
    • docker push <your_username>/<repository_name_here>:<tag_name_here>
    • docker push sanpepe/mytestrepo:test1
  5. Pull the image (when you attempt to run it and it is not available locally, it will be pulled from the repo):
    • docker run -p <host_port>:<container_port> <your_username>/<repository_name_here>:<tag_name_here>
    • docker run -p 4000:80 username/repository:tag

Setup a swarm in multiple VMs

  1. Create at least two VMs

    docker-machine create --driver virtualbox myvm1
    docker-machine create --driver virtualbox myvm2
  2. Set one to be the swarm manager by running swarm init on the VM: docker-machine ssh <vm_name_here> "docker swarm init --advertise-addr <vm_ip_here>". You will need the VM's IP; you can use the docker-machine ls command to get it. E.g.:

    $ docker-machine ls
    NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
    myvm1   -        virtualbox   Running   tcp://192.168.99.100:2376           v18.09.0
    myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v18.09.0
    $
    $
    $ docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100"
    Swarm initialized: current node (w4q2okygu30t468ubq771poc8) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
     docker swarm join --token SWMTKN-1-0bocssujga2foc6t608xw4g2gxalip8d7666umtm0zria6gbyl-7sr1db5x232yf7ubray6y8uwy 192.168.99.100:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    
  3. Use the pre-configured docker swarm join command to set the other machines as workers. E.g.
    $ docker-machine ssh myvm2 "docker swarm join --token SWMTKN-1-0bocssujga2foc6t608xw4g2gxalip8d7666umtm0zria6gbyl-7sr1db5x232yf7ubray6y8uwy 192.168.99.100:2377"
    This node joined a swarm as a worker.
  4. Optionally configure your current shell to talk to the Docker daemon on the VM (use eval $(docker-machine env <machine_name>)) so you can run docker commands on the other machines while keeping access to local files; make sure the machine is set as active. E.g.
    $ docker-machine env myvm1
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.99.100:2376"
    export DOCKER_CERT_PATH="/home/jose/.docker/machine/machines/myvm1"
    export DOCKER_MACHINE_NAME="myvm1"
    # Run this command to configure your shell:
    # eval $(docker-machine env myvm1)
    $
    $ eval $(docker-machine env myvm1)
    $ docker-machine ls
    NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER     ERRORS
    myvm1   *        virtualbox   Running   tcp://192.168.99.100:2376           v18.09.0
    myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v18.09.0
    $
  5. Deploy on the swarm: docker stack deploy -c docker-compose.yml getstartedlab. Make sure the machine that is the swarm manager is the active one
    • Check stack docker stack ps getstartedlab
    • Remove stack: docker stack rm getstartedlab
    • Remove worker: docker-machine ssh myvm2 "docker swarm leave"
    • Remove manager: docker-machine ssh myvm1 "docker swarm leave --force"
    • Stop machines docker-machine stop myvm1 myvm2
