@ccrsxx
Created December 16, 2024 16:21
My notes on docker

Learning Docker

Container

A container is a running instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state. In practice, a container behaves like a very lightweight VM, but it shares the host's kernel and only provides the resources and environment needed to run applications.
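The basic lifecycle can be driven from the CLI. A minimal sketch (the image and container names here are just examples, not from these notes):

docker run -d --name web nginx:alpine   # create and start a container from an image
docker ps                               # list running containers
docker stop web                         # stop the container
docker rm web                           # remove it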

Building image

  • FROM. The first instruction in the Dockerfile must be FROM, which defines the base image to start the build from. FROM can appear multiple times within a single Dockerfile to create multiple images, or to use one build stage as the base of another (multi-stage builds).

  • WORKDIR. The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.

  • COPY. The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.

  • RUN. The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.

  • CMD. The CMD instruction has three forms:

    • CMD ["executable","param1","param2"] (exec form, this is the preferred form)
    • CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
    • CMD command param1 param2 (shell form)

    There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.

  • EXPOSE. The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP, and the default is TCP if the protocol is not specified.
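A minimal Dockerfile tying these instructions together might look like this (a sketch for a Node app; the file names and entry point are assumptions, not from the notes above):

FROM node:18-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]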

Running container

  • docker run -dp 127.0.0.1:3000:3000 <image_name>. This combines two flags:

    • The -d flag (short for --detach) runs the container in the background.
    • The -p flag (short for --publish) creates a port mapping between the host and the container. It takes a value in the format HOST:CONTAINER, where HOST is the address and port on the host, and CONTAINER is the port inside the container. Here it publishes the container's port 3000 to 127.0.0.1:3000 (localhost:3000) on the host. Without the port mapping, you wouldn't be able to access the application from the host.

Updating container

  • docker stop <container_id>. Stop the container.
  • docker rm <container_id>. Remove the container.
  • docker build -t <image_name> . Build the image (the trailing . is the build context).
  • docker run -dp 3000:3000 <image_name>. Run the container.
  • docker image prune. Remove unused images.
  • docker container prune. Remove all stopped containers.
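Put together, a typical update cycle looks like this (getting-started is a placeholder image name):

docker stop <container_id>
docker rm <container_id>
docker build -t getting-started .
docker run -dp 127.0.0.1:3000:3000 getting-started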

Share the application

  • Log in to Docker Hub: docker login.
  • Create a repository on Docker Hub and set its visibility to public.
  • Push the image to Docker Hub: docker push <username>/<docker-image>. If there's no local image that matches the repository name, you need to tag the image first: docker tag <image> <username>/<docker-image>.
  • Pull the image from Docker Hub: docker pull <username>/<docker-image>.
  • Run the image: docker run -dp 0.0.0.0:3000:3000 <username>/<docker-image>. Binding to 0.0.0.0 exposes the port on all of the host's network interfaces, not just localhost on the Docker host. If the host address is omitted, Docker defaults to 0.0.0.0.

Persist the DB

There are two ways to persist the DB:

  • Volume mount

    A volume mount persists data by mounting a Docker-managed named volume into the container. The data lives in a location on the host that Docker manages, the container reads and writes it through the mount path, and the data survives container removal.

    You don't need to create the storage location on the host machine. Docker creates and manages it for you.

    Example command:

    docker run -dp 127.0.0.1:3000:3000 --mount type=volume,src=todo-db,target=/etc/todos getting-started

    The --mount flag with the type=volume option mounts a named volume called todo-db at the /etc/todos directory inside the container.
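    To see where Docker actually stores the data, you can inspect the named volume (todo-db comes from the command above):

    docker volume inspect todo-db

    The Mountpoint field in the output shows the directory on the host where the data lives.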

  • Bind mount

    A bind mount persists data by mounting a specific directory from the host into the container. The data is stored directly in that host directory, and the container reads and writes it there.

    Unlike with volumes, you need to create the directory on the host machine yourself. Docker will not create it for you.

    Example command:

    docker run -it --mount "type=bind,src=$pwd,target=/src" ubuntu bash

    The -it flags (-i for interactive, -t for a TTY) run the container interactively. The ubuntu is the image name, and bash is the command to run inside the container.

    The --mount flag with the type=bind option mounts the current directory, given by the $pwd variable, into the /src directory inside the container.

    A common use case in development is to mount the source code directory into the container, so you can edit the source code on the host machine and have the changes reflected inside the container.

    Example command:

    docker run -dp 127.0.0.1:3000:3000 `
    -w /app --mount "type=bind,src=$pwd,target=/app" `
    node:18-alpine `
    sh -c "yarn install && yarn run dev"

    The --mount flag with the type=bind option mounts the current directory, given by the $pwd variable, into the /app directory inside the container.

    The -w flag sets the working directory inside the container to /app. So you don't need to cd into the directory.

    The node:18-alpine is the image name. The sh -c "yarn install && yarn run dev" is the command to run inside the container.

Multi container apps

By default, Docker containers are isolated from each other and can't communicate directly. To allow them to communicate, you need to create a network and attach the containers to it.

Creating a network:

docker network create docker-uwu

Example: start a MySQL container and attach it to the network:

docker run -d `
--network docker-uwu --network-alias mysql `
-v todo-mysql-data:/var/lib/mysql `
-e MYSQL_ROOT_PASSWORD=secret `
-e MYSQL_DATABASE=todos `
mysql:8.0

The --network flag attaches the container to the network. The --network-alias flag gives the container an extra DNS name on that network. The -v flag creates a named volume called todo-mysql-data and mounts it at the /var/lib/mysql directory inside the container. The -e flags set environment variables.

Confirm that the database is running:

docker exec -it <container_id> mysql -p

You can then troubleshoot the container networking with the nicolaka/netshoot image:

docker run -it --network docker-uwu nicolaka/netshoot

Look up the IP address of the mysql container:

dig mysql

This works because we set --network-alias to mysql earlier; the alias effectively acts as a DNS hostname for the container on that network.

Run the app container on the same network:

docker run -dp 127.0.0.1:3000:3000 `
-w /app -v "$(pwd):/app" `
--name emilia `
--network docker-uwu `
-e MYSQL_HOST=mysql `
-e MYSQL_USER=root `
-e MYSQL_PASSWORD=secret `
-e MYSQL_DB=emilia `
node:18-alpine `
sh -c "yarn install && yarn run dev"

Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Compose can also create a network and named volumes automatically and attach the containers to the network.

Example docker-compose.yml:

services:
    app:
        image: node:18-alpine
        command: sh -c "yarn install && yarn run dev"
        ports:
            - 127.0.0.1:3000:3000
        working_dir: /app
        volumes:
            - ./:/app
        environment:
            MYSQL_HOST: mysql
            MYSQL_USER: root
            MYSQL_PASSWORD: secret
            MYSQL_DB: emilia

    mysql:
        image: mysql:8.0
        volumes:
            - todo-mysql-data:/var/lib/mysql
        environment:
            MYSQL_ROOT_PASSWORD: secret
            MYSQL_DATABASE: emilia

volumes:
    todo-mysql-data:

The services section defines the containers: app and mysql are the service names. For each service, image is the image to use, command is the command to run inside the container, ports defines the port mappings, working_dir sets the working directory inside the container, volumes defines the mounts, and environment sets the environment variables.

Each service name under the services section automatically becomes a network alias for its container, so you don't need to set the --network-alias flag.

Named volumes must be declared in the top-level volumes section when using Docker Compose; Compose won't create a named volume that isn't declared there (unless you create it manually beforehand). Bind mounts don't need to be declared.

Run the docker compose:

docker compose up -d

The -d flag runs the containers in the background.

Tear down the docker compose:

docker compose down
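Note that docker compose down keeps named volumes by default. To also remove them, pass the --volumes flag:

docker compose down --volumes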

Image building best practices

  • Use .dockerignore file to ignore files that are not needed in the image. Useful for omitting the node_modules directory.
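    A minimal .dockerignore for a Node project might look like this (the exact entries depend on your project):

    node_modules
    npm-debug.log
    .git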

  • Order the Dockerfile so that dependencies are cached. That way, changing the source code doesn't reinstall the dependencies on every build. Example:

    FROM node:18-alpine
    
    WORKDIR /app
    
    COPY package.json yarn.lock ./
    
    RUN yarn install --production
    
    COPY . .

    Notice that we copy package.json and yarn.lock before running yarn install, so the install layer is cached and only re-runs when those files change. Only then do we copy the source code. Since node_modules is listed in the .dockerignore file, it isn't copied into the image.

    This way, when you change the source code, only the COPY . . layer is rebuilt; the dependency layers come from the cache.

  • Multi-stage builds. They reduce the size of the final image by separating the build stage from the production stage.

    For example, to build a React static website you need Node, but you don't need Node to serve the built files. So you can build in one stage and copy only the output into a slim production stage.

    # Build stage
    FROM node:18-alpine AS build
    WORKDIR /app
    COPY package* yarn.lock ./
    RUN yarn install
    COPY public ./public
    COPY src ./src
    RUN yarn run build
    
    # Production stage
    FROM nginx:alpine
    COPY --from=build /app/build /usr/share/nginx/html

    In this example, one stage (called build) performs the actual build using Node. The second stage (starting at FROM nginx) copies in files from the build stage. The final image consists only of the last stage.
