- Static Website Hosting with Nginx and Docker
- Multi-Container Docker Applications
- One Database Shared by Multiple Containers
Learn Docker from scratch to advanced, one day at a time.
Before Docker, developers faced the classic:
"It works on my machine!"
Different OS versions, missing libraries, and conflicting dependencies broke apps in production. Docker fixes this by packaging everything your app needs into one portable unit.
Docker is an open-source platform that lets you build, ship, and run applications inside containers: lightweight, isolated, and portable environments.
- Created by Solomon Hykes in 2013
- Written in Go
- Built on Linux kernel features:
  - namespaces (process, network, and filesystem isolation) and cgroups (CPU and memory limits)
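On a Linux host you can peek at these kernel primitives directly, no Docker required. A quick sketch:

```shell
# Each entry under /proc/self/ns is a namespace this shell belongs to
# (pid, net, mnt, uts, ipc, ...)
ls /proc/self/ns

# cgroup membership of the current process
cat /proc/self/cgroup
```

Docker creates a fresh set of these namespaces (and a cgroup) for every container it starts.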
| Feature | Container 🐳 | Virtual Machine 💻 |
|---|---|---|
| Boot Time | Seconds | Minutes |
| Size | MBs | GBs |
| OS | Shares host kernel | Full OS inside |
| Isolation | Process-level | Hardware-level |
| Performance | Near-native | Overhead |
Rule of thumb: Use containers for apps; use VMs when you need full OS isolation.
You (CLI)
    │
    ▼
Docker Client ──REST API──▶ Docker Daemon (dockerd)
                                    │
                   ┌────────────────┼────────────────┐
                   ▼                ▼                ▼
                Images          Containers    Networks/Volumes
                   │
                   ▼
         Docker Registry (Docker Hub)
Key Components:
- Docker Client – The CLI you type commands into (docker run, docker build)
- Docker Daemon – Background service that manages everything
- Docker Registry – Remote store for images (e.g., Docker Hub, GHCR, ECR)
- Docker Objects – Images, Containers, Volumes, Networks
# Install Docker (Ubuntu)
sudo apt update && sudo apt install docker.io -y
# Verify installation
docker --version
# Run your first container!
docker run hello-world

A Docker image is a read-only, layered template used to create containers. Think of it like a recipe – the container is the cooked dish.
An image contains:
- Application code
- Runtime (Node, Python, Java, etc.)
- Libraries & dependencies
- Environment variables
- Startup instructions
Every image is made of stacked layers. Each Dockerfile instruction adds a new layer.
┌───────────────────────┐
│  Your App Code        │  ← Layer 4 (COPY . .)
├───────────────────────┤
│  npm install          │  ← Layer 3 (RUN npm install)
├───────────────────────┤
│  package.json copied  │  ← Layer 2 (COPY package.json)
├───────────────────────┤
│  node:18-alpine       │  ← Layer 1 (Base Image)
└───────────────────────┘
Layers are cached! Unchanged layers are reused, making builds faster.
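The cache is why Dockerfiles copy dependency manifests before the rest of the source. A minimal sketch (Node is illustrative; the same ordering applies to any stack):

```dockerfile
FROM node:18-alpine
WORKDIR /app
# Changes rarely – this layer (and the npm install below) stays cached
COPY package*.json ./
RUN npm install
# Changes often – only layers from here down are rebuilt
COPY . .
```

Editing your app code now invalidates only the final COPY layer; editing package.json re-runs npm install too.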
# Pull an image from Docker Hub
docker pull nginx:latest
# List all local images
docker images
# Inspect image details
docker inspect nginx
# Remove an image
docker rmi nginx
# Remove all unused images
docker image prune -a
# Tag an image
docker tag myapp:latest myapp:v1.0

Docker Hub is the default public registry for Docker images.
# Login to Docker Hub
docker login
# Push your image
docker push yourusername/myapp:1.0
# Search for images
docker search ubuntu

docker pull node:18-alpine
docker images
docker inspect node:18-alpine
docker rmi node:18-alpine

A Dockerfile is a plain text file with step-by-step instructions to build a Docker image. Instructions run in order, and most of them add a new layer.
# ─── Base Image ──────────────────────────────────────────
FROM node:18-alpine
# Sets the base image. Always the first instruction.
# ─── Metadata ────────────────────────────────────────────
LABEL maintainer="you@email.com"
LABEL version="1.0"
# Adds metadata (key=value) to the image.
# ─── Working Directory ───────────────────────────────────
WORKDIR /app
# Sets the working directory inside the container.
# All subsequent commands run from here.
# ─── Copy Files ──────────────────────────────────────────
COPY package*.json ./
# Copies files from host to container.
# Use COPY over ADD unless you need tar extraction or URLs.
ADD https://example.com/file.tar.gz /tmp/
# Like COPY but also handles URLs and auto-extracts tar archives.
# ─── Run Commands ────────────────────────────────────────
RUN npm install
# Executes a command at BUILD time. Creates a new layer.
# ─── Environment Variables ───────────────────────────────
ENV NODE_ENV=production
ENV PORT=3000
# Sets environment variables inside the image/container.
# ─── Build Arguments ─────────────────────────────────────
ARG APP_VERSION=1.0
# Available only at BUILD time (not at runtime).
# Pass via: docker build --build-arg APP_VERSION=2.0
# ─── Expose Port ─────────────────────────────────────────
EXPOSE 3000
# Documents which port the container listens on.
# Doesn't actually publish the port – use -p for that.
# ─── Volume ──────────────────────────────────────────────
VOLUME ["/data"]
# Creates a mount point for persistent data.
# ─── User ────────────────────────────────────────────────
USER node
# Sets the user for subsequent RUN/CMD/ENTRYPOINT instructions.
# Best practice: don't run as root.
# ─── CMD vs ENTRYPOINT ───────────────────────────────────
CMD ["node", "server.js"]
# Default command when container starts.
# Can be overridden at runtime: docker run myapp npm test
ENTRYPOINT ["node"]
# Fixed command. Arguments can be appended.
# docker run myapp server.js → runs: node server.js
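# Tip (illustrative addition, not part of the original file):
# ENTRYPOINT and CMD combine – ENTRYPOINT is the fixed executable,
# CMD supplies default arguments that callers can override:
#   ENTRYPOINT ["node"]
#   CMD ["server.js"]
# docker run myapp           → runs: node server.js
# docker run myapp other.js  → runs: node other.js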
# ─── Health Check ────────────────────────────────────────
HEALTHCHECK --interval=30s --timeout=5s \
CMD curl -f http://localhost:3000/health || exit 1
# Docker checks container health periodically.
# ─── Shell Form vs Exec Form ─────────────────────────────
# Shell form (runs via /bin/sh -c):
RUN npm install
CMD node server.js
# Exec form (preferred – no shell, signals work correctly):
RUN ["npm", "install"]
CMD ["node", "server.js"]

FROM node:18-alpine
# Security: run as non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
# Copy deps first (layer caching benefit)
COPY package*.json ./
RUN npm ci --only=production
COPY . .
# Switch to non-root
USER appuser
EXPOSE 3000
HEALTHCHECK --interval=30s CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "server.js"]

Like .gitignore – tells Docker what NOT to copy into the image.
node_modules
.git
.env
*.log
dist
coverage
# Build an image
docker build -t myapp:1.0 .
# Build with build args
docker build --build-arg NODE_ENV=production -t myapp:prod .
# View build layers
docker history myapp:1.0

A container is a running instance of an image. If an image is a class, a container is an object. You can run multiple containers from the same image.
docker create  → Created
docker start   → Running
docker pause   → Paused
docker stop    → Stopped
docker rm      → Deleted
# Run a container (creates + starts)
docker run nginx
# Run in detached (background) mode
docker run -d nginx
# Run with a name
docker run -d --name my-nginx nginx
# Run with port mapping (host:container)
docker run -d -p 8080:80 nginx
# Now visit http://localhost:8080
# Run with environment variables
docker run -d -e NODE_ENV=production myapp
# Run interactively (with terminal)
docker run -it ubuntu bash
# Run and auto-remove when stopped
docker run --rm ubuntu echo "Hello!"
# List running containers
docker ps
# List all containers (including stopped)
docker ps -a
# Stop a container
docker stop my-nginx
# Start a stopped container
docker start my-nginx
# Restart a container
docker restart my-nginx
# Remove a container
docker rm my-nginx
# Remove a running container (force)
docker rm -f my-nginx
# Remove all stopped containers
docker container prune

# View container logs
docker logs my-nginx
# Follow logs in real time
docker logs -f my-nginx
# View resource usage (CPU, RAM)
docker stats
# Inspect full container config
docker inspect my-nginx
# View running processes inside container
docker top my-nginx

# Open a shell inside a running container
docker exec -it my-nginx bash
# Run a single command
docker exec my-nginx cat /etc/nginx/nginx.conf
# Copy files to/from a container
docker cp myfile.txt my-nginx:/app/myfile.txt
docker cp my-nginx:/app/logs.txt ./logs.txt

docker run -d --name webserver -p 8080:80 nginx
docker ps
docker logs webserver
docker exec -it webserver bash
# Inside: ls, cat /etc/nginx/nginx.conf, exit
docker stop webserver
docker rm webserver

When a container is deleted, all data inside it is lost. Volumes solve this by persisting data outside the container.
| Type | Description |
|---|---|
| Volume | Managed by Docker. Best for production data. |
| Bind Mount | Maps a host directory to a container path. |
| tmpfs Mount | Stored in host memory only. Not persisted. |
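The third row, tmpfs, gets no example later in this guide; a minimal Compose sketch (the cache service name is illustrative):

```yaml
services:
  cache:
    image: redis:7-alpine
    tmpfs:
      - /data    # held in host RAM only; discarded when the container stops
```

The equivalent flag for docker run is --tmpfs /data.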
# Create a volume
docker volume create mydata
# List volumes
docker volume ls
# Inspect a volume
docker volume inspect mydata
# Run container with a volume
docker run -d --name db \
-v mydata:/var/lib/postgresql/data \
postgres:15
# Remove a volume
docker volume rm mydata
# Remove all unused volumes
docker volume prune

Map a host folder directly into the container. Great for development – changes on the host reflect instantly in the container.
# Syntax: -v /host/path:/container/path
docker run -d \
-v $(pwd)/src:/app/src \
-p 3000:3000 \
myapp
⚠️ Bind mounts depend on the host directory structure. Use volumes for production.
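A related safeguard: either mount type can be made read-only by appending :ro to the mapping. A Compose sketch (the paths and service name are illustrative):

```yaml
services:
  web:
    image: nginx:alpine
    volumes:
      - ./html:/usr/share/nginx/html:ro   # container can read but not modify these files
```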
# Declare a volume mount point
VOLUME ["/app/data"]

# Run a Postgres container with persistent volume
docker volume create pgdata
docker run -d \
--name postgres \
-e POSTGRES_PASSWORD=secret \
-v pgdata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:15
# Stop and remove container β data is still in the volume!
docker stop postgres && docker rm postgres
# Restart with same volume β data is back!
docker run -d --name postgres \
-e POSTGRES_PASSWORD=secret \
-v pgdata:/var/lib/postgresql/data \
postgres:15

Containers are isolated. Networking lets containers talk to each other, to the host, and to the internet.
| Driver | Description |
|---|---|
| bridge | Default. Containers on same bridge can talk to each other. |
| host | Container uses host's network directly. No isolation. |
| none | No networking. Fully isolated. |
| overlay | For multi-host networking (Docker Swarm / Kubernetes). |
| macvlan | Assigns container a real MAC address on your LAN. |
# List networks
docker network ls
# Create a custom bridge network
docker network create mynetwork
# Run containers on the same network
docker run -d --name app --network mynetwork myapp
docker run -d --name db --network mynetwork postgres:15
# Containers on the same network can talk by NAME:
# app container can reach db at: postgres://db:5432
# Inspect a network
docker network inspect mynetwork
# Connect a running container to a network
docker network connect mynetwork mycontainer
# Disconnect
docker network disconnect mynetwork mycontainer
# Remove a network
docker network rm mynetwork

# -p hostPort:containerPort
docker run -p 8080:80 nginx   # http://localhost:8080 → nginx:80
docker run -p 3000:3000 myapp
# Bind to specific host IP
docker run -p 127.0.0.1:8080:80 nginx
# Random host port
docker run -P nginx # Docker picks a random port
docker ps    # Check which port was assigned

docker network create appnet
docker run -d --name backend --network appnet myapp
docker run -d --name database --network appnet \
-e POSTGRES_PASSWORD=secret postgres:15
# Test: backend can reach "database" by hostname
docker exec -it backend ping database

Docker Compose lets you define and run multi-container apps using a single YAML file (docker-compose.yml). Instead of typing long docker run commands, you declare everything in one place.
version: '3.9'           # Compose file version
services:                # Define your containers here
  app:
    build: .             # Build from Dockerfile in current dir
    container_name: myapp
    ports:
      - '3000:3000'
    environment:
      - NODE_ENV=production
      - DB_HOST=db
    depends_on:
      - db
    volumes:
      - ./src:/app/src
    networks:
      - appnet
    restart: always      # Restart policy

  db:
    image: postgres:15   # Use existing image
    container_name: mydb
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myappdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - appnet

  redis:
    image: redis:7-alpine
    networks:
      - appnet

volumes:                 # Named volumes
  pgdata:

networks:                # Custom networks
  appnet:
    driver: bridge

# Start all services (detached)
docker compose up -d
# Start and rebuild images
docker compose up -d --build
# Stop all services
docker compose down
# Stop and remove volumes too
docker compose down -v
# View logs
docker compose logs -f
# View logs for one service
docker compose logs -f app
# List running services
docker compose ps
# Run a command in a service
docker compose exec app bash
# Scale a service (run multiple instances)
docker compose up -d --scale app=3
# Pull latest images
docker compose pull
# Restart a single service
docker compose restart app

restart: "no"            # Never restart (default; quote it – unquoted no is YAML false)
restart: always          # Always restart
restart: on-failure      # Restart only on error
restart: unless-stopped  # Restart unless manually stopped

Create a full stack app with Compose:
# docker-compose.yml
version: '3.9'
services:
  web:
    image: nginx:alpine
    ports:
      - '8080:80'
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

docker compose up -d
docker compose ps
docker compose logs db
docker compose down

Multi-stage builds let you use multiple FROM statements in one Dockerfile. You build in one stage and copy only the final output to a smaller production image – keeping your final image lean and secure.
# ─── Stage 1: Build ──────────────────────────────────────
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build # Output: /app/dist
# ─── Stage 2: Production ─────────────────────────────────
FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --only=production
EXPOSE 3000
CMD ["node", "dist/server.js"]

🔥 Result: the builder image might be 900MB. The final image is ~120MB!
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o main .
# CGO_ENABLED=0 produces a static binary – required to run on scratch

# Empty image – just the binary!
FROM scratch
COPY --from=builder /app/main /main
ENTRYPOINT ["/main"]

# Build and compare sizes
docker build --target builder -t myapp:builder .
docker build -t myapp:prod .
docker images | grep myapp
# See the size difference!

# Bad ❌
RUN apt-get install curl
CMD ["node", "server.js"]
# Good β
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
CMD ["node", "server.js"]

# Bad ❌  (~900MB, many vulnerabilities)
FROM node:18
# Good ✅  (~120MB, minimal attack surface)
FROM node:18-alpine
# Best for compiled apps ✅  (~5MB)
FROM scratch

# Docker's built-in scanner
docker scout cves myapp:latest
# Or use Trivy (popular open-source tool)
trivy image myapp:latest

.env
.git
node_modules
*.log
secrets/
# Bad ❌ – baked into the image
ENV DATABASE_PASSWORD=supersecret

# Good ✅ – pass at runtime
docker run -e DATABASE_PASSWORD=$DB_PASS myapp

# Best ✅ – use Docker Secrets (Swarm/K8s)
docker secret create db_password ./password.txt

docker run --read-only myapp
# Prevent container from writing to its filesystem

docker run \
--memory="256m" \
--cpus="0.5" \
  myapp

# .github/workflows/docker.yml
name: Build & Push Docker Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: yourusername/myapp:latest

| Registry | Best For |
|---|---|
| Docker Hub | Public images, open source |
| GitHub GHCR | GitHub-integrated projects |
| AWS ECR | AWS/ECS/EKS deployments |
| Google GCR / GAR | GCP deployments |
| Self-hosted | Private, on-prem |
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1

# Check container health status
docker inspect --format='{{.State.Health.Status}}' myapp

# docker-compose.yml
services:
  app:
    image: myapp
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M

# ── Images ──────────────────────────────────
docker pull <image> # Download image
docker build -t name:tag . # Build image
docker images # List images
docker rmi <image> # Delete image
docker image prune -a # Delete unused images
# ── Containers ──────────────────────────────
docker run -d -p 8080:80 nginx # Run container
docker run -it ubuntu bash # Interactive shell
docker ps # Running containers
docker ps -a # All containers
docker stop <container> # Stop
docker rm <container> # Remove
docker logs -f <container> # Follow logs
docker exec -it <container> bash # Shell into container
docker stats # Resource usage
# ── Volumes ─────────────────────────────────
docker volume create vol # Create volume
docker volume ls # List volumes
docker volume rm vol # Remove volume
# ── Networks ────────────────────────────────
docker network create net # Create network
docker network ls # List networks
docker network inspect net # Inspect
# ── Compose ─────────────────────────────────
docker compose up -d # Start all
docker compose down # Stop all
docker compose logs -f # Follow logs
docker compose ps # Status
docker compose exec app bash # Shell into service
# ── System Cleanup ──────────────────────────
docker system prune # Remove all unused resources
docker system prune -a # Including images
docker system df                 # Disk usage

🚀 Next Steps after Docker:
- Docker Swarm – Native container orchestration
- Kubernetes (K8s) – Industry-standard orchestration
- Helm – Kubernetes package manager
- Terraform – Infrastructure as code
Happy containerizing! 🐳