# Docker for Beginners: From Your First Image to Production Deploy
Docker changed how we build, ship, and run software. Instead of "it works on my machine," Docker guarantees that your application runs the same way everywhere - on your laptop, on a colleague's machine, in CI/CD, and in production. In this guide, we'll go from zero to deploying a real application.

## What is Docker?

Docker is a platform that packages your application and all its dependencies into a standardized unit called a **container**. A container is an isolated, lightweight process that shares the host OS kernel but has its own filesystem, network, and process space.

```mermaid
graph TD
    subgraph "Traditional Deployment"
        A1[App 1] --> OS1[Guest OS]
        A2[App 2] --> OS2[Guest OS]
        OS1 --> HV[Hypervisor]
        OS2 --> HV
        HV --> HW1[Hardware]
    end

    subgraph "Docker Deployment"
        B1[App 1] --> D1[Container]
        B2[App 2] --> D2[Container]
        D1 --> DE[Docker Engine]
        D2 --> DE
        DE --> HW2[Hardware]
    end
```

### Containers vs Virtual Machines

| Aspect | Containers | Virtual Machines |
|--------|-----------|-----------------|
| **Startup** | Seconds | Minutes |
| **Size** | MBs | GBs |
| **OS** | Shares host kernel | Full guest OS |
| **Isolation** | Process-level | Hardware-level |
| **Performance** | Near-native | Overhead from hypervisor |
| **Density** | Hundreds per host | Tens per host |

## Installing Docker

```bash
# macOS
brew install --cask docker

# Ubuntu/Debian
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for the group change to take effect

# Verify installation
docker --version
docker run hello-world
```

## Core Concepts

### Images

An image is a read-only template with instructions for creating a container. Think of it as a snapshot of your application and its environment.

```bash
# Pull an image from Docker Hub
docker pull node:20-alpine

# List local images
docker images

# Remove an image
docker rmi node:20-alpine
```

### Containers

A container is a running instance of an image.
You can create, start, stop, and delete containers.

```bash
# Run a container
docker run -d --name my-app -p 3000:3000 node:20-alpine

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View logs
docker logs my-app

# Execute a command inside a running container
docker exec -it my-app sh

# Stop a container
docker stop my-app

# Remove a container
docker rm my-app
```

## Writing a Dockerfile

A Dockerfile is a text file with instructions to build an image. Each filesystem-changing instruction (`FROM`, `RUN`, `COPY`) creates a layer that Docker caches and reuses between builds.

### Basic Dockerfile for a Node.js App

```dockerfile
# Use an official Node.js runtime as base image
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy package files first (better caching)
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Command to run the application
CMD ["node", "server.js"]
```

### Building and Running

```bash
# Build the image
docker build -t my-node-app .

# Run the container
docker run -d -p 3000:3000 my-node-app

# Visit http://localhost:3000
```

## Multi-Stage Builds

Multi-stage builds keep your production images small by separating the build environment from the runtime.

```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so the runner stage copies only production ones
RUN npm prune --omit=dev

# Stage 2: Production
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

This produces an image with only the compiled output and production dependencies - no source code, no dev dependencies, no build tools.

### Next.js Multi-Stage Example

This example assumes `output: 'standalone'` is set in `next.config.js`.

```dockerfile
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]
```

## Volumes: Persistent Data

By default, data inside a container is lost when the container is removed.
Volumes solve this.

```bash
# Create a named volume
docker volume create my-data

# Run with a volume
docker run -d -v my-data:/app/data my-app

# Bind mount (map host directory to container)
docker run -d -v $(pwd)/data:/app/data my-app

# List volumes
docker volume ls
```

## Networking

Docker creates isolated networks for containers to communicate.

```bash
# Create a custom network
docker network create my-network

# Run containers on the same network
docker run -d --name api --network my-network my-api
docker run -d --name db --network my-network postgres:16

# Containers can reach each other by name
# From "api" container: postgres://db:5432
```

```mermaid
graph LR
    subgraph "my-network"
        API["api container<br/>port 3000"] -- "db:5432" --> DB["db container<br/>port 5432"]
    end
    User -- "localhost:3000" --> API
```

## Docker Compose

Docker Compose lets you define and run multi-container applications with a single YAML file.

### docker-compose.yml

```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
```

### Commands

```bash
# Start all services
docker compose up -d

# View logs
docker compose logs -f

# Stop all services
docker compose down

# Rebuild and restart
docker compose up -d --build

# Scale a service (remove the fixed host port mapping first, or the replicas will collide)
docker compose up -d --scale api=3
```

## .dockerignore

Like `.gitignore`, this file prevents unnecessary files from being copied into the image.

```plaintext
node_modules
.git
.env
*.md
.next
dist
coverage
```

## Production Best Practices

### 1. Use Small Base Images

```dockerfile
# Bad: 1GB+
FROM node:20

# Good: ~180MB
FROM node:20-alpine
```

### 2. Don't Run as Root

```dockerfile
FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
USER app
WORKDIR /home/app
COPY --chown=app:app . .
```

### 3. Use Specific Image Tags

```dockerfile
# Bad: can change unexpectedly
FROM node:latest

# Good: pinned version
FROM node:20.11-alpine3.19
```

### 4. Leverage Build Cache

Order your Dockerfile instructions from least to most frequently changed:

```dockerfile
FROM node:20-alpine
WORKDIR /app

# These change rarely - cached
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# This changes often - invalidates the cache from here down
COPY . .
```

### 5. Health Checks

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
```

### 6. Use Environment Variables

```dockerfile
ENV NODE_ENV=production
ENV PORT=3000
```

## Common Docker Commands Cheat Sheet

```bash
# Images
docker build -t name:tag .          # Build image
docker images                       # List images
docker rmi image_name               # Remove image
docker image prune                  # Remove unused images

# Containers
docker run -d -p 3000:3000 image    # Run detached
docker ps                           # List running
docker stop container_name          # Stop
docker rm container_name            # Remove
docker logs -f container_name       # Follow logs
docker exec -it container sh        # Shell into container

# Compose
docker compose up -d                # Start services
docker compose down                 # Stop services
docker compose logs -f              # Follow all logs
docker compose ps                   # List services

# Cleanup
docker system prune -a              # Remove everything unused
```

## From Docker to Kubernetes

Docker handles individual containers. When you need to orchestrate hundreds of containers across multiple servers, you need Kubernetes. Docker and Kubernetes are complementary:

1. **Docker**: builds and runs containers
2. **Kubernetes**: orchestrates containers at scale (scheduling, scaling, healing)

If you're interested in the next step, check out my article on Introduction to Kubernetes.

## Conclusion

Docker is a fundamental skill for modern developers. It eliminates environment inconsistencies, simplifies deployment, and is the foundation for container orchestration with Kubernetes. Start with a simple Dockerfile, move to Docker Compose for multi-service apps, and adopt multi-stage builds and security best practices as you grow.

The best way to learn Docker is to containerize a project you're already working on. Start today.
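If your project isn't a Node.js app, the same pattern carries over. A generic skeleton you can adapt (every value here is a placeholder for your stack, not a runnable Dockerfile):

```dockerfile
# Generic starting point - swap the placeholders for your stack
# Base image: e.g. python:3.12-slim, golang:1.22-alpine, ruby:3.3-alpine
FROM base-image

WORKDIR /app

# Copy dependency manifests first so this layer is cached
# (e.g. requirements.txt, go.mod and go.sum, Gemfile)
COPY dependency-files ./

# Install dependencies (e.g. pip install -r requirements.txt)
RUN install-dependencies

# Then copy the rest of the source
COPY . .

EXPOSE 3000

CMD ["start-command"]
```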