Docker changed how we build, ship, and run software. Instead of "it works on my machine," Docker ensures that your application runs the same way everywhere - on your laptop, on a colleague's machine, in CI/CD, and in production. In this guide, we'll go from zero to deploying a real application.

## What is Docker?

Docker is a platform that packages your application and all its dependencies into a standardized unit called a **container**. A container is an isolated, lightweight process that shares the host OS kernel but has its own filesystem, network, and process space.

```mermaid
graph TD
    subgraph "Traditional Deployment"
        A1[App 1] --> OS1[Guest OS]
        A2[App 2] --> OS2[Guest OS]
        OS1 --> HV[Hypervisor]
        OS2 --> HV
        HV --> HW1[Hardware]
    end

    subgraph "Docker Deployment"
        B1[App 1] --> D1[Container]
        B2[App 2] --> D2[Container]
        D1 --> DE[Docker Engine]
        D2 --> DE
        DE --> HW2[Hardware]
    end
```

### Containers vs Virtual Machines

| Aspect | Containers | Virtual Machines |
|--------|-----------|------------------|
| **Startup** | Seconds | Minutes |
| **Size** | MBs | GBs |
| **OS** | Shares host kernel | Full guest OS |
| **Isolation** | Process-level | Hardware-level |
| **Performance** | Near-native | Overhead from hypervisor |
| **Density** | Hundreds per host | Tens per host |

## Installing Docker

```bash
# macOS
brew install --cask docker

# Ubuntu/Debian
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER

# Verify installation (log out and back in first so the group change takes effect)
docker --version
docker run hello-world
```

## Core Concepts

### Images

An image is a read-only template with instructions for creating a container.
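
Because an image is just a stack of layers produced by build instructions, even a two-line Dockerfile defines a complete image. A minimal sketch (the `index.html` file is hypothetical):

```dockerfile
# One base layer pulled from nginx:alpine, plus one layer added by COPY
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
```

Building this produces a new read-only image; every container started from it sees the same filesystem.
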
Think of an image as a snapshot of your application and its environment.

```bash
# Pull an image from Docker Hub
docker pull node:20-alpine

# List local images
docker images

# Remove an image
docker rmi node:20-alpine
```

### Containers

A container is a running instance of an image. You can create, start, stop, and delete containers.

```bash
# Run a container
docker run -d --name my-app -p 3000:3000 node:20-alpine

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View logs
docker logs my-app

# Execute a command inside a running container
docker exec -it my-app sh

# Stop a container
docker stop my-app

# Remove a container
docker rm my-app
```

## Writing a Dockerfile

A Dockerfile is a text file with instructions to build an image. Each instruction creates a layer.

### Basic Dockerfile for a Node.js App

```dockerfile
# Use an official Node.js runtime as base image
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy package files first (better caching)
COPY package.json package-lock.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Command to run the application
CMD ["node", "server.js"]
```

### Building and Running

```bash
# Build the image
docker build -t my-node-app .

# Run the container
docker run -d -p 3000:3000 my-node-app

# Visit http://localhost:3000
```

## Multi-Stage Builds

Multi-stage builds keep your production images small by separating the build environment from the runtime.

```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so the runner stage copies only production modules
RUN npm prune --omit=dev

# Stage 2: Production
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

This produces an image with only the compiled output and production dependencies - no source code, no dev dependencies, no build tools.

### Next.js Multi-Stage Example

```dockerfile
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]
```

Note that the `standalone` output used here requires `output: 'standalone'` in your `next.config.js`.

## Volumes: Persistent Data

By default, data inside a container is lost when the container is removed.
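
Besides attaching volumes at run time, a Dockerfile can declare a mount point up front with the `VOLUME` instruction. A minimal sketch (the path is hypothetical):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Declare /app/data as a mount point; if no volume is supplied at
# `docker run` time, Docker creates an anonymous volume there
VOLUME /app/data
```
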
Volumes persist data outside the container's writable layer, so it survives container removal and can be shared between containers.

```bash
# Create a named volume
docker volume create my-data

# Run with a volume
docker run -d -v my-data:/app/data my-app

# Bind mount (map host directory to container)
docker run -d -v $(pwd)/data:/app/data my-app

# List volumes
docker volume ls
```

## Networking

Docker creates isolated networks for containers to communicate.

```bash
# Create a custom network
docker network create my-network

# Run containers on the same network
docker run -d --name api --network my-network my-api
docker run -d --name db --network my-network postgres:16

# Containers can reach each other by name
# From "api" container: postgres://db:5432
```

```mermaid
graph LR
    subgraph "my-network"
        API[api container<br/>port 3000] -- "db:5432" --> DB[db container<br/>port 5432]
    end
    User -- "localhost:3000" --> API
```

## Docker Compose

Docker Compose lets you define and run multi-container applications with a single YAML file.

### docker-compose.yml

```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
```

### Commands

```bash
# Start all services
docker compose up -d

# View logs
docker compose logs -f

# Stop all services
docker compose down

# Rebuild and restart
docker compose up -d --build

# Scale a service
docker compose up -d --scale api=3
```

## .dockerignore

Like `.gitignore`, this file prevents unnecessary files from being copied into the image.

```plaintext
node_modules
.git
.env
*.md
.next
dist
coverage
```

## Production Best Practices

### 1. Use Small Base Images

```dockerfile
# Bad: 1GB+
FROM node:20

# Good: ~180MB
FROM node:20-alpine
```

### 2. Don't Run as Root

```dockerfile
FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
USER app
WORKDIR /home/app
COPY --chown=app:app . .
```

### 3. Use Specific Image Tags

```dockerfile
# Bad: can change unexpectedly
FROM node:latest

# Good: pinned version
FROM node:20.11-alpine3.19
```

### 4. Leverage Build Cache

Order your Dockerfile instructions from least to most frequently changed:

```dockerfile
FROM node:20-alpine
WORKDIR /app

# These change rarely - cached
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# This changes often - invalidates the cache from here down
COPY . .
```

### 5. Health Checks

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
```

### 6. Use Environment Variables

```dockerfile
ENV NODE_ENV=production
ENV PORT=3000
```

## Common Docker Commands Cheat Sheet

```bash
# Images
docker build -t name:tag .        # Build image
docker images                     # List images
docker rmi image_name             # Remove image
docker image prune                # Remove unused images

# Containers
docker run -d -p 3000:3000 image  # Run detached
docker ps                         # List running
docker stop container_name        # Stop
docker rm container_name          # Remove
docker logs -f container_name     # Follow logs
docker exec -it container sh      # Shell into container

# Compose
docker compose up -d              # Start services
docker compose down               # Stop services
docker compose logs -f            # Follow all logs
docker compose ps                 # List services

# Cleanup
docker system prune -a            # Remove all unused containers, images, networks
```

## From Docker to Kubernetes

Docker handles individual containers. When you need to orchestrate hundreds of containers across multiple servers, you need Kubernetes. Docker and Kubernetes are complementary:

1. **Docker**: builds and runs containers
2. **Kubernetes**: orchestrates containers at scale (scheduling, scaling, healing)

If you're interested in the next step, check out my article on Introduction to Kubernetes.

## Conclusion

Docker is a fundamental skill for modern developers. It eliminates environment inconsistencies, simplifies deployment, and is the foundation for container orchestration with Kubernetes. Start with a simple Dockerfile, move to Docker Compose for multi-service apps, and adopt multi-stage builds and security best practices as you grow.

The best way to learn Docker is to containerize a project you're already working on. Start today.
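
If you want a template to start from, here is a sketch that pulls together the practices above - pinned small base image, non-root user, cached dependency layer, health check. The file names, port, and `/health` endpoint are assumptions to adapt to your own project:

```dockerfile
# Hedged starter template; adjust paths, port, and start command
FROM node:20.11-alpine3.19
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
# Dependency layer first for better caching
COPY --chown=app:app package.json package-lock.json ./
RUN npm ci --omit=dev
COPY --chown=app:app . .
USER app
ENV NODE_ENV=production
EXPOSE 3000
# Assumes the app serves a /health endpoint
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "server.js"]
```
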