Docker changed how we build, ship, and run software. Instead of "it works on my machine," Docker guarantees that your application runs the same way everywhere - on your laptop, on a colleague's machine, in CI/CD, and in production. In this guide, we'll go from zero to deploying a real application.
## What is Docker?
Docker is a platform that packages your application and all its dependencies into a standardized unit called a container. A container is an isolated, lightweight process that shares the host OS kernel but has its own filesystem, network, and process space.
### Containers vs Virtual Machines
| Aspect | Containers | Virtual Machines |
|---|---|---|
| Startup | Seconds | Minutes |
| Size | MBs | GBs |
| OS | Shares host kernel | Full guest OS |
| Isolation | Process-level | Hardware-level |
| Performance | Near-native | Overhead from hypervisor |
| Density | Hundreds per host | Tens per host |
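You can see the shared-kernel row of that table for yourself: an Alpine container reports the host's kernel release rather than booting a guest OS of its own (assumes Docker is installed and running).

```bash
# A container has no kernel of its own - this prints the HOST kernel release
docker run --rm alpine uname -r
```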
## Installing Docker

```bash
# macOS
brew install --cask docker

# Ubuntu/Debian
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# (log out and back in for the group change to take effect)

# Verify installation
docker --version
docker run hello-world
```
## Core Concepts
### Images
An image is a read-only template with instructions for creating a container. Think of it as a snapshot of your application and its environment.
```bash
# Pull an image from Docker Hub
docker pull node:20-alpine

# List local images
docker images

# Remove an image
docker rmi node:20-alpine
```
### Containers
A container is a running instance of an image. You can create, start, stop, and delete containers.
```bash
# Run a container
docker run -d --name my-app -p 3000:3000 node:20-alpine

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View logs
docker logs my-app

# Execute a command inside a running container
docker exec -it my-app sh

# Stop a container
docker stop my-app

# Remove a container
docker rm my-app
```
## Writing a Dockerfile
A Dockerfile is a text file with instructions to build an image. Each instruction creates a layer.
### Basic Dockerfile for a Node.js App
```dockerfile
# Use an official Node.js runtime as base image
FROM node:20-alpine

# Set working directory
WORKDIR /app

# Copy package files first (better caching)
COPY package.json package-lock.json ./

# Install dependencies (--omit=dev replaces the deprecated --only=production)
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Command to run the application
CMD ["node", "server.js"]
```
### Building and Running
```bash
# Build the image
docker build -t my-node-app .

# Run the container
docker run -d -p 3000:3000 my-node-app

# Visit http://localhost:3000
```
## Multi-Stage Builds
Multi-stage builds keep your production images small by separating the build environment from the runtime.
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop dev dependencies so the runner stage gets production deps only
RUN npm prune --omit=dev

# Stage 2: Production
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
This produces an image with only the compiled output and production dependencies - no source code, no dev dependencies, no build tools.
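To check the payoff, compare image sizes with `docker images` after building both variants. The `Dockerfile.single` / `Dockerfile.multi` filenames and tags here are just example names:

```bash
# Build the single-stage and multi-stage variants under different tags
docker build -f Dockerfile.single -t my-node-app:single .
docker build -f Dockerfile.multi  -t my-node-app:multi  .

# Compare the resulting image sizes side by side
docker images my-node-app
```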
### Next.js Multi-Stage Example
```dockerfile
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
EXPOSE 3000
CMD ["node", "server.js"]
```

Note: the standalone output copied here requires `output: 'standalone'` in `next.config.js`.
## Volumes: Persistent Data
By default, data inside a container is lost when the container is removed. Volumes solve this.
```bash
# Create a named volume
docker volume create my-data

# Run with a volume
docker run -d -v my-data:/app/data my-app

# Bind mount (map host directory to container)
docker run -d -v $(pwd)/data:/app/data my-app

# List volumes
docker volume ls
```
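A quick way to convince yourself that a named volume outlives its containers, using throwaway Alpine containers (`my-data` matches the volume created above):

```bash
# Write a file from one short-lived container...
docker run --rm -v my-data:/data alpine sh -c 'echo hello > /data/greeting'

# ...that container is gone, but the data persists in the volume
docker run --rm -v my-data:/data alpine cat /data/greeting   # prints "hello"
```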
## Networking
Docker creates isolated networks for containers to communicate.
```bash
# Create a custom network
docker network create my-network

# Run containers on the same network
docker run -d --name api --network my-network my-api
docker run -d --name db --network my-network -e POSTGRES_PASSWORD=pass postgres:16

# Containers can reach each other by name
# From the "api" container: postgres://db:5432
```
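You can verify the name resolution from the host: Docker's embedded DNS resolves container names on user-defined networks. This assumes the `api` container's image includes a shell; `getent` is available in Alpine-based images:

```bash
# Resolve the "db" container's name from inside the "api" container
docker exec api getent hosts db
```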
## Docker Compose
Docker Compose lets you define and run multi-container applications with a single YAML file.
### docker-compose.yml

```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:
```
### Commands
```bash
# Start all services
docker compose up -d

# View logs
docker compose logs -f

# Stop all services
docker compose down

# Rebuild and restart
docker compose up -d --build

# Scale a service (remove the fixed "3000:3000" host port mapping first,
# or the replicas will conflict over the same host port)
docker compose up -d --scale api=3
```
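One caveat: `depends_on` as written above only waits for the `db` container to start, not for Postgres to accept connections. Compose can gate startup on a healthcheck with `condition: service_healthy` - a sketch of the relevant fragment:

```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    build: ./api
    depends_on:
      db:
        condition: service_healthy
```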
## .dockerignore
Like .gitignore, this file prevents unnecessary files from being copied into the image.
```
node_modules
.git
.env
*.md
.next
dist
coverage
```
## Production Best Practices
### 1. Use Small Base Images
```dockerfile
# Bad: 1GB+
FROM node:20

# Good: ~180MB
FROM node:20-alpine
```
### 2. Don't Run as Root
```dockerfile
FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
USER app
WORKDIR /home/app
# COPY runs as root by default; --chown gives the files to the app user
COPY --chown=app:app . .
```
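To confirm the image really runs unprivileged, override the command with `whoami` (assumes the image was built from a Dockerfile like the one above and tagged `my-node-app`):

```bash
# Should report "app", not "root"
docker run --rm my-node-app whoami
```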
### 3. Use Specific Image Tags
```dockerfile
# Bad: can change unexpectedly
FROM node:latest

# Good: pinned version
FROM node:20.11-alpine3.19
```
### 4. Leverage Build Cache
Order your Dockerfile instructions from least to most frequently changed:
```dockerfile
FROM node:20-alpine
WORKDIR /app

# These change rarely - cached
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# This changes often - invalidates the cache from here down
COPY . .
```
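You can watch the cache work by rebuilding after touching only application code - BuildKit reports the untouched dependency layers as `CACHED`. This assumes a project using the Dockerfile above:

```bash
# First build populates the layer cache
docker build -t my-node-app .

# Change application code only; package files are untouched
touch server.js

# Rebuild: the COPY package files and npm ci layers are reused (CACHED),
# only COPY . . and the layers after it re-run
docker build -t my-node-app .
```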
### 5. Health Checks
```dockerfile
HEALTHCHECK \
  CMD wget -qO- http://localhost:3000/health || exit 1
```
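Docker records the healthcheck result in the container's state, which you can query from the host (assumes a running container named `my-app`):

```bash
# One of: starting, healthy, unhealthy
docker inspect --format '{{.State.Health.Status}}' my-app
```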
### 6. Use Environment Variables
```dockerfile
ENV NODE_ENV=production
ENV PORT=3000
```
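`ENV` bakes defaults into the image; you can override them per container with `-e` at run time. This sketch assumes the app actually reads `PORT` to choose its listen port:

```bash
# Override the baked-in PORT=3000 for this one container
docker run -d -p 8080:8080 -e PORT=8080 my-node-app
```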
## Common Docker Commands Cheat Sheet
```bash
# Images
docker build -t name:tag .        # Build image
docker images                     # List images
docker rmi image_name             # Remove image
docker image prune                # Remove unused images

# Containers
docker run -d -p 3000:3000 image  # Run detached
docker ps                         # List running
docker stop container_name        # Stop
docker rm container_name          # Remove
docker logs -f container_name     # Follow logs
docker exec -it container sh      # Shell into container

# Compose
docker compose up -d              # Start services
docker compose down               # Stop services
docker compose logs -f            # Follow all logs
docker compose ps                 # List services

# Cleanup
docker system prune -a            # Remove everything unused
```
## From Docker to Kubernetes
Docker handles individual containers. When you need to orchestrate hundreds of containers across multiple servers, you need Kubernetes. Docker and Kubernetes are complementary:
- Docker: builds and runs containers
- Kubernetes: orchestrates containers at scale (scheduling, scaling, healing)
If you're interested in the next step, check out my article on Introduction to Kubernetes.
## Conclusion
Docker is a fundamental skill for modern developers. It eliminates environment inconsistencies, simplifies deployment, and is the foundation for container orchestration with Kubernetes. Start with a simple Dockerfile, move to Docker Compose for multi-service apps, and adopt multi-stage builds and security best practices as you grow.
The best way to learn Docker is to containerize a project you're already working on. Start today.