Docker has transformed the way we develop and deploy applications. But the challenges on a developer's workstation differ from those in production. Here is a complete tour of best practices for both contexts.
Before Docker, the classic "it works on my machine" was a developer joke and an ops nightmare. Development environments inevitably diverged from production: different language versions, missing system libraries, forgotten environment variables. Docker didn't invent containers, but it made their use accessible to everyone. In 2021, it's a tool every backend developer must master.
What Docker fundamentally solves
A Docker container packages an application with everything it needs to run: code, runtime, system libraries, environment variables. Where a virtual machine virtualizes all the hardware (heavy, slow), a container shares the host system's kernel and only isolates the process (lightweight, fast).
The result: the same artifact runs the same way on your laptop, in CI, in staging, and in production. No "but it works on my machine", no drift between environments.
Docker for the developer
Starting third-party services without installing anything
This is the first immediate benefit. Need PostgreSQL for this project? No more installing Postgres locally, managing multiple versions, having conflicts between projects.
docker run -d \
--name postgres-dev \
-e POSTGRES_PASSWORD=secret \
-e POSTGRES_DB=myapp \
-p 5432:5432 \
postgres:14
One command, one running service. Same for Redis, Elasticsearch, RabbitMQ, or any dependency.
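The same pattern works for any of those services; for example, a throwaway Redis instance (container name and image tag here are illustrative):

```shell
# Throwaway Redis for development
docker run -d --name redis-dev -p 6379:6379 redis:7

# When the project is done, remove it: nothing lingers on the host
docker stop redis-dev && docker rm redis-dev
```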
Docker Compose: the developer's orchestrator
A docker-compose.yml file describes all the application's services and their dependencies. A single docker compose up launches everything. It's the standard for development environments.
version: '3.9'
services:
app:
build: .
ports:
- "3000:3000"
volumes:
- .:/app
- /app/node_modules
environment:
DATABASE_URL: postgres://user:secret@db:5432/myapp
depends_on:
db:
condition: service_healthy
db:
image: postgres:14
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
healthcheck:
test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
interval: 5s
timeout: 5s
retries: 5
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
The healthcheck on the database is important: without it, the application may start before Postgres is ready to accept connections.
Hot reload and volumes
The volume .:/app mounts the local source code into the container. Any file change is immediately visible in the container, so hot reload works without rebuilding the image. The anonymous volume /app/node_modules prevents the host directory from shadowing the dependencies installed inside the image.
Writing a quality Dockerfile
The Dockerfile is your image's blueprint. A bad Dockerfile produces heavy images, slow to build, and potentially insecure.
Multi-stage build: the fundamental best practice
# Stage 1: build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies so only production node_modules reaches the final image
RUN npm prune --omit=dev
# Stage 2: lightweight final image
FROM node:18-alpine AS runner
WORKDIR /app
# Don't run as root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
USER appuser
EXPOSE 3000
CMD ["node", "dist/main.js"]
This multi-stage pattern ensures build tools (compilers, devDependencies) don't end up in the final image. A well-built Node.js image weighs 150–200 MB instead of 1 GB+.
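Building and checking the result (image tag is illustrative):

```shell
docker build -t myapp:latest .

# Compare the final size against the base image
docker images myapp --format "{{.Repository}}:{{.Tag}} {{.Size}}"
```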
Instruction order matters
Docker caches each layer. Place instructions that change infrequently at the top, those that change often at the bottom:
# Good order: deps first, code after
COPY package*.json ./
RUN npm ci
# Invalidated on every code change, but the npm ci layer above stays cached
COPY . .
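With BuildKit (the default builder in recent Docker versions), a cache mount goes one step further: npm's download cache survives even when package*.json changes. A sketch:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Persist npm's download cache across builds (BuildKit cache mount)
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
```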
.dockerignore
Essential. Without it, COPY . . bundles node_modules, .git, local config files, sometimes secrets.
node_modules
.git
.env
.env.local
*.log
dist
coverage
Docker in production
In production, the stakes change: availability, security, observability, scalability.
Never run as root
By default, processes in a container run as root. If the container is compromised, the attacker has root rights in the container, and potentially on the host through misconfigurations. Always create an unprivileged user.
Minimal images
Prefer alpine or distroless over full images. Fewer installed packages = smaller attack surface = fewer CVEs to fix.
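A distroless variant of the runner stage might look like this (a sketch; gcr.io/distroless/nodejs18-debian11 is Google's distroless Node image, whose entrypoint is the node binary itself):

```dockerfile
FROM node:18-alpine AS builder
# ... build stage as before ...

# Distroless: no shell, no package manager; the :nonroot tag runs as an unprivileged user
FROM gcr.io/distroless/nodejs18-debian11:nonroot AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Entrypoint is already node, so CMD is just the script path
CMD ["dist/main.js"]
```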
| Image | Size |
|---|---|
| node:18 | ~950 MB |
| node:18-slim | ~240 MB |
| node:18-alpine | ~170 MB |
Secret management
Never put secrets in the Dockerfile or in plain-text environment variables in docker-compose.yml in production. Use Docker Secrets, Vault, or cloud provider mechanisms (AWS Secrets Manager, GCP Secret Manager).
# In prod: not this
environment:
DATABASE_PASSWORD: supersecret # visible in docker inspect
# Prefer Docker Swarm secrets or an external manager
secrets:
db_password:
external: true
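On the consuming side, the service reads the secret from a file under /run/secrets rather than from an environment variable. The _FILE suffix convention is honored by many official images (postgres, mysql); for your own application it's an assumption you implement yourself:

```yaml
services:
  app:
    secrets:
      - db_password
    environment:
      # The app reads the password from this file at startup, never from an env var
      DATABASE_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    external: true
```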
Healthchecks
A healthcheck in the Dockerfile lets the orchestrator (Docker Swarm, Kubernetes) know whether the container is genuinely ready to receive traffic.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:3000/health || exit 1
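Minimal images don't always ship wget; a Node-based alternative (assuming a /health endpoint on port 3000, as above):

```dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD ["node", "-e", "require('http').get('http://localhost:3000/health', r => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"]
```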
Resource limits
Without limits, a container can consume all the memory or CPU of the host node.
deploy:
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
memory: 256M
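Note that the deploy key is honored under Docker Swarm; with plain docker run, the equivalent flags are (image name illustrative):

```shell
docker run -d \
  --memory=512m \
  --memory-reservation=256m \
  --cpus=0.5 \
  myapp:latest
```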
Kubernetes: the next step
For high-traffic applications or those requiring high availability, Docker alone is not enough. Kubernetes orchestrates clusters of containers: scheduling, auto-scaling, self-healing, rolling deployments.
But Kubernetes has a steep learning curve. For many teams, Docker Swarm (built into Docker) is a good middle ground: simpler, and sufficient for orchestrating a few dozen services.
In summary
Docker is not a deployment technology reserved for ops. It's a development tool as much as a production one. Mastering best practices (multi-stage builds, lightweight images, non-root users, healthchecks, secret management) is what distinguishes naive use from professional use.
The original promise holds: an image built once, deployable everywhere, the same way.