Docker is one of the highest-leverage skills a returning developer can learn. It runs your homelab services, your Railway backend, your CI pipelines, and most production infrastructure of the modern web. This post is the from-zero guide.
What Docker actually is
Docker packages a piece of software plus all its dependencies into a container. The container runs the same way on your Mac, on a Linux server, on AWS — because the container brings its own environment. No more "works on my machine" but breaks in production.
Conceptually: a container is like a small, fast virtual machine that includes only what's needed to run one program. It's not a full OS; it shares the host's Linux kernel. That's why it's tiny (megabytes vs gigabytes) and fast to start (seconds vs minutes).
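You can see the shared-kernel point directly (a sketch, assuming Docker is installed; on macOS or Windows the "host" kernel is the Docker Desktop VM's Linux kernel, not your machine's):

```shell
# The container does not boot its own kernel; it reuses the host's.
# On a Linux host, both commands print the same kernel release.
docker run --rm alpine uname -r
uname -r
```
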
Why Docker matters for you specifically
- Self-hosting services. Nearly every self-hosted service ships as a Docker image. `docker run jellyfin/jellyfin` and you have a media server.
- Backends. Railway, AWS, GCP, Azure all accept Docker containers. Write once, deploy anywhere.
- Reproducible environments. "Set up a Postgres for testing" is one command, not 20 minutes of installation.
- Isolation. Run conflicting versions of Python / Node / databases side-by-side without conflict.
- Learning. Most modern infrastructure documentation assumes Docker knowledge.
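The "one command" claim about Postgres looks like this in practice (a sketch; the container name `test-pg` and the password are arbitrary choices):

```shell
# Start a disposable Postgres 16, reachable on localhost:5432
docker run -d --name test-pg \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  postgres:16

# ...point your tests at postgres://postgres:secret@localhost:5432...

# Tear it down completely; nothing is left on the host
docker rm -f test-pg
```
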
Installing Docker
macOS: Docker Desktop from docker.com — or `brew install --cask docker`. Includes the daemon + CLI + Docker Compose.
Linux: Follow the official install for your distro. Docker Engine (the daemon) + Docker CLI + Compose plugin. No Desktop needed.
Windows: Docker Desktop with WSL2 backend. Avoid the Hyper-V backend if possible.
Verify with `docker --version` and `docker run hello-world`.
The four concepts you need
- Image — a template / blueprint. Like a class. Read-only. Examples: `postgres:16`, `node:20`, `nginx:alpine`.
- Container — a running instance of an image. Like an object. Read-write while running. You can have many containers from one image.
- Volume — persistent storage that outlives the container itself. Without volumes, data written inside a container is lost when the container is removed or recreated.
- Network — how containers talk to each other and to the outside world. By default, containers can talk to each other on a shared network; you expose ports to make them reachable from the host.
That's it. Image, container, volume, network. The rest is configuration on top.
The 10 commands that cover 95% of usage
- `docker run [options] image` — start a new container. Most common option: `-p 8080:80` maps host port 8080 to container port 80.
- `docker ps` — list running containers. Add `-a` to see stopped ones too.
- `docker stop <name>` — stop a container.
- `docker rm <name>` — remove a stopped container.
- `docker logs -f <name>` — follow a container's logs (live).
- `docker exec -it <name> bash` — open a shell inside a running container. Useful for debugging.
- `docker images` — list local images.
- `docker pull image` — download an image without running it.
- `docker compose up -d` — start a multi-container stack defined in `docker-compose.yml`. Run in background.
- `docker compose down` — stop and remove the stack.
Master these and you can do almost anything you need with Docker.
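A typical session strings these together like so (a sketch using nginx; note that alpine-based images ship `sh`, not `bash`):

```shell
docker run -d --name web -p 8080:80 nginx:alpine  # start in the background
docker ps                                         # confirm it's running
docker logs -f web                                # watch requests live (Ctrl-C to stop following)
docker exec -it web sh                            # poke around inside; `exit` to leave
docker stop web                                   # stop it
docker rm web                                     # remove it
```
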
Docker Compose (the real productivity boost)
Once you have more than one container, manually `docker run`-ing each one is tedious. Docker Compose lets you define your entire stack in a single YAML file.
Example `docker-compose.yml` for a Postgres + Adminer stack:
```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_USER: app
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
  adminer:
    image: adminer
    ports:
      - "8080:8080"
    depends_on:
      - db
volumes:
  pgdata:
```
Run with `docker compose up -d`. Stop with `docker compose down`. The whole stack — including persistent storage — is one command.
For homelab services, you'll have a different `docker-compose.yml` per service (or a single big one). The pattern: drop the YAML into a folder, `cd` in, `docker compose up -d`. Done.
Volumes and persistence
By default, a container's filesystem is ephemeral. Remove or recreate the container and any data written inside vanishes. Volumes solve this.
Two volume types:
- Named volumes (`pgdata:/var/lib/postgresql/data`) — Docker manages the storage location. Best for databases.
- Bind mounts (`./config:/etc/myapp`) — mount a host path into the container. Best for config files you want to edit on the host.
Always use volumes for: databases, user-uploaded content, configuration that should survive container rebuilds. Never put real data only inside a container.
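Two commands worth knowing for volumes, plus a common backup trick: mount the volume into a throwaway container and tar its contents to the host (a sketch; the volume name `pgdata` matches the Compose example above):

```shell
docker volume ls               # list all named volumes
docker volume inspect pgdata   # show where Docker stores it on disk

# Back up the volume's contents to a tarball in the current directory
docker run --rm \
  -v pgdata:/data \
  -v "$PWD":/backup \
  alpine tar czf /backup/pgdata-backup.tar.gz -C /data .
```
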
Networking
- Containers on the same Compose stack can reach each other by service name. The Postgres container above is at hostname `db` from anything in the same Compose file.
- Port mapping (`-p 8080:80`) makes a container port reachable from the host.
- Reverse proxy (Caddy / Traefik / Nginx Proxy Manager) sits in front of multiple services, gives clean URLs, handles TLS. Standard pattern for homelabs.
- Custom networks for isolation between stacks. Most homelab use cases don't need this.
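To illustrate the reverse-proxy pattern, a minimal Caddy service could be added to a stack like this (a sketch; the `Caddyfile` contents and any hostnames in it are assumptions you'd fill in for your own setup):

```yaml
# docker-compose.yml fragment: Caddy in front of other services
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile  # routing rules, edited on the host
      - caddy_data:/data                  # TLS certificates persist here
volumes:
  caddy_data:
```

Caddy reaches the other services by their Compose service names, so the `Caddyfile` routes to e.g. `adminer:8080`, not `localhost`.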
Patterns that make Docker click
- One service per container. Don't bundle Postgres and your app in the same container. They're easier to manage and update separately.
- Pin image versions (`postgres:16`, not `postgres:latest`). `latest` changes underneath you and breaks things.
- Use Compose for anything > 1 container. Even single-container deployments benefit from the documentation a Compose file provides.
- Treat containers as cattle, not pets. Recreate freely. State lives in volumes, not in containers.
- Read the image's README. Every official image on Docker Hub documents env vars, ports, volumes. Read it before configuring.
- Use restart policies. `restart: unless-stopped` in Compose means containers come back after a reboot. Critical for homelab.
Your first containerized service
Run Vaultwarden (self-hosted Bitwarden password manager) in 5 minutes:
```shell
mkdir -p ~/docker/vaultwarden && cd ~/docker/vaultwarden
cat > docker-compose.yml << 'EOF'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      ADMIN_TOKEN: pick-a-long-random-string
    volumes:
      - ./data:/data
    ports:
      - "8000:80"
EOF
docker compose up -d
```
Visit `http://localhost:8000`. You have a working password manager. The whole stack lives in `~/docker/vaultwarden/` — backups, migration, debugging are all in one folder.
Repeat this pattern for every service you want to self-host. One folder per service, one Compose file, one command to bring it up.
See: Home Lab 2026, Self-Hosted Media Server, Self-Hosting Linux.