# Why Docker Compose, Not Kubernetes
Status: Decided. In production with 37 containers.
## Context

The system runs 37 containers on a single Raspberry Pi 5. One person operates and maintains it. Containers need CPU and memory limits, restart policies, volume mounts, and environment configuration. Some are pulled from registries; some are built from source. A few have complex dependency chains.
Kubernetes and its lightweight variants (k3s, k0s) came up as candidates during initial planning. They provide orchestration capabilities that Docker Compose does not: pod scheduling, rolling deployments, health-based placement, horizontal scaling, and sophisticated network policies.
## Decision

Docker Compose, not Kubernetes.
The system uses two compose files per stack: a base file defining services and a Pi overlay (`docker-compose.pi.yml`) adding resource limits and port overrides for the ARM environment. GitOps convergence (`scripts/maintenance/gitops-converge.sh`) runs every 15 minutes to detect and correct image drift, providing the continuous reconciliation that is often cited as a Kubernetes advantage.
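As a sketch of the overlay pattern (the service name, port, and limits below are illustrative, not copied from the repository), the base file defines what runs, and the Pi overlay adds only the ARM-specific constraints:

```yaml
# docker-compose.pi.yml (illustrative) — merged over the base file,
# this adds resource limits and a port override without redefining
# the service itself.
services:
  n8n:
    ports:
      - "127.0.0.1:5678:5678"   # override: bind to loopback on the Pi
    deploy:
      resources:
        limits:
          cpus: "0.50"          # cap CPU for the constrained ARM host
          memory: 512M          # cap memory per container
```

Compose merges the files in the order they are passed, so a deploy looks like `docker compose -f docker-compose.yml -f docker-compose.pi.yml up -d`.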
## What Was Rejected and Why

**k3s:** k3s adds control plane overhead — etcd or SQLite state store, an API server, a scheduler, and a controller manager — for a single-node workload that has no need for pod scheduling. Pod scheduling is useful when you have multiple nodes to schedule across. With one node, it is machinery that adds nothing.
**Rolling deployments:** Kubernetes offers rolling updates with zero-downtime semantics. For 37 containers on a Pi 5, the deployment process already runs in under 2 minutes. The complexity of managing rolling updates across a dependency graph of this size (postgres must be healthy before n8n, n8n before mcp-proxy-n8n, etc.) would require significant Helm chart authorship or custom operator work. The GitHub Actions deploy pipeline handles ordered restarts with the same outcome.
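The dependency chain above maps directly onto Compose's native `depends_on` with health conditions, which is why no operator or Helm machinery is needed for ordering. A minimal sketch (images and healthcheck details are illustrative assumptions, not the actual files):

```yaml
# Illustrative: Compose enforces postgres → n8n → mcp-proxy-n8n
# startup ordering natively.
services:
  postgres:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5

  n8n:
    image: n8nio/n8n
    depends_on:
      postgres:
        condition: service_healthy   # wait for a passing healthcheck

  mcp-proxy-n8n:
    build: ./mcp-proxy-n8n           # hypothetical build context
    depends_on:
      n8n:
        condition: service_started   # wait for the container to start
```

With `condition: service_healthy`, `docker compose up -d` will not start a dependent until the dependency's healthcheck passes.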
**Horizontal scaling:** The application has no scale-out requirements. One person uses this system. The database, memory server, and API are not under load that justifies replicas. Adding replica management for a single-user system would optimize for a problem that does not exist.
## Consequences

What works well:
- The compose files are readable. Any operator unfamiliar with the system can understand what is running and why within minutes.
- The Pi overlay pattern cleanly separates resource limits from service definitions without duplicating everything.
- Debugging is straightforward: `docker logs <container>`, `docker exec <container> sh`, `docker stats`.
- The `gitops-converge.sh` timer provides drift detection without the overhead of a Kubernetes reconciliation loop. It handles the primary use case: detecting when a container is running a stale image and recreating it.
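The core of that drift check can be sketched in a few lines of shell. This is not the real `gitops-converge.sh`; the function and the example IDs are hypothetical, and the `docker` invocations are shown only in comments:

```shell
#!/bin/sh
# Hypothetical sketch: a container has drifted when the image ID it was
# started from no longer matches the ID its tag currently resolves to.
drifted() {
  running_id="$1"   # in practice: docker inspect -f '{{.Image}}' "$container"
  desired_id="$2"   # in practice: docker image inspect -f '{{.Id}}' "$tag"
  [ "$running_id" != "$desired_id" ]
}

# After pulling fresh images, recreate any container whose IDs diverge.
if drifted "sha256:aaa" "sha256:bbb"; then
  echo "drift: recreate container"   # real script would run compose up -d
fi
```

Run from a timer every 15 minutes, this loop gives the "continuous reconciliation" property without a control plane.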
Tradeoffs accepted:
- No declarative health-based placement. If Caroline goes down, nothing restarts automatically on a different host. This is intentional: the single Pi is the production environment by design, not a limitation.
- No built-in rolling updates. The deploy pipeline restarts services in dependency order, which achieves the same result with explicit control.
- Compose file proliferation. Seven compose files across the repository require discipline to keep in sync. The `current-state-freshness.yml` workflow checks that documentation counts stay accurate.
## If This Decision Were Revisited

The calculus changes if the system expands to multiple production nodes. If HA failover or geographic redundancy becomes a requirement, k3s on two or three Pi 5 units would be worth revisiting. At that scale, the orchestration overhead pays for itself.
For a single-node personal infrastructure operated by one person, Docker Compose is the right tool.