# Architecture Overview
chris-os is a personal AI infrastructure monorepo. One person built it. It runs on a Raspberry Pi 5 in a network closet. It manages 37 Docker containers, 210 database tables, 80 automation workflows, and 14 MCP servers exposing 611 tools to AI assistants.
That is not a lab experiment. It is production infrastructure that processes real data every day: emails, messages, calendar events, health records, financial transactions, home automation, voice commands, and semantic memory that persists across hundreds of AI sessions.
## The Host Fleet

Five machines form the chris-os network. Each has a name and a role.
| Host | Hardware | Role | Always On? |
|---|---|---|---|
| Caroline | Raspberry Pi 5, 16GB RAM, 1TB NVMe | Production. Runs all 37 containers. | Yes |
| Atlas | Mac M4 Pro, 24GB RAM, 1TB SSD | Development workstation. Primary Ollama inference host for AI models. | Yes |
| Nightwatch | AMD 7900 XTX GPU | Voice services. Wakes on demand for GPU-intensive speech tasks. | On-demand |
| Relaxation-Vault | MacBook Air M1 | Headless media server. 14 arr-stack containers plus Plex. | Yes |
| Companion-Cube | Synology DS920+ NAS | Network storage and backup target. | On-demand |
Caroline is the center of gravity. Every service that matters runs on her. The other machines handle specialized workloads (AI inference, voice processing, media, storage) that either need specific hardware or benefit from physical separation.
## What Runs on Caroline

The 37 containers organize into six functional groups:
### Core Data and Automation

PostgreSQL 16 sits at the center of everything. Every other service connects to it. The database holds 210 tables across 5 schemas, using pgvector for semantic vector search alongside standard relational data. One database, one source of truth.
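A minimal sketch of how pgvector coexists with relational data in a setup like this, assuming a hypothetical `memory.chunks` table (the real schema and table names are not shown in this document):

```sql
-- Hypothetical schema and table names, for illustration only.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE memory.chunks (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(768)            -- pgvector column for semantic search
);

-- Nearest neighbors by cosine distance to a query embedding ($1).
SELECT content
FROM memory.chunks
ORDER BY embedding <=> $1
LIMIT 5;
```

The `<=>` operator is pgvector's cosine-distance operator; the embedding dimension (768 here) must match whatever model produces the vectors.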
n8n is the automation engine. 80 workflows (70+ active) handle email classification, calendar sync, WhatsApp ingestion, health data imports, GitHub event processing, morning briefings, and voice pipeline orchestration. If data moves between systems, n8n moves it.
Redis provides in-memory caching and job queues for the memory server’s embedding pipeline.
### Reverse Proxy and Authentication

Caddy is the single entry point for all traffic. Automatic wildcard TLS for *.ataraxis.cloud via Cloudflare DNS-01. Every request passes through Caddy.
Authelia handles single sign-on with TOTP two-factor authentication. One session cookie covers every subdomain. OIDC provider for Home Assistant, Grafana, and the Hudson iOS app.
Cloudflare Tunnel provides secure external access without opening inbound firewall ports. An encrypted outbound tunnel from Caroline to Cloudflare’s edge handles all external traffic.
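A Caddyfile fragment along these lines could tie the pieces together; the service names, ports, and Authelia endpoint below are assumptions, not the actual configuration (and DNS-01 via Cloudflare requires Caddy built with the cloudflare DNS plugin):

```caddyfile
# Illustrative fragment only; hostnames, ports, and paths are assumptions.
*.ataraxis.cloud {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	@grafana host grafana.ataraxis.cloud
	handle @grafana {
		# Ask Authelia to authenticate the request before proxying it on.
		forward_auth authelia:9091 {
			uri /api/authz/forward-auth
			copy_headers Remote-User Remote-Groups Remote-Email Remote-Name
		}
		reverse_proxy grafana:3000
	}
}
```

The `forward_auth` directive is what makes one Authelia session cookie cover every subdomain: each request is checked against Authelia before Caddy proxies it.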
### MCP Layer (Model Context Protocol)

The MCP layer follows a three-tier pattern: proxy (wraps a service as MCP-over-HTTP), auth (validates credentials), and Caddy (TLS termination and routing). Four MCP endpoints expose the database, n8n, semantic memory, and Home Assistant to AI assistants like Claude.
This is what makes chris-os an AI infrastructure, not just a homelab. Claude can query the database, trigger workflows, search memories, and control the home, all through authenticated MCP connections.
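The auth tier's job can be sketched as a bearer-token check sitting in front of the proxy. This is a minimal illustration under assumed names (`MCP_TOKEN`, `check_auth`), not the actual implementation:

```python
import hmac
import os

# Hypothetical shared secret; here it comes from the environment.
EXPECTED_TOKEN = os.environ.get("MCP_TOKEN", "dev-secret")

def check_auth(headers: dict) -> bool:
    """Validate the Authorization header before forwarding to the MCP proxy."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth.removeprefix("Bearer ")
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(presented, EXPECTED_TOKEN)
```

Requests that fail the check never reach the proxied service; everything else (TLS, routing) is Caddy's job in the tier above.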
### Dashboard

A React single-page application with 19 pages and 27 widgets, served through Nginx. Morning briefing, health data, message analytics, dispatch panel, document browser, container status, and more. PWA with web push notifications.
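Serving a React SPA through Nginx typically needs one thing beyond static file serving: a fallback so client-side routes resolve. A hedged sketch (paths and ports are assumptions, not the actual vhost):

```nginx
# Illustrative only; the real dashboard configuration is not shown here.
server {
    listen 80;
    root /usr/share/nginx/html;

    # SPA fallback: unknown paths serve index.html so the
    # client-side router can take over.
    location / {
        try_files $uri /index.html;
    }
}
```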
### Observability

A full Grafana stack: Prometheus for metrics, Loki for logs, Tempo for traces, plus exporters for the Pi (node-exporter), containers (cAdvisor), PostgreSQL (postgres-exporter), UniFi network (unpoller), and endpoint probing (blackbox-exporter). Alloy collects and ships logs from all 37 containers. OpenTelemetry Collector receives traces from the development workstation.
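The scrape side of that stack might look like the following `prometheus.yml` fragment; job names are guesses and the ports are simply the exporters' defaults, not the actual configuration:

```yaml
# Illustrative fragment; targets assume the exporters' default ports.
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]
  - job_name: postgres
    static_configs:
      - targets: ["postgres-exporter:9187"]
```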
### Home Automation and Voice

Home Assistant manages smart home devices: Sonos speakers, Ecobee thermostat, Dyson air purifier, smart lights, plugs, and device tracking. Runs on the host network for mDNS device discovery.
The voice pipeline chains wake word detection (openWakeWord with a custom “hey GLaDOS” trigger), speech-to-text (Whisper), n8n processing, AI response generation, text-to-speech (Piper), and Sonos audio output.
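The chain above can be sketched as plain function composition. The stages below are stubs standing in for Whisper, n8n, the AI model, and Piper; none of these names come from the actual codebase:

```python
from functools import reduce
from typing import Callable, List

# Stub stages: each transforms the utterance as it moves down the pipeline.
def stt(audio: str) -> str:        # Whisper: audio -> text
    return f"text({audio})"

def route(text: str) -> str:       # n8n: decide what to do with the text
    return f"routed({text})"

def respond(intent: str) -> str:   # AI response generation
    return f"reply({intent})"

def tts(reply: str) -> str:        # Piper: text -> audio for Sonos playback
    return f"audio({reply})"

STAGES: List[Callable[[str], str]] = [stt, route, respond, tts]

def run_pipeline(audio: str) -> str:
    """Feed the wake-word-triggered audio through every stage in order."""
    return reduce(lambda value, stage: stage(value), STAGES, audio)
```

The real pipeline passes audio buffers and HTTP payloads rather than strings, but the shape is the same: each stage consumes the previous stage's output.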
## The Service Map

Here is how the pieces connect at a high level:
```
Internet / LAN
      |
Caddy (TLS, routing, auth)
      |
      +-- Authelia (SSO for all browser traffic)
      |
      +-- n8n (workflow automation)
      |     +-- Gmail, Calendar, WhatsApp, GitHub webhooks
      |     +-- Pushover, Discord notifications
      |
      +-- Dashboard (React SPA + Fastify API)
      |
      +-- Grafana (metrics, logs, traces)
      |
      +-- MCP Auth -> MCP Proxy -> PostgreSQL
      +-- MCP Auth -> MCP Proxy -> n8n API
      +-- MCP Auth -> MCP Proxy -> Memory Server -> PostgreSQL + Ollama
      +-- MCP Auth -> Home Assistant MCP
      |
      +-- Cloudflare Tunnel (external MCP access)

PostgreSQL (center of everything)
      +-- n8n workflow state
      +-- Authelia sessions and TOTP
      +-- Grafana metadata
      +-- Memory server (semantic vectors)
      +-- Home Assistant recorder
      +-- All personal data (messages, health, calendar, etc.)

Voice Pipeline
      +-- Wake Word -> Whisper (STT) -> n8n -> AI -> Piper (TTS) -> Sonos
```

## Data Flow
## Network Isolation

Docker bridge networks enforce service boundaries. Five isolated networks prevent containers from reaching services they have no business talking to:
| Network | Purpose |
|---|---|
| net-data | Database tier. PostgreSQL, Redis, and services that need direct data access. |
| net-app | Application tier. n8n, Authelia, WhatsApp API, Caddy. |
| net-mcp | MCP proxy tier. All MCP proxy and auth containers, Caddy, Cloudflare Tunnel. |
| net-frontend | Public-facing tier. Caddy, dashboard, Grafana, Cloudflare Tunnel. |
| net-monitoring | Observability tier. Grafana stack, exporters, collectors. |
Services that bridge tiers (like Caddy, which sits on frontend, app, and MCP networks) do so intentionally. Caddy can route to applications and MCP endpoints, but it never touches the data tier directly.
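In Compose terms, that topology might be declared roughly as follows; the service list is abbreviated and the `internal: true` flag on the data tier is an assumption about how the isolation is enforced:

```yaml
# Illustrative fragment; network names follow the table above,
# everything else is an assumption.
networks:
  net-data:
    internal: true        # no route out of the data tier
  net-app:
  net-frontend:

services:
  postgres:
    image: postgres:16
    networks: [net-data]
  n8n:
    networks: [net-data, net-app]       # needs the database and the proxy
  caddy:
    networks: [net-app, net-frontend]   # routes traffic, never on net-data
```

A container can only reach services that share at least one network with it, which is exactly the boundary the table describes.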
## By the Numbers

| Metric | Count |
|---|---|
| Docker containers | 37 |
| Database tables | 210 |
| Database schemas | 5 |
| Applied migrations | 302 |
| n8n workflows | 80 (70+ active) |
| MCP servers | 14 |
| MCP tools | 611 |
| Dashboard pages | 19 |
| Dashboard widgets | 27 |
| Grafana dashboards | 8+ |
| Prometheus scrape targets | 7+ |
| Blackbox probe targets | 7 |
| Smart home devices | 10+ |
| Voice wake words | 7 custom models |
| Docker networks | 5 isolated + host |
All of this runs on a Raspberry Pi 5 with 16GB of RAM and a 1TB NVMe drive. CPU and memory limits are set on every container. Resource constraints are not optional; they are how a single-board computer runs 37 containers without falling over.
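In Compose syntax, per-container limits like those look roughly like this; the numbers below are invented for illustration, since the actual values are not stated here:

```yaml
# Illustrative only; real limits per container are not given in this document.
services:
  n8n:
    mem_limit: 512m    # hard memory cap for the container
    cpus: "1.0"        # at most one CPU core
  postgres:
    mem_limit: 2g
    cpus: "2.0"
```

With caps like these, a misbehaving container gets OOM-killed or throttled on its own, instead of starving the other 36.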