# Microservices Deployment
The Nomos pipeline runs as three microservices, each in its own Proxmox LXC container. This guide covers the infrastructure layout, container configuration, networking, and how to add new services.
## Infrastructure Overview

The pipeline runs on a homelab Proxmox cluster. Each service is isolated in an unprivileged LXC container with minimal resource allocation — these are lightweight API servers, not GPU workloads.
### Container Specifications

| Service | Container | IP | OS | CPU | RAM | Disk | Port |
|---|---|---|---|---|---|---|---|
| Security Gate | lxc-gate | 192.168.0.82 | Debian 12 | 2 cores | 1 GB | 8 GB | 3001 |
| Router | lxc-router | 192.168.0.4 | Debian 12 | 2 cores | 2 GB | 8 GB | 3002 |
| Verifier | lxc-verifier | 192.168.0.50 | Debian 12 | 2 cores | 1 GB | 8 GB | 3003 |
The Router gets more RAM because it holds connection pools to multiple model provider APIs and manages concurrent plan executions.
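If an allocation ever needs adjusting, Proxmox can resize a container's memory in place. A sketch, substituting the Router's actual container ID for `<vmid>`:

```
# On the Proxmox host: resize lxc-router's RAM without recreating it
pct set <vmid> --memory 2048
```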
## Process Management

Each service runs under systemd with automatic restart on failure:
```ini
[Unit]
Description=Nomos Security Gate
After=network.target

[Service]
Type=simple
User=nomos
WorkingDirectory=/opt/nomos/gate
ExecStart=/opt/nomos/gate/nomos-gate
Restart=always
RestartSec=5
Environment=PORT=3001
EnvironmentFile=/opt/nomos/gate/.env

[Install]
WantedBy=multi-user.target
```

Standard operations:

```shell
# Check status
systemctl status nomos-gate

# View logs
journalctl -u nomos-gate -f

# Restart
systemctl restart nomos-gate
```

## Networking
Section titled “Networking”Internal Network
Section titled “Internal Network”All containers are on the same flat network (192.168.0.0/24). Inter-service communication uses plain HTTP on internal IPs. There is no service mesh or internal TLS — the trust boundary is at the Caddy reverse proxy.
```
Container Network (192.168.0.0/24)
 |
 +-- 192.168.0.82 (lxc-gate)     :3001
 +-- 192.168.0.4  (lxc-router)   :3002
 +-- 192.168.0.50 (lxc-verifier) :3003
 +-- 192.168.0.1  (caddy-proxy)  :443
```

### Caddy Reverse Proxy
Caddy runs on the gateway host and handles TLS termination for all services. Certificates are provisioned automatically using the Cloudflare DNS challenge for the `*.tismjedi-homelab.com` wildcard domain.
```
{
    acme_dns cloudflare {env.CF_API_TOKEN}
}

gate.tismjedi-homelab.com {
    reverse_proxy 192.168.0.82:3001
}

router.tismjedi-homelab.com {
    reverse_proxy 192.168.0.4:3002
}

verifier.tismjedi-homelab.com {
    reverse_proxy 192.168.0.50:3003
}
```

Caddy automatically renews certificates and handles HTTPS redirects. No manual certificate management is required.
### DNS Configuration

DNS records are managed in Cloudflare. Each service has a CNAME or A record pointing to the homelab's external IP, with the Cloudflare proxy disabled (DNS-only mode) so that Caddy handles TLS directly.
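To confirm a record really is in DNS-only mode, resolve it and check that the answer is the homelab's external IP; proxied records resolve to Cloudflare's edge IPs instead:

```
# Should print the homelab's external IP, not a Cloudflare edge IP
dig +short gate.tismjedi-homelab.com A
```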
```
gate.tismjedi-homelab.com      A  <external-ip>  (DNS only)
router.tismjedi-homelab.com    A  <external-ip>  (DNS only)
verifier.tismjedi-homelab.com  A  <external-ip>  (DNS only)
```

### Firewall Rules
The Proxmox host firewall allows inbound traffic on ports 80 and 443 only. Individual containers are not directly accessible from outside the network.
```shell
# On the Proxmox host
ufw allow 80/tcp
ufw allow 443/tcp
ufw default deny incoming
```

## Adding a New Service
To add a new service to the pipeline (e.g., a cache layer, a logging service, or a new processing stage):
### 1. Create the LXC Container

```shell
# On the Proxmox host
pct create <vmid> local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname lxc-<service-name> \
  --memory 1024 \
  --cores 2 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.0.<ip>/24,gw=192.168.0.1 \
  --unprivileged 1

pct start <vmid>
```

### 2. Install the Service
Section titled “2. Install the Service”# Enter the containerpct enter <vmid>
# Create service useruseradd -r -s /bin/false nomos
# Deploy the binary or applicationmkdir -p /opt/nomos/<service-name># ... copy binary, create .env, etc.3. Create the systemd Unit
```shell
cat > /etc/systemd/system/nomos-<service-name>.service << 'EOF'
[Unit]
Description=Nomos <Service Name>
After=network.target

[Service]
Type=simple
User=nomos
WorkingDirectory=/opt/nomos/<service-name>
ExecStart=/opt/nomos/<service-name>/nomos-<service-name>
Restart=always
RestartSec=5
Environment=PORT=<port>
EnvironmentFile=/opt/nomos/<service-name>/.env

[Install]
WantedBy=multi-user.target
EOF

systemctl enable nomos-<service-name>
systemctl start nomos-<service-name>
```

### 4. Add Caddy Configuration
Add a new block to the Caddyfile:

```
<service-name>.tismjedi-homelab.com {
    reverse_proxy 192.168.0.<ip>:<port>
}
```

Reload Caddy:

```shell
systemctl reload caddy
```

### 5. Add DNS Record
In Cloudflare, add an A record for `<service-name>.tismjedi-homelab.com` pointing to the external IP (DNS-only mode).
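The record can also be created via the Cloudflare v4 API instead of the dashboard. A sketch, assuming the token has DNS-edit permission on the zone and with `<zone-id>` and `<external-ip>` as placeholders:

```
# Create a DNS-only A record via the Cloudflare API
curl -X POST "https://api.cloudflare.com/client/v4/zones/<zone-id>/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"A","name":"<service-name>.tismjedi-homelab.com","content":"<external-ip>","proxied":false}'
```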
### 6. Verify

```shell
# Health check
curl https://<service-name>.tismjedi-homelab.com/health

# Check TLS
curl -vI https://<service-name>.tismjedi-homelab.com 2>&1 | grep "SSL certificate"
```

## Monitoring
Each service exposes a `/health` endpoint. A simple monitoring script checks all services:
```shell
#!/bin/bash
SERVICES=(
  "gate.tismjedi-homelab.com"
  "router.tismjedi-homelab.com"
  "verifier.tismjedi-homelab.com"
)

for svc in "${SERVICES[@]}"; do
  STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://$svc/health")
  if [ "$STATUS" = "200" ]; then
    echo "[OK] $svc"
  else
    echo "[FAIL] $svc (HTTP $STATUS)"
  fi
done
```

For persistent monitoring, this script can run on a cron schedule or be integrated into an existing monitoring stack (Uptime Kuma, Prometheus blackbox exporter, etc.).
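For the cron option, a sketch of a crontab entry that runs the check every five minutes, assuming the script above is saved at a path like /opt/nomos/scripts/health-check.sh (hypothetical):

```
*/5 * * * * /opt/nomos/scripts/health-check.sh >> /var/log/nomos-health.log 2>&1
```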
## Backup

The services are stateless API servers. Configuration and environment files are the only state that needs backing up:
```shell
# Backup all service configs
# (substitute each service's own container ID for <vmid>)
for svc in gate router verifier; do
  pct pull <vmid> /opt/nomos/$svc/.env ./backups/$svc.env
  pct pull <vmid> /etc/systemd/system/nomos-$svc.service ./backups/$svc.service
done
```

The Caddy configuration and Proxmox container configs should also be included in regular backups.
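Restore mirrors the backup: `pct push` copies files back into a (rebuilt) container. A sketch for the gate, assuming the backups produced by the loop above:

```
# Push configs back into the container and reload systemd
pct push <vmid> ./backups/gate.env /opt/nomos/gate/.env
pct push <vmid> ./backups/gate.service /etc/systemd/system/nomos-gate.service
pct exec <vmid> -- systemctl daemon-reload
pct exec <vmid> -- systemctl restart nomos-gate
```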