
Microservices Deployment

The Nomos pipeline runs as three microservices, each in its own Proxmox LXC container. This guide covers the infrastructure layout, container configuration, networking, and the process for adding new services.

The pipeline runs on a homelab Proxmox cluster. Each service is isolated in an unprivileged LXC container with minimal resource allocation — these are lightweight API servers, not GPU workloads.

| Service | Container | IP | OS | CPU | RAM | Disk | Port |
|---|---|---|---|---|---|---|---|
| Security Gate | lxc-gate | 192.168.0.82 | Debian 12 | 2 cores | 1 GB | 8 GB | 3001 |
| Router | lxc-router | 192.168.0.4 | Debian 12 | 2 cores | 2 GB | 8 GB | 3002 |
| Verifier | lxc-verifier | 192.168.0.50 | Debian 12 | 2 cores | 1 GB | 8 GB | 3003 |

The Router gets more RAM because it holds connection pools to multiple model provider APIs and manages concurrent plan executions.

Each service runs under systemd with automatic restart on failure:

/etc/systemd/system/nomos-gate.service
[Unit]
Description=Nomos Security Gate
After=network.target
[Service]
Type=simple
User=nomos
WorkingDirectory=/opt/nomos/gate
ExecStart=/opt/nomos/gate/nomos-gate
Restart=always
RestartSec=5
Environment=PORT=3001
EnvironmentFile=/opt/nomos/gate/.env
[Install]
WantedBy=multi-user.target
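
Optionally, resource limits and basic hardening can be layered on with a systemd drop-in override instead of editing the unit itself. The directives below are standard systemd options; the specific values are illustrative, not part of the Nomos deployment:

```
# /etc/systemd/system/nomos-gate.service.d/limits.conf
[Service]
MemoryMax=768M
NoNewPrivileges=true
```

After adding a drop-in, run `systemctl daemon-reload` and restart the service for it to take effect.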

Standard operations:

# Check status
systemctl status nomos-gate
# View logs
journalctl -u nomos-gate -f
# Restart
systemctl restart nomos-gate

All containers are on the same flat network (192.168.0.0/24). Inter-service communication uses plain HTTP on internal IPs. There is no service mesh or internal TLS — the trust boundary is at the Caddy reverse proxy.

Container Network (192.168.0.0/24)
|
+-- 192.168.0.82 (lxc-gate) :3001
+-- 192.168.0.4 (lxc-router) :3002
+-- 192.168.0.50 (lxc-verifier) :3003
+-- 192.168.0.1 (caddy-proxy) :443
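
As an illustration of the plain-HTTP wiring, a gate-side .env might reference its peers by internal IP. The variable names here are hypothetical, not the actual Nomos configuration:

```
# /opt/nomos/gate/.env (illustrative variable names)
ROUTER_URL=http://192.168.0.4:3002
VERIFIER_URL=http://192.168.0.50:3003
```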

Caddy runs on the gateway host and handles TLS termination for all services. Certificates are provisioned automatically using the Cloudflare DNS challenge for the tismjedi-homelab.com subdomains, so issuance works without exposing an HTTP challenge endpoint.

/etc/caddy/Caddyfile
{
acme_dns cloudflare {env.CF_API_TOKEN}
}
gate.tismjedi-homelab.com {
reverse_proxy 192.168.0.82:3001
}
router.tismjedi-homelab.com {
reverse_proxy 192.168.0.4:3002
}
verifier.tismjedi-homelab.com {
reverse_proxy 192.168.0.50:3003
}

Caddy automatically renews certificates and handles HTTPS redirects. No manual certificate management is required.
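
As the service list grows, the repeated site blocks can be collapsed with Caddy's snippet/import feature. A sketch, using standard Caddyfile syntax (recent Caddy v2 releases support the `{args[0]}` placeholder):

```
{
    acme_dns cloudflare {env.CF_API_TOKEN}
}
(nomos_upstream) {
    reverse_proxy {args[0]}
}
gate.tismjedi-homelab.com {
    import nomos_upstream 192.168.0.82:3001
}
router.tismjedi-homelab.com {
    import nomos_upstream 192.168.0.4:3002
}
verifier.tismjedi-homelab.com {
    import nomos_upstream 192.168.0.50:3003
}
```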

DNS records are managed in Cloudflare. Each service has a CNAME or A record pointing to the homelab’s external IP, with Cloudflare proxy disabled (DNS-only mode) so that Caddy handles TLS directly.

gate.tismjedi-homelab.com A <external-ip> (DNS only)
router.tismjedi-homelab.com A <external-ip> (DNS only)
verifier.tismjedi-homelab.com A <external-ip> (DNS only)

The Proxmox host firewall allows inbound traffic on ports 80 and 443 only. Individual containers are not directly accessible from outside the network.

# On the Proxmox host
ufw allow 80/tcp
ufw allow 443/tcp
ufw default deny incoming

To add a new service to the pipeline (e.g., a cache layer, a logging service, or a new processing stage):

# On the Proxmox host
pct create <vmid> local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
--hostname lxc-<service-name> \
--memory 1024 \
--cores 2 \
--rootfs local-lvm:8 \
--net0 name=eth0,bridge=vmbr0,ip=192.168.0.<ip>/24,gw=192.168.0.1 \
--unprivileged 1
pct start <vmid>
# Enter the container
pct enter <vmid>
# Create service user
useradd -r -s /bin/false nomos
# Deploy the binary or application
mkdir -p /opt/nomos/<service-name>
# ... copy binary, create .env, etc.
cat > /etc/systemd/system/nomos-<service-name>.service << 'EOF'
[Unit]
Description=Nomos <Service Name>
After=network.target
[Service]
Type=simple
User=nomos
WorkingDirectory=/opt/nomos/<service-name>
ExecStart=/opt/nomos/<service-name>/nomos-<service-name>
Restart=always
RestartSec=5
Environment=PORT=<port>
EnvironmentFile=/opt/nomos/<service-name>/.env
[Install]
WantedBy=multi-user.target
EOF
systemctl enable nomos-<service-name>
systemctl start nomos-<service-name>
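
Since the unit file varies only by service name and port, the placeholder substitution can be scripted. A minimal sketch, following the `/opt/nomos/<name>/nomos-<name>` layout used above (the helper name `gen_unit` is not part of the existing tooling):

```shell
#!/bin/bash
# gen_unit: print a Nomos systemd unit for a given service name and port,
# matching the layout used above (/opt/nomos/<name>/nomos-<name>)
gen_unit() {
  local name="$1" port="$2"
  cat <<EOF
[Unit]
Description=Nomos ${name}
After=network.target
[Service]
Type=simple
User=nomos
WorkingDirectory=/opt/nomos/${name}
ExecStart=/opt/nomos/${name}/nomos-${name}
Restart=always
RestartSec=5
Environment=PORT=${port}
EnvironmentFile=/opt/nomos/${name}/.env
[Install]
WantedBy=multi-user.target
EOF
}

# Example: emit a unit for a hypothetical cache service on port 3004
gen_unit cache 3004
```

Redirect the output into `/etc/systemd/system/nomos-<service-name>.service` on the target container, then enable and start as above.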

Add a new block to the Caddyfile:

<service-name>.tismjedi-homelab.com {
reverse_proxy 192.168.0.<ip>:<port>
}
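
For example, a hypothetical cache service at 192.168.0.60 on port 3004 would get:

```
cache.tismjedi-homelab.com {
    reverse_proxy 192.168.0.60:3004
}
```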

Reload Caddy:

systemctl reload caddy

In Cloudflare, add an A record for <service-name>.tismjedi-homelab.com pointing to the external IP (DNS-only mode).

# Health check
curl https://<service-name>.tismjedi-homelab.com/health
# Check TLS
curl -vI https://<service-name>.tismjedi-homelab.com 2>&1 | grep "SSL certificate"

Each service exposes a /health endpoint. A simple monitoring script checks all services:

check-pipeline.sh
#!/bin/bash
SERVICES=(
"gate.tismjedi-homelab.com"
"router.tismjedi-homelab.com"
"verifier.tismjedi-homelab.com"
)
for svc in "${SERVICES[@]}"; do
STATUS=$(curl -s -o /dev/null -w "%{http_code}" "https://$svc/health")
if [ "$STATUS" = "200" ]; then
echo "[OK] $svc"
else
echo "[FAIL] $svc (HTTP $STATUS)"
fi
done

For persistent monitoring, this script can run on a cron schedule or be integrated into an existing monitoring stack (Uptime Kuma, Prometheus blackbox exporter, etc.).
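
For the cron route, an entry like the following runs the check every five minutes; the script path and log location are assumptions:

```
*/5 * * * * /opt/nomos/check-pipeline.sh >> /var/log/nomos-health.log 2>&1
```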

The services are stateless API servers. Configuration and environment files are the only state that needs backing up:

# Backup all service configs; <vmid> differs per container,
# so look each one up with `pct list` first
for svc in gate router verifier; do
pct pull <vmid> /opt/nomos/$svc/.env ./backups/$svc.env
pct pull <vmid> /etc/systemd/system/nomos-$svc.service ./backups/$svc.service
done

The Caddy configuration and Proxmox container configs should also be included in regular backups.