Configuration Reference

Environment variables, resource limits, probe configuration, and DNS setup for the Compass API and Frontend.

This page documents all configuration for the Compass API and Frontend components. Configuration is primarily driven by environment variables set in Kubernetes deployment manifests.

TypeDB Connection

The Compass API connects to TypeDB (FFO) using the TypeDB gRPC driver.

Variable          Default      Description
TYPEDB_HOST       localhost    TypeDB server hostname
TYPEDB_PORT       1729         TypeDB gRPC port
DATABASE_NAME     ffo          TypeDB database name
TYPEDB_USER       admin        TypeDB username
TYPEDB_PASSWORD   (required)   TypeDB password

In Kubernetes, TypeDB runs as a ClusterIP service in the f3iai namespace. The API pod resolves it via internal DNS.
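As an illustration, the variables above can be collected into a single connection config. This is a minimal sketch; the function name and dict shape are illustrative, not the API's actual code. Note the asymmetry between defaulted and required variables: only TYPEDB_PASSWORD fails fast if unset.

```python
import os

def typedb_config():
    """Read the TypeDB connection settings from the environment.

    Defaults mirror the table above; TYPEDB_PASSWORD has no default
    on purpose, so a missing secret raises KeyError at startup rather
    than failing later with a confusing auth error.
    """
    return {
        "host": os.environ.get("TYPEDB_HOST", "localhost"),
        "port": int(os.environ.get("TYPEDB_PORT", "1729")),
        "database": os.environ.get("DATABASE_NAME", "ffo"),
        "user": os.environ.get("TYPEDB_USER", "admin"),
        "password": os.environ["TYPEDB_PASSWORD"],  # required
    }
```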

LLM Endpoint

The chat interface uses Ollama for LLM inference with native tool calling.

Variable       Default       Description
LLM_ENDPOINT   (required)    Ollama base URL (e.g., http://ollama.f3iai.svc:11434)
LLM_MODEL      qwen3.5:35b   Model name for /api/chat calls

The endpoint must point to an Ollama instance with the specified model loaded. The API calls /api/chat with stream: false and a 120-second timeout.
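A sketch of such a call using only the standard library, assuming Ollama's /api/chat request shape (model, messages, stream). The function names are illustrative, not the API's actual code.

```python
import json
import os
import urllib.request

def build_chat_request(messages, tools=None):
    """Build the non-streaming /api/chat request body."""
    body = {
        "model": os.environ.get("LLM_MODEL", "qwen3.5:35b"),
        "messages": messages,
        "stream": False,  # the API waits for the complete response
    }
    if tools:
        body["tools"] = tools
    return body

def chat(messages):
    """POST to Ollama's /api/chat with the 120-second timeout noted above."""
    endpoint = os.environ["LLM_ENDPOINT"].rstrip("/")
    req = urllib.request.Request(
        endpoint + "/api/chat",
        data=json.dumps(build_chat_request(messages)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())
```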

PostgreSQL

PostgreSQL stores the MCP server registry and related metadata.

Variable            Default      Description
POSTGRES_HOST       localhost    PostgreSQL hostname
POSTGRES_PORT       5432         PostgreSQL port
POSTGRES_DB         compass      Database name
POSTGRES_USER       compass      Database user
POSTGRES_PASSWORD   (required)   Database password

The primary table is mcp_servers, which stores the URL, name, and status of each registered MCP server. If PostgreSQL is unavailable, the API falls back to the hardcoded MCP_SERVERS dict in mcp_tools.py.

MCP Server URLs

Each MCP server has a dedicated environment variable for its endpoint URL. These are used when constructing tool calls for servers that are not present in the PostgreSQL registry (see the resolution order below).

Variable             Example Value
KEYCLOAK_MCP_URL     http://keycloak-mcp.f3iai.svc:50060
OPENSTACK_MCP_URL    http://openstack-mcp.f3iai.svc:50061
FFO_MCP_URL          http://ffo-mcp.f3iai.svc:50060
CEPH_MCP_URL         http://ceph-mcp.f3iai.svc:50060
ARGOCD_MCP_URL       http://argocd-mcp.f3iai.svc:50060
K8S_MCP_URL          http://k8s-mcp.f3iai.svc:50060
HARBOR_MCP_URL       http://harbor-mcp.f3iai.svc:50060
GITEA_MCP_URL        http://gitea-mcp.f3iai.svc:50060
VAULT_MCP_URL        http://vault-mcp.f3iai.svc:50060
TRIVY_MCP_URL        http://trivy-mcp.f3iai.svc:50060
KOLLA_MCP_URL        http://kolla-mcp.f3iai.svc:50061
TINKERBELL_MCP_URL   http://tinkerbell-mcp.f3iai.svc:50060

MCP Server Resolution Order

  1. PostgreSQL mcp_servers table — Primary lookup. Queried at startup and periodically refreshed.
  2. Environment variables — Used if the server is not in PostgreSQL.
  3. Hardcoded MCP_SERVERS dict — Final fallback defined in mcp_tools.py.
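The three-step order can be sketched as follows. Here resolve_mcp_url, the registry argument, and the fallback entry are illustrative, standing in for the real lookup in mcp_tools.py.

```python
import os

# Stand-in for the hardcoded MCP_SERVERS dict in mcp_tools.py.
MCP_SERVERS = {
    "ffo": "http://ffo-mcp.f3iai.svc:50060",
}

def resolve_mcp_url(name, registry):
    """Resolve an MCP server URL using the order above.

    `registry` is the mapping loaded from the PostgreSQL mcp_servers
    table; it is empty when PostgreSQL is unavailable.
    """
    # 1. PostgreSQL registry (primary lookup)
    if name in registry:
        return registry[name]
    # 2. Environment variable, e.g. FFO_MCP_URL for "ffo"
    env_url = os.environ.get(f"{name.upper()}_MCP_URL")
    if env_url:
        return env_url
    # 3. Hardcoded final fallback
    return MCP_SERVERS.get(name)
```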

DNS Configuration

Compass relies on internal Kubernetes DNS to resolve service names. For environments using Tailscale with CoreDNS, ensure that:

  • CoreDNS is configured to forward .svc.cluster.local queries to the cluster DNS
  • Tailscale MagicDNS does not intercept queries for cluster-internal domains
  • The API pod’s /etc/resolv.conf lists the cluster DNS server first

If MCP servers are on different networks (e.g., accessible via Tailscale but not cluster DNS), set their URLs to use Tailscale hostnames or IP addresses directly in the environment variables.
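As a sketch, the first requirement might look like this in the Tailscale CoreDNS Corefile. The cluster DNS address 10.96.0.10 is a common kube-dns default but is an assumption here; use the actual ClusterIP of the kube-dns Service.

```
cluster.local:53 {
    errors
    cache 30
    forward . 10.96.0.10   # cluster DNS ClusterIP (assumed value)
}
```

This forwards only cluster.local queries to the cluster resolver, leaving all other names to MagicDNS.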

Resource Limits

API

resources:
  requests:
    memory: "512Mi"
    cpu: "250m"
  limits:
    memory: "2Gi"
    cpu: "1"

The API requires up to 2Gi of memory due to:

  • TypeDB driver connection overhead
  • LLM response buffering (large tool call results)
  • Concurrent request handling

Frontend

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"

The frontend is a static Next.js build served by Nginx, so resource requirements are minimal.

Health Probes

API

livenessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 60
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3

readinessProbe:
  httpGet:
    path: /health
    port: 8000
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3

Probe       Path      Initial Delay   Period   Purpose
Liveness    /health   60s             30s      Restart the pod if the API process is hung
Readiness   /health   15s             10s      Remove the pod from the Service until the API is ready to accept requests

The liveness probe has a longer initial delay (60s) to account for TypeDB driver initialization and MCP server discovery at startup. The readiness probe starts checking after 15 seconds so the pod can begin receiving traffic as soon as it is ready.
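For reference, the probe target can be as simple as a handler that returns 200 on /health. A minimal stdlib sketch, assuming a plain JSON health response; the real API's framework and readiness checks are not shown here.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serve {"status": "ok"} on /health, 404 elsewhere."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep kubelet probe traffic out of the application logs.
        pass

def make_health_server(port=8000):
    """Bind the probe endpoint; call .serve_forever() to run it."""
    return HTTPServer(("", port), HealthHandler)
```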

Frontend

The frontend uses a simple TCP socket probe on port 80 (Nginx), as it serves static content and has no application-level health endpoint.
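A sketch of the corresponding probe configuration; the delay and period values here are illustrative, not taken from the actual manifest.

```yaml
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  tcpSocket:
    port: 80
  periodSeconds: 10
```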

Example Deployment Environment Block

env:
  - name: TYPEDB_HOST
    value: "typedb.f3iai.svc.cluster.local"
  - name: TYPEDB_PORT
    value: "1729"
  - name: DATABASE_NAME
    value: "ffo"
  - name: TYPEDB_USER
    valueFrom:
      secretKeyRef:
        name: compass-secrets
        key: typedb-user
  - name: TYPEDB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: compass-secrets
        key: typedb-password
  - name: LLM_ENDPOINT
    value: "http://ollama.f3iai.svc.cluster.local:11434"
  - name: LLM_MODEL
    value: "qwen3.5:35b"
  - name: POSTGRES_HOST
    value: "postgres.f3iai.svc.cluster.local"
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: compass-secrets
        key: postgres-password
  - name: FFO_MCP_URL
    value: "http://ffo-mcp.f3iai.svc.cluster.local:50060"
  - name: KEYCLOAK_MCP_URL
    value: "http://keycloak-mcp.f3iai.svc.cluster.local:50060"

Sensitive values (passwords, credentials) should always be sourced from Kubernetes Secrets rather than hardcoded in the deployment manifest.
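For example, the compass-secrets Secret referenced above could be created as follows; the stringData values are placeholders.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: compass-secrets
  namespace: f3iai
type: Opaque
stringData:
  typedb-user: admin
  typedb-password: <typedb password>
  postgres-password: <postgres password>
```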