Vitro Architecture
Architecture of the Federal Frontier Vitro platform — Kolla-Ansible deployment, CAPO workload clusters, K-ORC GitOps resource provisioning, and FMC fleet management integration.
Vitro is a layered platform. Kolla-Ansible deploys OpenStack services as Docker containers on bare-metal hosts. CAPO (Cluster API Provider OpenStack) provisions RKE2 Kubernetes workload clusters on top of OpenStack VMs. K-ORC manages ad-hoc OpenStack resources through ArgoCD GitOps. The Fleet Management Cluster (FMC) ties it all together.
Architecture Diagram
```mermaid
flowchart TD
    BM["Bare Metal Hosts<br/>Dell PowerEdge / HPE ProLiant"] --> KA["Kolla-Ansible<br/>Docker Containers"]
    KA --> OS["OpenStack Services<br/>Nova · Neutron · Cinder · Keystone<br/>Glance · Heat · Octavia · Horizon"]
    OS --> Ceph["Ceph Quincy<br/>Block · Object · File Storage"]
    OS --> OVN["OVN Networking<br/>Distributed Virtual Router"]
    OS --> VMs["OpenStack VMs"]
    VMs --> CAPO["CAPO<br/>Cluster API Provider OpenStack"]
    CAPO --> RKE2["RKE2 Workload Clusters<br/>Kubernetes 1.28+"]
    FMC["Fleet Management Cluster<br/>RKE2 on Bare Metal"] --> ArgoCD["ArgoCD<br/>GitOps Sync"]
    ArgoCD --> CAPO
    ArgoCD --> KORC["K-ORC<br/>OpenStack Resource Controller"]
    KORC --> OS
    FMC --> FKP["Frontier Kubernetes Platform<br/>Multi-Cluster Management"]
    FMC --> OutpostAI["OutpostAI<br/>Mission Control GUI"]
    FMC --> F3Iai["F3Iai Agent Platform<br/>MCP Servers · FFO · Compass"]
    style BM fill:#1a202c,stroke:#718096,color:#e2e8f0
    style KA fill:#2d3748,stroke:#4299e1,color:#e2e8f0
    style OS fill:#2d3748,stroke:#4299e1,color:#e2e8f0
    style Ceph fill:#2c7a7b,stroke:#38b2ac,color:#e2e8f0
    style OVN fill:#2d3748,stroke:#4299e1,color:#e2e8f0
    style VMs fill:#2d3748,stroke:#4299e1,color:#e2e8f0
    style CAPO fill:#553c9a,stroke:#805ad5,color:#e2e8f0
    style RKE2 fill:#2b6cb0,stroke:#4299e1,color:#fff
    style FMC fill:#2b6cb0,stroke:#4299e1,color:#fff
    style ArgoCD fill:#2d3748,stroke:#4299e1,color:#e2e8f0
    style KORC fill:#553c9a,stroke:#805ad5,color:#e2e8f0
    style FKP fill:#2b6cb0,stroke:#4299e1,color:#fff
    style OutpostAI fill:#2b6cb0,stroke:#4299e1,color:#fff
    style F3Iai fill:#2b6cb0,stroke:#4299e1,color:#fff
```
Kolla-Ansible Deployment Model
Kolla-Ansible deploys all OpenStack services as Docker containers across bare-metal hosts. Each service runs in its own container with health checks, restart policies, and log forwarding. This is not Kubernetes-based — the containers are managed directly by Docker on the host OS.
Deployment topology:
| Role | Services | Minimum Nodes |
|---|---|---|
| Control | Keystone, Nova API, Neutron, Glance, Heat, Horizon, MariaDB, RabbitMQ, HAProxy | 3 (HA) |
| Compute | Nova Compute, Neutron OVN Agent, Ceph OSD | 3+ |
| Storage | Ceph MON, Ceph MGR, Ceph OSD, RadosGW | Co-located with compute |
| Network | Neutron OVN Controller, HAProxy, Keepalived | Co-located with control |
In converged (HCI) mode, all roles run on the same physical servers. A minimum Vitro deployment requires 3 servers with 6 OSDs for Ceph quorum and HA.
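The node roles above map onto Kolla-Ansible's `globals.yml` configuration. A minimal sketch for a converged deployment is shown below; the VIP address and interface names are placeholders, and the exact option set varies by Kolla-Ansible release:

```yaml
# /etc/kolla/globals.yml — hedged sketch, not a complete production config
kolla_base_distro: "rocky"
openstack_release: "2023.2"          # pick the release matching your containers

# VIP fronted by HAProxy/Keepalived on the control/network role
kolla_internal_vip_address: "10.0.0.250"   # placeholder

# Host interfaces (names are deployment-specific placeholders)
network_interface: "bond0"
neutron_external_interface: "bond1"

# OVN networking, as described in the architecture diagram
neutron_plugin_agent: "ovn"

# Ceph-backed storage for block and image services
enable_cinder: "yes"
cinder_backend_ceph: "yes"
glance_backend_ceph: "yes"
```

With an inventory that assigns the control, compute, network, and storage groups to the same three hosts, this yields the converged (HCI) layout described above.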
CAPO — Kubernetes on OpenStack
Cluster API Provider OpenStack (CAPO) provisions RKE2 Kubernetes clusters as OpenStack VMs. The CAPO controller runs on the FMC and creates:
- OpenStack VMs with pre-baked RKE2 node images (built with Packer)
- Security groups for Kubernetes API, etcd, and node communication
- Floating IPs for API server access
- Cinder volumes for persistent storage
Workload clusters are declared as YAML manifests and managed through ArgoCD GitOps. Adding a cluster means merging a manifest; deleting a cluster means removing its manifest. CAPO reconciles the actual OpenStack state toward the declared state.
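A trimmed sketch of such a manifest is shown below. The cluster name, CIDR, and the RKE2 control-plane kind (from the Cluster API RKE2 provider) are illustrative assumptions; a real manifest also carries the `OpenStackCluster` and machine template objects:

```yaml
# Hedged sketch of a Cluster API workload-cluster manifest
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: mission-cluster-01          # hypothetical name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.42.0.0/16"]  # assumed pod CIDR
  # Delegates infrastructure (VMs, networks, floating IPs) to CAPO
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: OpenStackCluster
    name: mission-cluster-01
  # Delegates control-plane bootstrap to the RKE2 provider
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: RKE2ControlPlane
    name: mission-cluster-01-control-plane
```

Committing this manifest to the GitOps repository is the entire "create cluster" action; ArgoCD syncs it to the FMC and the CAPO controllers do the rest.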
Pre-baked VM images are critical for air-gapped deployments. The Packer build includes:
- RKE2 binaries and server/agent artifacts
- All CNI container images pre-pulled (Canal/Calico, CoreDNS, metrics-server)
- System packages (open-iscsi, nfs-common, cryptsetup)
- CIS benchmark hardening and FIPS kernel parameters
No image is pulled from the internet at boot time. The cluster starts from local disk.
K-ORC — OpenStack Resource Controller
Kubernetes OpenStack Resource Controller (K-ORC) extends the Kubernetes API with OpenStack resource types. It allows operators to declare OpenStack resources (VMs, networks, security groups, volumes) as Kubernetes custom resources and manage them through ArgoCD GitOps.
```yaml
apiVersion: openstack.k-orc.cloud/v1alpha1
kind: Server
metadata:
  name: web-server-01
spec:
  flavor: m1.medium
  image: ubuntu-22.04
  network: production-network
  securityGroups:
    - web-tier
```
K-ORC reconciles the desired state against OpenStack APIs. If a VM is deleted manually, K-ORC recreates it. If a security group rule is changed, K-ORC reverts it. This is GitOps for OpenStack — the Git repository is the source of truth, not the OpenStack dashboard.
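The self-healing behavior described above comes from ArgoCD's sync policy on the application that tracks the K-ORC manifests. A hedged sketch, with a placeholder Gitea URL and namespace names:

```yaml
# Hedged sketch of an ArgoCD Application wrapping K-ORC manifests
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: openstack-resources        # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitea.example.internal/infra/openstack-resources.git  # placeholder
    targetRevision: main
    path: resources/
  destination:
    server: https://kubernetes.default.svc   # the FMC itself
    namespace: k-orc-resources               # assumed namespace
  syncPolicy:
    automated:
      prune: true      # resources removed from Git are deleted from OpenStack
      selfHeal: true   # out-of-band changes are reverted to the Git state
```

`selfHeal` restores the custom resources if they drift in-cluster, and K-ORC in turn restores the OpenStack resources themselves; together they make the Git repository, not Horizon, the source of truth.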
Fleet Management Cluster (FMC)
The FMC is an RKE2 Kubernetes cluster running directly on bare metal (not on OpenStack VMs). It hosts the control plane for the entire platform:
| Component | Purpose |
|---|---|
| ArgoCD | GitOps continuous deployment for all manifests |
| CAPO Controllers | Provision and manage workload clusters on OpenStack |
| K-ORC | Manage ad-hoc OpenStack resources via GitOps |
| OutpostAI | Operator GUI for mission control and cluster management |
| F3Iai Agent Platform | MCP servers, FFO knowledge graph, Compass, SRE dispatch |
| Keycloak | Identity federation (FAS realm, CAC authentication) |
| Grafana + Prometheus | Observability stack |
| Harbor | Private container registry |
| Gitea | Self-hosted Git for GitOps |
The FMC is the only cluster that must survive infrastructure failures. Workload clusters can be destroyed and recreated from manifests. The FMC cannot — it holds the state that defines everything else.
Integration with FKP
The Frontier Kubernetes Platform (FKP) provides multi-cluster lifecycle management on top of Vitro. FKP operators use the Frontier CLI or OutpostAI GUI to:
- Create workload clusters (CAPO provisions VMs, installs RKE2)
- Scale worker node pools (CAPO MachineDeployment scaling)
- Install add-ons (CNI, CSI, ingress, monitoring)
- Manage cluster lifecycle (upgrade, drain, cordon, delete)
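On Vitro, the scaling operation above ultimately reduces to editing a Cluster API `MachineDeployment`. A trimmed sketch, with hypothetical names and versions (a full object also includes a selector and labels, usually defaulted by the Cluster API webhooks):

```yaml
# Hedged sketch of a worker node pool as a Cluster API MachineDeployment
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: mission-cluster-01-workers          # hypothetical name
spec:
  clusterName: mission-cluster-01
  replicas: 5                               # FKP "scale node pool" edits this field
  template:
    spec:
      clusterName: mission-cluster-01
      version: v1.28.9                      # assumed Kubernetes version
      # CAPO creates one OpenStack VM per replica from this template
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: OpenStackMachineTemplate
        name: mission-cluster-01-worker-template
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: RKE2ConfigTemplate
          name: mission-cluster-01-workers
```

Changing `replicas` in Git and letting ArgoCD sync is equivalent to the FKP scale command; CAPO adds or drains OpenStack VMs to match.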
FKP abstracts the infrastructure provider. The same FKP workflow that creates clusters on Vitro (OpenStack) also creates clusters on AWS (EKS), Azure (AKS), or bare metal (Tinkerbell). The operator doesn’t need to know OpenStack APIs.
Related
- Vitro Overview — What is Vitro, compliance posture, technology stack
- F3Iai Agent Integration — AI agent management of Vitro infrastructure
- Deployment Guide — Installation steps
- Frontier Kubernetes Platform — Multi-cluster management