# Vitro Deployment Guide
Step-by-step guide to deploying Federal Frontier Vitro — prerequisites, hardware sizing, Kolla-Ansible deployment, and FMC registration.
This guide covers deploying a Federal Frontier Vitro environment from bare metal through workload cluster provisioning. The deployment sequence is: prepare hardware, deploy OpenStack via Kolla-Ansible, deploy the Fleet Management Cluster (FMC), register with the Frontier Kubernetes Platform (FKP), and provision workload clusters via the Cluster API Provider OpenStack (CAPO).
## Prerequisites

### Hardware
Vitro requires a minimum of 3 bare-metal servers for a converged (HCI) deployment with Ceph storage and OpenStack HA.
| Component | Minimum (3-node HCI) | Recommended (Production) |
|---|---|---|
| Servers | 3 | 5+ (separate control/compute) |
| CPU per server | 16 cores | 32+ cores |
| RAM per server | 128 GB | 256+ GB |
| OS disk | 480 GB SSD | 960 GB SSD (mirrored) |
| Ceph OSD disks | 2x 1 TB NVMe per server | 4x 2 TB NVMe per server |
| Network | 2x 25 GbE (bond) | 2x 25 GbE + 2x 100 GbE (storage) |
| Management | 1 GbE IPMI/iDRAC/iLO | Same |
Validated hardware: Dell PowerEdge R750, Dell PowerEdge R660, HPE ProLiant DL380 Gen10+.
### Network
| Network | Purpose | VLAN | Subnet (example) |
|---|---|---|---|
| Management | IPMI, SSH, Ansible | VLAN 10 | 192.168.10.0/24 |
| Provider | OpenStack external/floating IPs | VLAN 20 | 10.10.0.0/24 |
| Tenant | VM-to-VM traffic (OVN geneve) | VLAN 30 | Auto (OVN managed) |
| Storage | Ceph replication traffic | VLAN 40 | 192.168.40.0/24 |
| API | OpenStack API endpoints | VLAN 50 | 192.168.50.0/24 |
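As a concrete starting point, the VLANs above can be expressed with netplan on Ubuntu 22.04. This is a minimal sketch: the physical NIC names (`ens1f0`/`ens1f1`), bond name, and host addresses are assumptions to adapt to your hardware.

```yaml
# /etc/netplan/01-vitro.yaml (sketch; NIC names and addresses are examples)
network:
  version: 2
  ethernets:
    ens1f0: {}
    ens1f1: {}
  bonds:
    bond0:
      interfaces: [ens1f0, ens1f1]
      parameters:
        mode: 802.3ad
        lacp-rate: fast
  vlans:
    bond0.10:   # Management
      id: 10
      link: bond0
      addresses: [192.168.10.11/24]
    bond0.40:   # Storage (Ceph replication)
      id: 40
      link: bond0
      addresses: [192.168.40.11/24]
    bond0.50:   # API
      id: 50
      link: bond0
      addresses: [192.168.50.11/24]
    bond0.20:   # Provider (no host IP; handed to Neutron)
      id: 20
      link: bond0
```

Apply with `sudo netplan apply` and verify with `ip -d link show bond0`. The provider VLAN intentionally carries no host address, since Neutron bridges it for floating IPs.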
### Software
| Component | Version |
|---|---|
| Host OS | Ubuntu 22.04 LTS |
| Python | 3.10+ |
| Docker | 24.0+ |
| Kolla-Ansible | Dalmatian (2024.2) |
| Ansible | 2.16+ (ansible-core) |
### Air-Gap Preparation
For air-gapped deployments, pre-stage all required artifacts:
- Kolla container images — pull all images on a connected system, export as tarballs, transfer to air-gapped registry (Harbor)
- RKE2 node images — build with Packer including all CNI images pre-cached
- Python packages — create an offline pip repository for Kolla-Ansible and dependencies
- OS packages — mirror Ubuntu 22.04 repository for the air-gapped network
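The image-staging step above can be sketched as a small script run on the connected system. The image names and registry are examples only; the actual list should be derived from your `globals.yml` settings, and the tarballs are then transferred and loaded into the air-gapped Harbor registry.

```shell
#!/usr/bin/env bash
# Sketch: export Kolla container images as tarballs for air-gap transfer.
set -euo pipefail

# Derive a filesystem-safe tarball name from an image reference, e.g.
# "quay.io/openstack.kolla/nova-api:2024.2" -> "quay.io_openstack.kolla_nova-api_2024.2.tar"
tarball_name() {
  local image="$1"
  echo "${image//[\/:]/_}.tar"
}

# Pull each image and save it as a tarball under the output directory.
export_images() {
  local outdir="$1"; shift
  mkdir -p "$outdir"
  for image in "$@"; do
    docker pull "$image"
    docker save -o "$outdir/$(tarball_name "$image")" "$image"
  done
}

# Example (image names are assumptions; adjust to your release/registry):
# export_images /srv/airgap \
#   quay.io/openstack.kolla/nova-api:2024.2 \
#   quay.io/openstack.kolla/keystone:2024.2
```

On the air-gapped side, the reverse is `docker load -i <tarball>` followed by a retag and push to Harbor.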
## Deployment Steps

### Step 1: Prepare Hosts
Install Ubuntu 22.04 on all servers. Configure networking, NTP, and SSH access for Ansible.
```bash
# On each host:
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3 python3-pip docker.io open-iscsi nfs-common

# Enable Docker
sudo systemctl enable --now docker
sudo usermod -aG docker $USER  # log out and back in for the group change to apply

# Configure NTP (critical for Ceph and Keystone)
sudo timedatectl set-ntp true
```
### Step 2: Deploy Kolla-Ansible
From the deployment host (can be one of the control nodes or a separate bastion):
```bash
# Install Kolla-Ansible (the Dalmatian/2024.2 release corresponds to kolla-ansible 19.x)
pip install 'kolla-ansible>=19,<20'

# Generate configuration
kolla-ansible install-deps
cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla  # under your venv's share/ dir if installed in a venv
kolla-genpwd

# Edit globals.yml for your environment
# Key settings:
#   kolla_internal_vip_address: <API VIP>
#   network_interface: <management NIC>
#   neutron_external_interface: <provider NIC>
#   enable_octavia: "yes"
#   openstack_release: "2024.2"
# Note: Kolla-Ansible no longer deploys Ceph itself (removed in Ussuri).
# Deploy Ceph externally (e.g. with cephadm) and integrate it:
#   glance_backend_ceph: "yes"
#   cinder_backend_ceph: "yes"
#   enable_ceph_rgw: "yes"

# Edit inventory for your hosts
vim /etc/kolla/inventory

# Bootstrap servers
kolla-ansible -i /etc/kolla/inventory bootstrap-servers

# Pre-deployment checks
kolla-ansible -i /etc/kolla/inventory prechecks

# Deploy
kolla-ansible -i /etc/kolla/inventory deploy

# Post-deploy (generates admin-openrc.sh)
kolla-ansible -i /etc/kolla/inventory post-deploy
```
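For a 3-node converged deployment, the top-level groups of `/etc/kolla/inventory` can look like the following sketch (hostnames are assumptions; start from the shipped `multinode` example and edit only the top groups, leaving the child-group mappings below them intact):

```ini
# /etc/kolla/inventory (top of the multinode example; hostnames are placeholders)
[control]
vitro-node[1:3]

[network]
vitro-node[1:3]

[compute]
vitro-node[1:3]

[storage]
vitro-node[1:3]

[monitoring]
vitro-node1

[deployment]
localhost ansible_connection=local
```

In a converged HCI layout every node carries the control, network, compute, and storage roles; the 5+ node production layout splits control and compute into disjoint host lists.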
### Step 3: Verify OpenStack
```bash
# Source admin credentials
source /etc/kolla/admin-openrc.sh

# Verify services
openstack service list
openstack compute service list
openstack network agent list
openstack volume service list

# Verify Ceph (run on a Ceph MON host; the exact invocation depends on how Ceph
# was deployed, e.g. `cephadm shell -- ceph -s` for cephadm-managed clusters)
sudo ceph -s
# Expect: HEALTH_OK or HEALTH_WARN with known warnings
```
### Step 4: Deploy Fleet Management Cluster
The FMC runs RKE2 directly on bare metal — not on OpenStack VMs. It can share hardware with the Vitro control plane or run on dedicated servers.
```bash
# On the first FMC node:
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE="server" sh -
sudo systemctl enable --now rke2-server

# Get kubeconfig (rke2.yaml is root-owned)
mkdir -p ~/.kube
sudo cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
chmod 600 ~/.kube/config
export PATH=$PATH:/var/lib/rancher/rke2/bin

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
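Once ArgoCD is running, the CAPO cluster definitions applied in the next step can instead be tracked declaratively. A minimal sketch of such an Application follows; the repository URL is a placeholder, while the path matches the overlay referenced in Step 5.

```yaml
# Sketch: ArgoCD Application tracking the CAPO cluster definitions.
# repoURL is a placeholder; adjust to your Git server.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: capo-clusters
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/federal-frontier/vitro-deploy.git
    targetRevision: main
    path: deploy/overlays/fmc/capo-clusters
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With `selfHeal` enabled, ArgoCD re-applies the cluster manifests if they drift, which is why the guide prefers ArgoCD over direct `kubectl apply` in Step 6.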
### Step 5: Register with FKP
```bash
# Install CAPO (Cluster API Provider OpenStack)
clusterctl init --infrastructure openstack

# Create cloud-config secret for OpenStack credentials
kubectl create secret generic cloud-config \
  --from-file=clouds.yaml=/etc/openstack/clouds.yaml \
  -n capo-system

# Apply the CAPO MachineTemplate for workload clusters
kubectl apply -f deploy/overlays/fmc/capo-clusters/
```
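The secret above expects a `clouds.yaml` at `/etc/openstack/clouds.yaml`. A minimal sketch is shown below; all names, credentials, and URLs are placeholders, with `auth_url` pointing at the Keystone endpoint on the API VIP configured in `globals.yml`.

```yaml
# /etc/openstack/clouds.yaml (sketch; URLs and credentials are placeholders)
clouds:
  openstack:
    auth:
      auth_url: https://192.168.50.10:5000/v3
      username: capo
      password: <password>
      project_name: workload-clusters
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
    interface: public
    identity_api_version: 3
```

Using a dedicated project and user for CAPO (rather than `admin`) keeps workload-cluster resources scoped and quota-limited.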
### Step 6: Provision Workload Clusters
```yaml
# Example: workload-cluster-1.yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-1
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["10.244.0.0/16"]
    services:
      cidrBlocks: ["10.96.0.0/12"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
    kind: OpenStackCluster
    name: workload-1
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: RKE2ControlPlane
    name: workload-1-control-plane
```

```bash
# Apply via ArgoCD (preferred) or kubectl
kubectl apply -f workload-cluster-1.yaml

# Monitor provisioning
kubectl get clusters,machines,openstackmachines -A
```
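The `Cluster` above references an `OpenStackCluster` that must also exist. A minimal sketch under the same `v1alpha7` API version follows; the external network UUID, node CIDR, and DNS server are placeholders, and field names should be checked against the CAPO version installed by `clusterctl`.

```yaml
# Sketch: the OpenStackCluster referenced by workload-1 (values are placeholders)
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha7
kind: OpenStackCluster
metadata:
  name: workload-1
  namespace: default
spec:
  cloudName: openstack          # entry name in clouds.yaml
  identityRef:
    kind: Secret
    name: cloud-config          # credentials secret from Step 5
  externalNetworkId: <provider-network-uuid>
  nodeCidr: 10.6.0.0/24
  dnsNameservers: [192.168.10.1]
  apiServerLoadBalancer:
    enabled: true               # provisions an Octavia LB for the API server
  managedSecurityGroups: true
```

The `apiServerLoadBalancer` setting relies on Octavia being enabled in `globals.yml`; the provider network UUID comes from `openstack network list` on the Vitro cloud.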
## Post-Deployment
After deployment, register the Vitro environment with the F3Iai agent platform:
- Deploy MCP servers (OpenStack, Kolla, Ceph) in the `f3iai` namespace
- Register servers in Compass Postgres: `INSERT INTO mcp_servers (name, url, enabled) VALUES (...)`
- Configure HyperSync CronJobs for continuous FFO population
- Verify data appears in Compass Explorer and the OutpostAI chatbot
## Related
- Vitro Overview — Platform description and compliance posture
- Vitro Architecture — CAPO, K-ORC, FMC integration
- F3Iai Agent Integration — AI agent management
- Frontier Kubernetes Platform — Multi-cluster management