Modern Provisioning (CAPI)

The modern Cluster API-based provisioning path for Federal Frontier Kubernetes clusters — multi-hyperscaler templates, CAPI providers on the Fleet Management Cluster, and the end-to-end flow from operator click to working PVCs. This section replaces the legacy Frontier CLI / Outpost GUI cluster lifecycle workflow.

The Federal Frontier Kubernetes Platform has two generations of cluster provisioning workflows. This section documents the modern path. The legacy Frontier CLI and Outpost GUI workflow under “Frontier CLI of FKP — User Guide” and “Frontier Outpost GUI of FKP” remains documented for sites still operating on it, but the modern path is the supported direction for all new deployments.

Why “Modern” Provisioning

The legacy FKP workflow was designed around a single-target cluster lifecycle: a CLI or GUI that talked to a backend which spoke to one infrastructure provider. Adding a new hyperscaler meant writing a new backend module. The modern path replaces that with Cluster API (CAPI) as the universal lifecycle engine and a template system that turns any CAPI provider into a wizard click or a chatbot tool.

The result:

  • Four hyperscalers supported out of the box — OpenStack/Vitro (CAPO), AWS EKS (CAPA), Azure AKS (CAPZ), Oracle Cloud OKE (CAPOCI)
  • Zero-code-change provider expansion — adding vSphere, GCP, IBM Cloud, or Equinix Metal is a Postgres template insert
  • GitOps-native — every cluster is described in Gitea, reconciled by ArgoCD onto the Fleet Management Cluster, and the appropriate CAPI provider provisions it
  • Audit trail built in — every render of every cluster is recorded in the cluster_renders table, answering “what manifest was generated for this cluster” for the lifetime of every cluster
  • Operator UX preserved — the OutpostAI cluster creation wizard and the unified chatbot both speak this engine; operators don’t need to know CAPI exists
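To make the "Postgres template insert" expansion model concrete, the sketch below shows roughly what a provider template record could look like and how the engine would render it. Everything here is an illustrative assumption, not the actual schema: the column names, the hypothetical vSphere/CAPV template id, and the minimal `{{ var }}` substitution function, which stands in for the real Jinja2 engine.

```python
import json
import re

# Hypothetical template record, shaped like the Postgres row a new
# provider (e.g. vSphere via CAPV) might add. Column names are assumptions.
template_row = {
    "id": "capv-vsphere",                      # hypothetical template id
    "display_name": "vSphere (CAPV)",
    # JSON Schema that would drive the wizard's dynamic form fields
    "wizard_schema": json.dumps({
        "type": "object",
        "required": ["cluster_name", "node_count"],
        "properties": {
            "cluster_name": {"type": "string"},
            "node_count": {"type": "integer", "minimum": 1},
        },
    }),
    # Jinja2-style manifest body; real templates would emit full CAPI objects
    "manifest_template": (
        "apiVersion: cluster.x-k8s.io/v1beta1\n"
        "kind: Cluster\n"
        "metadata:\n"
        "  name: {{ cluster_name }}\n"
        "spec:\n"
        "  replicas: {{ node_count }}\n"
    ),
}

def render(row, values):
    """Minimal {{ var }} substitution -- a stand-in for the real Jinja2 render."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(values[m.group(1)]),
        row["manifest_template"],
    )

manifest = render(template_row, {"cluster_name": "demo-01", "node_count": 3})
print(manifest)
```

Because the wizard form, the chatbot tool, and the render endpoint all read from the same record, a new row like this is the only change a new hyperscaler requires.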

What’s in this section

  • Cluster Template System — The Postgres-backed Jinja2 template engine, the four shipped hyperscaler templates, the JSON Schema-driven wizard, the API endpoints, and the "adding a new hyperscaler" workflow.
  • Cluster Bootstrap Flow — End-to-end walkthrough from operator click → render → Gitea → ArgoCD → CAPI controller → cloud provider → first PVC mount. Includes the failure modes table.
  • Packer Image Build Process — How the RKE2 node images consumed by CAPO are built: pre-cached container images, hardened OS packages, build manifest, and operator workflow.
  • Storage Architecture (Cinder + Ceph) — How CAPO clusters mount PVCs in HCI environments. Cinder is the storage controller, Ceph RBD is the backend, Cinder CSI is the default, and direct Ceph CSI is the wrong choice for HCI.
  • CAPI Providers on the FMC — What CAPI providers are installed on the Fleet Management Cluster, how they got there, and how to add a new one.

How the modern path interacts with the legacy path

The modern path does not break existing legacy clusters. Clusters provisioned via the Frontier CLI or the legacy Outpost GUI continue to be managed by the legacy backend; clusters created through the modern path are managed in parallel by the CAPI controllers on the FMC.

A migration plan for existing legacy clusters is out of scope for this section. All new clusters should be created through the modern path.

Where the operator interface lives

The modern provisioning engine is exposed through three operator-facing surfaces:

  1. OutpostAI Cluster Creation Wizard — see OutpostAI — HIL Dispatch Console under the Federal Frontier AI Platform section. The wizard’s Step 2 is schema-driven and renders form fields dynamically from each template’s JSON Schema.
  2. Unified Chat (chatbot) — see Unified Chat. The chatbot exposes list_cluster_templates and create_kubernetes_cluster tools that wrap the same engine.
  3. Direct API — curl POST /api/v1/cluster-templates/{id}/render against the TrailbossAI API. Used by automation, CI pipelines, and operators who prefer the command line. See the Cluster Template System page for the full endpoint reference.
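As a sketch of the direct-API surface, the snippet below assembles an example render request. The endpoint path comes from the text above; the template id and the payload shape (a values object matching the template's JSON Schema) are hypothetical assumptions, not the documented request contract.

```python
import json

# Hypothetical render request against the TrailbossAI API.
template_id = "capo-openstack"   # assumed template id for the CAPO template
payload = {
    # Assumed payload shape: values matching the template's JSON Schema
    "values": {
        "cluster_name": "demo-01",
        "node_count": 3,
    }
}

# Equivalent curl invocation, built as a string for illustration only
curl_cmd = (
    f"curl -X POST /api/v1/cluster-templates/{template_id}/render "
    f"-H 'Content-Type: application/json' "
    f"-d '{json.dumps(payload)}'"
)
print(curl_cmd)
```

Since all three surfaces wrap the same render call, a request like this produces the same manifest, and the same cluster_renders audit record, as a wizard click or a chatbot tool invocation.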