CAPI Providers on the Fleet Management Cluster
Which Cluster API infrastructure providers are installed on the FMC, what they reconcile, the credentials they need, and how to add a new one.
For the Cluster Template System to actually provision real Kubernetes clusters, the corresponding Cluster API (CAPI) infrastructure providers must be running on the Fleet Management Cluster (FMC). This page documents which providers are installed, what they do, the credentials they expect, and how to add a new one.
What is a CAPI Provider?
Cluster API has a layered architecture:
- CAPI Core — the controllers that own provider-agnostic resources like `Cluster`, `Machine`, `MachineDeployment`, and `MachinePool`
- Bootstrap Provider — turns a Machine into a kubelet-ready node (e.g. `kubeadm`, `rke2`)
- Control Plane Provider — owns the Kubernetes control plane lifecycle (e.g. `KubeadmControlPlane`)
- Infrastructure Provider — translates CAPI resources into actual cloud API calls. One per hyperscaler.
Each infrastructure provider runs as its own controller-manager pod in its own namespace on the FMC. When ArgoCD syncs a rendered cluster manifest from Gitea into the FMC, the appropriate infrastructure provider sees its CRDs and starts reconciling against the cloud API.
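Concretely, the routing happens through the `infrastructureRef` on the rendered `Cluster` object. A minimal sketch of that linkage (hypothetical names; CAPA shown; apiVersions vary by provider release):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster                 # hypothetical
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-cluster-control-plane
  infrastructureRef:                 # this reference is what CAPA's controller reconciles
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
    kind: AWSCluster
    name: demo-cluster
```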
What’s installed on the FMC today
| Provider | Namespace | Version | Status |
|---|---|---|---|
| CAPI Core | `capi-system` | v1.11.0 | Running |
| Kubeadm Bootstrap Provider | `capi-kubeadm-bootstrap-system` | (with core) | Running |
| Kubeadm Control Plane Provider | `capi-kubeadm-control-plane-system` | (with core) | Running |
| CAPO (OpenStack) | `capo-system` | (existing) | Running |
| CAPA (AWS) | `capa-system` | v2.10.2 | Running 1/1 |
| CAPZ (Azure) | `capz-system` | v1.23.0 | Running (capz-controller-manager + Azure Service Operator) |
| CAPOCI (Oracle Cloud) | `cluster-api-provider-oci-system` | v0.24.0 | Installed; scaled to 0 pending real OCI credentials |
CAPA and CAPZ tolerate empty credentials at startup — their controllers run idle until a workload cluster manifest references them. CAPOCI does not tolerate empty credentials and will crash-loop on startup until the capoci-auth-config Secret is populated with a real OCI tenancy/user/fingerprint/key.
Per-Provider Credentials
Each provider needs different credentials, scoped to the project / account / subscription / tenancy where workload clusters will be created.
| Provider | Identity Resource | Required Secret |
|---|---|---|
| CAPO | `cloud-config` Secret per cluster | OpenStack Application Credentials (INI format) |
| CAPA | `AWSClusterStaticIdentity/default` | `capa-default-credentials` (AWS access key ID, secret access key, optional session token) |
| CAPZ | `AzureClusterIdentity/default-azure-identity` | `capz-default-credentials` (Service Principal client secret); plus tenant ID and client ID on the Identity CR |
| CAPOCI | `OCIClusterIdentity/default-oci-identity` | `capoci-default-credentials` (tenancy OCID, user OCID, region, fingerprint, PEM private key, optional passphrase) |
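To illustrate the Identity → Secret linkage, here is a sketch of the CAPA identity object (the shipped template in `identity-resources.yaml` is authoritative; field names and `allowedNamespaces` semantics per the CAPA API):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSClusterStaticIdentity          # cluster-scoped resource
metadata:
  name: default
spec:
  secretRef: capa-default-credentials   # Secret holding AccessKeyID / SecretAccessKey
  allowedNamespaces: {}                 # empty object = usable from any namespace
```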
Stub Secrets and Identity templates ship in `deploy/overlays/fmc/capi-providers/` in the federal-frontier-platform Gitea repo. They contain empty values by design. Operators populate them out-of-band via:
- Sealed Secrets — encrypt the real values and commit
- External Secrets Operator — pull from Vault, AWS Secrets Manager, Azure Key Vault, or OCI Vault
- Manual `kubectl patch` — for dev environments only
Never commit real credentials to git in plaintext.
Populating CAPA credentials
```shell
# AWS access key for the default identity
kubectl -n capi-system patch secret capa-default-credentials --type merge -p \
  '{"stringData":{"AccessKeyID":"AKIA...","SecretAccessKey":"..."}}'

# Bootstrap credentials (used by clusterawsadm to create the IAM role stack)
clusterawsadm bootstrap credentials encode-as-profile   # outputs base64
kubectl -n capi-system patch secret capa-bootstrap-credentials --type merge -p \
  "{\"stringData\":{\"AWS_B64ENCODED_CREDENTIALS\":\"$(clusterawsadm bootstrap credentials encode-as-profile)\"}}"
```
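`AWS_B64ENCODED_CREDENTIALS` is just a base64-encoded AWS credentials profile. A local sketch of the encoding `clusterawsadm` performs (the profile shape here is an assumption with placeholder values; the real output may also carry a session token line):

```shell
# INI-style credentials profile with placeholder values
profile='[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...'
# Encode it as a single base64 line, the shape AWS_B64ENCODED_CREDENTIALS expects
b64=$(printf '%s' "$profile" | base64 | tr -d '\n')
# Decoding recovers the profile verbatim
printf '%s' "$b64" | base64 -d
```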
Populating CAPZ credentials
```shell
kubectl -n capi-system patch azureclusteridentity default-azure-identity --type merge -p \
  '{"spec":{"tenantID":"<tenant-uuid>","clientID":"<client-uuid>"}}'
kubectl -n capi-system patch secret capz-default-credentials --type merge -p \
  '{"stringData":{"clientSecret":"<sp-client-secret>"}}'
```
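The two patches above target one identity object and its referenced Secret. A sketch of how the pieces fit together (field names per the CAPZ `AzureClusterIdentity` API; the shipped template is authoritative):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureClusterIdentity
metadata:
  name: default-azure-identity
  namespace: capi-system
spec:
  type: ServicePrincipal
  tenantID: <tenant-uuid>            # patched onto the CR
  clientID: <client-uuid>            # patched onto the CR
  clientSecret:                      # reference to the Secret patched above
    name: capz-default-credentials
    namespace: capi-system
```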
Populating CAPOCI credentials
```shell
# Generate API key + fingerprint via the OCI CLI
oci setup keys

# The sed expression rewrites the PEM's real newlines as \n escapes so the
# key can be embedded in a single-line JSON string
kubectl -n capi-system patch secret capoci-default-credentials --type merge -p \
  "{\"stringData\":{
    \"tenancy\":\"ocid1.tenancy.oc1..\",
    \"user\":\"ocid1.user.oc1..\",
    \"region\":\"us-gov-ashburn-1\",
    \"fingerprint\":\"<fingerprint>\",
    \"key\":\"$(sed ':a;N;$!ba;s/\n/\\n/g' ~/.oci/oci_api_key.pem)\"
  }}"

# Then bring CAPOCI online
kubectl -n cluster-api-provider-oci-system scale deployment/capoci-controller-manager --replicas=1
```
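The `sed` expression above exists only to turn the PEM's real newlines into two-character `\n` escapes, so the key survives as a single-line JSON string (kubectl's JSON parser turns the escapes back into newlines on the way in). A local demo of the same expression on a stand-in file:

```shell
# Two-line stand-in for a PEM file
printf 'line1\nline2\n' > /tmp/demo.pem
# Join all lines into one pattern space, then replace each real newline
# with the two characters backslash + n
escaped=$(sed ':a;N;$!ba;s/\n/\\n/g' /tmp/demo.pem)
printf '%s\n' "$escaped"   # prints: line1\nline2
```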
How the providers were installed
The current FMC providers were installed with `clusterctl init`, using a `clusterctl` binary version-matched to the FMC's CAPI Core v1.11.0 release. Future installs should use the same approach: `clusterctl init` handles the cert-manager caBundle injection and `${VAR}` substitution that a direct `kubectl apply` of the upstream `infrastructure-components.yaml` does not.
The install runs out of an in-cluster pod (an ephemeral bitnami/kubectl pod with HOME=/tmp for clusterctl’s config directory) so the FMC API server credentials come from the pod’s ServiceAccount, not from a local kubeconfig. The pod is GitOps-managed via the ArgoCD application ffp-capi-providers which syncs deploy/overlays/fmc/capi-providers/ from Gitea.
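A sketch of that in-cluster install pattern (the real manifest lives at `deploy/overlays/fmc/capi-providers/install-job.yaml`; the ServiceAccount name and the clusterctl download step here are assumptions):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: capi-providers-install
spec:
  template:
    spec:
      serviceAccountName: capi-installer   # assumed name; needs broad RBAC to install CRDs
      restartPolicy: Never
      containers:
      - name: clusterctl
        image: bitnami/kubectl             # per the text above
        env:
        - name: HOME
          value: /tmp                      # clusterctl keeps its config under $HOME
        command: ["/bin/sh", "-c"]
        args:
        - |
          # assumed: fetch a clusterctl binary matching CAPI Core v1.11.0
          curl -sLo /tmp/clusterctl https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.11.0/clusterctl-linux-amd64
          chmod +x /tmp/clusterctl
          /tmp/clusterctl init --infrastructure aws --infrastructure azure --infrastructure oci
```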
Adding a new CAPI provider
The Federal Frontier Platform is designed to support any Cluster API infrastructure provider. To add one (e.g. CAPV for vSphere, CAPG for GCP, CAPIBM for IBM Cloud):
1. Install the provider on the FMC
Edit the clusterctl init invocation in deploy/overlays/fmc/capi-providers/install-job.yaml to add the new provider name. For example, to add vSphere:
```shell
clusterctl init \
  --infrastructure aws \
  --infrastructure azure \
  --infrastructure oci \
  --infrastructure vsphere
```
Re-run the Job. clusterctl init is idempotent and skips already-installed providers.
2. Author a cluster template
Create three files under common/tools/cluster_templates_seed/ in the federal-frontier-f3iai repo:
- `<provider>-<distro>-default.j2` — the Jinja2 source with `# FILE:` markers
- `<provider>-<distro>-default.schema.json` — the JSON Schema for input validation
- `<provider>-<distro>-default.meta.json` — `{"provider": "...", "k8s_distro": "...", "description": "..."}`
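For instance, a hypothetical vSphere + RKE2 template's `vsphere-rke2-default.meta.json` would follow the shape above:

```json
{
  "provider": "vsphere",
  "k8s_distro": "rke2",
  "description": "Default vSphere cluster running RKE2"
}
```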
See Cluster Template System for the authoring contract.
3. Restart TrailbossAI
The seed loader inserts new templates into Postgres on first connect. Restart frontier-cluster-api, and the OutpostAI wizard automatically picks up the new provider — no frontend code changes.
4. Populate per-provider credentials
Add a stub Secret + Identity template to deploy/overlays/fmc/capi-providers/credentials-secret-templates.yaml and identity-resources.yaml so operators have a known location to populate real credentials.
Verifying provider health
```shell
# All CAPI provider pods, all namespaces
kubectl get pods -A | grep -iE 'capa-|capz-|capoci|cluster-api-provider'

# Specific provider's recent events
kubectl -n capa-system get events --sort-by=.lastTimestamp | tail -20
kubectl -n capz-system get events --sort-by=.lastTimestamp | tail -20
kubectl -n cluster-api-provider-oci-system get events --sort-by=.lastTimestamp | tail -20

# Check the ArgoCD application that owns the install
kubectl -n argocd get app ffp-capi-providers
```
Related Documentation
- Cluster Template System — the engine that produces manifests for these providers to reconcile
- Cluster Bootstrap Flow — end-to-end flow from operator click to working PVCs
- Storage Architecture — how Cinder CSI works for CAPO clusters
- Packer Image Build Process — the RKE2 node images CAPO consumes
- OutpostAI — HIL Dispatch Console — the wizard that uses these providers