Spector — GitOps Cluster-as-Code Demo Kit

Clusters defined as code in Git → ArgoCD syncs → CAPI provisions identical Kubernetes clusters on KubeVirt → consistent addons on every cluster.

Prerequisites

  • A KubeVirt-enabled management cluster (already running)
  • kubectl, helm, clusterctl, kustomize, git installed
  • ~12 CPU cores and ~24GiB RAM available on the management cluster
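scripts/validate.sh is described below only as a pre-flight check; a minimal sketch of the tool-availability part (the real script presumably also verifies cluster capacity and KubeVirt health):

```shell
#!/bin/sh
# Sketch in the spirit of scripts/validate.sh (assumed implementation):
# verify each required CLI tool is on PATH before bootstrapping.
need() {
  command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "MISSING: $1"
}
for tool in kubectl helm clusterctl kustomize git; do
  need "$tool"
done
```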

Quick Start

# 1. Clone and configure
git clone <this-repo> && cd spector
export SPECTOR_REPO_URL=ssh://git@git.arsalan.io/anaeem/k8s-platofrm.git

# 2. Validate environment
./scripts/validate.sh

# 3. Bootstrap everything
./scripts/demo-setup.sh

# 4. Watch clusters provision
kubectl get clusters -A -w
kubectl get virtualmachines -A -w

Repository Structure

spector/
├── bootstrap/          # Management cluster setup scripts
├── clusters/           # CAPI cluster definitions (Kustomize base + overlays)
│   ├── base/           # Shared template (7 CAPI resources)
│   └── overlays/       # Per-environment overrides (dev, staging)
├── argocd/             # ArgoCD Application layer
│   ├── bootstrap-app.yaml       # Root App-of-Apps
│   ├── cluster-apps/            # One Application per cluster
│   └── addon-appset.yaml        # ApplicationSet for addons
├── addons/             # Addon stubs deployed to tenant clusters
├── scripts/            # Demo helper scripts
└── docs/               # Walkthrough, architecture, troubleshooting

Architecture

GitOps flow:

Git repo (clusters/overlays/dev)
  → ArgoCD Application (dev-cluster)
    → CAPI Cluster + KubeVirt VMs
      → argocd-capi-controller registers cluster
        → ApplicationSet deploys addons (kyverno, monitoring, ingress)
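A per-cluster Application under argocd/cluster-apps/ plausibly looks like the fragment below; the file name, project, and sync policy are assumptions based on the repo layout, not confirmed contents.

```yaml
# Hypothetical argocd/cluster-apps/dev-cluster.yaml (fields assumed)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dev-cluster
  namespace: argocd
spec:
  project: default
  source:
    repoURL: ssh://git@git.arsalan.io/anaeem/k8s-platofrm.git
    path: clusters/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```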

Component     Version   Purpose
CAPI          v1.9.x    Cluster lifecycle management
CAPK          v0.11.1   KubeVirt infrastructure provider
Kubernetes    v1.31.4   Tenant cluster version
ArgoCD        v3.2.x    GitOps continuous delivery
cert-manager  v1.16.x   Certificate management (CAPI prerequisite)

Demo Scripts

Script                                        Purpose
scripts/demo-setup.sh                         One-shot bootstrap (runs all bootstrap scripts)
scripts/add-cluster.sh <name> <ns> <workers>  Live demo: a git push creates a new cluster
scripts/get-kubeconfig.sh <name> <ns>         Extract the tenant kubeconfig from its CAPI secret
scripts/validate.sh                           Pre-flight environment checks
scripts/demo-reset.sh                         Delete tenant clusters, keep infrastructure
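By Cluster API convention, a tenant cluster's kubeconfig is stored in a Secret named `<cluster>-kubeconfig` in the cluster's namespace, base64-encoded under `.data.value`. scripts/get-kubeconfig.sh presumably decodes it along these lines (a sketch, not the actual script):

```shell
#!/bin/sh
# Assumed core of scripts/get-kubeconfig.sh: read the CAPI-managed
# "<name>-kubeconfig" Secret and decode its kubeconfig payload.
get_kubeconfig() {
  name="$1"; ns="$2"
  kubectl get secret "${name}-kubeconfig" -n "$ns" \
    -o jsonpath='{.data.value}' | base64 -d
}
# Usage: get_kubeconfig dev dev > dev.kubeconfig
```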

Adding a New Cluster

# Creates overlay + ArgoCD app, commits, pushes — ArgoCD takes over
./scripts/add-cluster.sh prod prod 3
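The script's internals are not shown in this README; a plausible outline is below. The file layout and kustomization contents are assumptions, and the real script presumably also generates the matching ArgoCD Application in argocd/cluster-apps/.

```shell
#!/bin/sh
# Hypothetical outline of scripts/add-cluster.sh <name> <ns> <workers>.
NAME="${1:-prod}"; NS="${2:-prod}"; WORKERS="${3:-3}"
mkdir -p "clusters/overlays/${NAME}"
cat > "clusters/overlays/${NAME}/kustomization.yaml" <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ${NS}
resources:
  - ../../base
# plus patches for the cluster name and ${WORKERS} worker replicas
EOF
echo "created clusters/overlays/${NAME}"
# Then commit and push; ArgoCD takes it from there:
#   git add -A && git commit -m "Add cluster: ${NAME}" && git push
```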

Day-2 Operations

Scale workers by editing the Kustomize overlay and pushing:

# Edit clusters/overlays/dev/kustomization.yaml
# Change MachineDeployment replicas patch value, then:
git add -A && git commit -m "Scale dev workers to 3" && git push
# ArgoCD syncs → CAPI scales → new VMs spin up
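The replicas patch referred to above might look like this inside clusters/overlays/dev/kustomization.yaml; the exact field layout is an assumption, since the overlay's real contents are not shown here.

```yaml
# Hypothetical patch in clusters/overlays/dev/kustomization.yaml
patches:
  - target:
      kind: MachineDeployment
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3   # was 2; ArgoCD syncs, CAPI adds a worker VM
```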

Cleanup

# Reset (keep infrastructure)
./scripts/demo-reset.sh

# Full teardown
./bootstrap/99-teardown.sh

Documentation