Install: No Cluster, No Docker¶
Three ways to try KubeIntellect without bringing your own cluster (the first two need no Docker either):
| | Browser demo | Option A — kube-q CLI | Option B — Local cluster |
|---|---|---|---|
| Setup | None | ~2 min | ~5 min |
| Install | Nothing | kube-q only | Docker + kubeintellect |
| Speed | Slower† | Fast | Fast |
| Access | Read-only | Read-only | Full (HITL-gated) |
| RCA scenarios | Yes | Yes | Yes (if selected) |
† The browser terminal shares a single backend instance — responses may be slower under load.
Try it in your browser (zero install)¶
No install, no terminal. Open kubeintellect.com/demo and start querying immediately.
Slower and limited
The demo terminal shares a single hosted instance. Responses are slower under concurrent load, and access is read-only — destructive operations (delete, restart, scale) are disabled.
Option A — kube-q CLI¶
Install only the thin CLI client and connect it to our hosted KubeIntellect instance.
kq already defaults to https://api.kubeintellect.com, so all you need is a personal API key.
Requirements: Python 3.12+
1. Get your personal API key¶
Go to kubeintellect.com/demo, enter your email, and your key appears instantly on the page and is also emailed to you.
Keys expire after 30 days — request a new one at any time from the same page.
2. Install kube-q¶
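The install command itself isn't reproduced on this page; assuming the CLI is published on PyPI under the name `kube-q` (an assumption of this sketch), a user-level pip install would look like:

```shell
# Install the thin kube-q CLI client for the current user.
# The PyPI package name "kube-q" is an assumption -- adjust it if the
# project publishes under a different name.
pip install --user kube-q

# The client is invoked as `kq` (per this page); sanity-check it:
kq --help
```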
`kq: command not found`? Add `~/.local/bin` to your PATH:
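On Linux and macOS, `pip install --user` places scripts in `~/.local/bin`; a typical fix (bash shown, use `~/.zshrc` for zsh):

```shell
# Make user-installed scripts visible in the current session.
export PATH="$HOME/.local/bin:$PATH"

# Persist the change for future bash sessions.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
```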
3. Connect¶
Or save it permanently so you never type it again:
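The exact variable and file key names aren't documented on this page, so the names below are illustrative assumptions (check `kq --help` for the real ones). Conceptually, you either export the key per session or store it in `~/.kube-q/.env`, the same file the Option B installer writes:

```shell
# Per-session (variable name is an illustrative assumption):
export KUBE_Q_API_KEY="<your-key>"

# Permanent: store it in kube-q's env file (key name also assumed).
mkdir -p ~/.kube-q
echo 'KUBE_Q_API_KEY=<your-key>' >> ~/.kube-q/.env
```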
Read-only access
The demo cluster is shared. Destructive operations (delete, restart, scale) are disabled. For full access use Option B or connect to your own cluster.
Option B — Local Cluster¶
Install Docker, then let kubeintellect init handle everything else: Kind cluster creation, sample
workloads, optional observability stack, and RCA practice scenarios.
Requirements: Python 3.12+, an LLM API key (OpenAI or Azure OpenAI)
1. Install Docker¶
**macOS:** Install Docker Desktop, launch it, and wait for the whale icon in the menu bar before continuing.
**Windows:** Install Docker Desktop with WSL 2 integration enabled, then run the remaining steps inside your WSL terminal.
2. Install KubeIntellect¶
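Assuming the server is published on PyPI as `kubeintellect` (the install command itself isn't reproduced on this page), the usual pattern is:

```shell
# Install the KubeIntellect server package for the current user.
# The PyPI name "kubeintellect" is an assumption of this sketch.
pip install --user kubeintellect
```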
Ubuntu 22.04 ships Python 3.10 by default; KubeIntellect needs 3.12+.
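One common route to Python 3.12 on Ubuntu 22.04 is the deadsnakes PPA; this sketch assumes that approach (pyenv or a newer Ubuntu release work equally well):

```shell
# Add the deadsnakes PPA and install Python 3.12 on Ubuntu 22.04.
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install -y python3.12 python3.12-venv
python3.12 --version
```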
`kubeintellect: command not found`? Fix your PATH:
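Same cause as the `kq` case above; typically:

```shell
# pip --user installs scripts to ~/.local/bin; put it on PATH.
export PATH="$HOME/.local/bin:$PATH"
```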
3. Configure and start¶
Prefer editing a file directly?
Create `~/.kubeintellect/.env` from the template that ships with the pip install, fill in your LLM key, save, then run `kubeintellect serve`. Skip the rest of this step.
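For orientation only, the file the step describes looks roughly like this; the variable names below are assumptions, not the template's documented keys, so copy the real template rather than this sketch:

```shell
# Illustrative ~/.kubeintellect/.env (variable names assumed, not
# authoritative -- use the keys from the shipped template).
LLM_PROVIDER=openai
OPENAI_API_KEY=<your-key>
LLM_MODEL=gpt-4o
```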
Otherwise, run the interactive wizard:
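The wizard is the `init` command this page refers to:

```shell
kubeintellect init
```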
Because no ~/.kube/config exists yet, the wizard offers to create a cluster automatically.
Recommended answers:
| Prompt | Answer |
|---|---|
| LLM provider | 1 OpenAI or 2 Azure OpenAI |
| API key (and endpoint for Azure) | Your key |
| Create a local Kind cluster with sample workloads? | Y |
| (kind, kubectl, helm installed automatically if missing) | — |
| Install observability stack (Prometheus, Grafana, Loki)? | Y |
| Create RCA demo scenarios? | Y — 5 broken pods to practice root-cause analysis |
| Install as background service? | Y — server starts automatically on every login |
When `init` finishes, it:

- Creates a 1-node Kind cluster named `kubeintellect`
- Deploys sample workloads in the `demo` namespace (nginx ×2, httpbin ×1)
- Installs Prometheus + Grafana (NodePort 30090 / 30080) and Loki (NodePort 30100) — if selected
- Deploys 5 RCA practice scenarios in the `demo-rca` namespace — if selected
- Configures cluster DNS so `svc.cluster.local` resolves from your host
- Writes `~/.kubeintellect/.env` with all URLs set automatically
- Configures `kube-q` (`~/.kube-q/.env`) with your API key
- Installs a systemd service so the server starts on every login
4. Open a new terminal and start querying¶
No API key to copy — init configured everything automatically.
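For example (the exact `kq` invocation syntax is an assumption here; check `kq --help` for the real form):

```shell
# Ask a natural-language question about the cluster
# (quoted-argument syntax assumed).
kq "what pods are running in the demo namespace?"
```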
5. Verify¶
Expected output (with Kind cluster and observability):
```
Config:     ✓ ~/.kubeintellect/.env
LLM:        ✓ openai / gpt-4o
DB:         ✓ sqlite ~/.kubeintellect/kubeintellect.db
kubectl:    ✓ found
Kube:       ✓ ~/.kube/config  context: kind-kubeintellect
Auth:       ✓ enabled
              admin  ki-admin-xxxxxxxxxxxxxxxxxxxx
Prometheus: ✓ http://172.18.0.2:30090 reachable
Loki:       ✓ http://172.18.0.2:30100 reachable
Grafana:    ✓ http://172.18.0.2:30080 reachable
kube-q:     ✓ found
```
Try the RCA scenarios¶
Ask questions like:
- "what pods are broken in the demo-rca namespace?"
- "why is crash-loop crashing and how do I fix it?"
- "why is resource-hog pending?"
- "why does the api-server service have no endpoints?"
The 5 scenarios cover: CrashLoopBackOff, OOMKilled, ImagePullBackOff, Pending (resource exhaustion), and a service with no endpoints.
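To compare KubeIntellect's answers with the raw cluster state, plain kubectl against the same namespace works too (the pod name below is taken from the example questions above):

```shell
# List the five intentionally broken pods that init created.
kubectl get pods -n demo-rca

# Inspect one scenario directly, e.g. the CrashLoopBackOff pod.
kubectl describe pod crash-loop -n demo-rca
```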
Managing the service (Option B)¶
```shell
kubeintellect service status     # check if server is running
kubeintellect service logs      # tail live logs
kubeintellect service stop      # stop the server
kubeintellect service start     # start it again
kubeintellect service uninstall # remove the service entirely
```
Next steps¶
- Already have a cluster? → Connect to an existing cluster
- Want full monitoring + Langfuse for local dev? → Kind dev environment
- Deploy to production (AKS, EKS, GKE)? → Helm / cloud