# Deploy: Docker Compose
Run the full KubeIntellect stack on your laptop with a single command. No Kubernetes cluster is needed to run the server — KubeIntellect connects to your cluster via the kubeconfig on your host machine.
Requirements: Docker (with Compose v2), a kubeconfig with cluster access, an LLM API key.
## How it works
`docker compose up` pulls `ghcr.io/mskazemi/kubeintellect:latest` and starts these services:
| Service | Always? | What it is |
|---|---|---|
| `kubeintellect` | ✓ core | The KubeIntellect API server |
| `postgres` | ✓ core | Database for conversation memory and audit log |
| `prometheus` | `--profile monitoring` | Metrics collection |
| `loki` | `--profile monitoring` | Log aggregation |
| `grafana` | `--profile monitoring` | Dashboard UI — pre-wired with Prometheus + Loki |
| `langfuse` | `--profile tracing` | LLM call tracing UI |
Postgres is always included — you don't need to install or configure it separately.
Monitoring and Langfuse are optional. If your Kubernetes cluster already has Prometheus/Loki running, skip --profile monitoring and point PROMETHEUS_URL / LOKI_URL at your existing endpoints instead. If you don't need LLM call tracing, skip --profile tracing entirely.
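In practice the profile flags combine with the base stack like this (standard Docker Compose profile behavior):

```bash
# Core only: kubeintellect + postgres
docker compose up -d

# Core plus the bundled local monitoring stack
docker compose --profile monitoring up -d
```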
All configuration is read from the .env file — nothing is baked into the image.
## 1. Clone the repository
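The repository URL below is an assumption inferred from the image name (`ghcr.io/mskazemi/kubeintellect`); check the project page for the canonical location:

```bash
# Assumed repository URL; verify against the project's README
git clone https://github.com/mskazemi/KubeIntellect.git
cd KubeIntellect
```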
## 2. Configure
Open .env. The file is fully documented — you only need to fill in three things to get started:
```bash
# 1. LLM provider — choose one:
# OpenAI:
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
# Azure OpenAI:
# LLM_PROVIDER=azure
# AZURE_OPENAI_API_KEY=...
# AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/

# 2. Database password (Postgres is started automatically by Compose):
POSTGRES_PASSWORD=changeme  # use something stronger than this

# 3. An API key so kube-q can authenticate:
KUBEINTELLECT_ADMIN_KEYS=ki-admin-$(openssl rand -hex 10)
```
Kubeconfig: By default the container reads `~/.kube/config` from your host. To use a different file: `KUBECONFIG=/path/to/config docker compose up -d`

Tip: `.env` is already in `.gitignore` — it will never be committed.
## 3. Start
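The start command is the standard detached Compose invocation, the same one shown in the kubeconfig tip above:

```bash
docker compose up -d
```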
The first run pulls `ghcr.io/mskazemi/kubeintellect:latest` (~400 MB). Subsequent starts are instant.
Verify the server is healthy:
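A quick sketch of a health check; the port (8000) and endpoint path (/health) here are assumptions, so substitute the values from your `.env` and compose file:

```bash
# Assumed port and endpoint; adjust to your configuration
curl -fsS http://localhost:8000/health
```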
## 4. Install kube-q and connect
If `kq` is not found after install:
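A common cause is that the install directory is not on your `PATH`. The directory below is an assumption (many installers use `~/.local/bin`); adjust to wherever the binary actually landed:

```bash
# Assumes the installer placed kq in ~/.local/bin; adjust if yours differs
export PATH="$HOME/.local/bin:$PATH"
command -v kq
```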
## 5. Add monitoring (optional)
Option A — your cluster already has Prometheus/Loki:
Skip --profile monitoring. Just add the URLs to .env and restart:
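A sketch of the `.env` additions; the URLs are placeholders for wherever your Prometheus and Loki are reachable from the container (for example through an ingress or a port-forward):

```bash
# Placeholder endpoints; use your real, container-reachable URLs
PROMETHEUS_URL=https://prometheus.example.com
LOKI_URL=https://loki.example.com
```

Then restart with `docker compose up -d` to pick up the new values.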
Option B — spin up a local Prometheus/Loki/Grafana with Compose:
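This uses the bundled profile, with the service names listed in the table above:

```bash
docker compose --profile monitoring up -d
```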
Then add to .env:
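Assuming the Compose service names from the table, the services are reachable by name on the Compose network; 9090 and 3100 are the upstream Prometheus and Loki defaults, so verify them against the compose file:

```bash
# Compose-network service names; ports are the upstream defaults
PROMETHEUS_URL=http://prometheus:9090
LOKI_URL=http://loki:3100
```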
Restart to pick up the new values:
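Re-running `up` recreates containers whose environment changed; keep the profile flag so the monitoring services stay running:

```bash
docker compose --profile monitoring up -d
```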
Grafana is available at http://localhost:3000 — datasources are pre-configured.
## 6. Add Langfuse LLM tracing (optional)
Starts a self-hosted Langfuse instance for inspecting every LLM call the agent makes:
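This uses the `tracing` profile from the table above:

```bash
docker compose --profile tracing up -d
```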
Visit http://localhost:3001 → create an account → Settings → API Keys.
Add to .env:
```bash
LANGFUSE_ENABLED=true
LANGFUSE_HOST=http://localhost:3001
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
```
Restart:
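Keep the profile flag when restarting so the Langfuse container stays up alongside the recreated server:

```bash
docker compose --profile tracing up -d
```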
## Start everything at once
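Combining both profiles from the table is standard Compose syntax:

```bash
docker compose --profile monitoring --profile tracing up -d
```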
## Use a different kubeconfig
The host kubeconfig is mounted read-only at /home/app/.kube/config inside the container.
To use a different kubeconfig:
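The same override shown in the configuration tip applies here; Compose reads `KUBECONFIG` from your shell (presumably via a variable in the compose file) and mounts that file at `/home/app/.kube/config` instead:

```bash
KUBECONFIG=/path/to/config docker compose up -d
```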