Lab: Using kubectl-ai --mcp-server with Cursor to Inspect the atharva-ml Namespace

0. Lab Goals

By the end of this lab you’ll be able to:

  • Run kubectl-ai as an MCP server.

  • Wire it into Cursor via mcp.json.

  • Use Cursor chat + kubectl-ai tools to:

    • See what’s running in atharva-ml.
    • Run kubectl top via natural language.
    • Drill into resources (logs, describe, events) using the MCP tools.

Think of it as the first slice of an “AI SRE cockpit” for your Atharva cluster.


1. Prerequisites

You should already have:

  1. A working Kubernetes cluster with the Atharva workloads in namespace atharva-ml.

    • Quick sanity check:

      kubectl get ns
      kubectl get pods -n atharva-ml
  2. kubectl configured and pointing at that cluster.

  3. kubectl-ai installed (GoogleCloudPlatform/kubectl-ai).

    • Recommended (quick install):

      curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash
      kubectl-ai version
  4. An LLM API key (I’ll assume Gemini, since kubectl-ai defaults to it):

    export GEMINI_API_KEY="your_key_here"
  5. Cursor installed with MCP support (recent builds). Cursor reads MCP config from:

    • Global: ~/.cursor/mcp.json
    • Project: .cursor/mcp.json in the repo root
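
A quick way to sanity-check all five prerequisites in one pass (a minimal sketch, assuming a POSIX shell; the kubectl and kubectl-ai invocations come straight from the steps above):

kubectl config current-context       # pointing at the right cluster?
kubectl get pods -n atharva-ml       # Atharva workloads present?
kubectl-ai version                   # kubectl-ai on PATH?
test -n "$GEMINI_API_KEY" && echo "API key set" || echo "API key missing"
test -f ~/.cursor/mcp.json && echo "global MCP config exists" || echo "no MCP config yet"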

2. Step 1 – Verify kubectl-ai Against Your Cluster

Before involving Cursor, confirm kubectl-ai can talk to the cluster and LLM.

Run:

kubectl-ai --quiet "list all resources in atharva-ml namespace"

You should see kubectl-ai:

  • Print the kubectl command(s) it plans to run.
  • Execute them and show output from your cluster (pods/deployments/services, etc.).

If that works, you’re ready to expose it via MCP.


3. Step 2 – Run kubectl-ai as an MCP Server

kubectl-ai can run in MCP server mode, exposing its kubectl tools to external MCP clients such as Cursor.

For local dev with Cursor, we’ll use stdio transport (Cursor spawns the process).

You don’t start it manually; Cursor will, using the command and args you specify in mcp.json. The command we want Cursor to run is essentially:

kubectl-ai --mcp-server

We’ll encode that into Cursor config in the next step.
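
If you want to confirm the binary speaks MCP before involving Cursor, you can hand it a single JSON-RPC initialize message on stdin. This is a rough smoke test, assuming kubectl-ai implements the standard MCP stdio transport (newline-delimited JSON-RPC); the exact response shape may vary by version:

echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}' | kubectl-ai --mcp-server

A JSON reply naming the server and its capabilities means the MCP side is alive.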


4. Step 3 – Configure Cursor MCP to Use kubectl-ai

4.1 Create MCP config

Create (or edit) ~/.cursor/mcp.json:

mkdir -p ~/.cursor

cat > ~/.cursor/mcp.json << 'EOF'
{
  "mcpServers": {
    "kubectl-ai": {
      "type": "stdio",
      "command": "kubectl-ai",
      "args": [
        "--mcp-server"
      ]
    }
  }
}
EOF

Notes:

  • type: "stdio" matches how Cursor expects to talk to local MCP servers.
  • If kubectl-ai is not on PATH, set command to its absolute path, e.g. /opt/homebrew/bin/kubectl-ai.
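  • If Cursor is launched from the dock/launcher rather than a terminal, it may not inherit GEMINI_API_KEY from your shell. Recent Cursor builds accept a per-server env map; a sketch (treat the env block as an assumption and verify it against your Cursor version):

    {
      "mcpServers": {
        "kubectl-ai": {
          "type": "stdio",
          "command": "kubectl-ai",
          "args": ["--mcp-server"],
          "env": {
            "GEMINI_API_KEY": "your_key_here"
          }
        }
      }
    }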

Restart Cursor after editing mcp.json.

4.2 Verify that Cursor sees the MCP server

In Cursor:

  1. Open Settings → Cursor Settings → MCP and confirm kubectl-ai shows up as a configured server.

  2. Open the chat (Cmd+L on macOS).

  3. Type something like:

    list mcp tools

Cursor should show that it discovered tools from kubectl-ai (the exact tool names depend on how the server exposes kubectl operations, logs, and so on).


5. Step 4 – Basic Tasks via Cursor + kubectl-ai

From here on, everything is done from Cursor chat, using natural language. Watch for the “Run tool” / “Run MCP” button each time – that’s Cursor invoking kubectl-ai as an MCP tool.

Task 4.1 – “What’s running in atharva-ml namespace?”

In Cursor chat, prompt:

Using the kubectl-ai MCP tools, show me everything running in the atharva-ml namespace – pods, deployments, services, and HPAs. Summarize the overall state (ready/not ready, restarts, images).

Expect Cursor to:

  1. Decide it needs the Kubernetes tools from kubectl-ai.

  2. Offer to run tools that translate to something like:

    kubectl get all -n atharva-ml
    kubectl get hpa -n atharva-ml
  3. Return a structured answer summarizing what’s running.

Checkpoint: You can see all Atharva-related resources without typing kubectl manually.
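
If you want to double-check the summary fields (ready state, restarts, images) by hand, a standard custom-columns view surfaces the same data:

kubectl get pods -n atharva-ml -o custom-columns='NAME:.metadata.name,READY:.status.containerStatuses[*].ready,RESTARTS:.status.containerStatuses[*].restartCount,IMAGE:.spec.containers[*].image'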


Task 4.2 – Get kubectl top data for atharva-ml

Prompt:

Using kubectl-ai tools, run kubectl top to show CPU and memory usage for all pods in the atharva-ml namespace. Sort them by CPU usage and highlight the top 3 pods.

The MCP server should end up running something equivalent to:

kubectl top pods -n atharva-ml

Then the model will:

  • Parse the table.
  • Sort by CPU.
  • Respond with a short report of the heaviest pods.

Checkpoint: You have a natural-language interface to kubectl top via Cursor.
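
To verify the model’s sort order yourself, kubectl top can sort server-side:

kubectl top pods -n atharva-ml --sort-by=cpu

The first three rows should match the pods Cursor highlighted.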


Task 4.3 – Deep dive into one noisy pod

Prompt:

From the top 3 CPU-heavy pods in atharva-ml, pick one and:

  • Show its describe output (spec, status, conditions).
  • Fetch recent events.
  • Summarize why it might be using high CPU.

This should trigger a chain like:

kubectl describe pod <name> -n atharva-ml
kubectl get events -n atharva-ml --sort-by=.lastTimestamp

Then Cursor will synthesize:

  • Container image(s).
  • Resource requests/limits.
  • Recent restarts or throttling.
  • Any obvious cause (e.g., heavy traffic, a failing liveness probe, etc.).

Checkpoint: You’ve seen kubectl-ai do a mini RCA from inside Cursor without you hand-writing any commands.
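
The lab goals also mention logs; if the agent doesn’t fetch them on its own, the manual equivalent for the chosen pod is:

kubectl logs <name> -n atharva-ml --tail=100

(<name> is the same placeholder as in the describe command above.)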


6. Step 5 – Compare AI vs Direct kubectl

To build trust in the agent, do a quick manual validation.

  1. From your terminal, run:

    kubectl get all -n atharva-ml
    kubectl top pods -n atharva-ml
  2. Compare the outputs to what Cursor summarized.

You want learners to internalize:

  • kubectl-ai is not magic – it’s just planning and executing real kubectl commands under the hood.

7. Stretch Tasks (“More”)

You can add these as optional exercises in the lab:

  1. Filter specific workloads

    Show only the pods in atharva-ml whose name contains api. For each, show current CPU/memory from kubectl top and whether they have readiness probes defined.

  2. Check service-to-pod wiring

    For every Service in atharva-ml, check that its targetPort matches the container ports exposed by the backing pods. Flag any mismatch.

    (This pushes the MCP tools to combine kubectl get svc, kubectl get endpoints, and kubectl describe-style output; a manual JSONPath cross-check appears after this list.)

  3. Watch for spikes (manual loop)

    Run kubectl top pods -n atharva-ml every 30 seconds for the next 5 minutes and tell me if any pod’s CPU usage spikes more than 50% from its baseline.

    For now, learners can manually re-run the same prompt; in a later lab you can script this via Agno or an agent workflow.

  4. Read-only safety mode

    Reconfigure kubectl-ai (or your kubeconfig) to use a read-only ServiceAccount / context (RBAC limited to get, list, watch), following the usual read-only patterns for Kubernetes MCP servers; a sample Role appears after this list. Then:

    Verify that kubectl-ai in MCP mode can still inspect the cluster but fails when asked to modify resources.
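
For stretch task 2, a manual JSONPath cross-check of Service targetPorts against container ports (both are standard kubectl output options):

kubectl get svc -n atharva-ml -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.ports[*].targetPort}{"\n"}{end}'
kubectl get pods -n atharva-ml -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'

For stretch task 4, a minimal read-only Role and RoleBinding you could attach to a dedicated ServiceAccount. The names (readonly-sa, atharva-ml-readonly) are placeholders, not part of the lab:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: atharva-ml-readonly
  namespace: atharva-ml
rules:
  - apiGroups: ["", "apps", "autoscaling"]
    resources: ["pods", "pods/log", "services", "endpoints", "events", "deployments", "horizontalpodautoscalers"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: atharva-ml-readonly
  namespace: atharva-ml
subjects:
  - kind: ServiceAccount
    name: readonly-sa
    namespace: atharva-ml
roleRef:
  kind: Role
  name: atharva-ml-readonly
  apiGroup: rbac.authorization.k8s.io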


8. Clean-up & Notes

Nothing to tear down, beyond two optional steps:

  • Remove the MCP config by editing ~/.cursor/mcp.json and deleting the kubectl-ai block.

  • To “pause” kubectl-ai usage, unset your LLM API key:

    unset GEMINI_API_KEY
