Energy and Carbon Tracking for OpenCode

Research Document: Instrumenting OpenCode with Energy/Carbon Data

Date: January 2026
Basis: Research analysis of llm-greenpt, llm-neuralwatt, and the OpenCode codebase


Executive Summary

This document analyzes the feasibility of adding energy consumption and carbon emissions tracking to OpenCode, similar to how llm-greenpt and llm-neuralwatt plugins instrument Simon Willison's llm CLI tool.

Key Finding: A plugin-based approach is partially feasible but has significant limitations. Full energy tracking would require one of the following:

  1. A plugin that intercepts headers and captures extended response data (limited)
  2. Modifications to OpenCode's core provider system (comprehensive but invasive)
  3. Using a proxy/gateway that adds energy data to responses (external solution)

Table of Contents

  1. How llm-greenpt and llm-neuralwatt Work
  2. OpenCode Architecture Analysis
  3. Plugin System Capabilities
  4. Feasibility Assessment
  5. Recommended Approaches
  6. Implementation Roadmap
  7. Appendix: Code Examples

How llm-greenpt and llm-neuralwatt Work

Core Mechanism

Both plugins work by subclassing the OpenAI client and intercepting SSE (Server-Sent Events) streams to capture energy data that providers send alongside model responses.

GreenPT Approach

import json

# SSEDecoder and ServerSentEvent come from the OpenAI Python client's streaming module
from openai._streaming import SSEDecoder, ServerSentEvent


class ImpactCapturingSSEDecoder(SSEDecoder):
    """Captures GreenPT impact data from the final SSE data event."""
    def decode(self, line: str) -> ServerSentEvent | None:
        if line.startswith("data: ") and line != "data: [DONE]":
            data = json.loads(line[6:])
            if "impact" in data:
                self.impact_data = data["impact"]
        return super().decode(line)

GreenPT sends impact data in a standard SSE data: field:

{
  "impact": {
    "version": "20250922",
    "inferenceTime": {"total": 156, "unit": "ms"},
    "energy": {"total": 40433, "unit": "Wms"},
    "emissions": {"total": 1, "unit": "ugCO2e"}
  }
}
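
The units in this payload are compact: assuming "Wms" denotes watt-milliseconds (i.e. millijoules) and "ugCO2e" micrograms of CO2-equivalent, the example above amounts to roughly 40.4 J (about 1.1 × 10⁻⁵ kWh) and 10⁻⁶ gCO2e. A small normalising helper, sketched here against the example payload rather than any published GreenPT schema, makes that explicit:

// Hypothetical normaliser for the impact payload shown above.
// Assumes energy.unit === "Wms" (watt-milliseconds, i.e. millijoules)
// and emissions.unit === "ugCO2e" (micrograms of CO2-equivalent).
interface GreenPTImpact {
  energy: { total: number; unit: string }
  emissions: { total: number; unit: string }
}

function normaliseImpact(impact: GreenPTImpact) {
  const joules = impact.energy.total / 1000      // 40433 Wms -> ~40.4 J
  const kwh = joules / 3_600_000                 // ~1.12e-5 kWh
  const gramsCo2e = impact.emissions.total / 1e6 // 1 ugCO2e -> 1e-6 g
  return { joules, kwh, gramsCo2e }
}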

Neuralwatt Approach

# Uses the same json / SSEDecoder / ServerSentEvent imports as the GreenPT example above
class EnergyCapturingSSEDecoder(SSEDecoder):
    """Captures Neuralwatt energy data from SSE comments."""
    def decode(self, line: str) -> ServerSentEvent | None:
        if line.startswith(": energy "):
            self.energy_data = json.loads(line[9:])
            return None
        return super().decode(line)

Neuralwatt sends energy data as an SSE comment (: energy {...}):

{
  "energy_joules": 0.5,
  "energy_kwh": 0.000000139,
  "avg_power_watts": 150,
  "duration_seconds": 0.003,
  "attribution_method": "direct",
  "attribution_ratio": 1.0
}

Key Requirements for This Approach

  1. Custom HTTP client that can intercept raw SSE streams (see the fetch-wrapper sketch after this list)
  2. Access to response storage to persist the captured data
  3. Provider-specific handling (each provider formats energy data differently)
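
OpenCode is TypeScript on top of fetch rather than Python on top of the OpenAI SDK (see the next section), so the nearest equivalent of the SSEDecoder subclasses above would be a fetch wrapper that tees the response body and scans raw SSE lines on the side. The following is a minimal, illustrative sketch assuming a Neuralwatt-style ": energy {...}" comment; it is not taken from either plugin or from OpenCode:

// Illustrative fetch wrapper: tee the SSE body so the caller's stream stays
// untouched while a side branch scans for ": energy {...}" comment lines.
type EnergyCallback = (data: unknown) => void

function withEnergyCapture(baseFetch: typeof fetch, onEnergy: EnergyCallback): typeof fetch {
  return (async (input: any, init?: any) => {
    const response = await baseFetch(input, init)
    if (!response.body) return response

    const [forCaller, forScanner] = response.body.tee()

    // Scan the side branch asynchronously; errors are deliberately swallowed
    // so energy capture can never break the actual model response.
    ;(async () => {
      const reader = forScanner.getReader()
      const decoder = new TextDecoder()
      let buffer = ""
      for (;;) {
        const { done, value } = await reader.read()
        if (done) break
        buffer += decoder.decode(value, { stream: true })
        let newline: number
        while ((newline = buffer.indexOf("\n")) >= 0) {
          const line = buffer.slice(0, newline).trimEnd()
          buffer = buffer.slice(newline + 1)
          if (line.startsWith(": energy ")) onEnergy(JSON.parse(line.slice(9)))
        }
      }
    })().catch(() => {})

    return new Response(forCaller, {
      status: response.status,
      statusText: response.statusText,
      headers: response.headers,
    })
  }) as typeof fetch
}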

OpenCode Architecture Analysis

Technology Stack

  • Language: TypeScript/JavaScript (Bun runtime)
  • AI SDK: Vercel AI SDK (ai package) with provider-specific SDKs
  • HTTP Layer: Fetch API (wrapped by AI SDK)
  • Storage: Custom file-based storage system
  • UI: Solid.js web app + TUI (terminal)

LLM Request Flow

┌─────────────────────────────────────────────────────────────────┐
│                        OpenCode Architecture                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                   │
│  ┌──────────┐    ┌──────────┐    ┌──────────┐    ┌──────────┐   │
│  │  User    │───▶│  Session │───▶│   LLM    │───▶│ Provider │   │
│  │  Input   │    │  Prompt  │    │  Module  │    │  Module  │   │
│  └──────────┘    └──────────┘    └──────────┘    └──────────┘   │
│                                        │               │          │
│                                        │               ▼          │
│                                        │       ┌──────────────┐   │
│                                        │       │  AI SDK      │   │
│                                        │       │  (Vercel)    │   │
│                                        │       └──────┬───────┘   │
│                                        │              │           │
│                                        ▼              ▼           │
│                                 ┌─────────────────────────┐       │
│                                 │   streamText()          │       │
│                                 │   - Handles SSE         │       │
│                                 │   - Returns usage data  │       │
│                                 └───────────┬─────────────┘       │
│                                             │                     │
│  ┌──────────┐    ┌──────────┐              │                     │
│  │  Plugin  │◀───│ Processor│◀─────────────┘                     │
│  │  Hooks   │    │ (finish- │    Usage: {inputTokens,            │
│  │          │    │   step)  │     outputTokens, ...}             │
│  └──────────┘    └──────────┘                                    │
│                                                                   │
└─────────────────────────────────────────────────────────────────┘

Key Code Locations

Component             Path                                           Purpose
Provider System       packages/opencode/src/provider/provider.ts    Creates AI SDK clients
LLM Streaming         packages/opencode/src/session/llm.ts          Calls streamText()
Response Processing   packages/opencode/src/session/processor.ts    Handles stream events
Usage Calculation     packages/opencode/src/session/index.ts        getUsage() function
Plugin System         packages/opencode/src/plugin/index.ts         Hook execution
Plugin Types          packages/plugin/src/index.ts                  Hook definitions

How Usage Data Flows

  1. streamText() returns a stream with finish-step events
  2. finish-step contains usage (tokens) and providerMetadata
  3. Session.getUsage() calculates cost from usage + model pricing
  4. Data stored in StepFinishPart and AssistantMessage

// From processor.ts
case "finish-step":
  const usage = Session.getUsage({
    model: input.model,
    usage: value.usage,           // LanguageModelUsage
    metadata: value.providerMetadata,  // Provider-specific data
  })
  // ...
  await Session.updatePart({
    type: "step-finish",
    tokens: usage.tokens,
    cost: usage.cost,
  })

Plugin System Capabilities

Available Hooks

OpenCode's plugin system provides these hooks (from packages/plugin/src/index.ts):

export interface Hooks {
  event?: (input: { event: Event }) => Promise<void>
  config?: (input: Config) => Promise<void>
  tool?: { [key: string]: ToolDefinition }
  auth?: AuthHook
  
  // Chat lifecycle hooks
  "chat.message"?: (input, output) => Promise<void>
  "chat.params"?: (input, output) => Promise<void>
  "chat.headers"?: (input, output) => Promise<void>
  
  // Tool hooks
  "tool.execute.before"?: (input, output) => Promise<void>
  "tool.execute.after"?: (input, output) => Promise<void>
  
  // Experimental hooks
  "experimental.chat.messages.transform"?: (input, output) => Promise<void>
  "experimental.chat.system.transform"?: (input, output) => Promise<void>
  "experimental.session.compacting"?: (input, output) => Promise<void>
  "experimental.text.complete"?: (input, output) => Promise<void>
}

What Plugins CAN Do

  • Modify request headers via the chat.headers hook
  • Modify request parameters via the chat.params hook
  • Subscribe to events (session.idle, message.updated, etc.)
  • Add custom tools
  • Log data via the SDK client
  • Execute shell commands

What Plugins CANNOT Do

  • Intercept raw HTTP responses (no access to the SSE stream)
  • Modify the AI SDK client (created internally in the Provider module)
  • Add custom fields to response storage (the schema is fixed)
  • Access providerMetadata from hooks (not exposed)
  • Hook into the finish-step event directly


Feasibility Assessment

Approach 1: Pure Plugin (Limited)

Feasibility: ⚠️ Partial

A plugin can:

  • Add headers to route requests through an energy-tracking proxy
  • Subscribe to message.updated events to log token usage
  • Calculate estimated energy based on public model efficiency data

A plugin cannot:

  • Capture provider-specific energy data from SSE streams
  • Store custom energy fields in OpenCode's database

// What's possible: Estimated energy tracking
export const EnergyEstimatorPlugin: Plugin = async (ctx) => {
  return {
    event: async ({ event }) => {
      if (event.type === "message.updated" && event.properties.info.role === "assistant") {
        const msg = event.properties.info
        const estimatedEnergy = estimateEnergy(msg.tokens, msg.modelID)
        await ctx.client.app.log({
          service: "energy-estimator",
          level: "info", 
          message: `Estimated energy: ${estimatedEnergy.kWh} kWh`,
          extra: { sessionID: msg.sessionID, tokens: msg.tokens }
        })
      }
    }
  }
}

Approach 2: Proxy Gateway (External)

Feasibility: ✅ Good

Use an external proxy (like Helicone, LiteLLM, or a custom gateway) that:

  1. Intercepts requests and responses
  2. Adds energy measurement headers
  3. Logs to an external database

OpenCode already supports this pattern:

  • opencode-helicone-session plugin injects Helicone headers
  • Cloudflare AI Gateway support is built-in

// Example: Route through energy-tracking proxy
export const EnergyProxyPlugin: Plugin = async (ctx) => {
  return {
    "chat.headers": async (input, output) => {
      output.headers["X-Energy-Tracking"] = "enabled"
      output.headers["X-Session-ID"] = input.sessionID
    }
  }
}

Approach 3: Core Modification (Comprehensive)

Feasibility: ✅ Full but Invasive

Modify OpenCode's core to:

  1. Create custom AI SDK clients that capture energy data
  2. Extend StepFinishPart schema to include energy fields
  3. Add energy data to UI components

This requires changes to:

  • packages/opencode/src/provider/provider.ts - Custom client creation
  • packages/opencode/src/session/message-v2.ts - Schema extension
  • packages/opencode/src/session/processor.ts - Data capture
  • packages/app/src/components/session-context-usage.tsx - UI display

Recommended Approaches

Option A: Estimation Plugin (Simplest)

Effort: Low (1-2 days)
Accuracy: Approximate
Maintenance: Low

Create a plugin that estimates energy based on:

  • Token counts (already tracked)
  • Model-specific energy coefficients (from research papers)
  • Grid carbon intensity (from APIs like ElectricityMaps)

Estimated Energy (kWh) = tokens × energy_per_token × PUE
Estimated Carbon (gCO2e) = energy × grid_carbon_intensity
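
As a worked example with purely illustrative figures (1,000,000 output tokens, 0.0004 kWh per million tokens, a PUE of 1.2, and a grid intensity of 400 gCO2e/kWh):

Estimated Energy = 1,000,000 × (0.0004 / 1,000,000) × 1.2 = 0.00048 kWh
Estimated Carbon = 0.00048 kWh × 400 gCO2e/kWh = 0.192 gCO2e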

Option B: Proxy Integration (Recommended)

Effort: Medium (1 week)
Accuracy: Measured (if proxy supports it)
Maintenance: Medium

  1. Set up an energy-tracking proxy (e.g., extend LiteLLM or build custom; a minimal sketch follows below)
  2. Create OpenCode plugin to route requests through proxy
  3. Build dashboard to visualize energy data from proxy logs
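
To make step 1 concrete, here is a deliberately minimal pass-through proxy sketch (Bun, matching OpenCode's runtime). It is not LiteLLM and not a real measurement: it forwards requests upstream, times them, and converts wall-clock time into a rough energy figure from an assumed average power draw. The upstream URL, port, session header, and the 300 W figure are all illustrative assumptions:

// Minimal pass-through proxy sketch for Option B (Bun runtime).
const UPSTREAM = process.env.UPSTREAM_BASE_URL ?? "https://api.openai.com"
const ASSUMED_AVG_POWER_WATTS = 300 // illustrative; not a measurement

Bun.serve({
  port: 8787,
  async fetch(req) {
    const url = new URL(req.url)
    const headers = new Headers(req.headers)
    headers.delete("host") // let fetch set the upstream host header

    const body = ["GET", "HEAD"].includes(req.method) ? undefined : await req.arrayBuffer()
    const started = performance.now()
    const upstream = await fetch(new URL(url.pathname + url.search, UPSTREAM), {
      method: req.method,
      headers,
      body,
    })

    // Buffering keeps the sketch short but defeats streaming;
    // a real proxy would tee the stream instead.
    const responseBody = await upstream.arrayBuffer()
    const seconds = (performance.now() - started) / 1000
    const joules = ASSUMED_AVG_POWER_WATTS * seconds

    console.log(JSON.stringify({
      session: req.headers.get("X-OpenCode-Session"), // set by the routing plugin (see Appendix B)
      path: url.pathname,
      duration_s: seconds,
      energy_joules: joules,
      energy_kwh: joules / 3_600_000,
    }))

    const outHeaders = new Headers(upstream.headers)
    outHeaders.delete("content-encoding") // body is already decoded by fetch
    outHeaders.delete("content-length")
    return new Response(responseBody, { status: upstream.status, headers: outHeaders })
  },
})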

Option C: Core Enhancement (Most Complete)

Effort: High (2-3 weeks)
Accuracy: Measured (provider-dependent)
Maintenance: High (must track OpenCode updates)

  1. Fork OpenCode
  2. Add energy capture to provider system
  3. Extend data schema
  4. Add UI components
  5. Submit as PR or maintain fork

Implementation Roadmap

Phase 1: Estimation Plugin (MVP)

Week 1:
├── Day 1-2: Research energy coefficients per model
├── Day 3-4: Build estimation plugin
├── Day 5: Add logging to external file/service
└── Deliverable: opencode-energy-estimator plugin on npm

Phase 2: Proxy Integration

Week 2-3:
├── Set up LiteLLM or custom proxy with energy tracking
├── Create opencode-energy-proxy plugin
├── Build simple dashboard for energy visualization
└── Deliverable: End-to-end energy tracking solution

Phase 3: Core Enhancement (Optional)

Week 4-6:
├── Implement custom SSE decoder in provider system
├── Extend message schema for energy data
├── Add UI components for energy display
├── Submit PR to OpenCode
└── Deliverable: Native energy tracking in OpenCode

Appendix: Code Examples

A. Energy Estimation Plugin

// opencode-energy-estimator/src/index.ts
import type { Plugin } from "@opencode-ai/plugin"

// Energy coefficients (kWh per 1M tokens) - example values
const MODEL_ENERGY: Record<string, { input: number; output: number }> = {
  "gpt-4": { input: 0.0003, output: 0.0006 },
  "gpt-4-turbo": { input: 0.00025, output: 0.0005 },
  "claude-3-opus": { input: 0.0004, output: 0.0008 },
  "claude-3-sonnet": { input: 0.0002, output: 0.0004 },
  "claude-3-haiku": { input: 0.00005, output: 0.0001 },
  default: { input: 0.0002, output: 0.0004 },
}

// Average grid carbon intensity (gCO2e/kWh)
const DEFAULT_CARBON_INTENSITY = 400

export const EnergyEstimatorPlugin: Plugin = async (ctx) => {
  const sessionEnergy: Record<string, { totalKwh: number; totalCo2g: number }> = {}

  return {
    event: async ({ event }) => {
      if (event.type !== "message.updated") return
      const msg = event.properties.info
      if (msg.role !== "assistant") return

      const modelKey = Object.keys(MODEL_ENERGY).find(k => msg.modelID.includes(k)) || "default"
      const coefficients = MODEL_ENERGY[modelKey]

      const inputEnergy = (msg.tokens.input / 1_000_000) * coefficients.input
      const outputEnergy = (msg.tokens.output / 1_000_000) * coefficients.output
      const totalKwh = inputEnergy + outputEnergy
      const totalCo2g = totalKwh * DEFAULT_CARBON_INTENSITY

      // Accumulate for session
      if (!sessionEnergy[msg.sessionID]) {
        sessionEnergy[msg.sessionID] = { totalKwh: 0, totalCo2g: 0 }
      }
      sessionEnergy[msg.sessionID].totalKwh += totalKwh
      sessionEnergy[msg.sessionID].totalCo2g += totalCo2g

      await ctx.client.app.log({
        service: "energy-estimator",
        level: "info",
        message: `Energy: ${(totalKwh * 1000).toFixed(4)} Wh, CO2: ${totalCo2g.toFixed(4)} g`,
        extra: {
          sessionID: msg.sessionID,
          messageID: msg.id,
          model: msg.modelID,
          tokens: msg.tokens,
          energy: { kwh: totalKwh, wh: totalKwh * 1000 },
          carbon: { gCo2e: totalCo2g },
          sessionTotals: sessionEnergy[msg.sessionID],
        },
      })
    },
  }
}
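
As a rough sanity check with the illustrative coefficients above: an assistant message on claude-3-sonnet with 10,000 input and 2,000 output tokens works out to (10,000 / 1,000,000) × 0.0002 + (2,000 / 1,000,000) × 0.0004 = 0.0000028 kWh (about 2.8 mWh), and 0.0000028 × 400 ≈ 0.0011 gCO2e.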

B. Proxy Header Plugin

// opencode-energy-proxy/src/index.ts
import type { Plugin } from "@opencode-ai/plugin"

export const EnergyProxyPlugin: Plugin = async (ctx) => {
  // Base URL of an energy-tracking proxy; a fuller implementation would
  // use this when rewriting provider base URLs (see the config hook below)
  const proxyUrl = process.env.ENERGY_PROXY_URL
  
  return {
    "chat.headers": async (input, output) => {
      // Add tracking headers for proxy
      output.headers["X-OpenCode-Session"] = input.sessionID
      output.headers["X-OpenCode-Project"] = ctx.project.id
      output.headers["X-Energy-Tracking"] = "enabled"
    },
    
    config: async (config) => {
      // Could modify provider URLs to route through proxy
      // Note: This would require modifying config before it's used
    },
  }
}

C. Core Modification Sketch

If modifying OpenCode core, the key changes would be:

// packages/opencode/src/provider/provider.ts
// Add custom fetch wrapper to capture energy data

// Hypothetical store keyed by request ID; real code would need a way to
// correlate each HTTP request with the streamText() call that made it.
const lastEnergyData = new Map<string, unknown>()

options["fetch"] = async (input: any, init?: BunFetchRequestInit) => {
  const response = await fetch(input, init)

  // Capture energy headers if present
  const energyHeader = response.headers.get("X-Energy-Data")
  if (energyHeader) {
    // Store for later retrieval (requestId is an illustrative placeholder)
    lastEnergyData.set(requestId, JSON.parse(energyHeader))
  }

  return response
}

// packages/opencode/src/session/message-v2.ts
// Extend StepFinishPart schema

export const StepFinishPart = PartBase.extend({
  type: z.literal("step-finish"),
  // ... existing fields ...
  energy: z.object({
    kwh: z.number().optional(),
    joules: z.number().optional(),
    source: z.enum(["measured", "estimated"]).optional(),
  }).optional(),
  carbon: z.object({
    gCo2e: z.number().optional(),
    gridIntensity: z.number().optional(),
  }).optional(),
})
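
To close the loop, the finish-step handler shown earlier would attach whatever the fetch wrapper captured, falling back to an estimate. This is a sketch in the same excerpt style as the snippets above; lastEnergyData, requestId, and estimateKwh are hypothetical names, and the real change would need a way to correlate the HTTP request with the stream being processed:

// packages/opencode/src/session/processor.ts (sketch)
case "finish-step":
  const usage = Session.getUsage({
    model: input.model,
    usage: value.usage,
    metadata: value.providerMetadata,
  })
  // Prefer measured data captured by the fetch wrapper; otherwise estimate.
  const captured = lastEnergyData.get(requestId) as { joules?: number } | undefined
  await Session.updatePart({
    type: "step-finish",
    tokens: usage.tokens,
    cost: usage.cost,
    energy: captured
      ? { joules: captured.joules, source: "measured" }
      : { kwh: estimateKwh(usage.tokens, input.model), source: "estimated" },
  })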

Conclusion

Adding energy and carbon tracking to OpenCode is feasible through multiple approaches with different trade-offs:

Approach             Accuracy     Effort   Maintainability
Estimation Plugin    Low-Medium   Low      High
Proxy Integration    High         Medium   Medium
Core Modification    High         High     Low

Recommendation: Start with an estimation plugin for immediate value, then evolve to a proxy-based solution for accurate measurements. Core modifications should only be pursued if the feature gains traction and could be accepted upstream.

