John Lindquist (johnlindquist)
@johnlindquist
johnlindquist / blog-event-driven-knowledge-delivery.md
Created March 12, 2026 05:54
From Pull to Push — Event-Driven Knowledge Delivery for AI Agents

Documentation is a pull system. You go find what you need. A plugin is a push system. Knowledge finds you at the moment you need it. The difference is the same gap that separates polling an API every five seconds from subscribing to a webhook.


The Two Models of Knowledge

Every system for getting information to a consumer falls into one of two categories: pull or push. In a pull system, the consumer initiates the request. They know what they need, they know where it lives, and they go get it. In a push system, the producer initiates the delivery. The consumer declares what they care about, and relevant information arrives when it becomes available.
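The contrast can be sketched in a few lines of TypeScript. The class and method names below are illustrative only, not any real plugin API:

```typescript
type Note = { topic: string; body: string };

// Pull: the consumer initiates. It must know what to ask for, and when.
class KnowledgeStore {
  private notes: Note[] = [];
  publish(note: Note) {
    this.notes.push(note);
  }
  // The consumer queries on its own schedule (polling).
  query(topic: string): Note[] {
    return this.notes.filter((n) => n.topic === topic);
  }
}

// Push: the consumer declares interest once; delivery happens at publish time.
class KnowledgeBus {
  private subs = new Map<string, Array<(n: Note) => void>>();
  subscribe(topic: string, handler: (n: Note) => void) {
    const list = this.subs.get(topic) ?? [];
    list.push(handler);
    this.subs.set(topic, list);
  }
  publish(note: Note) {
    // Producer initiates: every matching subscriber is notified immediately.
    for (const h of this.subs.get(note.topic) ?? []) h(note);
  }
}
```

The asymmetry is the whole point: in the pull model the consumer pays the cost of knowing where knowledge lives; in the push model that cost moves to a one-time subscription.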

@johnlindquist
johnlindquist / vercel-plugin-lexical-summary.md
Created March 11, 2026 20:00
Vercel Plugin: Lexical Search & Intelligent Skill Matching — Executive Summary

Vercel Plugin: Lexical Search & Intelligent Skill Matching

Executive Summary

The Vercel Plugin for Claude Code now ships a hybrid prompt matching engine that combines exact phrase matching with full-text lexical search, synonym expansion, and suffix stemming. The result: skills are injected when the developer means something, not just when they say the exact magic words.
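A minimal sketch of what such a hybrid matcher might look like, assuming a toy synonym table and a naive suffix stemmer. The real engine's heuristics are certainly more sophisticated; everything here is an assumption for illustration:

```typescript
// Toy synonym table mapping variants to a canonical token (assumption).
const SYNONYMS: Record<string, string> = { deploy: "ship", host: "ship" };

// Naive suffix stemmer: strips a few common English suffixes.
function stem(word: string): string {
  return word.toLowerCase().replace(/(ing|ed|ment|s)$/, "");
}

// Tokenize, stem, and canonicalize via synonyms.
function normalize(text: string): string[] {
  return text
    .toLowerCase()
    .split(/\W+/)
    .filter(Boolean)
    .map(stem)
    .map((t) => SYNONYMS[t] ?? t);
}

function matches(
  prompt: string,
  skill: { phrase: string; keywords: string[] }
): boolean {
  // 1. Exact phrase match wins immediately.
  if (prompt.toLowerCase().includes(skill.phrase.toLowerCase())) return true;
  // 2. Otherwise require every keyword to appear after stemming + synonyms,
  //    so "deploying" can still trigger a skill keyed on "deploy".
  const tokens = new Set(normalize(prompt));
  return skill.keywords
    .map(stem)
    .map((k) => SYNONYMS[k] ?? k)
    .every((k) => tokens.has(k));
}
```

The two-stage shape is what delivers the "means something, not just says the magic words" behavior: exact phrases keep precision, and the normalized fallback adds recall.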


What Problem Does This Solve?

@johnlindquist
johnlindquist / workflow-blog-posts-v2.md
Created March 11, 2026 17:26
Vercel Workflow blog post drafts v2 — quality-checked against docs and editorial skill

Vercel Workflow Blog Post Drafts (v2)

Three variations targeting different reader motivations. All technically grounded in Workflow DevKit documentation and following Vercel blog editorial conventions.


Durable AI streaming with Vercel Workflow

Format: Deep-dive
Primary keyword: durable AI streaming

@johnlindquist
johnlindquist / workflow-blog-posts.md
Created March 11, 2026 17:19
Vercel Workflow blog post drafts — 3 variations (deep-dive, thought leadership, tutorial)

Vercel Workflow Blog Post Drafts

Three variations, each covering a different angle. All draw from the same product (Workflow DevKit) and the same proof points, but target different reader motivations.


Variation 1: Durable AI streaming that works locally

Format: Deep-dive (1200-1500 words)
Primary keyword: durable AI streaming

Claude Code → Cursor Hook Schema Mapping

Key differences between Claude Code and Cursor hook formats:

| Concept | Claude Code | Cursor |
| --- | --- | --- |
| Event name | hook_event_name | hook_event_name (same) |
| Session ID | session_id | conversation_id |
| Working dir | cwd + CLAUDE_PROJECT_ROOT env | workspace_roots[] + CURSOR_PROJECT_DIR env |
| Tool name | tool_name | tool_name (same for preToolUse/postToolUse) |
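Based on that mapping, a converter might look like the following. The payload shapes are inferred from the table and should be checked against each tool's actual hook documentation:

```typescript
// Inferred shapes; treat both interfaces as assumptions, not either tool's spec.
interface ClaudeHookPayload {
  hook_event_name: string;
  session_id: string;
  cwd: string;
  tool_name?: string;
}

interface CursorHookPayload {
  hook_event_name: string;
  conversation_id: string;
  workspace_roots: string[];
  tool_name?: string;
}

function toCursor(p: ClaudeHookPayload): CursorHookPayload {
  return {
    hook_event_name: p.hook_event_name, // same name in both formats
    conversation_id: p.session_id,      // session_id -> conversation_id
    workspace_roots: [p.cwd],           // single cwd -> array of roots
    tool_name: p.tool_name,             // same for preToolUse/postToolUse
  };
}
```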
@johnlindquist
johnlindquist / vercel-sandbox-vcpu-minimum.md
Last active March 4, 2026 22:03
Vercel Sandbox: vcpus=1 rejected by API despite docs showing it as valid

Vercel Sandbox: vcpus: 1 rejected by API (docs show it as valid)

Summary

The Vercel Sandbox API rejects resources: { vcpus: 1 } with a 400 error, but multiple documentation pages show or imply 1 vCPU as a valid configuration. The actual minimum is 2 vCPUs (even numbers only).

This matters for I/O-bound workloads. We measured CPU utilization inside OpenClaw sandboxes over a 2-hour live test: <2% CPU even at 162 msgs/hr sustained. A 1 vCPU / 2 GB tier would halve memory costs, which are 97-99% of total sandbox cost for this workload.
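A client-side guard reflecting the observed constraint (minimum 2 vCPUs, even values only) might look like this. normalizeVcpus is a made-up helper for illustration, not part of the Sandbox SDK:

```typescript
// Round a requested vCPU count up to the smallest value the API currently
// accepts: an even number, with a floor of 2 (observed behavior, not documented).
function normalizeVcpus(requested: number): number {
  const even = Math.ceil(requested / 2) * 2; // round up to nearest even
  return Math.max(2, even);
}
```

Until the API accepts odd or sub-2 values, a guard like this turns a guaranteed 400 into the closest valid request.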


@johnlindquist
johnlindquist / oidc-writeup.md
Last active March 4, 2026 22:36
Vercel OIDC for AI Gateway: Zero-Config Authentication Writeup

Vercel OIDC for AI Gateway: Zero-Config Authentication

Problem

OpenClaw sandboxes on Vercel needed a manually configured AI_GATEWAY_API_KEY environment variable to authenticate with Vercel's AI Gateway for LLM inference. This was a deployment friction point — every new deployment or team member needed the key configured.

Solution

Replaced the static API key with Vercel's OIDC (OpenID Connect) token federation. The dashboard's Vercel Functions automatically receive an OIDC JWT that proves they're running on Vercel infrastructure. AI Gateway accepts this JWT as a bearer token — no static key needed.
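The switch amounts to sending the OIDC JWT where the static key used to go. A sketch, assuming a small helper that builds the request headers; the token retrieval call and gateway URL in the comments are placeholders, so consult the current Vercel docs for the exact API:

```typescript
// Build AI Gateway request headers from an OIDC token instead of a static key.
// The gateway accepts the JWT as a standard bearer token.
function gatewayHeaders(oidcToken: string): Record<string, string> {
  return {
    Authorization: `Bearer ${oidcToken}`,
    "Content-Type": "application/json",
  };
}

// Illustrative usage inside a Vercel Function (names are placeholders):
// const token = await getVercelOidcToken();
// await fetch(AI_GATEWAY_URL, {
//   method: "POST",
//   headers: gatewayHeaders(token),
//   body: JSON.stringify(payload),
// });
```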

@johnlindquist
johnlindquist / COMPARISON-GRID.md
Created March 3, 2026 00:47
Open Plugin - Plugin Feature Comparison Grid (8 tools + Open Plugin)

Plugin Feature Comparison Grid

Generated: 2026-03-02
Source: GPT-5.2 Pro (via Oracle) + web research (March 2026)
Tools compared: Claude Code, Cursor, Codex CLI, OpenCode, Gemini CLI, Windsurf, VS Code / Copilot, Aider

Legend: ✅ full support | 🟡 partial/limited | ❌ no support


@johnlindquist
johnlindquist / REVIEW.md
Created March 3, 2026 00:47
Open Plugin - Full Project Review (GPT-5.2 Pro via Oracle)

Open Plugin Project Review

Reviewed: 2026-03-02
Reviewer: GPT-5.2 Pro (via Oracle) + Claude Opus 4.6
Scope: Full project review against Claude Code's plugin system for cross-harness viability


1) Gap Analysis vs Claude Code

@johnlindquist
johnlindquist / v0-sandbox-gap.md
Created February 28, 2026 01:03
v0 Platform API: File uploads don't get full-stack sandbox previews

The Problem

When publishing Next.js apps via the v0 Platform API using chats.init({ type: "files" }), the preview (demo-*.vusercontent.net) does not execute API routes, Server Actions, or any server-side code. Every request — including POST /api/my-route — returns the page HTML instead.

This makes it impossible to build interactive demos that use workflows, database connections, or any backend logic when publishing via the Platform API.

The same app connected via chats.init({ type: "repo" }) gets a real Vercel Sandbox where API routes work, workflows execute, and server-side code runs as expected.
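The practical workaround falls out of that last observation: choose the init type based on whether the app needs server-side execution. A trivial sketch, where the predicate name is invented for illustration:

```typescript
// The two init modes described above for the v0 Platform API.
type InitType = "files" | "repo";

// File uploads only get static previews: API routes, Server Actions, and
// workflows do not execute. Repo-connected chats get a real Vercel Sandbox.
function chooseInitType(opts: { needsServerCode: boolean }): InitType {
  return opts.needsServerCode ? "repo" : "files";
}
```

Until file-upload previews gain sandbox execution, routing anything with backend logic through the repo path is the only way to get a working demo.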