@mpalpha
Last active January 29, 2026 15:56
Memory-Augmented Reasoning MCP Server v3.3.6 - Protocol-enforced learning system with fixed FTS5 search (OR logic + BM25 ranking). Captures and retrieves universal working knowledge through three-gate enforcement (TEACH → LEARN → REASON).

Memory-Augmented Reasoning MCP Server

A concise MCP server for capturing and retrieving universal working knowledge. Records meta-patterns about effective work with AI agents, tools, protocols, and processes - NOT task-specific code patterns.

Author: Jason Lusk ([email protected])
License: MIT
Gist URL: https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86

Features

  • Hybrid Storage: User-level + optional project-level SQLite databases
  • FTS5 Full-Text Search: Fast, ranked search across all experience fields
  • Automatic Deduplication: No duplicate patterns across databases
  • Zero Configuration: Works out-of-the-box with sensible defaults
  • Export Capabilities: JSON and Markdown export formats
  • Informed Reasoning: Multi-phase reasoning tool with context-aware guidance

What's New

v3.0.0 (Three-Gate Enforcement & Installation System) - January 2026

MAJOR UPDATE: Complete v3.0 system with three-gate enforcement, installation automation, and enhanced protocol workflow.

Three-Gate Enforcement (TEACH → LEARN → REASON):

  • TEACH Gate: Blocks new work if previous sessions have unrecorded experiences
    • Ensures institutional memory by requiring record_experience before starting new tasks
    • 24-hour TTL on recording requirement tokens
    • Proactive cleanup of expired tokens
  • LEARN Gate: Blocks file operations until search_experiences called
    • Session-specific tokens: .protocol-search-completed-{sessionId}
    • 5-minute TTL for search completion tokens
    • Prevents agents from skipping past experience searches
  • REASON Gate: Blocks file operations until informed_reasoning called
    • REQUEST-SCOPED tokens: Each user message requires new protocol flow
    • Session-specific tokens: .protocol-session-{sessionId}.json
    • 60-minute max TTL (tokens are per-request, not session-wide)
    • Forces thoughtful analysis before modifications

Actionable Error Messages:

  • Parameter Validation: All tools validate required parameters upfront
  • Specific Error Messages: No generic "Internal error" - always explains what's wrong
  • Copy-Paste Examples: Every error includes correct usage example
  • Phase-Specific Guidance: informed_reasoning validates per-phase requirements
  • Tool validations:
    • record_experience: Validates all 6 required fields (type, domain, situation, approach, outcome, reasoning)
    • informed_reasoning analyze: Validates problem parameter exists and is non-empty
    • informed_reasoning integrate: Validates problem parameter
    • informed_reasoning record: Validates problem and finalConclusion parameters
    • informed_reasoning reason: Validates thought parameter (existing from v2.1.2)
  • Error format: (1) What's wrong, (2) What's required, (3) Example with correct usage
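The three-part format could be assembled by a helper like the hypothetical one below; the function name and wording are illustrative, not the server's actual strings.

```javascript
// Hypothetical builder for the (1) what's wrong / (2) what's required /
// (3) correct-usage-example error format described above.
function missingParamError(tool, param, example) {
  return [
    `Error in ${tool}: missing required parameter "${param}".`,   // (1) what's wrong
    `Required: "${param}" must be provided and non-empty.`,       // (2) what's required
    `Example: ${example}`                                         // (3) correct usage
  ].join('\n');
}
```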

Installation:

  • AI-agent guided installation via README documentation
  • Analysis-driven approach: Agents analyze environment before installing
  • No backward compatibility needed (clean v3 implementation)
  • Comprehensive test coverage validates all functionality

Enhanced Token System:

  • Session-Specific Naming: All tokens include session ID for concurrency support
  • Persistent Session File: .protocol-current-session-id (8-hour expiry)
  • Proactive Cleanup: user-prompt-submit hook removes tokens >24 hours old
  • Loop Prevention: stop hook tracks reminders to prevent infinite loops
  • Smart Task Detection: Distinguishes tasks from questions (no spam on "What is X?")
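The actual detection rules are not published here; the sketch below is one plausible heuristic for distinguishing tasks from questions, and both regexes are assumptions.

```javascript
// Hypothetical task-vs-question heuristic: pure questions ("What is X?")
// are informational; messages containing imperative verbs are tasks.
const QUESTION = /^\s*(what|who|when|where|why|how|is|are|does|do|can)\b.*\?\s*$/i;
const TASK_VERBS = /\b(add|fix|update|create|write|edit|refactor|implement|delete|rename)\b/i;

function isTask(message) {
  // A question with no task verb is informational: no protocol spam
  if (QUESTION.test(message) && !TASK_VERBS.test(message)) return false;
  return TASK_VERBS.test(message);
}
```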

Hook Updates (All 6 Hooks):

  • user-prompt-submit.cjs: Smart task detection + proactive cleanup
  • pre-tool-use.cjs: Three-gate enforcement with session consistency
  • post-tool-use.cjs: 24-hour TTL recording tokens + immediate reminders
  • stop.cjs (NEW): TEACH phase reminders with loop prevention
  • session-start.cjs (NEW): Minimal setup, no prompt injection
  • pre-compact-handoff.cjs (ENHANCED): Last chance recording reminders

Session ID Consistency:

  • 3-level priority chain: env var → persistent file → generate new
  • getSessionId() function unified across all hooks and MCP server
  • Session persistence across process restarts
  • 8-hour session file expiry for security

Developer Experience:

  • Simple Setup: Follow AI-agent installation guide or manual steps
  • Clear Documentation: Every step explained with examples
  • Idempotent Configuration: Safe to reconfigure multiple times

Breaking Changes from v2:

  • Token naming changed (session-specific: .protocol-*-{sessionId})
  • Three gates instead of two (added TEACH gate)
  • Recording tokens expire after 24 hours (not infinite)
  • No backward compatibility in hooks (clean migration during install)

Migration Path:

  • V3 uses new token format (session-specific naming)
  • Old v2 tokens are incompatible and should be removed manually
  • First v3 workflow establishes new token format

Works with: protocol-enforcer v3.0.0+

AI-Agent Installation Guide

This guide is designed for AI agents to install and configure the MCP server automatically.

Prerequisites Check

Agent must verify:

  1. Node.js: Run node --version (require v14+)
  2. npm: Run npm --version
  3. Build Tools (for better-sqlite3):
    • macOS: xcode-select --version
    • Linux: dpkg -l | grep build-essential
  4. Workspace Path: Run pwd to get absolute project path

Step 1: Environment Analysis

# Detect OS
uname -s  # Darwin (macOS), Linux, etc.

# Get absolute workspace path
WORKSPACE_PATH=$(pwd)

# Check Node.js version
node --version  # Must be v14.0.0 or higher

# Check for existing configuration
ls ~/.claude.json 2>/dev/null && echo "Config exists" || echo "Config missing"
ls .mcp.json 2>/dev/null && echo ".mcp.json exists (WARN: known issues)" || echo "No .mcp.json"

Step 2: Configuration Strategy

CRITICAL: Configure in ~/.claude.json, NOT .mcp.json

Reason: .mcp.json with enabledMcpjsonServers has known loading issues in Claude Code 2.1.17+

Configuration Location:

~/.claude.json → projects.{absolute_workspace_path}.mcpServers

Check if Already Configured:

# Read existing config
cat ~/.claude.json | grep -A 20 "$WORKSPACE_PATH"

# Look for existing mcpServers entry

Step 3: Idempotent Configuration

Agent should update existing configuration rather than duplicate:

// Pseudo-code for agent
const fs = require('fs');
const os = require('os');
const path = require('path');

// Note: fs does NOT expand '~', so resolve the home directory explicitly
const configPath = path.join(os.homedir(), '.claude.json');
const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
const projectPath = process.cwd();

if (!config.projects) config.projects = {};
if (!config.projects[projectPath]) {
  config.projects[projectPath] = { mcpServers: {} };
}

// Update/add memory-augmented-reasoning
config.projects[projectPath].mcpServers['memory-augmented-reasoning'] = {
  type: 'stdio',
  command: 'npx',
  args: ['-y', 'https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86'],
  env: {
    MEMORY_AUGMENTED_REASONING_USER_DB: '${HOME}/.cursor/memory-augmented-reasoning.db',
    MEMORY_AUGMENTED_REASONING_PROJECT_DB: '${WORKSPACE_FOLDER}/.cursor/memory-augmented-reasoning.db'
  }
};

// Write back
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));

Step 4: Verification Protocol

Test 1: MCP Server Connection

// Agent calls this tool
await mcp__memory_augmented_reasoning__search_experiences({
  query: "test"
});

// Expected: Returns results or empty array
// If fails: Check ~/.claude.json configuration

Test 2: Token Creation

# Clear existing tokens
rm ~/.protocol-informed-reasoning-token 2>/dev/null

# Agent calls informed_reasoning
await mcp__memory_augmented_reasoning__informed_reasoning({
  phase: 'analyze',
  problem: 'verify token creation'
});

# Verify token exists
ls -la ~/.protocol-informed-reasoning-token && echo "✅ Token created" || echo "❌ FAILED"

# Check token content
cat ~/.protocol-informed-reasoning-token
# Expected: {"sessionId":"...","phase":"analyze","timestamp":"...","expires":"..."}

Test 3: Experience Recording

# Record a test experience
await mcp__memory_augmented_reasoning__record_experience({
  type: 'effective',
  domain: 'Tools',
  situation: 'Test installation',
  approach: 'Used AI-Agent Installation Guide',
  outcome: 'Successfully installed',
  reasoning: 'Following automated installation procedure'
});

# Verify in database
sqlite3 ~/.cursor/memory-augmented-reasoning.db \
  "SELECT situation FROM experiences ORDER BY id DESC LIMIT 1"

# Expected: Shows "Test installation"

Step 5: Troubleshooting Decision Tree

Issue: MCP Server Not Loading

Symptom: search_experiences returns "No such tool"
Diagnosis:
  1. Check ~/.claude.json has project path entry
  2. Verify mcpServers configuration exists
  3. Check for .mcp.json conflicts (remove if exists)
  4. Reload Claude Code (Cmd+Shift+P → Reload Window)
Resolution: Reconfigure in ~/.claude.json, reload

Issue: Tokens Not Created

Symptom: ~/.protocol-informed-reasoning-token doesn't exist after analyze
Diagnosis:
  1. Check stderr logs for "✓ Token created" or "✗ FAILED"
  2. Verify MAR server loaded (Test 1)
  3. Check file permissions on home directory
Resolution: Check MCP server logs, verify home directory writable

Issue: Experience Recording Fails

Symptom: Database shows no new experiences
Diagnosis:
  1. Check database file exists: ls ~/.cursor/memory-augmented-reasoning.db
  2. Check file permissions
  3. Verify better-sqlite3 compiled correctly
Resolution: Reinstall with npm install, check build tools

Issue: Running Old Cached Version (v3.3.1+)

Symptom: Agent shows old behavior despite v3.3.1 deployed
  - Analyze output missing userIntent field
  - Analyze output missing keyFactors array
  - Queries are long mechanical echoes (50+ words) instead of focused (<20 words)

Root Cause: npx caches MCP servers - may run stale version

Diagnosis:
  1. Check analyze output for v3.3.1 fields:
     • userIntent object (goal, priority, expectedFormat, context)
     • keyFactors array (search priorities)
     • suggestedQueries.learningLibrary.query <20 words

  2. If missing, you're running cached old version

Resolution Options:

  Option 1: Clear npx cache
    npx clear-npx-cache
    rm -rf ~/.npm/_npx
    # Then restart Claude Desktop/Code completely

  Option 2: Force latest in configuration
    # In Claude Desktop config.json:
    {
      "mcpServers": {
        "memory-augmented-reasoning": {
          "command": "npx",
          "args": ["--yes", "gist:32e02b911a6dae5751053ad158fc0e86"]
        }
      }
    }
    # Note: --yes only suppresses npx's install prompt; it does not bypass the
    # cache. Combine with Option 1's cache clear to guarantee the latest version.

  Option 3: Verify version manually
    npx gist:32e02b911a6dae5751053ad158fc0e86 --version
    # Should show: 3.3.1

Expected v3.3.1 Behavior:
  When analyze phase called with problem "standup?":

  ✅ Correct (v3.3.1):
  {
    "userIntent": {
      "goal": "Brief status update",
      "priority": "Speed and clarity",
      "expectedFormat": "Git log + current work",
      "context": "Recurring informational"
    },
    "suggestedQueries": {
      "learningLibrary": {
        "query": "standup status brief",  // <20 words, focused
        "patterns": ["standup", "status"],
        "domain": "Communication"
      }
    },
    "keyFactors": ["recency", "brevity"]
  }

  ❌ Wrong (v3.3.0 or earlier - cached):
  {
    "suggestedQueries": {
      "learningLibrary": {
        "query": "standup?"  // Just lowercased problem
      }
    }
    // Missing: userIntent, keyFactors, patterns, focused query
  }

Step 6: Installation Complete

Agent should inform user:

✅ Installation Complete

MCP Server Configured:
  - memory-augmented-reasoning (v3.0.0+)

Database Paths:
  - User: ~/.cursor/memory-augmented-reasoning.db
  - Project: ${WORKSPACE_FOLDER}/.cursor/memory-augmented-reasoning.db

Verification Tests: 3/3 passed

Next Steps:
  1. Reload Claude Code (Cmd+Shift+P → Reload Window)
  2. Test informed_reasoning tool
  3. Start recording experiences with record_experience tool

Agent Instructions Summary

For AI agents executing this installation:

  1. Analyze Environment:

    • Run: pwd, uname -s, node --version, npm --version
    • Check: ~/.claude.json, .mcp.json, build tools
  2. Configure MCP Server:

    • Read ~/.claude.json
    • Add/update projects.{absolute_path}.mcpServers
    • Use npx remote execution URL
  3. Verify Installation:

    • Test 1: MCP connection
    • Test 2: Token creation
    • Test 3: Experience recording
  4. Troubleshoot Failures:

    • Use decision tree above
    • Check logs and file permissions
    • Reload after configuration changes
  5. Inform User:

    • Summarize what was installed
    • Report test results
    • Provide next steps

Quick Installation (Users)

Complete setup in 5 steps - installs MCP server + instance identity system.

1. Add MCP Server

Add to .mcp.json (project-level) or ~/.claude.json (user-level):

{
  "mcpServers": {
    "memory-augmented-reasoning": {
      "command": "npx",
      "args": [
        "-y",
        "https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86"
      ],
      "env": {
        "MEMORY_AUGMENTED_REASONING_USER_DB": "${HOME}/.cursor/memory-augmented-reasoning.db"
      }
    }
  }
}

Optional: Add project-level database for project-specific experiences:

{
  "mcpServers": {
    "memory-augmented-reasoning": {
      "command": "npx",
      "args": [
        "-y",
        "https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86"
      ],
      "env": {
        "MEMORY_AUGMENTED_REASONING_USER_DB": "${HOME}/.cursor/memory-augmented-reasoning.db",
        "MEMORY_AUGMENTED_REASONING_PROJECT_DB": "${WORKSPACE_FOLDER}/.cursor/memory-augmented-reasoning.db"
      }
    }
  }
}

2. Install Build Tools (Required for better-sqlite3)

The server uses better-sqlite3 which requires native compilation. Ensure you have the necessary build tools:

macOS:

xcode-select --install

Linux (Debian/Ubuntu):

sudo apt-get install build-essential python3

Windows:

Install the Visual Studio Build Tools (select the "Desktop development with C++" workload) and Python 3, which node-gyp requires for native compilation.

3. Install Instance Identity Hook (Optional but Recommended)

Instance identity enables cross-session learning. Download the SessionStart hook:

# Create hooks directory
mkdir -p .cursor/hooks

# Download session-start hook
curl -o .cursor/hooks/session-start.cjs \
  https://gist.githubusercontent.com/mpalpha/32e02b911a6dae5751053ad158fc0e86/raw/session-start.cjs

# Make executable
chmod +x .cursor/hooks/session-start.cjs

Configure in .claude/settings.json:

{
  "hooks": {
    "SessionStart": [{
      "hooks": [{
        "type": "command",
        "command": "${workspaceFolder}/.cursor/hooks/session-start.cjs"
      }]
    }]
  }
}

What this does:

  • Auto-creates .cursor/claude-instance.json with instance ID on first session
  • Tracks session count and timestamps
  • Enables instance-specific experience tagging for targeted learning

To skip instance identity: omit step 3. MCP tools will still work normally.

4. Reload IDE

Claude Code / Cursor:

Cmd+Shift+P → "Developer: Reload Window"

5. Verify Installation

On first run, better-sqlite3 will compile (takes ~30 seconds):

npx -y https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 --help

Expected output:

> [email protected] install
> node-gyp rebuild

  CXX(target) Release/obj.target/better_sqlite3/src/better_sqlite3.o
  ...
  SOLINK_MODULE(target) Release/better_sqlite3.node

With instance identity hook installed:

✨ Instance identity created: my-project-claude
📁 Config: .cursor/claude-instance.json
💡 Edit config file to customize instance identity

Note: First run takes longer due to native compilation. Subsequent runs use npx cache and start instantly.


Integration with Protocol Enforcer

When used with protocol-enforcer-mcp, this server provides the core reasoning and memory system:

Key Components

  1. informed_reasoning tool - Multi-phase guided reasoning

    • analyze phase: Creates .protocol-informed-reasoning-token (required for file operations)
    • analyze phase: Creates .protocol-search-required-token when suggesting searches
    • integrate phase: Synthesizes gathered context
    • reason phase: Evaluates approaches and makes decisions
    • record phase: Captures learnings as experiences
  2. search_experiences tool - Full-text search across experience library

    • Consumes .protocol-search-required-token when called
    • Prevents protocol violations from skipping suggested searches
  3. record_experience tool - Manual experience recording

    • Allows explicit experience capture
    • Complements automatic recording
  4. Automatic recording - --record-last-session CLI flag

    • Triggered by PostToolUse hook after file operations
    • Reads .protocol-session-recording-token for file context
    • Saves experiences with rich metadata

How informed_reasoning Satisfies Requirements

When called with phase="analyze", the informed_reasoning tool performs three critical token operations:

1. Clears Requirement Token

Deletes .protocol-informed-reasoning-required token (if exists):

  • This token is created by user_prompt_submit hook when no session exists
  • Deletion signals that the requirement has been satisfied
  • Allows tools to proceed (pre_tool_use CHECK 0 will pass)
// Inside informed_reasoning analyze phase
const requirementFile = path.join(os.homedir(), '.protocol-informed-reasoning-required');
if (fs.existsSync(requirementFile)) {
  fs.unlinkSync(requirementFile);  // Requirement satisfied
}

2. Creates Session Token

Creates .protocol-session-token with 10-minute TTL:

  • Prevents requirement token creation for subsequent messages
  • Optimizes multi-turn conversations
  • Allows agent to use tools freely within session window
// Session token structure
{
  "sessionId": "session_abc123",
  "timestamp": 1769444349113,
  "expires": 1769444949113,  // 10 minutes later
  "created": "2026-01-26T16:19:09.113Z"
}

3. Creates Verification Token

Creates .protocol-informed-reasoning-token with 60-second TTL:

  • Used by pre_tool_use hook for secondary verification (CHECK 1)
  • Short lifetime ensures recent analysis
  • Part of two-factor verification system

Result: Agent can now use tools because:

  • Requirement satisfied (no .protocol-informed-reasoning-required)
  • Analysis verified (.protocol-informed-reasoning-token exists)
  • Session active (.protocol-session-token prevents future requirements)

REQUEST-SCOPED Session Tokens (CRITICAL)

IMPORTANT: Session tokens are REQUEST-SCOPED, not session-wide. Each new user message requires a new protocol flow.

Understanding Request Scope

Session tokens authorize multi-tool workflows within a SINGLE user request:

User: "Add feature X to the code"
  ↓
[NEW REQUEST - Protocol Required]
  ↓
Agent: search_experiences (LEARN gate)
Agent: informed_reasoning (REASON gate)
  → Creates REQUEST-SCOPED token for this request
  ↓
Agent: Read file → Write file → Edit file
  (All authorized by same token - same request)
  ↓
Agent: record_experience (TEACH phase)

Multiple User Requests = Multiple Protocol Flows

Each new user message is a NEW request requiring NEW protocol flow:

User: "Add feature X"          [Request 1]
  → search_experiences
  → informed_reasoning         → Token A created
  → Write/Edit                 (authorized by Token A)
  → record_experience

User: "Now update the docs"    [Request 2 - NEW PROTOCOL REQUIRED]
  → search_experiences         (must search again)
  → informed_reasoning         → Token B created
  → Write                      (authorized by Token B)
  → record_experience

User: "Fix the typo"           [Request 3 - NEW PROTOCOL REQUIRED]
  → search_experiences         (must search again)
  → informed_reasoning         → Token C created
  → Edit                       (authorized by Token C)
  → record_experience

Common Misconception

WRONG: "I ran informed_reasoning once → got 60-min token → can skip protocol for rest of session"

CORRECT: "Each user message = new request = run protocol again"

Why Request-Scoped?

  • Relevance: Each request needs context search for that specific task
  • Analysis: Each change requires fresh reasoning
  • Institutional Memory: Each outcome must be recorded
  • Security: Prevents agents from skipping protocol on subsequent requests

60-Minute TTL Explained

The 60-minute TTL is the maximum lifespan of a token, not a "session pass":

  • Token authorizes tools for THIS request's workflow only
  • Token expires OR is invalidated after Write/Edit
  • Does NOT carry over to next user request
  • Next user message = new request = new token required
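In code, the distinction above amounts to checking both the hard expiry and a per-request invalidation flag; a minimal sketch, where the field names are assumptions:

```javascript
// Sketch: the 60-minute TTL is a ceiling, not a session pass.
// A token can die two ways: expiry, or invalidation after Write/Edit.
function tokenValid(token, now = Date.now()) {
  if (token.invalidated) return false;  // consumed when its request completed
  return now < token.expires;           // hard 60-minute maximum
}
```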

Anti-Pattern Warning

Do NOT reuse tokens across different user requests. Even if a token hasn't expired yet, each new user message requires running the protocol again.

MCP Server Configuration

CRITICAL: AI agents must configure in ~/.claude.json, NOT .mcp.json

Agent must:

  1. Determine user's absolute project path: pwd
  2. Read/create ~/.claude.json
  3. Add configuration under projects.{absolute_project_path}.mcpServers

Example structure (agent must replace path):

{
  "projects": {
    "/Users/username/projects/myproject": {
      "mcpServers": {
        "memory-augmented-reasoning": {
          "type": "stdio",
          "command": "npx",
          "args": ["-y", "https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86"],
          "env": {
            "MEMORY_AUGMENTED_REASONING_USER_DB": "${HOME}/.cursor/memory-augmented-reasoning.db",
            "MEMORY_AUGMENTED_REASONING_PROJECT_DB": "${WORKSPACE_FOLDER}/.cursor/memory-augmented-reasoning.db"
          }
        },
        "protocol-enforcer": {
          "type": "stdio",
          "command": "npx",
          "args": ["-y", "https://gist.github.com/mpalpha/c2f1723868c86343e590ed38e80f264d"],
          "env": {}
        }
      }
    }
  }
}

Agent tasks:

  1. Run pwd to get absolute project path
  2. Read ~/.claude.json (create if missing: echo '{"projects":{}}' > ~/.claude.json)
  3. Parse JSON structure
  4. Navigate to projects object
  5. Add/update entry for {absolute_project_path}
  6. Add mcpServers configuration with both servers
  7. Write back to ~/.claude.json
  8. Instruct user to reload Claude Code

Note: ${HOME} and ${WORKSPACE_FOLDER} variables ARE supported in MCP server env configurations.


Integration with CLAUDE.md (Project Documentation)

After installing the MCP server, add the following protocol to your project's CLAUDE.md file to ensure future agents understand the "learn/save" convention.

Add to Critical Override Rules section (as a numbered rule):

5. **Learn/Save Protocol**: When asked to "learn", "save information", "save patterns", or similar requests:
   - **USE** `mcp__memory-augmented-reasoning__record_experience` to save knowledge to memory
   - **DO NOT** create external documents, guides, or summary files unless explicitly requested
   - Save effective patterns (what works), ineffective patterns (what fails), and reasoning processes
   - Organize by domain: Tools, Protocol, Communication, Process, Debugging, Decision
   - Include: situation, approach, outcome, reasoning, confidence level
   - Memory persists across sessions and can be retrieved via `search_experiences`

Why this is important:

  • Users often say "learn this" or "save this pattern" expecting memory storage, not file creation
  • Without this protocol, agents may create unnecessary documentation files instead of using the memory system
  • This convention ensures consistent behavior across all agents working in the project

Example user requests that trigger this protocol:

  • "Learn how to use CDP for browser testing"
  • "Save this pattern for future reference"
  • "Remember this approach"
  • "Store this information"

All these mean: use record_experience tool, not Write/Edit tools.

Instance Identity System

Enables cross-session learning and targeted self-improvement for Claude instances.

The Instance Identity system gives each Claude instance a unique, persistent label (like my-project-claude) that enables:

  1. Cross-Session Learning - Experiences tagged with instance ID can be retrieved in future sessions
  2. Targeted Instructions - Provide instance-specific guidance in CLAUDE.md
  3. Multi-Instance Support - Different Claude instances for different projects
  4. Self-Improvement - Analyze patterns specific to this instance

Features

  • Zero-Configuration Setup - Automatically creates instance identity on first session
  • Auto-Generated IDs - Derived from workspace name (e.g., smartpass-admin-ui-claude)
  • Session Tracking - Counts sessions and tracks last session timestamp
  • Customizable - Edit .cursor/claude-instance.json to add specializations, descriptions, and notes
  • Non-Breaking - No database schema changes required

Quick Start

Instance identity requires SessionStart hook installation (see Installation step 3 above).

Once the hook is installed, it automatically:

  1. Checks for .cursor/claude-instance.json
  2. Creates it if not found (auto-generated from workspace name)
  3. Loads the instance profile
  4. Increments session count
  5. Logs instance info to console

First session with hook installed:

✨ Instance identity created: my-project-claude
📁 Config: .cursor/claude-instance.json
💡 Edit config file to customize instance identity

Subsequent sessions:

✅ Instance: my-project-claude
📊 Session 47 started

Without hook: MCP tools work normally, but no automatic instance tracking.

Configuration File

Location: .cursor/claude-instance.json

Auto-generated structure:

{
  "instance_id": "my-project-claude",
  "workspace": "/path/to/my-project",
  "created": "2026-01-24T00:00:00.000Z",
  "auto_generated": true,
  "sessions": 47,
  "last_session": "2026-01-24T03:15:00.000Z",
  "description": "Auto-generated instance identity. Edit this file to customize.",
  "specializations": []
}

Customization options:

{
  "instance_id": "custom-name-here",
  "description": "Describe this instance's purpose",
  "specializations": ["list", "your", "focus", "areas"],
  "learning_enabled": true,
  "notes": "Any additional metadata you want"
}

Recording Instance-Specific Experiences

Convention: Include instance_id in the context field:

await mcp__memory-augmented-reasoning__record_experience({
  type: "effective",
  domain: "Process",
  situation: "Deploying application to production",
  approach: "Used containerized deployment with reverse proxy",
  outcome: "Deployment successful, zero downtime",
  reasoning: "Containers provide consistent environment across dev/prod",
  context: "instance_id: my-project-claude",  // ← Tag with instance
  confidence: 0.9
});

Searching Instance-Specific Experiences

Find experiences for this instance:

const experiences = await mcp__memory-augmented-reasoning__search_experiences({
  query: "instance_id: my-project-claude deployment"
});

Filter by domain:

const processExperiences = await mcp__memory-augmented-reasoning__search_experiences({
  query: "instance_id: my-project-claude",
  domain: "Process",
  type: "effective"
});

Multi-Instance Support

Different projects automatically get different instance identities:

~/projects/
├─ website/
│  └─ .cursor/claude-instance.json → "website-claude"
├─ api/
│  └─ .cursor/claude-instance.json → "api-claude"
└─ mobile-app/
   └─ .cursor/claude-instance.json → "mobile-app-claude"

Each instance maintains its own:

  • Experience library (tagged with instance_id)
  • Specializations
  • Learning history
  • Self-improvement rules

Self-Improvement Cycle

1. Record Patterns:

// After completing a task successfully:
await record_experience({
  context: "instance_id: my-project-claude",
  situation: "Need to update API schema",
  approach: "1. Update schema definition, 2. Run codegen, 3. Update types",
  outcome: "Types updated correctly, no compilation errors"
});

2. Search Before Acting:

// Before starting similar task:
const patterns = await search_experiences({
  query: "instance_id: my-project-claude API schema"
});
// Returns previous approach → Apply same pattern

3. Analyze and Improve:

// Periodically review ineffective patterns:
const antiPatterns = await search_experiences({
  query: "instance_id: my-project-claude",
  type: "ineffective"
});
// Identify what NOT to do in future

Disabling Instance Identity

Temporary (one session):

# Rename hook to disable
mv .cursor/hooks/session-start.cjs .cursor/hooks/session-start.cjs.disabled

Permanent:

# Delete hook
rm .cursor/hooks/session-start.cjs

# Or add to config:
{
  "instance_id": "my-project-claude",
  "enabled": false
}

Advanced Usage

Add instance-specific instructions to CLAUDE.md:

## Instance Identity: my-project-claude

### Specializations
- Web framework with TypeScript
- REST/GraphQL API integration
- Unit testing framework

### Self-Improvement Rules
- Always check API schema before proposing queries
- Search for existing utilities before creating new ones
- Run build/codegen commands after schema changes

### Known Patterns (Instance-Specific)
- This codebase uses modular styling conventions
- API operations organized in dedicated directory
- Strict typing enforced throughout

### Learning Targets
- Improve: Remembering to run build after API changes
- Focus: Better pattern recognition for reusable code

For complete documentation, see: INSTANCE-IDENTITY-GUIDE.md

Environment Variables

  • MEMORY_AUGMENTED_REASONING_USER_DB - User-level database path (default: ~/.cursor/memory-augmented-reasoning.db)
  • MEMORY_AUGMENTED_REASONING_PROJECT_DB - Optional project-level database path
  • LEARNING_LIBRARY_MAX_RESULTS - Max search results (default: 50)

MCP Tools

1. informed_reasoning (Phase 2 - Active Reasoning)

NEW: Multi-phase reasoning with context-aware guidance. Replaces plain sequential thinking with context synthesis from multiple sources.

Phases:

  1. analyze - Suggests queries based on available MCP tools
  2. integrate - Synthesizes context with priority-based merging
  3. reason - Evaluates thoughts against learned patterns
  4. record - Auto-captures experience from session

Parameters:

{
  phase: 'analyze' | 'integrate' | 'reason' | 'record',

  // ANALYZE phase
  problem?: string,
  availableTools?: string[],  // e.g., ['memory-augmented-reasoning', 'context7', 'jira']

  // INTEGRATE phase
  gatheredContext?: {
    experiences?: any[],      // From search_experiences
    localDocs?: any[],      // From file reads
    mcpData?: {             // From other MCP servers
      jira?: any,
      figma?: any,
      context7?: any
    },
    webResults?: any[]
  },

  // REASON phase
  thought?: string,
  thoughtNumber?: number,
  totalThoughts?: number,
  nextThoughtNeeded?: boolean,
  isRevision?: boolean,
  revisesThought?: number,
  branchFromThought?: number,
  branchId?: string,
  needsMoreThoughts?: boolean,

  // RECORD phase
  finalConclusion?: string,
  relatedLearnings?: number[]  // IDs of related experiences
}

Example Workflow:

// 1. ANALYZE: Get suggestions for what to query
const analysis = await mcp__learning_library__informed_reasoning({
  phase: 'analyze',
  problem: 'How do I implement user authentication in React?',
  availableTools: ['memory-augmented-reasoning', 'context7', 'github']
});
// Returns: { suggestedQueries: { learningLibrary: {...}, context7: {...}, ... } }

// 2. Execute suggested queries (user/AI does this)
const experiences = await mcp__learning_library__search_experiences({
  query: 'authentication React'
});
const docs = await mcp__context7__query_docs({
  libraryId: '/facebook/react',
  query: 'authentication patterns'
});

// 3. INTEGRATE: Synthesize context with token budget management
const context = await mcp__learning_library__informed_reasoning({
  phase: 'integrate',
  problem: 'How do I implement user authentication in React?',
  gatheredContext: {
    experiences,
    localDocs: [{ path: 'CLAUDE.md', content: '...' }],
    mcpData: { context7: docs }
  }
});
// Returns: { synthesizedContext: 'markdown text', estimatedThoughts: 5, tokenBudget: {...} }

// 4. REASON: Evaluate each thought against context
const evaluation1 = await mcp__learning_library__informed_reasoning({
  phase: 'reason',
  problem: 'How do I implement user authentication in React?',
  thought: 'I should use JWT tokens stored in localStorage',
  thoughtNumber: 1,
  totalThoughts: 5,
  nextThoughtNeeded: true
});
// Returns: { evaluation: {...}, guidance: '...', revisionSuggestion: {...} }

// 5. Continue reasoning with multiple thoughts...

// 6. RECORD: Capture experience when complete
const recorded = await mcp__learning_library__informed_reasoning({
  phase: 'record',
  problem: 'How do I implement user authentication in React?',
  finalConclusion: 'Use JWT tokens with httpOnly cookies, not localStorage',
  relatedLearnings: [123, 456]  // IDs from search results
});
// Returns: { recorded: true, learningId: 789, ... }

Key Features:

  • Dynamic Tool Discovery: Adapts suggestions based on availableTools parameter
  • Token Budget Management: 20K token limit with intelligent pruning
  • 4-Tier Context Prioritization: Project rules → Effective experiences → Anti-patterns → External data
  • Thought Evaluation: Detects scope creep, implementing-without-reading, protocol violations
  • Auto-Experience Capture: Automatically records reasoning sessions for future use

2. record_experience

Record an experience to the library.

Parameters:

{
  scope: 'user' | 'project',  // Default: 'user'
  type: 'effective' | 'ineffective',
  domain: 'Tools' | 'Protocol' | 'Communication' | 'Process' | 'Debugging' | 'Decision',
  situation: string,  // When this applies
  approach: string,   // What was done
  outcome: string,    // What happened
  reasoning: string,  // Why it worked/failed
  alternative?: string  // For ineffective: what to use instead
}

Example:

await mcp__learning_library__record_experience({
  scope: 'user',
  type: 'effective',
  domain: 'Protocol',
  situation: 'Before any file write operation',
  approach: 'Call mcp__protocol-enforcer__verify_protocol_compliance first',
  outcome: 'File operation succeeds without blocking',
  reasoning: 'Protocol enforcer requires authorization token before allowing writes'
});

3. search_experiences

Search experiences with FTS5 full-text search.

Parameters:

{
  query?: string,  // FTS5 full-text search (optional)
  domain?: 'Tools' | 'Protocol' | 'Communication' | 'Process' | 'Debugging' | 'Decision',
  type?: 'effective' | 'ineffective',
  limit?: number  // Default: 50
}

Example:

// Find all Protocol experiences
const protocolPatterns = await mcp__learning_library__search_experiences({
  domain: 'Protocol',
  type: 'effective'
});

// Full-text search
const reasoningRules = await mcp__learning_library__search_experiences({
  query: 'informed_reasoning mandatory'
});

4. export_experiences

Export experiences to JSON or Markdown.

Parameters:

{
  scope: 'user' | 'project',  // Default: 'user'
  format: 'json' | 'markdown',  // Default: 'json'
  domain?: 'Tools' | 'Protocol' | 'Communication' | 'Process' | 'Debugging' | 'Decision',
  type?: 'effective' | 'ineffective',
  filename?: string  // Auto-generated if not provided
}

Example:

// Export all user-level experiences as JSON
await mcp__learning_library__export_experiences({
  scope: 'user',
  format: 'json'
});

// Export project Protocol experiences as Markdown
await mcp__learning_library__export_experiences({
  scope: 'project',
  format: 'markdown',
  domain: 'Protocol'
});

Automatic Experience Recording

The --record-last-session flag enables automatic experience recording triggered by PostToolUse hooks after successful file operations.

How It Works

1. PostToolUse hook creates recording token:

{
  "sessionId": "08e2e62e-a02b-4018-91bc-148c45209523",
  "toolName": "Edit",
  "timestamp": "2026-01-26T16:20:26.938Z",
  "context": {
    "filePath": "/path/to/project/test.txt",
    "cwd": "/path/to/project",
    "transcriptPath": "/Users/name/.claude/projects/.../session.jsonl"
  }
}

File location: ~/.protocol-session-recording-token

2. Hook spawns recording process:

npx -y https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 --record-last-session

Environment variables passed:

  • CLAUDE_SESSION_ID: Session identifier
  • MEMORY_AUGMENTED_REASONING_PROJECT_DB: Project database path
  • FORCE_COLOR=0: Disable color output

3. Recording script execution:

The --record-last-session handler (index.js:1340-1440):

// Read and consume recording token
const recordingTokenFile = path.join(os.homedir(), '.protocol-session-recording-token');
if (fs.existsSync(recordingTokenFile)) {
  const tokenData = JSON.parse(fs.readFileSync(recordingTokenFile, 'utf8'));
  sessionId = tokenData.sessionId;
  toolName = tokenData.toolName || 'tool';
  timestamp = tokenData.timestamp;
  filePath = tokenData.context?.filePath || 'unknown';
  cwd = tokenData.context?.cwd || process.cwd();

  // Consume token
  fs.unlinkSync(recordingTokenFile);
}

// Generate context-rich experience
const fileDisplayName = filePath !== 'unknown' ? path.basename(filePath) : 'unknown file';
const experience = {
  type: 'effective',
  domain: 'Tools',
  situation: `${toolName} operation on ${fileDisplayName} at ${timestamp}`,
  approach: `Protocol flow: informed_reasoning → verify → authorize → ${toolName}`,
  outcome: `Successfully completed ${toolName} operation following protocol requirements`,
  reasoning: 'Following protocol workflow with automatic experience recording',
  context: `session: ${sessionId}, file: ${filePath}, cwd: ${cwd}`,
  confidence: 0.8,
  scope: 'project'  // or 'user' based on config
};

// Save to database
recordExperience(dbs, experience);

Example Experience Output

With file context (correct):

situation: "Edit operation on test.txt at 2026-01-26T16:20:26.938Z"
approach: "Protocol flow: informed_reasoning → verify → authorize → Edit"
outcome: "Successfully completed Edit operation following protocol requirements"
reasoning: "Following protocol workflow with automatic experience recording"
context: "session: 08e2e62e-a02b-4018-91bc-148c45209523, file: /path/to/test.txt, cwd: /path/to/project"

Without file context (broken - indicates hook issue):

situation: "tool operation on unknown file at 2026-01-26T16:20:26.938Z"
context: "session: ..., file: unknown, cwd: /path/to/project"

Token File Structure

The recording token is separate from protocol enforcement tokens:

| Token File | Created By | Consumed By | Purpose |
|---|---|---|---|
| .protocol-informed-reasoning-token | informed_reasoning | PreToolUse hook | Proves analyze phase completed |
| .protocol-enforcer-token | authorize_file_operation | PreToolUse hook | Proves operation authorized |
| .protocol-search-required-token | informed_reasoning | search_experiences | Enforces suggested searches |
| .protocol-session-recording-token | PostToolUse hook | --record-last-session | Passes file context for recording |

Key difference: The recording token is NOT checked by PreToolUse hook - it's only consumed by the recording script.
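
The distinction can be sketched as a checklist (a sketch, not the shipped hook code; the file names come from the table above):

```javascript
// Sketch: which token files a PreToolUse-style gate consults. The recording
// token is deliberately absent -- only --record-last-session reads it.
const PRE_TOOL_USE_TOKENS = [
  '.protocol-informed-reasoning-token', // proves analyze phase completed
  '.protocol-enforcer-token'            // proves the operation was authorized
];

// Given the token file names currently on disk, return the gate tokens
// that are still missing (and would therefore block the operation).
function missingGateTokens(existingTokenNames) {
  return PRE_TOOL_USE_TOKENS.filter((t) => !existingTokenNames.includes(t));
}
```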

Database Storage

Experiences are saved to:

  • Project scope: {project}/.cursor/memory-augmented-reasoning.db
  • User scope: ~/.cursor/memory-augmented-reasoning.db

Scope determined by .protocol-enforcer.json:

{
  "automatic_experience_recording": {
    "enabled": true,
    "scope": "project"  // or "user"
  }
}
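
A minimal sketch of how that configuration could be resolved (an illustrative helper, not the server's actual code; `config` stands in for the parsed .protocol-enforcer.json, and treating a missing config as "disabled" is an assumption):

```javascript
// Resolve where an automatic recording should be stored, defaulting to
// 'project' scope as described above. Returns null when recording is off.
function resolveRecordingScope(config) {
  const rec = config && config.automatic_experience_recording;
  if (!rec || rec.enabled === false) return null; // recording disabled
  return rec.scope === 'user' ? 'user' : 'project'; // default: 'project'
}
```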

Testing Automatic Recording

Agent can test recording functionality:

# 1. Create test recording token
cat > ~/.protocol-session-recording-token << 'EOF'
{
  "sessionId": "test-session-123",
  "toolName": "Edit",
  "timestamp": "2026-01-26T16:20:00.000Z",
  "context": {
    "filePath": "/path/to/test.txt",
    "cwd": "/path/to/project",
    "transcriptPath": null
  }
}
EOF

# 2. Run recording script (agent must use actual project path)
cd /path/to/project
MEMORY_AUGMENTED_REASONING_PROJECT_DB="$(pwd)/.cursor/memory-augmented-reasoning.db" \
npx -y https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 --record-last-session

# Expected output:
# [memory-augmented-reasoning-mcp] Experience recorded automatically: 1 (save)

# 3. Verify experience was saved
sqlite3 .cursor/memory-augmented-reasoning.db \
  "SELECT id, situation, context FROM experiences ORDER BY id DESC LIMIT 1"

# Expected output:
# 1|Edit operation on test.txt at 2026-01-26T16:20:00.000Z|session: test-session-123, file: /path/to/test.txt, cwd: /path/to/project

# 4. Verify token was consumed
ls ~/.protocol-session-recording-token 2>/dev/null || echo "✅ Token consumed"

Troubleshooting Recording

Problem: Experiences show "unknown file"

# Check if PostToolUse hook is creating enhanced token
# After Edit operation, immediately check:
cat ~/.protocol-session-recording-token

# Should show:
# {"sessionId":"...","toolName":"Edit",...,"context":{"filePath":"/actual/path/..."}}

# If filePath is "unknown" or null, PostToolUse hook needs update

Problem: No experiences being recorded

# 1. Check if PostToolUse hook is configured
cat .claude/settings.json | grep -A 5 "PostToolUse"

# 2. Check if recording is enabled in config
cat .protocol-enforcer.json | grep -A 5 "automatic_experience_recording"

# 3. Check audit log to see if recording was triggered
tail ~/.protocol-enforcer-audit.log

# 4. Manually test recording (see testing section above)

Problem: Recording script fails

# Check for errors in background process
# Look for npx error logs:
ls ~/.npm/_logs/

# Check database permissions
ls -la .cursor/memory-augmented-reasoning.db

# Manually run recording with stderr visible
npx -y https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 --record-last-session
# (Create token first as shown in testing section)

Token Creation and Management

This server creates and manages three types of protocol tokens:

1. Informed Reasoning Token

File: ~/.protocol-informed-reasoning-token
Created: After analyze phase completes successfully
Lifetime: 60 seconds
Code: index.js:1145-1169

Critical fix in v2.0.5: Token creation moved OUTSIDE if (dbs.user) block to ensure it's ALWAYS created, even when user database doesn't exist.

// CRITICAL: Token creation MUST happen regardless of database availability
const sessionId = getSessionId();

if (phase === 'analyze') {
  const tokenFile = path.join(os.homedir(), '.protocol-informed-reasoning-token');
  try {
    const now = new Date();
    const expires = new Date(now.getTime() + 60000);
    const tokenData = {
      sessionId,
      phase,
      timestamp: now.toISOString(),
      expires: expires.toISOString()
    };
    fs.writeFileSync(tokenFile, JSON.stringify(tokenData), 'utf8');
    console.error(`[memory-augmented-reasoning-mcp] ✓ Token created: ${tokenFile}`);
  } catch (err) {
    console.error('[memory-augmented-reasoning-mcp] ✗ FAILED to write token:', err.message);
    console.error('[memory-augmented-reasoning-mcp] Stack:', err.stack);
  }
}

Debug logging: Added in v2.0.5 to diagnose token creation failures.
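
A consumer only needs the `expires` field to validate this token; a minimal check (an illustrative helper, not taken from index.js) looks like:

```javascript
// True if the token exists and its ISO `expires` timestamp is still in the
// future. The analyze phase writes expires = created + 60 seconds.
function isTokenValid(tokenData, now = Date.now()) {
  if (!tokenData || !tokenData.expires) return false;
  return new Date(tokenData.expires).getTime() > now;
}
```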

2. Search Required Token

File: ~/.protocol-search-required-token
Created: When analyze phase suggests searching experiences
Consumed: By search_experiences tool
Lifetime: 5 minutes (300,000 ms)

Purpose: Enforces that AI agents actually search experiences when informed_reasoning suggests it.

// Create search token when suggesting search
if (result && result.suggestedQueries && result.suggestedQueries.learningLibrary) {
  const searchTokenPath = path.join(os.homedir(), '.protocol-search-required-token');
  try {
    const searchToken = {
      timestamp: Date.now(),
      sessionId,
      suggestedQuery: result.suggestedQueries.learningLibrary.query,
      domain: result.suggestedQueries.learningLibrary.domain,
      phase: 'analyze'
    };
    fs.writeFileSync(searchTokenPath, JSON.stringify(searchToken), 'utf8');
    console.error('[memory-augmented-reasoning-mcp] ✓ Search token created');
  } catch (err) {
    console.error('[memory-augmented-reasoning-mcp] ✗ Failed to write search token:', err.message);
  }
}
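
The consuming side can be sketched as follows (assumed behavior pieced together from this section: one-shot consumption plus the 5-minute lifetime; `readToken` and `deleteToken` are injected stand-ins for the fs calls):

```javascript
const SEARCH_TOKEN_TTL_MS = 300000; // 5 minutes, per the lifetime above

// search_experiences reads the token, discards it if stale, and otherwise
// consumes it so the requirement cannot be satisfied twice.
function consumeSearchToken(readToken, deleteToken, now = Date.now()) {
  const token = readToken(); // parsed JSON, or null if the file is absent
  if (!token) return { consumed: false, reason: 'no token' };
  if (now - token.timestamp > SEARCH_TOKEN_TTL_MS) {
    deleteToken(); // stale token: remove without satisfying the gate
    return { consumed: false, reason: 'expired' };
  }
  deleteToken(); // one-shot: consume on success
  return { consumed: true, suggestedQuery: token.suggestedQuery };
}
```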

3. Session Token (REQUEST-SCOPED)

File: ~/.protocol-session-{session_id}.json
Created: When session_id parameter provided to informed_reasoning
Lifetime: 60 minutes maximum (REQUEST-SCOPED)
Purpose: Authorizes multi-tool workflows within a SINGLE user request

CRITICAL: Token is REQUEST-SCOPED - each new user message requires new protocol flow.

Usage: Reduces overhead within a single request (e.g., Read → Write → Edit for one task).


Testing Guide for AI Agents

AI agents can guide users through these verification tests:

Test 1: Verify Token Creation

# Clear existing tokens
rm ~/.protocol-informed-reasoning-token 2>/dev/null

# Agent calls informed_reasoning in Claude session
# Example: informed_reasoning(phase="analyze", problem="verify token creation")

# Verify token exists
ls -la ~/.protocol-informed-reasoning-token && echo "✅ Token created" || echo "❌ FAILED"

# Check token content
cat ~/.protocol-informed-reasoning-token
# Expected: {"sessionId":"...","phase":"analyze","timestamp":"...","expires":"..."}

# Check for debug output
# Agent should see in stderr: "✓ Token created: /Users/.../.protocol-informed-reasoning-token"

If token doesn't exist:

  • Check MCP server loaded: ps aux | grep memory-augmented-reasoning
  • Check ~/.claude.json configuration
  • Look for error in stderr: "✗ FAILED to write token"
  • Reload Claude Code

Test 2: Verify Search Enforcement

# Clear search token
rm ~/.protocol-search-required-token 2>/dev/null

# Agent calls informed_reasoning (should suggest search)
# Check if search token created:
ls -la ~/.protocol-search-required-token && echo "✅ Search token created" || echo "❌ No search suggested"

# If token exists, read it:
cat ~/.protocol-search-required-token
# Expected: {"timestamp":...,"sessionId":"...","suggestedQuery":"...","domain":"..."}

# Agent calls search_experiences with suggested query
# Verify token consumed:
ls -la ~/.protocol-search-required-token 2>/dev/null || echo "✅ Token consumed by search_experiences"

Test 3: Verify Automatic Recording

# After Edit operation, wait for background recording
sleep 5

# Check database
sqlite3 .cursor/memory-augmented-reasoning.db \
  "SELECT id, situation FROM experiences ORDER BY id DESC LIMIT 1"

# Expected output:
# 1|Edit operation on test.txt at 2026-01-26T16:20:26.938Z

# NOT this (indicates broken recording):
# 1|tool operation on unknown file at ...

# Verify recording token was created and consumed:
ls ~/.protocol-session-recording-token 2>/dev/null || echo "✅ Recording token was consumed"

Test 4: Manual Recording Test

# Agent can manually test recording:

# 1. Create test token
cat > ~/.protocol-session-recording-token << 'EOF'
{
  "sessionId": "manual-test",
  "toolName": "Edit",
  "timestamp": "2026-01-26T16:00:00.000Z",
  "context": {
    "filePath": "/path/to/test.txt",
    "cwd": "/path/to/project",
    "transcriptPath": null
  }
}
EOF

# 2. Run recording (agent must use actual project path)
MEMORY_AUGMENTED_REASONING_PROJECT_DB="/path/to/project/.cursor/memory-augmented-reasoning.db" \
npx -y https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 --record-last-session

# 3. Check output
# Expected: "[memory-augmented-reasoning-mcp] Experience recorded automatically: 1 (save)"

# 4. Verify in database
sqlite3 /path/to/project/.cursor/memory-augmented-reasoning.db \
  "SELECT situation FROM experiences WHERE situation LIKE '%test.txt%'"

# Expected: "Edit operation on test.txt at 2026-01-26T16:00:00.000Z"

Automated Experience Extraction

NEW: Automatically extract experiences from Claude Code session transcripts using LLM-based analysis.

Features

  • Automatic Trigger: SessionEnd hook automatically extracts experiences when sessions complete
  • API Key Auto-Discovery: Reads Claude API key from macOS keychain (no manual setup)
  • LLM-Based Analysis: Uses Claude 3.5 Haiku for cost-effective pattern extraction
  • Confidence Scoring: Categorizes experiences by confidence (high ≥0.9, medium 0.5-0.9)
  • Deduplication: Prevents duplicate experiences across databases
  • Topic Segmentation: Groups messages by topic for better context analysis

Installation & Configuration

Step 1: Install MCP Server

First, install the MCP server following the instructions in the main Installation section above. Then configure the SessionEnd hook for automatic extraction.

Step 2: Install CLI Commands

Make the CLI commands globally available:

cd memory-augmented-reasoning-mcp
npm install           # Install dependencies (better-sqlite3)
npm link              # Create global symlink

Verify installation:

memory-augmented-reasoning --version
# Output: memory-augmented-reasoning-mcp v3.3.6

Step 3: Configure SessionEnd Hook (Automatic Extraction)

The SessionEnd hook automatically triggers experience extraction when Claude Code sessions end.

Create hook file:

# Create hooks directory
mkdir -p .cursor/hooks

# Copy the session-end hook into the hooks directory
cp /path/to/session-end-handoff.cjs .cursor/hooks/session-end-handoff.cjs

Hook file location: .cursor/hooks/session-end-handoff.cjs

The hook automatically:

  • Detects when sessions end
  • Reads API key from macOS keychain
  • Triggers memory-augmented-reasoning extract-experiences --session <id> --auto --notify
  • Runs in background (non-blocking)
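
The core of such a hook can be sketched in a few lines (a sketch only, not the shipped session-end-handoff.cjs; the payload field name `session_id` is an assumption, while the CLI name and flags match the commands documented below):

```javascript
// Build the background extraction command from the hook payload.
// Returns null when extraction is disabled or no session id is present.
function buildExtractionCommand(payload, env = process.env) {
  if (env.AUTO_EXTRACT_LEARNINGS === 'false') return null; // opt-out
  const sessionId = payload && payload.session_id; // assumed field name
  if (!sessionId) return null;
  return ['memory-augmented-reasoning', 'extract-experiences',
          '--session', sessionId, '--auto', '--notify'];
}
```

The real hook would spawn this command detached with stdio ignored, so session shutdown is never blocked.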

Configure in .claude/settings.json:

{
  "hooks": {
    "sessionEnd": {
      "command": ".cursor/hooks/session-end-handoff.cjs"
    }
  }
}

Reload Claude Code:

Cmd+Shift+P → "Developer: Reload Window"

Disable auto-extraction (optional):

# Add to ~/.zshrc or ~/.bashrc
export AUTO_EXTRACT_LEARNINGS=false

When disabled, manually trigger extraction:

memory-augmented-reasoning extract-experiences --session <id> --auto

CLI Commands

1. list-sessions

List recent sessions with transcripts:

memory-augmented-reasoning list-sessions [--days N]

# Examples
memory-augmented-reasoning list-sessions              # Last 7 days (default)
memory-augmented-reasoning list-sessions --days 1     # Last 24 hours
memory-augmented-reasoning list-sessions --days 30    # Last 30 days

Output includes:

  • Session ID
  • Last modified date/time
  • Transcript file size
  • Full file path

2. extract-experiences

Extract experiences from a specific session:

memory-augmented-reasoning extract-experiences --session <session-id> [--auto] [--notify]

# Options
--session <id>   Session ID to analyze (required)
--auto           Auto-save high confidence experiences (≥0.9) without review
--notify         Send desktop notification when complete (macOS only)

# Examples
memory-augmented-reasoning extract-experiences --session abc123
memory-augmented-reasoning extract-experiences --session abc123 --auto
memory-augmented-reasoning extract-experiences --session abc123 --auto --notify

Extraction Process:

  1. Find Transcripts - Locates main and agent transcripts for session
  2. Parse Messages - Extracts all messages with role and content
  3. Segment by Topic - Groups related messages (general, tools, protocol, etc.)
  4. LLM Analysis - Claude 3.5 Haiku analyzes each segment for patterns
  5. Deduplicate - Checks against existing experiences to prevent duplicates
  6. Categorize - Sorts by confidence level:
    • High (≥0.9): Auto-saved with --auto flag
    • Medium (0.5-0.9): Requires manual review
    • Duplicates: Skipped automatically
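
The categorization step (6) can be sketched as a pure function (illustrative; the thresholds come from the list above, `isDuplicate` stands in for the database lookup, and dropping sub-0.5 candidates is an assumption):

```javascript
// Sort extracted candidates into the buckets described above.
function categorizeExperiences(candidates, isDuplicate) {
  const buckets = { high: [], medium: [], duplicates: [] };
  for (const exp of candidates) {
    if (isDuplicate(exp)) buckets.duplicates.push(exp);       // skipped
    else if (exp.confidence >= 0.9) buckets.high.push(exp);   // auto-saved with --auto
    else if (exp.confidence >= 0.5) buckets.medium.push(exp); // manual review
    // candidates below 0.5 are dropped (assumption)
  }
  return buckets;
}
```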

API Key Setup:

The CLI automatically discovers the Claude API key from:

  1. macOS keychain (service: "Claude Code") - automatic, no setup needed
  2. CLAUDE_API_KEY environment variable (fallback)
  3. ANTHROPIC_API_KEY environment variable (fallback)

If API key not found, you'll see:

⚠️  No API key found (checked keychain and env vars)
💡 Set CLAUDE_API_KEY or ANTHROPIC_API_KEY environment variable

3. stats

View experience library statistics:

memory-augmented-reasoning stats [--domain <domain>]

# Examples
memory-augmented-reasoning stats                 # All domains
memory-augmented-reasoning stats --domain Tools  # Tools domain only

Output includes:

  • Total experiences count
  • Breakdown by type (effective/ineffective)
  • Breakdown by domain
  • Average confidence score
  • Recent experiences (last 5)

4. review-session

Interactive review of medium-confidence experiences (planned for future release):

memory-augmented-reasoning review-session <session-id>

Note: Currently shows placeholder. Interactive review will be added in a future update.

5. --record-last-session

Automatically record experience from the most recent informed_reasoning session (used by protocol-enforcer post-tool-use hook):

npx -y https://gist.github.com/mpalpha/32e02b911a6dae5751053ad158fc0e86 --record-last-session

How it works:

  1. Reads session token from ~/.protocol-informed-reasoning-token
  2. Loads scope configuration from .protocol-enforcer.json (defaults to 'project')
  3. Auto-generates experience record:
    • Type: effective
    • Domain: Tools
    • Situation: Task analyzed via informed_reasoning session
    • Approach: Used informed_reasoning analyze phase before tool execution
    • Outcome: Tool execution completed successfully after protocol compliance
    • Reasoning: Following protocol workflow
    • Confidence: 0.8
  4. Records to database (with automatic deduplication)

Called automatically by:

  • protocol-enforcer v2.0.8+ post-tool-use.cjs hook
  • Triggered after successful execution of Write, Edit, NotebookEdit, Task, WebSearch, or Grep tools
  • Runs in background (detached, non-blocking)

Configuration:

{
  "automatic_experience_recording": {
    "enabled": true,
    "scope": "project",
    "trigger": "post_tool_use",
    "record_for_tools": ["Write", "Edit", "NotebookEdit", "Task", "WebSearch", "Grep"]
  }
}

Audit logging: All automatic recordings are logged to ~/.protocol-enforcer-audit.log for debugging.

SessionEnd Hook Automation

The .cursor/hooks/session-end-handoff.cjs hook automatically triggers extraction when sessions end:

// Automatically runs on session end
memory-augmented-reasoning extract-experiences --session <id> --auto --notify

Features:

  • Runs in background (detached process)
  • Auto-discovers API key from keychain
  • Notifies when complete (macOS)
  • Can be disabled with AUTO_EXTRACT_LEARNINGS=false

Manual Trigger:

To disable auto-extraction and trigger manually:

export AUTO_EXTRACT_LEARNINGS=false
# Sessions will not auto-extract
# Run manually: memory-augmented-reasoning extract-experiences --session <id> --auto

Extraction Workflow Example

# 1. List recent sessions
$ memory-augmented-reasoning list-sessions --days 1

📁 Sessions with transcripts (last 1 days):

1. abc123def456...
   Modified: 2026-01-22 10:30:00
   Size: 3813.4 KB
   Path: /Users/.../.claude/projects/.../abc123def456.jsonl

# 2. Extract experiences
$ memory-augmented-reasoning extract-experiences --session abc123def456 --auto

🔍 Extracting experiences from session: abc123def456

📁 Finding transcripts...
   Main: .../abc123def456.jsonl
   Agents: 0 agent transcript(s)

📖 Parsing transcripts...
   Main transcript: 2041 messages
   Total: 2041 messages

🔖 Segmenting by topics...
   Created 1 topic segments
   Distribution: { general: 1 }

🤖 Extracting experiences with Claude...
Extracted 4 experiences from general segment
   Extracted: 4 potential experiences

🔍 Deduplicating and categorizing...
   High confidence (≥0.9): 0
   Medium confidence (0.5-0.9): 4
   Duplicates skipped: 0

⚠️  Medium confidence experiences require review:
   4 experiences need manual review
   Run: memory-augmented-reasoning review-session abc123def456

✅ Extraction complete!

# 3. View statistics
$ memory-augmented-reasoning stats

📊 Memory-Augmented Reasoning Statistics

Total Experiences: 42

By Type:
  effective: 38
  ineffective: 4

By Domain:
  Process: 29
  Protocol: 5
  Communication: 3
  Tools: 3
  Debugging: 2

Average Confidence: 79.8%

Troubleshooting

"API key not found"

# Verify keychain access (macOS)
security find-generic-password -s "Claude Code" -w

# Or set environment variable
export CLAUDE_API_KEY="sk-ant-api03-..."

"command not found: memory-augmented-reasoning"

# Re-link the CLI
cd memory-augmented-reasoning-mcp
npm link

"Session already processed"

Sessions are cached to avoid re-processing. To re-analyze, use the review command (when implemented).

Domain Categories

  • Tools: Tool selection and usage patterns
  • Protocol: Mandatory protocol and workflow requirements
  • Communication: User communication preferences and styles
  • Process: Standard operating procedures and workflows
  • Debugging: Debugging approaches and troubleshooting methods
  • Decision: Decision-making patterns and criteria

Database Schema

CREATE TABLE experiences (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  type TEXT NOT NULL CHECK(type IN ('effective', 'ineffective')),
  domain TEXT NOT NULL CHECK(domain IN ('Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision')),
  situation TEXT NOT NULL,
  approach TEXT NOT NULL,
  outcome TEXT NOT NULL,
  reasoning TEXT NOT NULL,
  alternative TEXT,
  confidence REAL CHECK(confidence BETWEEN 0 AND 1),
  revision_of INTEGER,
  contradicts TEXT,
  supports TEXT,
  context TEXT,
  assumptions TEXT,
  limitations TEXT,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (revision_of) REFERENCES experiences(id)
);

-- FTS5 virtual table for full-text search (kept in sync by triggers)
CREATE VIRTUAL TABLE experiences_fts USING fts5(
  situation, approach, outcome, reasoning, alternative, context, assumptions, limitations,
  content='experiences',
  content_rowid='id'
);
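
A ranked lookup against this schema can be sketched with better-sqlite3 (illustrative, not the server's internal query; terms are OR-joined and quoted, matching the search behavior described at the top of this gist, and `db` is assumed to be an open Database handle):

```javascript
// Quote each term (escaping FTS5 operators) and join with OR so a row
// matching any term qualifies.
function toFtsQuery(text) {
  return text.trim().split(/\s+/).map((t) => `"${t}"`).join(' OR ');
}

// bm25() returns LOWER scores for better matches, so ascending order
// puts the most relevant experiences first.
function searchExperiences(db, query, limit = 50) {
  return db.prepare(`
    SELECT e.*, bm25(experiences_fts) AS score
    FROM experiences_fts
    JOIN experiences e ON e.id = experiences_fts.rowid
    WHERE experiences_fts MATCH ?
    ORDER BY score
    LIMIT ?`).all(toFtsQuery(query), limit);
}
```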

Usage Patterns

Recording Effective Patterns

await mcp__learning_library__record_experience({
  scope: 'user',
  type: 'effective',
  domain: 'Tools',
  situation: 'Searching codebase for broad patterns',
  approach: 'Use Task tool with Explore agent',
  outcome: 'Found all relevant files efficiently',
  reasoning: 'Explore agent optimized for codebase discovery'
});

Recording Ineffective Patterns (Anti-Patterns)

await mcp__learning_library__record_experience({
  scope: 'user',
  type: 'ineffective',
  domain: 'Tools',
  situation: 'Searching codebase for broad patterns',
  approach: 'Use Grep tool repeatedly',
  outcome: 'Slow, missed many relevant files',
  reasoning: 'Grep requires knowing exact search terms',
  alternative: 'Use Task tool with Explore agent for broad searches'
});

Querying Before Action

// Before starting a task, query relevant patterns
const toolPatterns = await mcp__learning_library__search_experiences({
  domain: 'Tools'
});

const protocolRules = await mcp__learning_library__search_experiences({
  domain: 'Protocol',
  type: 'effective'
});

Testing

Verify Installation

node index.js --version
# Output: memory-augmented-reasoning-mcp v3.3.6

Test MCP Connection

// From Claude Code
await mcp__learning_library__search_experiences({});

File Locations

  • User-level database: ~/.cursor/memory-augmented-reasoning.db
  • Project-level database: ${WORKSPACE_FOLDER}/.cursor/memory-augmented-reasoning.db (if configured)
  • Export files: .cursor/experience-exports/

Architecture

  • Single-file design: Entire MCP server in index.js
  • SQLite + FTS5: Embedded database with full-text search
  • Single runtime dependency: better-sqlite3
  • Environment variable configuration: No config files needed

License

MIT License - Copyright (c) 2025 Jason Lusk

#!/usr/bin/env node
/**
* Memory-Augmented Reasoning MCP Server
* A concise MCP server for capturing and retrieving universal working knowledge.
* Records meta-patterns about effective work with AI agents, tools, protocols, and processes.
*
* Version: 3.3.6
* License: MIT
*/
const Database = require('better-sqlite3');
const fs = require('fs');
const path = require('path');
const os = require('os');
const readline = require('readline');
const VERSION = '3.3.6';
// Custom error class for parameter validation (returns -32602 instead of -32603)
// Phase 7.7: Support separate message (visible to agent) and details (in error.data)
class ValidationError extends Error {
  constructor(message, details) {
    super(message);
    this.name = 'ValidationError';
    this.code = -32602; // Invalid params
    // If details provided, store separately for error.data
    // If only message provided, use it for both (backward compatible)
    this.details = details || message;
    // Log full error to stderr for debugging
    console.error(`[validation-error] ${message}`);
    if (details && details !== message) {
      console.error(`[validation-error] Full details: ${details}`);
    }
  }
}
// State tracking for session tokens
const state = {
  sessionTokens: new Map(), // Map<sessionId, {expires: timestamp, created: timestamp}>
  sessionTokenTimeout: 3600000, // 60 minutes
  sessions: new Map() // NEW: Track phase completion for enhanced protocol (Phase 2)
};
// BUG #20 FIX: Background cleanup of expired session tokens (every 5 minutes)
setInterval(() => {
  const now = Date.now();
  for (const [sessionId, tokenData] of state.sessionTokens.entries()) {
    if (tokenData.expires < now) {
      state.sessionTokens.delete(sessionId);
      // Clean up file system token
      const tokenFile = path.join(os.homedir(), `.protocol-session-${sessionId}.json`);
      try {
        fs.unlinkSync(tokenFile);
      } catch (err) {
        // Ignore errors - file may already be cleaned by GC
      }
    }
  }
}, 300000); // 5 minutes
// Expand environment variables in paths
function expandPath(filePath) {
  if (!filePath) return filePath;
  // Expand ${HOME} or $HOME
  filePath = filePath.replace(/\$\{HOME\}|\$HOME/g, os.homedir());
  // Expand ${WORKSPACE_FOLDER} or $WORKSPACE_FOLDER with current working directory
  filePath = filePath.replace(/\$\{WORKSPACE_FOLDER\}|\$WORKSPACE_FOLDER/g, process.cwd());
  // Expand ~ at the beginning
  if (filePath.startsWith('~/')) {
    filePath = path.join(os.homedir(), filePath.slice(2));
  }
  return filePath;
}
// Database schema with FTS5 full-text search + informed reasoning enhancements
const SCHEMA = `
CREATE TABLE IF NOT EXISTS experiences (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  type TEXT NOT NULL CHECK(type IN ('effective', 'ineffective')),
  domain TEXT NOT NULL CHECK(domain IN ('Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision')),
  situation TEXT NOT NULL,
  approach TEXT NOT NULL,
  outcome TEXT NOT NULL,
  reasoning TEXT NOT NULL,
  alternative TEXT,
  confidence REAL CHECK(confidence BETWEEN 0 AND 1),
  revision_of INTEGER,
  contradicts TEXT,
  supports TEXT,
  context TEXT,
  assumptions TEXT,
  limitations TEXT,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (revision_of) REFERENCES experiences(id)
);
CREATE INDEX IF NOT EXISTS idx_domain_type ON experiences(domain, type);
CREATE INDEX IF NOT EXISTS idx_created ON experiences(created_at DESC);
CREATE VIRTUAL TABLE IF NOT EXISTS experiences_fts USING fts5(
  situation, approach, outcome, reasoning, alternative, context, assumptions, limitations,
  content='experiences',
  content_rowid='id'
);
CREATE TRIGGER IF NOT EXISTS experiences_ai AFTER INSERT ON experiences BEGIN
  INSERT INTO experiences_fts(rowid, situation, approach, outcome, reasoning, alternative, context, assumptions, limitations)
  VALUES (new.id, new.situation, new.approach, new.outcome, new.reasoning, new.alternative, new.context, new.assumptions, new.limitations);
END;
CREATE TRIGGER IF NOT EXISTS experiences_ad AFTER DELETE ON experiences BEGIN
  DELETE FROM experiences_fts WHERE rowid = old.id;
END;
CREATE TRIGGER IF NOT EXISTS experiences_au AFTER UPDATE ON experiences BEGIN
  UPDATE experiences_fts SET situation=new.situation, approach=new.approach,
    outcome=new.outcome, reasoning=new.reasoning, alternative=new.alternative,
    context=new.context, assumptions=new.assumptions, limitations=new.limitations
  WHERE rowid = new.id;
END;
CREATE TABLE IF NOT EXISTS pattern_acceptance (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  domain TEXT NOT NULL,
  pattern_signature TEXT NOT NULL,
  presented_count INTEGER DEFAULT 0,
  accepted_count INTEGER DEFAULT 0,
  rejected_count INTEGER DEFAULT 0,
  acceptance_rate REAL GENERATED ALWAYS AS (
    CAST(accepted_count AS REAL) / NULLIF(presented_count, 0)
  ) STORED,
  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  UNIQUE(domain, pattern_signature)
);
CREATE INDEX IF NOT EXISTS idx_pattern_acceptance_domain ON pattern_acceptance(domain);
CREATE INDEX IF NOT EXISTS idx_pattern_acceptance_rate ON pattern_acceptance(acceptance_rate DESC);
CREATE TABLE IF NOT EXISTS protocol_sessions (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  session_id TEXT NOT NULL UNIQUE,
  started_at TEXT DEFAULT CURRENT_TIMESTAMP,
  last_activity TEXT DEFAULT CURRENT_TIMESTAMP,
  informed_reasoning_phases TEXT,
  protocol_steps_completed TEXT,
  checklist_items_verified TEXT,
  plan_approved BOOLEAN DEFAULT 0,
  plan_content TEXT,
  confidence_level TEXT,
  scope_boundaries TEXT,
  session_metadata TEXT
);
CREATE INDEX IF NOT EXISTS idx_session_id ON protocol_sessions(session_id);
CREATE INDEX IF NOT EXISTS idx_last_activity ON protocol_sessions(last_activity DESC);
CREATE TABLE IF NOT EXISTS protocol_events (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  session_id TEXT NOT NULL,
  event_type TEXT NOT NULL,
  event_data TEXT,
  timestamp TEXT DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (session_id) REFERENCES protocol_sessions(session_id)
);
CREATE INDEX IF NOT EXISTS idx_events_session ON protocol_events(session_id, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_events_type ON protocol_events(event_type);
`;
// Initialize database with schema
function initDatabase(dbPath) {
const dir = path.dirname(dbPath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
const db = new Database(dbPath);
// Enable trusted_schema to allow FTS5 virtual table triggers
db.exec('PRAGMA trusted_schema = ON;');
db.exec(SCHEMA);
return db;
}
// Load databases from environment variables
function loadDatabases() {
const dbs = {};
// Always load user-level database (from env or default)
const userDbPath = expandPath(
process.env.MEMORY_AUGMENTED_REASONING_USER_DB ||
path.join(os.homedir(), '.cursor', 'memory-augmented-reasoning.db')
);
dbs.user = initDatabase(userDbPath);
// Load project-level database if env var is set
if (process.env.MEMORY_AUGMENTED_REASONING_PROJECT_DB) {
const projectDbPath = expandPath(process.env.MEMORY_AUGMENTED_REASONING_PROJECT_DB);
dbs.project = initDatabase(projectDbPath);
}
return dbs;
}
// ============================================================================
// SESSION MANAGEMENT
// ============================================================================
/**
* Get Claude Code session ID with consistent priority chain
* Priority: CLAUDE_SESSION_ID env → persistent session file → generate new
* Matches hook implementations for consistency
*/
function getSessionId() {
// PRIORITY 1: Environment variable (set by Claude Code)
if (process.env.CLAUDE_SESSION_ID) {
return process.env.CLAUDE_SESSION_ID;
}
// PRIORITY 2: Persistent session file
const sessionFilePath = path.join(os.homedir(), '.protocol-current-session-id');
try {
if (fs.existsSync(sessionFilePath)) {
const sessionData = JSON.parse(fs.readFileSync(sessionFilePath, 'utf8'));
// Check if session file is stale (>8 hours old)
const age = Date.now() - sessionData.created;
if (age < 8 * 60 * 60 * 1000) {
return sessionData.session_id;
}
}
} catch (err) {
// Failed to read session file - continue to generate new
}
// PRIORITY 3: Generate new session ID and persist it
const newSessionId = `session-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
try {
fs.writeFileSync(sessionFilePath, JSON.stringify({
session_id: newSessionId,
created: Date.now(),
created_iso: new Date().toISOString()
}));
} catch (err) {
// Failed to persist - session ID still usable, just won't persist across restarts
}
return newSessionId;
}
/**
* Get or create protocol session
*/
function getOrCreateSession(db, sessionId) {
let session = db.prepare('SELECT * FROM protocol_sessions WHERE session_id = ?').get(sessionId);
if (!session) {
db.prepare(`
INSERT INTO protocol_sessions (session_id)
VALUES (?)
`).run(sessionId);
session = db.prepare('SELECT * FROM protocol_sessions WHERE session_id = ?').get(sessionId);
}
return session;
}
/**
* Update session activity timestamp
*/
function updateSessionActivity(db, sessionId) {
db.prepare(`
UPDATE protocol_sessions
SET last_activity = CURRENT_TIMESTAMP
WHERE session_id = ?
`).run(sessionId);
}
/**
* Record protocol event
*/
function recordProtocolEvent(db, sessionId, eventType, eventData) {
db.prepare(`
INSERT INTO protocol_events (session_id, event_type, event_data)
VALUES (?, ?, ?)
`).run(sessionId, eventType, JSON.stringify(eventData));
}
/**
* Update session phases
*/
function updateSessionPhases(db, sessionId, phase) {
const session = getOrCreateSession(db, sessionId);
const phases = session.informed_reasoning_phases ? JSON.parse(session.informed_reasoning_phases) : [];
if (!phases.includes(phase)) {
phases.push(phase);
db.prepare(`
UPDATE protocol_sessions
SET informed_reasoning_phases = ?
WHERE session_id = ?
`).run(JSON.stringify(phases), sessionId);
}
}
// ============================================================================
// DEDUPLICATION LOGIC
// ============================================================================
/**
* Calculate string similarity using Dice coefficient
* Returns value between 0 (no similarity) and 1 (identical)
*/
function calculateStringSimilarity(str1, str2) {
if (!str1 || !str2) return 0;
if (str1 === str2) return 1;
const normalize = (s) => s.toLowerCase().trim();
const a = normalize(str1);
const b = normalize(str2);
if (a.length < 2 || b.length < 2) return 0;
// Create bigram sets
const aBigrams = new Set();
const bBigrams = new Set();
for (let i = 0; i < a.length - 1; i++) {
aBigrams.add(a.substring(i, i + 2));
}
for (let i = 0; i < b.length - 1; i++) {
bBigrams.add(b.substring(i, i + 2));
}
// Count intersections
let intersection = 0;
for (const bigram of aBigrams) {
if (bBigrams.has(bigram)) intersection++;
}
// Dice coefficient
return (2 * intersection) / (aBigrams.size + bBigrams.size);
}
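// Illustrative example (hand-computed): calculateStringSimilarity('night', 'nacht')
// bigrams {ni, ig, gh, ht} vs {na, ac, ch, ht} share only {ht},
// so the Dice coefficient is (2 * 1) / (4 + 4) = 0.25.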
/**
* Compare two experiences to determine if they're similar
* Returns true if experiences are similar enough to be considered duplicates
*/
function areExperiencesSimilar(existing, newExperience, threshold = 0.7) {
// Must be same domain
if (existing.domain !== newExperience.domain) return false;
// Must be same type (effective/ineffective)
if (existing.type !== newExperience.type) return false;
// Calculate similarity scores for key fields
const situationSimilarity = calculateStringSimilarity(existing.situation, newExperience.situation);
const approachSimilarity = calculateStringSimilarity(existing.approach, newExperience.approach);
// Consider similar if situation AND approach are above threshold
return situationSimilarity >= threshold && approachSimilarity >= threshold;
}
/**
* Determine if new experience should update existing one
* Update if: higher confidence OR better reasoning OR more specific
*/
function shouldUpdateExperience(existing, newExperience) {
const newConfidence = newExperience.confidence || 0;
const existingConfidence = existing.confidence || 0;
// Update if significantly higher confidence
if (newConfidence > existingConfidence + 0.1) return true;
// Update if same/similar confidence but better reasoning
if (Math.abs(newConfidence - existingConfidence) < 0.1) {
const newReasoningLength = (newExperience.reasoning || '').length;
const existingReasoningLength = (existing.reasoning || '').length;
// Update if new reasoning is significantly more detailed
if (newReasoningLength > existingReasoningLength * 1.2) return true;
// Update if new experience has more context/assumptions/limitations
const newHasMore =
(newExperience.context ? 1 : 0) +
(newExperience.assumptions ? 1 : 0) +
(newExperience.limitations ? 1 : 0);
const existingHasMore =
(existing.context ? 1 : 0) +
(existing.assumptions ? 1 : 0) +
(existing.limitations ? 1 : 0);
if (newHasMore > existingHasMore) return true;
}
return false;
}
/**
* Find similar experiences in database
* Returns array of similar experiences
*/
function findSimilarExperiences(db, newExperience) {
// Query experiences with same domain and type
const candidates = db.prepare(`
SELECT * FROM experiences
WHERE domain = ? AND type = ?
ORDER BY created_at DESC
LIMIT 50
`).all(newExperience.domain, newExperience.type);
// Filter to truly similar experiences
const similar = [];
for (const candidate of candidates) {
if (areExperiencesSimilar(candidate, newExperience)) {
similar.push(candidate);
}
}
return similar;
}
/**
* Check for duplicates and determine action
* Returns: { action: 'save'|'update'|'skip', existingId?: number, reason: string }
*/
function checkDuplication(db, newExperience) {
const similarExperiences = findSimilarExperiences(db, newExperience);
if (similarExperiences.length === 0) {
return {
action: 'save',
reason: 'No similar experiences found'
};
}
// Find best matching existing experience
let bestMatch = similarExperiences[0];
let highestSimilarity = 0;
for (const similar of similarExperiences) {
const situationSim = calculateStringSimilarity(similar.situation, newExperience.situation);
const approachSim = calculateStringSimilarity(similar.approach, newExperience.approach);
const avgSim = (situationSim + approachSim) / 2;
if (avgSim > highestSimilarity) {
highestSimilarity = avgSim;
bestMatch = similar;
}
}
// If very similar (>90%), check if update is warranted
if (highestSimilarity > 0.9) {
if (shouldUpdateExperience(bestMatch, newExperience)) {
return {
action: 'update',
existingId: bestMatch.id,
reason: `Updating existing experience #${bestMatch.id} with improved information`
};
} else {
return {
action: 'skip',
existingId: bestMatch.id,
reason: `Duplicate of existing experience #${bestMatch.id} (similarity: ${(highestSimilarity * 100).toFixed(1)}%)`
};
}
}
// If moderately similar (70-90%), save as new but note relationship
if (highestSimilarity > 0.7) {
return {
action: 'save',
relatedId: bestMatch.id,
reason: `Similar to experience #${bestMatch.id} but distinct enough to save separately`
};
}
// Not similar enough
return {
action: 'save',
reason: 'Sufficiently distinct from existing experiences'
};
}
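// Illustrative decision map (avgSim = mean of situation/approach similarity):
//   avgSim > 0.9        -> 'update' when shouldUpdateExperience() approves, else 'skip'
//   0.7 < avgSim <= 0.9 -> 'save', recording relatedId for the near-match
//   no candidate        -> 'save' as a distinct experience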
// ============================================================================
// END DEDUPLICATION LOGIC
// ============================================================================
// ============================================================================
// AUTOMATIC SCOPE DETECTION (v3.1.0)
// ============================================================================
// Cache for project keywords (cleared on restart)
const scopeDetectionCache = {
projectKeywords: null,
discoveredAt: null
};
// Load automatic experience recording configuration
function loadAutoScopeConfig() {
const configPath = path.join(process.cwd(), '.protocol-enforcer.json');
if (fs.existsSync(configPath)) {
try {
const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
return config.automatic_experience_recording || {};
} catch (e) {
console.error('[memory-augmented-reasoning-mcp] Failed to load auto-scope config:', e.message);
}
}
// Default configuration
return {
scope: 'auto',
auto_discover_keywords: true,
project_keywords: [],
default_when_uncertain: 'project'
};
}
// Auto-discover project keywords from git, package.json, directory structure
function discoverProjectKeywords() {
const startTime = Date.now();
// Return cached if available and fresh
if (scopeDetectionCache.projectKeywords && scopeDetectionCache.discoveredAt) {
const age = Date.now() - scopeDetectionCache.discoveredAt;
if (age < 3600000) { // Cache for 1 hour
return scopeDetectionCache.projectKeywords;
}
}
const keywords = new Set();
// 1. Git repository name
try {
const gitConfigPath = path.join(process.cwd(), '.git', 'config');
if (fs.existsSync(gitConfigPath)) {
const gitConfig = fs.readFileSync(gitConfigPath, 'utf8');
const urlMatch = gitConfig.match(/url\s*=\s*.+\/([^\/\n]+?)(?:\.git)?$/m);
if (urlMatch) {
keywords.add(urlMatch[1].toLowerCase());
}
}
} catch (e) {
// Silent fail
}
// 2. Package.json name
try {
const packagePath = path.join(process.cwd(), 'package.json');
if (fs.existsSync(packagePath)) {
const pkg = JSON.parse(fs.readFileSync(packagePath, 'utf8'));
if (pkg.name) {
// Handle scoped packages like @org/package
const pkgName = pkg.name.startsWith('@') ? pkg.name.split('/')[1] : pkg.name;
keywords.add(pkgName.toLowerCase());
}
}
} catch (e) {
// Silent fail
}
// 3. Root directory name
try {
const rootDir = path.basename(process.cwd()).toLowerCase();
if (rootDir && rootDir !== '.' && rootDir !== '..') {
keywords.add(rootDir);
}
} catch (e) {
// Silent fail
}
// 4. Top-level directories (src, lib, components, etc.)
try {
const topLevel = fs.readdirSync(process.cwd(), { withFileTypes: true });
const relevantDirs = topLevel
.filter(d => d.isDirectory() && !d.name.startsWith('.') && d.name !== 'node_modules')
.map(d => d.name.toLowerCase())
.slice(0, 5); // Limit to first 5
relevantDirs.forEach(dir => keywords.add(dir));
} catch (e) {
// Silent fail
}
const result = Array.from(keywords);
// Cache results
scopeDetectionCache.projectKeywords = result;
scopeDetectionCache.discoveredAt = Date.now();
const elapsed = Date.now() - startTime;
console.error(`[memory-augmented-reasoning-mcp] Auto-discovered ${result.length} project keywords in ${elapsed}ms`);
return result;
}
// Detect scope based on experience content
function detectExperienceScope(experience, config) {
const startTime = Date.now();
// Combine all text fields for analysis
const combinedText = [
experience.situation,
experience.approach,
experience.outcome,
experience.reasoning
].join(' ').toLowerCase();
// 1. Manual config override (highest priority)
if (config.project_keywords && config.project_keywords.length > 0) {
for (const keyword of config.project_keywords) {
if (combinedText.includes(keyword.toLowerCase())) {
const elapsed = Date.now() - startTime;
console.error(`[memory-augmented-reasoning-mcp] Scope: 'project' (manual keyword: ${keyword}, ${elapsed}ms)`);
return 'project';
}
}
}
// 2. File path detection (project-specific paths)
const projectPathPatterns = [
'/src/', '/lib/', '/components/', '/modules/', '/packages/',
'/app/', '/pages/', '/api/', '/server/', '/client/',
'/test/', '/tests/', '/spec/', '/specs/',
'src/', 'lib/', 'components/', 'app/'
];
for (const pattern of projectPathPatterns) {
if (combinedText.includes(pattern)) {
const elapsed = Date.now() - startTime;
console.error(`[memory-augmented-reasoning-mcp] Scope: 'project' (file path: ${pattern}, ${elapsed}ms)`);
return 'project';
}
}
// 3. Auto-discovered project keywords
if (config.auto_discover_keywords !== false) {
const projectKeywords = discoverProjectKeywords();
for (const keyword of projectKeywords) {
if (combinedText.includes(keyword)) {
const elapsed = Date.now() - startTime;
console.error(`[memory-augmented-reasoning-mcp] Scope: 'project' (auto keyword: ${keyword}, ${elapsed}ms)`);
return 'project';
}
}
}
// 4. Generic tool/protocol patterns suggest user scope
const userScopePatterns = [
'read tool', 'write tool', 'edit tool', 'grep tool', 'glob tool', 'bash tool',
'mcp server', 'protocol', 'hook', 'workflow', 'enforcement'
];
let userMatches = 0;
for (const pattern of userScopePatterns) {
if (combinedText.includes(pattern)) {
userMatches++;
}
}
if (userMatches >= 2) {
const elapsed = Date.now() - startTime;
console.error(`[memory-augmented-reasoning-mcp] Scope: 'user' (generic patterns: ${userMatches}, ${elapsed}ms)`);
return 'user';
}
// 5. Fall back to configured default
const defaultScope = config.default_when_uncertain || 'project';
const elapsed = Date.now() - startTime;
console.error(`[memory-augmented-reasoning-mcp] Scope: '${defaultScope}' (default fallback, ${elapsed}ms)`);
return defaultScope;
}
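// Illustrative walkthrough (assuming no manual keyword matches at step 1):
// an experience mentioning 'src/components' is tagged 'project' at step 2 (file path);
// one mentioning both 'mcp server' and 'hook' hits two generic patterns at step 4 -> 'user'.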
// ============================================================================
// END AUTOMATIC SCOPE DETECTION
// ============================================================================
// Record experience to specified database
function recordExperience(dbs, params) {
// Validate required parameters upfront
const required = ['type', 'domain', 'situation', 'approach', 'outcome', 'reasoning'];
const missing = required.filter(field => !params[field]);
if (missing.length > 0) {
throw new ValidationError(
`Invalid params: missing ${missing.join(', ')}`,
`Missing required parameter: ${missing.join(', ')}\n` +
`All required fields: type, domain, situation, approach, outcome, reasoning\n` +
`Example: { type: "effective", domain: "Tools", situation: "...", approach: "...", outcome: "...", reasoning: "..." }`
);
}
// Validate enum values
const validTypes = ['effective', 'ineffective'];
if (!validTypes.includes(params.type)) {
throw new ValidationError(
`Invalid params: type must be "effective" or "ineffective"`,
`Invalid type: "${params.type}"\n` +
`Valid values: "effective", "ineffective"\n` +
`Example: { type: "effective", domain: "Tools", situation: "...", approach: "...", outcome: "...", reasoning: "..." }`
);
}
const validDomains = ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'];
if (!validDomains.includes(params.domain)) {
throw new ValidationError(
`Invalid params: domain must be one of: ${validDomains.join(', ')}`,
`Invalid domain: "${params.domain}"\n` +
`Valid values: ${validDomains.join(', ')}\n` +
`Example: { type: "effective", domain: "Tools", situation: "...", approach: "...", outcome: "...", reasoning: "..." }`
);
}
// Automatic scope detection (v3.1.0)
let scope;
if (params.scope && params.scope !== 'auto') {
// Explicit scope provided (not "auto")
scope = params.scope;
} else {
// Scope is "auto" or not provided - detect automatically
const config = loadAutoScopeConfig();
scope = detectExperienceScope(params, config);
}
const db = dbs[scope];
if (!db) {
throw new Error(`Database scope '${scope}' not configured`);
}
// Check for duplicates
const duplicationCheck = checkDuplication(db, params);
// Handle skip action
if (duplicationCheck.action === 'skip') {
// TEACH GATE: Clear recording requirement tokens
try {
const homeDir = os.homedir();
const files = fs.readdirSync(homeDir);
const recordingTokens = files.filter(f => f.startsWith('.protocol-recording-required-'));
for (const file of recordingTokens) {
try {
fs.unlinkSync(path.join(homeDir, file));
} catch (e) {
// Silent fail
}
}
} catch (e) {
// Silent fail
}
return {
success: true,
action: 'skip',
id: duplicationCheck.existingId,
scope: scope,
message: duplicationCheck.reason
};
}
// Handle update action
if (duplicationCheck.action === 'update') {
const updateStmt = db.prepare(`
UPDATE experiences
SET situation = ?, approach = ?, outcome = ?, reasoning = ?,
alternative = ?, confidence = ?, context = ?, assumptions = ?, limitations = ?
WHERE id = ?
`);
// Note: revision_of is intentionally left untouched here; setting it to the
// row's own id would make the experience a revision of itself.
updateStmt.run(
params.situation,
params.approach,
params.outcome,
params.reasoning,
params.alternative || null,
params.confidence || null,
params.context || null,
params.assumptions || null,
params.limitations || null,
duplicationCheck.existingId
);
// TEACH GATE: Clear recording requirement tokens
try {
const homeDir = os.homedir();
const files = fs.readdirSync(homeDir);
const recordingTokens = files.filter(f => f.startsWith('.protocol-recording-required-'));
for (const file of recordingTokens) {
try {
fs.unlinkSync(path.join(homeDir, file));
} catch (e) {
// Silent fail
}
}
} catch (e) {
// Silent fail
}
return {
success: true,
action: 'update',
id: duplicationCheck.existingId,
scope: scope,
message: duplicationCheck.reason
};
}
// Handle save action (new experience)
const insertStmt = db.prepare(`
INSERT INTO experiences (
type, domain, situation, approach, outcome, reasoning, alternative,
confidence, revision_of, contradicts, supports, context, assumptions, limitations
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);
const result = insertStmt.run(
params.type,
params.domain,
params.situation,
params.approach,
params.outcome,
params.reasoning,
params.alternative || null,
params.confidence || null,
duplicationCheck.relatedId || params.revision_of || null,
params.contradicts || null,
params.supports || null,
params.context || null,
params.assumptions || null,
params.limitations || null
);
// TEACH GATE: Clear recording requirement tokens for all sessions
try {
const homeDir = os.homedir();
const files = fs.readdirSync(homeDir);
const recordingTokens = files.filter(f => f.startsWith('.protocol-recording-required-'));
for (const file of recordingTokens) {
try {
fs.unlinkSync(path.join(homeDir, file));
} catch (e) {
// Silent fail - token cleanup is not critical
}
}
} catch (e) {
// Silent fail - directory read errors are not critical
}
return {
success: true,
action: 'save',
id: result.lastInsertRowid,
scope: scope,
message: duplicationCheck.reason,
relatedTo: duplicationCheck.relatedId || null
};
}
// Query with FTS5 full-text search
function queryWithFTS(db, params, source) {
// better-sqlite3 rejects `undefined` bindings, so normalize optional filters to null
const domain = params.domain ?? null;
const type = params.type ?? null;
if (params.query) {
try {
// Split query into terms, quote each individually, join with OR
// This enables partial matching with BM25 relevance ranking
const terms = params.query.trim().split(/\s+/)
.map(term => `"${term.replace(/"/g, '""')}"`)
.join(' OR ');
// Use FTS5 for text search with BM25 ranking (lower bm25() = more relevant)
return db.prepare(`
SELECT l.*, ? as source FROM experiences l
JOIN experiences_fts ON l.id = experiences_fts.rowid
WHERE experiences_fts MATCH ?
AND (? IS NULL OR l.domain = ?)
AND (? IS NULL OR l.type = ?)
ORDER BY bm25(experiences_fts)
`).all(source, terms, domain, domain, type, type);
} catch (error) {
// If the FTS5 query fails (invalid syntax), fall back to a LIKE search
console.error('[memory-augmented-reasoning-mcp] FTS5 query failed, falling back to LIKE:', error.message);
const likePattern = `%${params.query}%`;
return db.prepare(`
SELECT *, ? as source FROM experiences
WHERE (situation LIKE ? OR approach LIKE ? OR outcome LIKE ? OR reasoning LIKE ?)
AND (? IS NULL OR domain = ?)
AND (? IS NULL OR type = ?)
ORDER BY created_at DESC
`).all(source, likePattern, likePattern, likePattern, likePattern, domain, domain, type, type);
}
} else {
// Direct query without FTS
return db.prepare(`
SELECT *, ? as source FROM experiences
WHERE (? IS NULL OR domain = ?)
AND (? IS NULL OR type = ?)
ORDER BY created_at DESC
`).all(source, domain, domain, type, type);
}
}
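// Illustrative example: params.query = 'fts5 search broken' becomes the MATCH
// expression "fts5" OR "search" OR "broken", so rows matching any term are
// returned, with the best bm25() matches ranked first.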
// Search experiences across both databases
function searchExperiences(dbs, params) {
// LEARN GATE: Clear search requirement and create completion token
const searchRequiredPath = path.join(os.homedir(), '.protocol-search-required-token');
if (fs.existsSync(searchRequiredPath)) {
try {
fs.unlinkSync(searchRequiredPath);
} catch (e) {
// Silent fail - token cleanup is not critical
}
}
// Create session-specific search-completed token (5-minute TTL)
const sessionId = getSessionId();
const searchCompletedPath = path.join(os.homedir(), `.protocol-search-completed-${sessionId}`);
const searchCompletedData = {
completed: new Date().toISOString(),
expires: new Date(Date.now() + 5 * 60 * 1000).toISOString(), // 5 minutes
session_id: sessionId
};
try {
fs.writeFileSync(searchCompletedPath, JSON.stringify(searchCompletedData, null, 2));
} catch (e) {
// Silent fail - token creation is not critical
}
const results = [];
// Search project database first (higher priority)
if (dbs.project) {
const projectResults = queryWithFTS(dbs.project, params, 'project');
results.push(...projectResults);
}
// Search user database
const userResults = queryWithFTS(dbs.user, params, 'user');
results.push(...userResults);
// Deduplicate by (domain, situation, approach)
const seen = new Set();
const deduped = [];
for (const result of results) {
const key = `${result.domain}|${result.situation}|${result.approach}`;
if (!seen.has(key)) {
seen.add(key);
deduped.push(result);
}
}
// Apply limit
const limit = params.limit || parseInt(process.env.LEARNING_LIBRARY_MAX_RESULTS || '50', 10);
return deduped.slice(0, limit);
}
// Export experiences to JSON or Markdown
function exportExperiences(dbs, params) {
const scope = params.scope || 'user';
const db = dbs[scope];
if (!db) {
throw new Error(`Database scope '${scope}' not configured`);
}
let query = 'SELECT * FROM experiences WHERE 1=1';
const bindings = [];
if (params.domain) {
query += ' AND domain = ?';
bindings.push(params.domain);
}
if (params.type) {
query += ' AND type = ?';
bindings.push(params.type);
}
query += ' ORDER BY created_at DESC';
const experiences = db.prepare(query).all(...bindings);
const format = params.format || 'json';
const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
const filename = params.filename || `experiences_${scope}_${timestamp}.${format === 'json' ? 'json' : 'md'}`;
const exportDir = path.join(process.cwd(), '.cursor', 'experience-exports');
if (!fs.existsSync(exportDir)) {
fs.mkdirSync(exportDir, { recursive: true });
}
const exportPath = path.join(exportDir, filename);
if (format === 'json') {
fs.writeFileSync(exportPath, JSON.stringify(experiences, null, 2), 'utf8');
} else {
// Markdown format
let markdown = `# Memory-Augmented Reasoning Export\n\n`;
markdown += `**Scope:** ${scope}\n`;
markdown += `**Date:** ${new Date().toISOString()}\n`;
markdown += `**Count:** ${experiences.length}\n\n`;
for (const experience of experiences) {
markdown += `## ${experience.domain} - ${experience.type}\n\n`;
markdown += `**Situation:** ${experience.situation}\n\n`;
markdown += `**Approach:** ${experience.approach}\n\n`;
markdown += `**Outcome:** ${experience.outcome}\n\n`;
markdown += `**Reasoning:** ${experience.reasoning}\n\n`;
if (experience.alternative) {
markdown += `**Alternative:** ${experience.alternative}\n\n`;
}
markdown += `**Created:** ${experience.created_at}\n\n`;
markdown += `---\n\n`;
}
fs.writeFileSync(exportPath, markdown, 'utf8');
}
return {
success: true,
path: exportPath,
count: experiences.length,
format: format
};
}
// ============================================================================
// PHASE 2: ACTIVE REASONING - informed_reasoning Tool Implementation
// ============================================================================
// Helper: Estimate token count (rough approximation: 1 token ≈ 4 characters)
function estimateTokens(content) {
if (!content) return 0;
const str = typeof content === 'string' ? content : JSON.stringify(content);
return Math.ceil(str.length / 4);
}
// Helper: Extract protocol rules from local documentation
function extractProtocolRules(localDocs) {
const rules = [];
for (const doc of localDocs) {
// Handle both 'path' and 'source' fields for flexibility
const docPath = doc.path || doc.source || '';
if (docPath.includes('.cursor/rules/') || docPath.endsWith('CLAUDE.md')) {
// Extract rule-like patterns (lines starting with -, *, or numbered lists)
const lines = doc.content.split('\n');
for (const line of lines) {
if (/^[-*\d]+\.?\s+/.test(line.trim()) && line.length > 20) {
rules.push(line.trim());
}
}
}
}
return rules;
}
// Helper functions for user intent extraction
function extractGoal(problem) {
const normalized = problem.toLowerCase().trim();
// Detect informational requests
if (normalized.match(/^(what|how|why|when|where|which|explain|tell|show|list)/)) {
return `Understand ${normalized.split(/\s+/).slice(1, 5).join(' ')}`;
}
// Detect status/summary requests
if (normalized.match(/(standup|status|summary|progress|update)/)) {
return 'Brief status update';
}
// Detect action requests
if (normalized.match(/^(create|add|implement|build|fix|update|refactor|optimize)/)) {
const action = normalized.split(/\s+/)[0];
const target = normalized.split(/\s+/).slice(1, 4).join(' ');
return `${action.charAt(0).toUpperCase() + action.slice(1)} ${target}`;
}
// Default: extract first meaningful phrase
return problem.split(/[.?!]/)[0].trim().slice(0, 50);
}
function extractPriority(problem) {
const normalized = problem.toLowerCase();
if (normalized.match(/(quick|brief|short|fast|asap)/)) {
return 'Speed and clarity';
}
if (normalized.match(/(detailed|comprehensive|thorough|complete)/)) {
return 'Completeness and accuracy';
}
if (normalized.match(/(fix|bug|error|broken)/)) {
return 'Correctness and reliability';
}
if (normalized.match(/(optimize|performance|speed|slow)/)) {
return 'Efficiency';
}
return 'Balance of speed and quality';
}
function extractFormat(problem) {
const normalized = problem.toLowerCase();
if (normalized.match(/(standup|status)/)) {
return 'Git log + current work';
}
if (normalized.match(/^(list|show|display)/)) {
return 'Structured list';
}
if (normalized.match(/(explain|how|why)/)) {
return 'Detailed explanation';
}
if (normalized.match(/(summary|brief)/)) {
return 'Concise summary';
}
return 'Clear and actionable';
}
function categorizeRequest(problem) {
const normalized = problem.toLowerCase();
if (normalized.match(/(standup|status|progress|update)/)) {
return 'Recurring informational';
}
if (normalized.match(/(how|what|why|explain)/)) {
return 'Knowledge request';
}
if (normalized.match(/(create|add|implement|build)/)) {
return 'New feature';
}
if (normalized.match(/(fix|bug|error|broken)/)) {
return 'Bug fix';
}
if (normalized.match(/(refactor|optimize|improve)/)) {
return 'Code improvement';
}
return 'General task';
}
function generateFocusedQuery(problem) {
const normalized = problem.toLowerCase().trim();
// Extract key patterns (remove question marks, filler words)
const fillerWords = ['a', 'an', 'the', 'is', 'are', 'was', 'were', 'be', 'been',
'being', 'have', 'has', 'had', 'do', 'does', 'did', 'will',
'would', 'could', 'should', 'may', 'might', 'can', 'i', 'you',
'we', 'they', 'my', 'your', 'our', 'their', 'to', 'from', 'for'];
const words = normalized
.replace(/[?!.,;:]/g, '')
.split(/\s+/)
.filter(w => w.length > 2 && !fillerWords.includes(w));
// Take the first 3-5 meaningful words; this slice already caps the query length
const result = words.slice(0, 5).join(' ');
return result || problem.slice(0, 100);
}
function extractPatterns(problem) {
const normalized = problem.toLowerCase().trim();
const patterns = [];
// Extract domain-specific patterns
if (normalized.match(/(standup|status)/)) patterns.push('standup', 'status');
if (normalized.match(/(jira|ticket|issue)/)) patterns.push('jira', 'ticket');
if (normalized.match(/(git|commit|branch)/)) patterns.push('git', 'version-control');
if (normalized.match(/(test|testing|spec)/)) patterns.push('testing');
if (normalized.match(/(api|endpoint|request)/)) patterns.push('api');
if (normalized.match(/(database|query|sql)/)) patterns.push('database');
// Extract action patterns
if (normalized.match(/^(how|what|why|explain)/)) patterns.push('informational');
if (normalized.match(/^(create|add|implement)/)) patterns.push('creation');
if (normalized.match(/^(fix|bug|error)/)) patterns.push('debugging');
return patterns.length > 0 ? patterns : ['general'];
}
function categorizeDomain(problem) {
const normalized = problem.toLowerCase();
if (normalized.match(/(jira|ticket|issue|task)/)) return 'Tools';
if (normalized.match(/(standup|status|summary|report)/)) return 'Communication';
if (normalized.match(/(decide|choose|option|approach)/)) return 'Decision';
if (normalized.match(/(bug|error|fix|broken)/)) return 'Debugging';
if (normalized.match(/(how|why|explain|understand)/)) return 'Process';
return null; // Search all domains
}
function identifyKeyFactors(problem) {
const normalized = problem.toLowerCase();
const factors = [];
if (normalized.match(/(standup|status|brief|quick)/)) factors.push('recency', 'brevity');
if (normalized.match(/(detailed|comprehensive|thorough)/)) factors.push('completeness', 'accuracy');
if (normalized.match(/(similar|like|previous)/)) factors.push('similarity', 'recency');
if (normalized.match(/(best|optimal|recommended)/)) factors.push('effectiveness', 'confidence');
return factors.length > 0 ? factors : ['relevance'];
}
// Phase 1: ANALYZE - Identify relevant sources based on availableTools
function analyzePhase(params) {
const { problem, availableTools = [] } = params;
// Validate required parameters
if (!problem || typeof problem !== 'string' || problem.trim().length === 0) {
const briefMsg = !problem
? 'Invalid params: missing problem parameter'
: 'Invalid params: problem must be non-empty string';
const details = !problem
? 'Missing required parameter: problem\nThe problem parameter must be a non-empty string describing what you want to solve.\nExample: { phase: "analyze", problem: "How to optimize database queries" }'
: 'Invalid parameter: problem must be a non-empty string\nExample: { phase: "analyze", problem: "How to optimize database queries" }';
throw new ValidationError(briefMsg, details);
}
// Extract user intent
const userIntent = {
goal: extractGoal(problem),
priority: extractPriority(problem),
expectedFormat: extractFormat(problem),
context: categorizeRequest(problem)
};
const suggestedQueries = {};
// Always suggest experience library query - now with focused generation
if (availableTools.includes('memory-augmented-reasoning') || availableTools.length === 0) {
suggestedQueries.learningLibrary = {
query: generateFocusedQuery(problem),
patterns: extractPatterns(problem),
domain: categorizeDomain(problem),
type: null, // Both effective and ineffective
context: userIntent.context
};
}
// Suggest local file searches
suggestedQueries.localFiles = [
'CLAUDE.md',
'.cursor/rules/*.mdc',
'README.md'
];
// Suggest Context7 library documentation query
if (availableTools.includes('context7')) {
suggestedQueries.context7 = {
libraryName: 'auto-detect',
query: problem
};
}
// Suggest Jira ticket search
if (availableTools.includes('jira') || availableTools.includes('mcp-atlassian')) {
suggestedQueries.jira = {
// Escape embedded double quotes so user text cannot break out of the JQL string
jql: `text ~ "${problem.split(' ').slice(0, 3).join(' ').replace(/"/g, '\\"')}" ORDER BY updated DESC`
};
}
// Suggest Figma design search
if (availableTools.includes('figma') || availableTools.includes('figma-desktop')) {
suggestedQueries.figma = {
nodeId: null, // Request current selection
query: 'design specifications'
};
}
// Suggest GitHub code search
if (availableTools.includes('github')) {
suggestedQueries.github = {
query: problem.split(' ').slice(0, 5).join(' '),
scope: 'code'
};
}
// Suggest web search as fallback (use the current year so the query does not go stale)
suggestedQueries.webSearch = {
query: `${problem} best practices ${new Date().getFullYear()}`
};
return {
phase: 'analyze',
nextPhase: 'integrate',
problem: problem, // FIX #3: Store problem for later phases
userIntent, // NEW: User intent extraction
suggestedQueries,
keyFactors: identifyKeyFactors(problem), // NEW: Search priorities
estimatedComplexity: problem.length > 100 ? 'high' : problem.split(' ').length > 10 ? 'medium' : 'low'
};
}
// Phase 2: INTEGRATE - Synthesize context from multiple sources
function integratePhase(params, dbs, sessionData) {
let { problem, gatheredContext = {} } = params;
// FIX #1: Session-aware validation - Try to get problem from analyze phase if not provided
if (!problem && sessionData?.phases?.analyze?.artifacts?.problem) {
problem = sessionData.phases.analyze.artifacts.problem;
console.error('[integrate] Using problem from analyze phase');
}
// Strict validation with minimal error message
if (!problem || typeof problem !== 'string' || problem.trim().length === 0) {
throw new ValidationError(
'Invalid params: missing problem parameter',
'Missing required parameter: problem (integrate phase)\n' +
'Required: { phase: "integrate", session_id: "...", problem: "...", gatheredContext: {...} }'
);
}
// Phase 7.5: Enforce that search_experiences was called before integrate
if (sessionData?.phases?.analyze?.artifacts && !sessionData.phases.analyze.artifacts.searchCalled) {
const suggestedQuery = sessionData.phases.analyze.artifacts.suggestedQueries?.learningLibrary?.query || problem;
throw new ValidationError(
'Invalid params: search_experiences not called before integrate',
'Protocol violation: search_experiences must be called before integrate phase\n' +
'Required sequence:\n' +
' 1. informed_reasoning({ phase: "analyze", problem: "..." })\n' +
' 2. search_experiences({ query: "..." }) ← YOU SKIPPED THIS\n' +
' 3. informed_reasoning({ phase: "integrate", gatheredContext: { experiences: [...] } })\n' +
'\n' +
'Example:\n' +
` search_experiences({ query: "${suggestedQuery}" })\n` +
'\n' +
'Why: The system learns from past experiences. Skipping search means missing valuable patterns.'
);
}
// Validate gatheredContext type
if (gatheredContext && typeof gatheredContext !== 'object') {
throw new ValidationError(
'Invalid params: gatheredContext must be object',
'Invalid parameter type: gatheredContext must be object (integrate phase)\n' +
'Required: { phase: "integrate", session_id: "...", problem: "...", gatheredContext: {...} }'
);
}
// FIX #4: Validate gatheredContext quality
const experiences = gatheredContext.experiences || [];
const localDocs = gatheredContext.localDocs || [];
const mcpData = gatheredContext.mcpData || {};
// Phase 7.5: Warn if no experiences found (they searched but found nothing)
if (experiences.length === 0) {
console.error('[memory-augmented-reasoning-mcp] WARNING: No experiences found in search. Consider recording this as a new pattern after task completion.');
}
const qualityMetrics = {
experiencesCount: experiences.length,
effectiveCount: experiences.filter(e => e.type === 'effective').length,
ineffectiveCount: experiences.filter(e => e.type === 'ineffective').length,
localDocsCount: localDocs.length,
mcpDataKeys: Object.keys(mcpData).length,
totalItems: experiences.length + localDocs.length + Object.keys(mcpData).length
};
// Log quality metrics to stderr for agent awareness
console.error(`[integrate] Context quality: ${qualityMetrics.totalItems} items total ` +
`(${qualityMetrics.experiencesCount} experiences, ${qualityMetrics.localDocsCount} docs, ` +
`${qualityMetrics.mcpDataKeys} MCP sources)`);
// Warn if context is empty or very minimal
if (qualityMetrics.totalItems === 0) {
console.error(
'\n⚠️ WARNING: Empty gatheredContext provided\n' +
' IMPACT: synthesizedContext will be minimal (likely < 50 chars)\n' +
' ROOT CAUSE: You did not execute any queries from analyze phase\n\n' +
' RECOMMENDED ACTIONS:\n' +
' 1. Call search_experiences(...) with queries from analyze phase\n' +
' 2. Use Read tool to load relevant project files\n' +
' 3. Query other MCP servers if applicable (jira, figma, context7)\n' +
' 4. Pass gathered data in gatheredContext parameter:\n' +
' gatheredContext: {\n' +
' experiences: [...], // from search_experiences results\n' +
' localDocs: [...], // from Read tool results\n' +
' mcpData: {...} // from other MCP servers\n' +
' }\n\n' +
' NOTE: Session validation requires synthesizedContext >= 50 chars.\n' +
' Empty context will cause validation failure.\n'
);
} else if (qualityMetrics.totalItems < 3) {
console.error(
`\n⚠️ WARNING: Minimal gatheredContext (only ${qualityMetrics.totalItems} item${qualityMetrics.totalItems === 1 ? '' : 's'})\n` +
' SUGGESTION: Execute more queries to improve context quality:\n' +
' - search_experiences(...) for relevant past experiences\n' +
' - Read tool for project documentation/code\n' +
' - MCP servers for external requirements\n'
);
} else if (qualityMetrics.experiencesCount === 0) {
console.error(
'\n💡 TIP: No experiences found in gatheredContext\n' +
' Consider calling search_experiences(...) to learn from past work\n'
);
}
const synthesis = {
summary: {},
text: '',
estimatedThoughts: 3,
priority: []
};
let tokenCount = 0;
const MAX_TOKENS = 20000;
// TIER 1: MANDATORY - Project rules and protocols
if (localDocs.length > 0) {
synthesis.summary.projectRules = extractProtocolRules(localDocs);
synthesis.priority.push('PROJECT_RULES');
tokenCount += estimateTokens(synthesis.summary.projectRules);
}
// TIER 2: HIGH PRIORITY - Effective experiences
const effective = experiences
.filter(l => l.type === 'effective')
.sort((a, b) => (b.confidence || 0) - (a.confidence || 0))
.slice(0, 10);
if (effective.length > 0 && tokenCount < MAX_TOKENS * 0.5) {
synthesis.summary.effectiveExperiences = effective.map(l => ({
id: l.id,
approach: l.approach,
outcome: l.outcome,
confidence: l.confidence,
context: l.context
}));
synthesis.priority.push('EFFECTIVE_LEARNINGS');
tokenCount += estimateTokens(synthesis.summary.effectiveExperiences);
}
// TIER 2: Anti-patterns
const ineffective = experiences.filter(l => l.type === 'ineffective');
if (ineffective.length > 0 && tokenCount < MAX_TOKENS * 0.6) {
synthesis.summary.antiPatterns = ineffective.map(l => ({
id: l.id,
avoidApproach: l.approach,
useInstead: l.alternative,
reasoning: l.reasoning
}));
synthesis.priority.push('ANTI_PATTERNS');
tokenCount += estimateTokens(synthesis.summary.antiPatterns);
}
// TIER 3: External requirements (Jira, Figma, etc.)
if (mcpData.jira && tokenCount < MAX_TOKENS * 0.75) {
synthesis.summary.requirements = {
summary: mcpData.jira.summary || mcpData.jira.fields?.summary,
description: mcpData.jira.description || mcpData.jira.fields?.description
};
synthesis.priority.push('JIRA_REQUIREMENTS');
tokenCount += estimateTokens(synthesis.summary.requirements);
}
if (mcpData.figma && tokenCount < MAX_TOKENS * 0.8) {
synthesis.summary.designSpecs = mcpData.figma;
synthesis.priority.push('FIGMA_DESIGNS');
tokenCount += estimateTokens(synthesis.summary.designSpecs);
}
// TIER 4: Truncated summaries (library docs, code examples, web)
if (mcpData.context7 && tokenCount < MAX_TOKENS * 0.9) {
synthesis.summary.libraryDocs = {
library: mcpData.context7.libraryName || 'auto-detected',
summary: mcpData.context7.docs ? String(mcpData.context7.docs).slice(0, 500) : 'N/A'
};
tokenCount += estimateTokens(synthesis.summary.libraryDocs);
}
// Generate synthesized text
let text = `# Context for: ${problem}\n\n`;
if (synthesis.summary.projectRules && synthesis.summary.projectRules.length > 0) {
text += `## Project Rules (MANDATORY)\n${synthesis.summary.projectRules.slice(0, 5).join('\n')}\n\n`;
}
if (synthesis.summary.effectiveExperiences && synthesis.summary.effectiveExperiences.length > 0) {
text += `## Effective Patterns (from past work)\n`;
for (const experience of synthesis.summary.effectiveExperiences.slice(0, 3)) {
text += `- **Approach:** ${experience.approach}\n`;
text += ` **Outcome:** ${experience.outcome} (confidence: ${experience.confidence || 'N/A'})\n`;
}
text += '\n';
}
if (synthesis.summary.antiPatterns && synthesis.summary.antiPatterns.length > 0) {
text += `## Anti-Patterns (AVOID)\n`;
for (const antiPattern of synthesis.summary.antiPatterns) {
text += `- **Don't:** ${antiPattern.avoidApproach}\n`;
text += ` **Instead:** ${antiPattern.useInstead}\n`;
}
text += '\n';
}
// Guard against a missing summary: interpolating undefined would inject the
// literal string "undefined", which the stub-context check later flags
if (synthesis.summary.requirements && synthesis.summary.requirements.summary) {
text += `## Requirements\n${synthesis.summary.requirements.summary}\n\n`;
}
synthesis.text = text;
// Estimate thoughts needed based on complexity
const complexity =
(synthesis.summary.projectRules?.length || 0) +
(synthesis.summary.effectiveExperiences?.length || 0) +
(synthesis.summary.antiPatterns?.length || 0);
synthesis.estimatedThoughts = Math.max(3, Math.min(10, Math.ceil(complexity / 3)));
return {
phase: 'integrate',
nextPhase: 'reason',
contextSummary: synthesis.summary,
synthesizedContext: synthesis.text,
guidance: `⚠️ USER REQUIREMENT: informed_reasoning(phase='reason') MUST be called next.\nThe system needs all 4 phases to learn from this task properly.\nOther tools blocked until protocol complete.`,
estimatedThoughts: synthesis.estimatedThoughts,
tokenBudget: {
used: tokenCount,
remaining: MAX_TOKENS - tokenCount,
warning: tokenCount > MAX_TOKENS * 0.9 ? 'Approaching token limit' : null
}
};
}
// Phase 3: REASON - Evaluate thought against context
function reasonPhase(params, dbs) {
const {
problem,
thought,
thoughtNumber,
totalThoughts,
nextThoughtNeeded,
isRevision = false,
revisesThought = null,
branchFromThought = null,
branchId = null,
needsMoreThoughts = false
} = params;
// Validate required parameters
if (!thought || typeof thought !== 'string') {
throw new ValidationError(
'Invalid params: missing thought parameter',
'Missing required parameter: thought (reason phase)\n' +
'Required: { phase: "reason", session_id: "...", problem: "...", thought: "...", thoughtNumber: 1, totalThoughts: 3, nextThoughtNeeded: true }'
);
}
if (typeof thoughtNumber !== 'number' || thoughtNumber < 1) {
throw new ValidationError(
'Invalid params: thoughtNumber must be >= 1',
'Invalid parameter: thoughtNumber must be number >= 1 (reason phase)\n' +
'Required: { phase: "reason", ..., thoughtNumber: 1, totalThoughts: 3, nextThoughtNeeded: true }'
);
}
if (typeof totalThoughts !== 'number' || totalThoughts < 1) {
throw new ValidationError(
'Invalid params: totalThoughts must be >= 1',
'Invalid parameter: totalThoughts must be number >= 1 (reason phase)\n' +
'Required: { phase: "reason", ..., thoughtNumber: 1, totalThoughts: 3, nextThoughtNeeded: true }'
);
}
if (thoughtNumber > totalThoughts) {
throw new ValidationError(
'Invalid params: thoughtNumber exceeds totalThoughts',
'Invalid parameter: thoughtNumber cannot exceed totalThoughts (reason phase)\n' +
'Required: thoughtNumber <= totalThoughts'
);
}
if (typeof nextThoughtNeeded !== 'boolean') {
throw new ValidationError(
'Invalid params: nextThoughtNeeded must be boolean',
'Invalid parameter: nextThoughtNeeded must be boolean (reason phase)\n' +
'Required: { phase: "reason", ..., nextThoughtNeeded: true }'
);
}
// Evaluate thought against available context
const evaluation = {
alignment: 'good',
protocolCompliant: true,
patternMatch: true,
issues: []
};
// Check if thought mentions implementing without reading
if (thought.toLowerCase().includes('implement') &&
!thought.toLowerCase().includes('read') &&
!thought.toLowerCase().includes('check') &&
!thought.toLowerCase().includes('verify')) {
evaluation.issues.push('Implementing without reading existing code first');
evaluation.alignment = 'poor';
}
// Check if thought suggests adding features not requested
if (thought.toLowerCase().match(/\b(also|additionally|while we're at it)\b/)) {
evaluation.issues.push('Potential scope creep detected');
}
// Provide guidance
let guidance = '';
if (nextThoughtNeeded === false) {
if (evaluation.issues.length === 0) {
guidance = '⚠️ USER REQUIREMENT: record_experience MUST be called after Write/Edit operations.\nThe system learns from outcomes - recording is mandatory for institutional memory.\nFile operations now allowed, but must record results.';
} else {
guidance = `⚠️ USER REQUIREMENT: Issues detected: ${evaluation.issues.join(', ')}. Consider revising before proceeding.\nThe system learns from outcomes - recording is mandatory for institutional memory.\nFile operations now allowed, but must record results.`;
}
} else {
guidance = '⚠️ USER REQUIREMENT: Continue reasoning until complete.\nThe system needs all 4 phases to learn from this task properly.';
}
// Suggest revision if major issues
const revisionSuggestion = evaluation.issues.length > 0 && !isRevision ? {
shouldRevise: true,
thought: thoughtNumber,
reason: evaluation.issues[0]
} : undefined;
// Suggest branching for exploration
const branchSuggestion =
thoughtNumber > 2 &&
thought.toLowerCase().match(/\b(could|might|alternative|or)\b/) &&
!branchId ? {
shouldBranch: true,
reason: 'Multiple approaches detected',
branchOptions: ['option-a', 'option-b']
} : undefined;
return {
phase: 'reason',
thoughtNumber,
totalThoughts: needsMoreThoughts ? totalThoughts + 1 : totalThoughts,
nextPhase: nextThoughtNeeded ? 'reason' : 'record',
guidance,
suggestedActions: evaluation.issues.length > 0 ? ['Review effective experiences', 'Check protocol rules'] : [],
nextThoughtNeeded,
branches: branchId ? [branchId] : [],
thoughtHistoryLength: thoughtNumber,
evaluation,
branchSuggestion,
revisionSuggestion
};
}
// Phase 4: RECORD - Capture experience
function recordPhase(params, dbs) {
const { problem, finalConclusion, relatedExperiences = [] } = params;
// Validate required parameters
if (!problem || typeof problem !== 'string' || problem.trim().length === 0) {
const briefMsg = !problem
? 'Invalid params: missing problem parameter'
: 'Invalid params: problem must be non-empty string';
const details = !problem
? 'Missing required parameter: problem\nThe problem parameter must be a non-empty string.\nExample: { phase: "record", problem: "...", finalConclusion: "..." }'
: 'Invalid parameter: problem must be a non-empty string\nExample: { phase: "record", problem: "...", finalConclusion: "..." }';
throw new ValidationError(briefMsg, details);
}
if (!finalConclusion || typeof finalConclusion !== 'string' || finalConclusion.trim().length === 0) {
const briefMsg = !finalConclusion
? 'Invalid params: missing finalConclusion parameter'
: 'Invalid params: finalConclusion must be non-empty string';
const details = !finalConclusion
? 'Missing required parameter: finalConclusion\nThe finalConclusion parameter must be a non-empty string summarizing the result.\nExample: { phase: "record", problem: "...", finalConclusion: "..." }'
: 'Invalid parameter: finalConclusion must be a non-empty string\nExample: { phase: "record", problem: "...", finalConclusion: "..." }';
throw new ValidationError(briefMsg, details);
}
// Auto-capture experience from reasoning session
const experience = {
type: 'effective',
domain: 'Process',
situation: problem,
approach: finalConclusion,
outcome: 'Reasoning completed with context',
reasoning: 'Applied informed reasoning with multi-source context',
confidence: 0.7,
supports: relatedExperiences.length > 0 ? relatedExperiences.join(',') : null,
scope: 'user'
};
try {
const result = recordExperience(dbs, experience);
return {
phase: 'record',
recorded: true,
learningId: result.id,
experience,
guidance: `Saved ${experience.type} pattern to the experience library (ID: ${result.id}). Available for future reasoning sessions.`,
relatedExperiences
};
} catch (error) {
return {
phase: 'record',
recorded: false,
experience,
guidance: `Failed to record experience: ${error.message}`,
relatedExperiences
};
}
}
// Check protocol compliance for current session
function checkProtocolCompliance(dbs, params) {
const { session_id, hook, tool_name } = params;
const sessionId = session_id || getSessionId();
if (!dbs.user) {
return {
compliant: false,
reason: 'No user database available',
violations: ['User database not loaded']
};
}
const session = dbs.user.prepare('SELECT * FROM protocol_sessions WHERE session_id = ?').get(sessionId);
if (!session) {
return {
compliant: false,
reason: 'No protocol session found',
violations: ['Session not initialized - call informed_reasoning first']
};
}
// Check requirements based on hook
const violations = [];
let phases = [];
try {
phases = session.informed_reasoning_phases ? JSON.parse(session.informed_reasoning_phases) : [];
} catch (e) {
// Corrupt JSON in the phases column - treat as no phases completed
}
if (!phases.includes('analyze')) {
violations.push('Must call informed_reasoning (analyze phase) first');
}
// Check for stale session (> 5 minutes since last activity)
const lastActivity = new Date(session.last_activity);
const fiveMinutesAgo = new Date(Date.now() - 5 * 60 * 1000);
if (lastActivity < fiveMinutesAgo) {
violations.push('Session stale - must call informed_reasoning again');
}
return {
compliant: violations.length === 0,
reason: violations.length === 0 ? 'All requirements met' : 'Protocol violations detected',
violations,
session: {
id: sessionId,
phases_completed: phases,
last_activity: session.last_activity
}
};
}
// Verify all artifacts meet quality standards (Phase 2)
function verifyAllArtifacts(sessionData) {
const { phases } = sessionData;
const issues = [];
// Check analyze phase artifacts
if (phases.analyze && phases.analyze.completed) {
const artifacts = phases.analyze.artifacts || {};
const suggestedQueries = artifacts.suggestedQueries || {};
// Validate suggested queries have non-empty strings
const queryValues = Object.values(suggestedQueries);
if (queryValues.length === 0) {
issues.push('analyze: suggestedQueries is empty');
} else {
for (const [key, value] of Object.entries(suggestedQueries)) {
if (typeof value === 'string') {
if (value.length < 3) {
issues.push(`analyze: suggestedQueries.${key} too short (${value.length} chars, min 3)`);
}
} else if (typeof value === 'object' && value !== null) {
if (value.query && value.query.length < 3) {
issues.push(`analyze: suggestedQueries.${key}.query too short`);
}
}
}
}
}
// Check integrate phase artifacts
if (phases.integrate && phases.integrate.completed) {
const artifacts = phases.integrate.artifacts || {};
const synthesizedContext = artifacts.synthesizedContext;
if (!synthesizedContext || typeof synthesizedContext !== 'string') {
issues.push('integrate: synthesizedContext missing or not a string');
} else if (synthesizedContext.length < 50) {
// FIX #2: Enhanced validation messages - Detect common failure patterns
const isStubOnly = synthesizedContext.includes('undefined') ||
/^#\s*Context\s+for:\s*\w*\s*\n\n$/.test(synthesizedContext);
if (isStubOnly) {
// Stub-only context indicates missing problem parameter
issues.push(
`integrate: synthesizedContext is stub only (${synthesizedContext.length} chars).\n` +
' ROOT CAUSE: Missing or undefined "problem" parameter in integrate phase.\n' +
' FIX: Pass problem parameter: informed_reasoning({ phase: "integrate", problem: "...", ... })\n' +
' NOTE: Problem should match what you provided in analyze phase.'
);
} else {
// Short but not stub - likely empty gatheredContext
issues.push(
`integrate: synthesizedContext too short (${synthesizedContext.length} chars, min 50).\n` +
' ROOT CAUSE: Empty or minimal gatheredContext provided.\n' +
' FIX: Execute suggested queries from analyze phase:\n' +
' - search_experiences(...)\n' +
' - Read relevant files\n' +
' - Query other MCP servers\n' +
' Then pass results in gatheredContext parameter.'
);
}
}
}
// Check reason phase artifacts
if (phases.reason && phases.reason.completed) {
const artifacts = phases.reason.artifacts || {};
const thoughts = artifacts.thoughts;
if (!Array.isArray(thoughts) || thoughts.length === 0) {
issues.push(
'reason: thoughts array is empty or missing.\n' +
' ROOT CAUSE: Called reason phase but no thoughts were recorded.\n' +
' FIX: Provide thought parameter: informed_reasoning({ phase: "reason", thought: "...", ... })'
);
} else {
for (let i = 0; i < thoughts.length; i++) {
const thought = thoughts[i];
if (!thought.content || thought.content.length < 10) {
issues.push(
`reason: thought[${i}].content too short or missing (${thought.content?.length || 0} chars, min 10).\n` +
' FIX: Provide substantive reasoning in thought parameter.'
);
}
if (!thought.evaluation) {
issues.push(`reason: thought[${i}].evaluation missing (internal error)`);
}
}
}
}
// Check record phase artifacts
if (phases.record && phases.record.completed) {
const artifacts = phases.record.artifacts || {};
const finalConclusion = artifacts.finalConclusion;
if (!finalConclusion || typeof finalConclusion !== 'string') {
issues.push(
'record: finalConclusion missing or not a string.\n' +
' ROOT CAUSE: Called record phase without finalConclusion parameter.\n' +
' FIX: Provide finalConclusion: informed_reasoning({ phase: "record", finalConclusion: "...", ... })'
);
} else if (finalConclusion.length < 20) {
issues.push(
`record: finalConclusion too short (${finalConclusion.length} chars, min 20).\n` +
' ROOT CAUSE: Conclusion not comprehensive enough.\n' +
' FIX: Provide detailed summary of reasoning and outcomes.'
);
}
}
return {
valid: issues.length === 0,
issues: issues
};
}
// Main handler for informed_reasoning tool
function informedReasoning(dbs, params) {
const { phase } = params;
// Clean up stale search tokens (analyze phase only)
if (phase === 'analyze') {
const searchTokenPath = path.join(os.homedir(), '.protocol-search-required-token');
if (fs.existsSync(searchTokenPath)) {
try {
const token = JSON.parse(fs.readFileSync(searchTokenPath, 'utf8'));
const age = Date.now() - token.timestamp;
// Remove tokens older than 5 minutes
if (age > 300000) {
fs.unlinkSync(searchTokenPath);
}
} catch (e) {
// Invalid token - remove it
try {
fs.unlinkSync(searchTokenPath);
} catch (e2) {
// Silent fail
}
}
}
// PHASE 2: Clean up expired session files (analyze phase only)
const homeDir = os.homedir();
try {
const files = fs.readdirSync(homeDir);
const now = Date.now();
for (const file of files) {
if (file.startsWith('.protocol-session-') && file.endsWith('.json')) {
const sessionPath = path.join(homeDir, file);
try {
const sessionData = JSON.parse(fs.readFileSync(sessionPath, 'utf8'));
// Remove if expired
if (sessionData.expires_at && sessionData.expires_at < now) {
fs.unlinkSync(sessionPath);
// Also clean up from memory
if (sessionData.session_id) {
state.sessionTokens.delete(sessionData.session_id);
state.sessions.delete(sessionData.session_id);
}
}
} catch (e) {
// Invalid or corrupt session file - remove it
try {
fs.unlinkSync(sessionPath);
} catch (e2) {
// Silent fail
}
}
}
}
} catch (e) {
// Silent fail if we can't read home directory
}
}
// ============================================================================
// ENHANCED: Phase Order Enforcement (Path B Implementation)
// ============================================================================
const sessionId = params.session_id || getSessionId();
// Load existing session from file if it exists (persistence across restarts)
const sessionFile = path.join(os.homedir(), `.protocol-session-${sessionId}.json`);
let sessionData = state.sessions.get(sessionId);
if (!sessionData && fs.existsSync(sessionFile)) {
try {
sessionData = JSON.parse(fs.readFileSync(sessionFile, 'utf8'));
// Restore to memory
state.sessions.set(sessionId, sessionData);
} catch (e) {
// Invalid session file - will create new session below
sessionData = null;
}
}
// Get or create session data
if (!sessionData) {
const now = Date.now();
sessionData = {
session_id: sessionId,
created_at: now,
expires_at: now + state.sessionTokenTimeout,
version: '2.1',
phases: {
analyze: { completed: false },
integrate: { completed: false },
reason: { completed: false },
record: { completed: false }
},
allPhasesComplete: false,
artifactsVerified: false
};
state.sessions.set(sessionId, sessionData);
}
// ENFORCE PHASE ORDER: Check prerequisites before executing phase
if (phase === 'integrate' && !sessionData.phases.analyze.completed) {
throw new ValidationError(
'Phase order violation: Must complete analyze before integrate\n' +
'Required sequence: analyze → integrate → reason → record'
);
}
if (phase === 'reason' && !sessionData.phases.integrate.completed) {
const missing = [];
if (!sessionData.phases.analyze.completed) missing.push('analyze');
missing.push('integrate');
throw new ValidationError(
`Phase order violation: Must complete ${missing.join(', ')} before reason\n` +
'Required sequence: analyze → integrate → reason → record'
);
}
if (phase === 'record' && !sessionData.phases.reason.completed) {
const missing = [];
if (!sessionData.phases.analyze.completed) missing.push('analyze');
if (!sessionData.phases.integrate.completed) missing.push('integrate');
missing.push('reason');
throw new ValidationError(
`Phase order violation: Must complete ${missing.join(', ')} before record\n` +
'Required sequence: analyze → integrate → reason → record'
);
}
// Execute the phase logic
let result;
switch (phase) {
case 'analyze':
result = analyzePhase(params);
break;
case 'integrate':
result = integratePhase(params, dbs, sessionData); // FIX #1: Pass sessionData
break;
case 'reason':
result = reasonPhase(params, dbs);
break;
case 'record':
result = recordPhase(params, dbs);
break;
default:
throw new ValidationError(`Invalid phase: ${phase}\nRequired: phase must be "analyze", "integrate", "reason", or "record"`);
}
// ============================================================================
// PHASE 2: Enhanced Session Tracking with Phases Object
// ============================================================================
// Update phase completion and artifacts
const now = Date.now();
sessionData.phases[phase].completed = true;
sessionData.phases[phase].timestamp = now;
// Capture phase-specific artifacts
if (phase === 'analyze' && result.suggestedQueries) {
sessionData.phases.analyze.artifacts = {
problem: result.problem, // FIX #3: Store problem for use in later phases
suggestedQueries: result.suggestedQueries,
timestamp: now,
searchCalled: false // Phase 7.5: Track if search_experiences was called
};
} else if (phase === 'integrate' && result.synthesizedContext) {
sessionData.phases.integrate.artifacts = {
synthesizedContext: result.synthesizedContext,
contextSummary: result.contextSummary
};
} else if (phase === 'reason') {
// Accumulate thoughts across multiple reason calls
if (!sessionData.phases.reason.artifacts) {
sessionData.phases.reason.artifacts = { thoughts: [] };
}
if (params.thought) {
sessionData.phases.reason.artifacts.thoughts.push({
thoughtNumber: params.thoughtNumber,
content: params.thought,
evaluation: result.evaluation,
timestamp: now
});
}
} else if (phase === 'record' && params.finalConclusion) {
sessionData.phases.record.artifacts = {
finalConclusion: params.finalConclusion,
learningId: result.learningId
};
}
// Check if all phases are complete
const allComplete =
sessionData.phases.analyze.completed &&
sessionData.phases.integrate.completed &&
sessionData.phases.reason.completed &&
sessionData.phases.record.completed;
sessionData.allPhasesComplete = allComplete;
// Phase 7.10: Generate new session ID when session completes
// Completed sessions become immutable historical records, not active sessions
if (allComplete) {
const newSessionId = `session-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
const sessionFilePath = path.join(os.homedir(), '.protocol-current-session-id');
try {
fs.writeFileSync(sessionFilePath, JSON.stringify({
session_id: newSessionId,
created: Date.now(),
created_iso: new Date().toISOString(),
reason: 'previous_session_completed',
previous_session_id: sessionId
}));
console.error(`[memory-augmented-reasoning-mcp] Session completed, generated new session ID: ${newSessionId}`);
console.error(`[memory-augmented-reasoning-mcp] Previous session archived: ${sessionId}`);
} catch (err) {
console.error(`[memory-augmented-reasoning-mcp] Warning: Failed to generate new session ID: ${err.message}`);
}
}
// Verify artifacts if all phases complete
if (allComplete) {
const verification = verifyAllArtifacts(sessionData);
sessionData.artifactsVerified = verification.valid;
if (!verification.valid) {
sessionData.artifactIssues = verification.issues;
}
}
// ENHANCED: Write session JSON after EVERY phase (for persistence)
try {
const tempFile = `${sessionFile}.tmp`;
fs.writeFileSync(tempFile, JSON.stringify(sessionData, null, 2), 'utf8');
fs.renameSync(tempFile, sessionFile);
console.error(`[memory-augmented-reasoning-mcp] ✓ Session updated: phase ${phase} complete`);
} catch (err) {
console.error('[memory-augmented-reasoning-mcp] Failed to write session JSON:', err.message);
}
// ============================================================================
// CRITICAL: Token creation MUST happen regardless of database availability
// Issue: Token creation was inside if(dbs.user) block, causing tokens to not
// be created when user database doesn't exist, blocking all file operations
// Fix: Move token creation OUTSIDE database conditional block
// ============================================================================
// Write token file for hook verification (analyze phase only)
// Token expires after 60 seconds to support parallel tool execution (Issue #3 fix)
if (phase === 'analyze') {
// STEP 1: Clear requirement token (satisfies user_prompt_submit requirement)
const requirementFile = path.join(os.homedir(), '.protocol-informed-reasoning-required');
try {
if (fs.existsSync(requirementFile)) {
fs.unlinkSync(requirementFile);
console.error('[memory-augmented-reasoning-mcp] ✓ Requirement token cleared');
}
} catch (err) {
console.error('[memory-augmented-reasoning-mcp] ✗ Failed to clear requirement token:', err.message);
}
// STEP 2: Create informed reasoning token (60s TTL for tool verification)
const tokenFile = path.join(os.homedir(), '.protocol-informed-reasoning-token');
try {
const now = new Date();
const expires = new Date(now.getTime() + 60000); // 60 seconds
const tokenData = {
sessionId,
phase,
timestamp: now.toISOString(),
expires: expires.toISOString()
};
fs.writeFileSync(tokenFile, JSON.stringify(tokenData), 'utf8');
console.error(`[memory-augmented-reasoning-mcp] ✓ Token created: ${tokenFile}`);
console.error(`[memory-augmented-reasoning-mcp] Token data: ${JSON.stringify(tokenData)}`);
} catch (err) {
console.error('[memory-augmented-reasoning-mcp] ✗ FAILED to write token file:', err.message);
console.error('[memory-augmented-reasoning-mcp] Stack:', err.stack);
}
// Create search-required token if learning library search was suggested
if (result && result.suggestedQueries && result.suggestedQueries.learningLibrary) {
const searchTokenPath = path.join(os.homedir(), '.protocol-search-required-token');
try {
const searchToken = {
timestamp: Date.now(),
sessionId,
suggestedQuery: result.suggestedQueries.learningLibrary.query,
domain: result.suggestedQueries.learningLibrary.domain,
phase: 'analyze'
};
fs.writeFileSync(searchTokenPath, JSON.stringify(searchToken), 'utf8');
console.error('[memory-augmented-reasoning-mcp] ✓ Search token created');
} catch (err) {
console.error('[memory-augmented-reasoning-mcp] ✗ Failed to write search token:', err.message);
}
}
// STEP 3: Create session token (10min TTL for conversation optimization)
const sessionTokenFile = path.join(os.homedir(), '.protocol-session-token');
try {
const now = Date.now();
const expires = now + 600000; // 10 minutes
const sessionTokenData = {
sessionId,
timestamp: now,
expires: expires,
created: new Date().toISOString()
};
fs.writeFileSync(sessionTokenFile, JSON.stringify(sessionTokenData), 'utf8');
console.error('[memory-augmented-reasoning-mcp] ✓ Session token created (10min TTL)');
} catch (err) {
console.error('[memory-augmented-reasoning-mcp] ✗ Failed to create session token:', err.message);
}
}
// Track session activity (only if we have a user database)
if (dbs.user) {
try {
// Ensure session exists before recording activity
getOrCreateSession(dbs.user, sessionId);
updateSessionActivity(dbs.user, sessionId);
recordProtocolEvent(dbs.user, sessionId, 'phase_completed', {
phase,
timestamp: new Date().toISOString()
});
updateSessionPhases(dbs.user, sessionId, phase);
} catch (sessionError) {
// Don't fail the request if session tracking fails
console.error('[memory-augmented-reasoning-mcp] Session tracking error:', sessionError.message);
}
}
// ============================================================================
// BUG FIX: Session token file write removed - redundant and harmful
//
// ISSUE: Lines 1481-1548 (old code) were overwriting the enhanced session
// file with a simple 3-field structure after lines 1383-1391 correctly wrote
// the full enhanced structure.
//
// SOLUTION: Removed redundant session token creation code. The enhanced
// session is already written correctly at lines 1383-1391 with full phase
// tracking, artifacts, allPhasesComplete, and artifactsVerified fields.
//
// The old code created a simple tokenData with only:
// { session_id, created_at, expires_at }
//
// This was overwriting the enhanced structure from lines 1383-1391 that has:
// { session_id, created_at, expires_at, phases, allPhasesComplete,
// artifactsVerified, version, artifactIssues }
// ============================================================================
// ENHANCED: Include session tracking and next phase guidance in response
const nextPhase = sessionData.allPhasesComplete ? null :
!sessionData.phases.analyze.completed ? 'analyze' :
!sessionData.phases.integrate.completed ? 'integrate' :
!sessionData.phases.reason.completed ? 'reason' :
!sessionData.phases.record.completed ? 'record' : null;
// BUG FIX v2.1.3: Provide complete parameter guidance for each phase
let phaseGuidance = '';
if (sessionData.allPhasesComplete) {
phaseGuidance = "✅ All phases complete - session token valid for THIS REQUEST - you may now use Write/Edit/NotebookEdit for THIS TASK. NEW user requests require running protocol again.";
} else if (nextPhase === 'analyze') {
phaseGuidance = `Next: call informed_reasoning({ phase: "analyze", session_id: "${sessionId}", problem: "your problem", availableTools: ["Read", "Grep", ...] })`;
} else if (nextPhase === 'integrate') {
// Phase 7.5: After analyze completes, require search before integrate
const problem = sessionData.phases.analyze.artifacts?.problem || "your problem";
const searchQuery = sessionData.phases.analyze.artifacts?.suggestedQueries?.learningLibrary?.query || problem;
phaseGuidance = `⚠️ USER REQUIREMENT: BEFORE calling integrate, you MUST call search_experiences.\n\n` +
`REQUIRED STEPS:\n` +
` STEP 1: search_experiences({ query: "${searchQuery}" })\n` +
` STEP 2: informed_reasoning({ phase: "integrate", session_id: "${sessionId}", problem: "${problem}", gatheredContext: { experiences: [results] } })\n\n` +
`The system learns from past experiences - searching is MANDATORY, not optional.\n` +
`Skipping search will cause an error at integrate phase.`;
} else if (nextPhase === 'reason') {
const problem = sessionData.phases.analyze.artifacts?.problem || "your problem";
const estimatedThoughts = (result && result.estimatedThoughts) || 3; // Guard: result may be undefined here (see the guarded check in the analyze branch)
phaseGuidance = `Next: call informed_reasoning({ phase: "reason", session_id: "${sessionId}", problem: "${problem}", thought: "your reasoning step", thoughtNumber: 1, totalThoughts: ${estimatedThoughts}, nextThoughtNeeded: true })`;
} else if (nextPhase === 'record') {
const problem = sessionData.phases.analyze.artifacts?.problem || "your problem";
phaseGuidance = `Next: call informed_reasoning({ phase: "record", session_id: "${sessionId}", problem: "${problem}", finalConclusion: "your conclusion" })`;
}
return {
...result,
session_id: sessionId,
phase_completed: phase,
all_phases_complete: sessionData.allPhasesComplete,
artifacts_verified: sessionData.artifactsVerified,
next_phase: nextPhase,
guidance: phaseGuidance
};
}
// ============================================================================
// END PHASE 2 IMPLEMENTATION
// ============================================================================
// MCP Protocol implementation
async function main() {
// Handle --version flag
if (process.argv.includes('--version')) {
console.log(`memory-augmented-reasoning-mcp v${VERSION}`);
process.exit(0);
}
// Handle --check-compliance flag (CLI mode for hooks)
if (process.argv.includes('--check-compliance')) {
try {
// Read hook input from stdin
const stdin = fs.readFileSync(0, 'utf-8');
const hookInput = JSON.parse(stdin);
const sessionId = process.env.CLAUDE_SESSION_ID || hookInput.sessionId;
const userDbPath = expandPath(
process.env.MEMORY_AUGMENTED_REASONING_USER_DB ||
path.join(os.homedir(), '.cursor', 'memory-augmented-reasoning.db')
);
const db = initDatabase(userDbPath);
const session = db.prepare('SELECT * FROM protocol_sessions WHERE session_id = ?').get(sessionId);
if (!session) {
console.log(JSON.stringify({
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: "⛔ No protocol session - call informed_reasoning first"
}
}));
db.close();
process.exit(0);
}
const phases = session.informed_reasoning_phases ? JSON.parse(session.informed_reasoning_phases) : [];
const lastActivity = new Date(session.last_activity);
const fiveMinutesAgo = new Date(Date.now() - 5 * 60 * 1000);
if (!phases.includes('analyze') || lastActivity < fiveMinutesAgo) {
console.log(JSON.stringify({
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: "⛔ Must call informed_reasoning (analyze phase) first"
}
}));
} else {
console.log(JSON.stringify({
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "allow",
permissionDecisionReason: "✅ Protocol requirements met"
}
}));
}
db.close();
process.exit(0);
} catch (error) {
console.error('[memory-augmented-reasoning-mcp] CLI mode error:', error.message);
console.log(JSON.stringify({
hookSpecificOutput: {
hookEventName: "PreToolUse",
permissionDecision: "deny",
permissionDecisionReason: `⛔ Error checking compliance: ${error.message}`
}
}));
process.exit(1);
}
}
// Handle --record-last-session flag (CLI mode for automatic experience recording)
if (process.argv.includes('--record-last-session')) {
try {
// Read recording token created by PostToolUse hook
// NOTE: This is DIFFERENT from the protocol token consumed by PreToolUse
const recordingTokenFile = path.join(os.homedir(), '.protocol-session-recording-token');
// Extract context from token
let sessionId = process.env.CLAUDE_SESSION_ID;
let toolName = 'tool';
let filePath = 'unknown';
let timestamp = new Date().toISOString();
let cwd = process.cwd();
if (fs.existsSync(recordingTokenFile)) {
try {
const tokenData = JSON.parse(fs.readFileSync(recordingTokenFile, 'utf8'));
sessionId = tokenData.sessionId || sessionId;
toolName = tokenData.toolName || toolName;
timestamp = tokenData.timestamp || timestamp;
filePath = tokenData.context?.filePath || filePath;
cwd = tokenData.context?.cwd || cwd;
// Clean up the recording token after reading
fs.unlinkSync(recordingTokenFile);
} catch (e) {
console.error('[memory-augmented-reasoning-mcp] Failed to read recording token:', e.message);
}
}
if (!sessionId) {
console.error('[memory-augmented-reasoning-mcp] No session ID found (checked recording token and CLAUDE_SESSION_ID env var)');
process.exit(1);
}
// Load databases
const dbs = loadDatabases();
// Determine scope from config or use default
let scope = 'project';
const configPath = path.join(process.cwd(), '.protocol-enforcer.json');
if (fs.existsSync(configPath)) {
try {
const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
scope = config.automatic_experience_recording?.scope || 'project';
} catch (e) {
// Use default scope
}
}
// Generate context-rich experience with unique situation string
const fileDisplayName = filePath !== 'unknown' ? path.basename(filePath) : 'unknown file';
const experience = {
type: 'effective',
domain: 'Tools',
situation: `${toolName} operation on ${fileDisplayName} at ${timestamp}`,
approach: `Protocol flow: informed_reasoning → verify_protocol_compliance → authorize_file_operation → ${toolName}`,
outcome: `Successfully completed ${toolName} operation following protocol requirements`,
reasoning: 'Following protocol workflow with automatic experience recording after successful tool execution',
context: `session: ${sessionId}, file: ${filePath}, cwd: ${cwd}`,
confidence: 0.8,
scope: scope
};
// Record the experience
const result = recordExperience(dbs, experience);
console.log(`[memory-augmented-reasoning-mcp] Experience recorded automatically: ${result.id} (${result.action})`);
// Close databases
if (dbs.user) dbs.user.close();
if (dbs.project) dbs.project.close();
process.exit(0);
} catch (error) {
console.error('[memory-augmented-reasoning-mcp] Failed to record experience:', error.message);
process.exit(1);
}
}
// Load databases
let dbs;
try {
dbs = loadDatabases();
console.error('[memory-augmented-reasoning-mcp] Databases loaded successfully');
} catch (error) {
console.error('[memory-augmented-reasoning-mcp] FATAL: Failed to load databases:', error.message);
console.error('[memory-augmented-reasoning-mcp] Stack:', error.stack);
process.exit(1);
}
// Use readline for proper line-by-line stdin reading
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
terminal: false
});
rl.on('line', async (line) => {
try {
if (!line.trim()) return;
const request = JSON.parse(line);
await handleRequest(dbs, request);
} catch (error) {
// Attempt to recover a request ID so the error maps to the right message.
// If the line itself failed to parse, this re-parse will also fail and we
// fall back to -1.
let requestId = -1;
try {
const partialParse = JSON.parse(line);
if (partialParse && partialParse.id !== undefined) {
requestId = partialParse.id;
}
} catch (e) {
// Line is unparseable; keep the -1 fallback
}
console.error('[memory-augmented-reasoning-mcp] Parse error:', error.message);
sendError(requestId, -32700, 'Parse error', error.message);
}
});
// Graceful shutdown when STDIN closes (client disconnects)
rl.on('close', () => {
console.error('[memory-augmented-reasoning-mcp] STDIN closed, shutting down gracefully');
// Wait for stdout to flush before exiting to prevent truncating large responses
setImmediate(() => {
process.stdout.write('', () => {
process.exit(0);
});
});
});
// Handle termination signals
process.on('SIGTERM', () => {
console.error('[memory-augmented-reasoning-mcp] Received SIGTERM, shutting down');
process.exit(0);
});
process.on('SIGINT', () => {
console.error('[memory-augmented-reasoning-mcp] Received SIGINT, shutting down');
process.exit(0);
});
}
// Handle MCP requests
async function handleRequest(dbs, request) {
try {
const { id, method, params } = request;
// BUG FIX v2.1.4: Handle notifications (no id field) separately from requests
// Notifications should NEVER receive a response (neither success nor error)
// This fixes the "Received a response for an unknown message ID" error
const isNotification = (id === undefined || id === null);
// Handle notifications silently without sending any response
if (isNotification) {
if (method === 'notifications/initialized') {
// Client has finished initializing - handle silently
console.error('[memory-augmented-reasoning-mcp] Client initialized');
return; // No response for notifications
}
// For other unknown notifications, log but don't respond
console.error(`[memory-augmented-reasoning-mcp] Received notification: ${method}`);
return; // No response for notifications
}
// For requests (messages with an id), ensure we have a valid ID
const requestId = id;
switch (method) {
case 'initialize':
sendResponse(requestId, {
protocolVersion: '2024-11-05',
capabilities: {
tools: {}
},
serverInfo: {
name: 'memory-augmented-reasoning',
version: VERSION
}
});
break;
case 'tools/list':
sendResponse(requestId, {
tools: [
{
name: 'record_experience',
description: '📝 STEP 4 - TEACH: Record outcome for future sessions. USER REQUIREMENT: must be called after Write/Edit/NotebookEdit so outcomes become institutional memory. Documents what worked and what didn\'t. Scope defaults to "auto" (v3.1.0), which detects scope automatically from the experience content.',
inputSchema: {
type: 'object',
properties: {
scope: { type: 'string', enum: ['user', 'project', 'auto'], default: 'auto' },
type: { type: 'string', enum: ['effective', 'ineffective'] },
domain: { type: 'string', enum: ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'] },
situation: { type: 'string' },
approach: { type: 'string' },
outcome: { type: 'string' },
reasoning: { type: 'string' },
alternative: { type: 'string' },
confidence: { type: 'number', minimum: 0, maximum: 1 },
revision_of: { type: 'integer' },
contradicts: { type: 'string' },
supports: { type: 'string' },
context: { type: 'string' },
assumptions: { type: 'string' },
limitations: { type: 'string' }
},
required: ['type', 'domain', 'situation', 'approach', 'outcome', 'reasoning']
}
},
{
name: 'search_experiences',
description: '🔍 STEP 1 - LEARN: Search past experiences BEFORE starting work. USER REQUIREMENT: check past learnings first so the system builds on previous experience. REQUIRED by protocol enforcement before Write/Edit/NotebookEdit. Returns relevant learnings to inform the current task; always call this first when given a new task.',
inputSchema: {
type: 'object',
properties: {
query: { type: 'string' },
domain: { type: 'string', enum: ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'] },
type: { type: 'string', enum: ['effective', 'ineffective'] },
limit: { type: 'number', default: 50 }
}
}
},
{
name: 'export_experiences',
description: 'Export experiences to JSON or Markdown',
inputSchema: {
type: 'object',
properties: {
scope: { type: 'string', enum: ['user', 'project'], default: 'user' },
format: { type: 'string', enum: ['json', 'markdown'], default: 'json' },
domain: { type: 'string', enum: ['Tools', 'Protocol', 'Communication', 'Process', 'Debugging', 'Decision'] },
type: { type: 'string', enum: ['effective', 'ineffective'] },
filename: { type: 'string' }
}
}
},
{
name: 'informed_reasoning',
description: '🧠 STEP 2 - REASON: Complete the reasoning cycle that authorizes file operations. USER REQUIREMENT: all 4 phases are mandatory so the system learns from each task. REQUIRED by protocol enforcement before Write/Edit/NotebookEdit. Creates a REQUEST-SCOPED session token (60-min max); MUST be called for EACH new user request before file operations. The token authorizes Write/Edit/NotebookEdit for the current request\'s multi-tool workflow only. Call after searching experiences. IMPORTANT: if session_id is not provided, it is auto-generated from the CLAUDE_SESSION_ID environment variable or a persistent session identifier.',
inputSchema: {
type: 'object',
properties: {
phase: { type: 'string', enum: ['analyze', 'integrate', 'reason', 'record'] },
problem: { type: 'string' },
availableTools: { type: 'array', items: { type: 'string' } },
gatheredContext: {
type: 'object',
properties: {
experiences: { type: 'array' },
localDocs: { type: 'array' },
mcpData: { type: 'object' },
webResults: { type: 'array' }
}
},
thought: { type: 'string' },
thoughtNumber: { type: 'integer', minimum: 1 },
totalThoughts: { type: 'integer', minimum: 1 },
nextThoughtNeeded: { type: 'boolean' },
isRevision: { type: 'boolean' },
revisesThought: { type: 'integer', minimum: 1 },
branchFromThought: { type: 'integer', minimum: 1 },
branchId: { type: 'string' },
needsMoreThoughts: { type: 'boolean' },
finalConclusion: { type: 'string' },
relatedExperiences: { type: 'array', items: { type: 'integer' } },
session_id: {
type: 'string',
description: 'Optional: Claude Code session ID. If provided during analyze phase, creates a 60-minute session token that allows subsequent tools without repeated protocol flows. Reduces overhead from ~16 calls to ~4 calls per multi-tool workflow.'
}
},
required: ['phase']
}
},
{
name: 'check_protocol_compliance',
description: 'Check if protocol requirements are met for current session (used by hooks)',
inputSchema: {
type: 'object',
properties: {
session_id: { type: 'string', description: 'Session ID to check (optional, uses current session if not provided)' },
hook: { type: 'string', enum: ['pre_tool_use', 'user_prompt_submit'], description: 'Which hook is checking compliance' },
tool_name: { type: 'string', description: 'Tool being used (for pre_tool_use hook)' }
},
required: ['hook']
}
}
]
});
break;
case 'tools/call':
const toolName = params.name;
const toolParams = params.arguments || {};
let result;
switch (toolName) {
case 'record_experience':
result = recordExperience(dbs, toolParams);
break;
case 'search_experiences':
result = searchExperiences(dbs, toolParams);
// Phase 7.5: Track that search was called (if there's an active analyze session)
// FIX v3.3.1: Use getSessionId() instead of "most recent by mtime"
try {
const sessionId = getSessionId(); // Use existing function at line 209
const homeDir = os.homedir();
const sessionPath = path.join(homeDir, `.protocol-session-${sessionId}.json`);
if (fs.existsSync(sessionPath)) {
const sessionData = JSON.parse(fs.readFileSync(sessionPath, 'utf8'));
// Mark search as called if analyze phase is complete
if (sessionData.phases?.analyze?.completed && sessionData.phases.analyze.artifacts) {
sessionData.phases.analyze.artifacts.searchCalled = true;
fs.writeFileSync(sessionPath, JSON.stringify(sessionData, null, 2));
// Phase 7.8: Sync ONLY searchCalled flag in memory (don't replace entire session)
const memorySession = state.sessions.get(sessionId);
if (memorySession?.phases?.analyze?.artifacts) {
memorySession.phases.analyze.artifacts.searchCalled = true;
}
console.error('[memory-augmented-reasoning-mcp] search_experiences called, marked searchCalled=true for session:', sessionId);
}
}
} catch (err) {
// Don't fail if session tracking fails
console.error('[memory-augmented-reasoning-mcp] Warning: Could not track search call:', err.message);
}
break;
case 'export_experiences':
result = exportExperiences(dbs, toolParams);
break;
case 'informed_reasoning':
result = informedReasoning(dbs, toolParams);
break;
case 'check_protocol_compliance':
result = checkProtocolCompliance(dbs, toolParams);
break;
default:
throw new Error(`Unknown tool: ${toolName}`);
}
sendResponse(requestId, {
content: [
{
type: 'text',
text: JSON.stringify(result, null, 2)
}
]
});
break;
default:
sendError(requestId, -32601, 'Method not found', `Unknown method: ${method}`);
}
} catch (error) {
console.error('[memory-augmented-reasoning-mcp] Error handling request:', error.message);
console.error('[memory-augmented-reasoning-mcp] Stack:', error.stack);
console.error('[memory-augmented-reasoning-mcp] Request:', JSON.stringify(request));
const errorId = (request.id !== undefined && request.id !== null) ? request.id : -1;
// Distinguish parameter validation errors (-32602) from internal errors (-32603)
if (error instanceof ValidationError) {
// Phase 7.7: Use error.message for visible message, error.details for error.data
sendError(errorId, -32602, error.message, error.details);
} else {
sendError(errorId, -32603, 'Internal error', error.message);
}
}
}
// Send JSON-RPC response
function sendResponse(id, result) {
const response = {
jsonrpc: '2.0',
id: id,
result: result
};
console.error(`[memory-augmented-reasoning-mcp] Sending response for id=${id}, method=${result.protocolVersion ? 'initialize' : result.tools ? 'tools/list' : 'unknown'}`);
console.log(JSON.stringify(response));
}
// Send JSON-RPC error
function sendError(id, code, message, data) {
const response = {
jsonrpc: '2.0',
id: id,
error: {
code: code,
message: message,
data: data
}
};
console.error(`[memory-augmented-reasoning-mcp] Sending error for id=${id}, code=${code}, message=${message}`);
console.log(JSON.stringify(response));
}
// Export functions for CLI usage
module.exports = {
calculateStringSimilarity,
areExperiencesSimilar,
findSimilarExperiences,
shouldUpdateExperience,
checkDuplication,
initDatabase,
loadDatabases,
recordExperience
};
// Start server (only if run directly, not when imported)
if (require.main === module) {
main().catch(error => {
console.error('Fatal error:', error);
process.exit(1);
});
}
{
"name": "memory-augmented-reasoning-mcp",
"version": "3.3.6",
"description": "Memory-augmented reasoning system with persistent experience storage and multi-phase guided reasoning for AI agents",
"author": "Jason Lusk <[email protected]>",
"license": "MIT",
"main": "index.js",
"bin": {
"memory-augmented-reasoning": "./index.js"
},
"dependencies": {
"better-sqlite3": "^11.0.0"
},
"engines": {
"node": ">=14.0.0"
},
"keywords": [
"mcp",
"model-context-protocol",
"memory-augmented-reasoning",
"ai-assistant",
"knowledge-management",
"pattern-library"
]
}