/refactor-function - Automated function complexity reduction command for Claude Code

Claude Code Commands

Custom slash commands for mcp-memory-service development.

Available Commands

/refactor-function (PoC)

Automated function complexity reduction using multi-agent workflow.

Status: Proof of Concept (simulated agents)

Usage:

# Quick test
/refactor-function --dry-run

# Interactive (would work with real agents)
# 1. Select function in editor
# 2. Run: /refactor-function
# 3. Review changes
# 4. Apply if approved

Workflow:

Select Function → Baseline Analysis → Refactor → Validate → Review → Apply

Current Implementation (PoC):

  • ✅ CLI interface working
  • ✅ Workflow orchestration
  • ✅ User confirmation flow
  • ⚠️ Simulated agent responses (placeholder)
  • ❌ Not yet integrated with real agents (amp-bridge, code-quality-guard)

Next Steps for Production:

  1. Integrate with Task agent calls
  2. Read actual file content via Read tool
  3. Apply changes via Edit tool
  4. Create commits via Bash tool
  5. Add Claude Code settings.json hook

Proven Track Record:

  • Based on Issue #340 workflow (45.5% complexity reduction)
  • 2-3x faster than manual refactoring
  • 100% validation success rate

Installation

Commands live in .claude/commands/ and are marked executable:

# Make executable (already done)
chmod +x .claude/commands/refactor-function

# Test PoC
.claude/commands/refactor-function --dry-run

# With options
.claude/commands/refactor-function --target-complexity 6

Configuration (Future)

Add to Claude Code settings.json:

{
  "commands": {
    "refactor-function": {
      "path": ".claude/commands/refactor-function",
      "description": "Reduce function complexity automatically",
      "requiresSelection": true,
      "agents": ["amp-bridge", "code-quality-guard"]
    }
  }
}

Development

Command Structure

.claude/commands/
├── README.md                    # This file
├── refactor-function            # Executable Python script (PoC)
└── refactor-function.md         # Full specification

Adding New Commands

  1. Create executable script in .claude/commands/<name>
  2. Add specification in .claude/commands/<name>.md
  3. Update this README
  4. Test with --dry-run flag
  5. Document in commit message
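
A minimal skeleton for a new command might look like this (hypothetical /my-command; all names are illustrative):

#!/usr/bin/env python3
"""/my-command - one-line description (hypothetical skeleton)."""
import argparse
import sys


def main():
    parser = argparse.ArgumentParser(description="/my-command - one-line description")
    parser.add_argument('--dry-run', action='store_true',
                        help='Show plan without applying changes')
    args = parser.parse_args()

    if args.dry_run:
        print("[DRY RUN] No changes applied")
        return 0

    # ... command logic goes here ...
    return 0


if __name__ == '__main__':
    sys.exit(main())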

Testing

# Unit test individual functions (the hyphenated filename can't be imported
# with a plain `import`; load it via importlib, see the loader sketch below)
python3 -c "import importlib.util; ..."

# Integration test full workflow
.claude/commands/refactor-function --dry-run

# With real selection (future)
echo '{"file": "...", "function": "..."}' | .claude/commands/refactor-function

Related Documentation

  • Command Specification: refactor-function.md - Full design doc
  • Issue #340: Original workflow that inspired this command
  • Agent Docs: .claude/directives/agents.md - Agent usage guide

Future Commands (Ideas)

  • /complexity-report - Analyze file/module complexity
  • /extract-method - Manual Extract Method refactoring
  • /add-tests - Generate tests for function
  • /simplify-conditionals - Reduce boolean complexity
  • /remove-duplication - DRY principle enforcement

Contributing

New command ideas? Open an issue or PR with:

  1. Use case description
  2. Example workflow
  3. Expected input/output
  4. Agent requirements

Credits

Inspired by proven workflow from Issue #340:

  • 3 functions refactored in 2 hours
  • 45.5% total complexity reduction
  • Multi-agent orchestration (amp-bridge + code-quality-guard)
#!/usr/bin/env python3
"""
/refactor-function - Automated function complexity reduction

Production implementation using real Claude Code tools (Read, Edit, Bash, Task).

Usage in Claude Code:
1. Select a complex function in editor
2. Run: /refactor-function
3. Review proposed changes
4. Approve to apply

Command-line usage for testing:
    .claude/commands/refactor-function-prod --file path/to/file.py --function func_name
    .claude/commands/refactor-function-prod --dry-run
"""
import sys
import os
import json
import re
from pathlib import Path
from typing import Optional, Dict, List, Tuple

# NOTE: In real Claude Code environment, tools are available via MCP
# For standalone testing, we simulate the tool calls
CLAUDE_CODE_MODE = os.environ.get('CLAUDE_CODE_MODE', 'false') == 'true'


class RefactorCommand:
    """Production: Automated function refactoring using real tools."""

    def __init__(self, target_complexity: int = 6, dry_run: bool = False):
        self.target_complexity = target_complexity
        self.dry_run = dry_run
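        # Three parents up from .claude/commands/<script> is the repo root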
        self.project_root = Path(__file__).parent.parent.parent

    def extract_function_from_file(self, file_path: str, function_name: str) -> Optional[Dict]:
        """Extract function code using Read tool."""
        print(f"📖 Reading {file_path}...")
        # Real implementation: Use Read tool
        try:
            with open(self.project_root / file_path, 'r') as f:
                content = f.read()

            # Find function definition
            pattern = rf'^\s*def {re.escape(function_name)}\s*\('
            lines = content.split('\n')
            start_line = None
            for i, line in enumerate(lines, 1):
                if re.match(pattern, line):
                    start_line = i
                    break

            if not start_line:
                print(f"   ❌ Function '{function_name}' not found")
                return None

            # Find end of function (next def or class at same indent level)
            start_indent = len(lines[start_line - 1]) - len(lines[start_line - 1].lstrip())
            end_line = len(lines)
            for i in range(start_line, len(lines)):
                line = lines[i]
                if line.strip() and not line.strip().startswith('#'):
                    curr_indent = len(line) - len(line.lstrip())
                    if curr_indent <= start_indent and (line.strip().startswith('def ') or line.strip().startswith('class ')):
                        end_line = i
                        break

            function_code = '\n'.join(lines[start_line - 1:end_line])
            print(f"   ✓ Found {function_name} at lines {start_line}-{end_line}")
            return {
                "file": file_path,
                "function": function_name,
                "line_start": start_line,
                "line_end": end_line,
                "code": function_code,
                "full_content": content
            }
        except Exception as e:
            print(f"   ❌ Error reading file: {e}")
            return None

    def call_agent(self, agent_type: str, prompt: str) -> str:
        """Call Claude Code agent via Task tool (or simulate for testing)."""
        if CLAUDE_CODE_MODE:
            # Real Claude Code: Use Task tool
            # This would be a system call or MCP protocol call
            # For now, we print what would be called
            print(f"   [AGENT CALL: {agent_type}]")
            print(f"   Prompt: {prompt[:100]}...")
        # For testing/standalone: Return mock response
        # In production, parse actual agent response
        return ""
    def run_baseline_analysis(self, context: Dict) -> Dict:
        """Run code-quality-guard for baseline metrics."""
        print(f"\n📊 Analyzing baseline complexity for {context['function']}...")

        prompt = f"""Analyze complexity for function '{context['function']}' in {context['file']}.

Function code:
```python
{context['code']}
```

Return ONLY a JSON object with these exact keys:
{{
    "function": "{context['function']}",
    "complexity": <number>,
    "nesting": <number>,
    "grade": "<letter>",
    "issues": ["<issue1>", "<issue2>", ...]
}}
"""
        # In production: Parse agent response
        # For now: Analyze using simple heuristics
        code = context['code']

        # Count complexity indicators
        complexity = sum([
            code.count('if '),
            code.count('elif '),
            code.count('for '),
            code.count('while '),
            code.count('except '),
            code.count('and '),
            code.count('or '),
        ])
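        # NOTE: substring counting is a rough stand-in for cyclomatic complexity;
        # it can over-count ('and ' inside string literals) and misses ternaries
        # and comprehensions. Production would measure with radon instead.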
        # Estimate nesting (count max indent depth)
        lines = code.split('\n')
        max_indent = 0
        for line in lines:
            if line.strip():
                indent = (len(line) - len(line.lstrip())) // 4
                max_indent = max(max_indent, indent)

        # Grade based on complexity
        if complexity <= 5:
            grade = 'A'
        elif complexity <= 10:
            grade = 'B'
        elif complexity <= 15:
            grade = 'C'
        else:
            grade = 'D'

        baseline = {
            "function": context['function'],
            "complexity": complexity,
            "nesting": max_indent,
            "grade": grade,
            "issues": []
        }
        if complexity > 10:
            baseline['issues'].append(f"High complexity ({complexity})")
        if max_indent > 3:
            baseline['issues'].append(f"Deep nesting ({max_indent} levels)")
        if 'for ' in code and 'if ' in code:
            baseline['issues'].append("Nested loops and conditions")

        print(f"   Complexity: {baseline['complexity']} ({baseline['grade']}-grade)")
        print(f"   Nesting: {baseline['nesting']} levels")
        if baseline['issues']:
            print(f"   Issues: {len(baseline['issues'])}")
            for issue in baseline['issues']:
                print(f"     - {issue}")
        return baseline
    def run_refactoring(self, context: Dict, baseline: Dict) -> Optional[Dict]:
        """Run amp-bridge for refactoring."""
        print(f"\n🔧 Refactoring {context['function']}...")
        print(f"   Target: Complexity ≤{self.target_complexity}, Nesting ≤3")

        prompt = f"""Refactor function '{context['function']}' in {context['file']}.

Current code:
```python
{context['code']}
```

Current metrics:
- Complexity: {baseline['complexity']}
- Nesting: {baseline['nesting']} levels
- Grade: {baseline['grade']}

Target:
- Complexity: ≤{self.target_complexity}
- Nesting: ≤3 levels

Apply Extract Method pattern. Return ONLY JSON with structure:
{{
    "refactored_code": "def {context['function']}...",
    "helpers": [
        {{"name": "_helper_name", "code": "def _helper_name..."}},
        ...
    ],
    "complexity": <number>,
    "nesting": <number>
}}
"""
        # In production: Get real refactored code from amp-bridge
        # For MVP: Return structure indicating what would happen
        # Real implementation would parse agent's JSON response
        result = {
            "refactored_code": context['code'],  # Would be actual refactored code
            "helpers": [],
            "complexity": baseline['complexity'],  # Would be measured after refactoring
            "nesting": baseline['nesting'],
            "applied": False
        }
        print(f"   ⚠️ Production mode not fully implemented yet")
        print(f"   ⚠️ Would call amp-bridge agent here")
        print(f"   ⚠️ For full implementation, set CLAUDE_CODE_MODE=true")
        return result
    def validate_refactoring(self, refactored: Dict) -> bool:
        """Validate refactoring met targets."""
        print(f"\n✅ Validating refactoring...")
        complexity = refactored.get('complexity', 999)
        nesting = refactored.get('nesting', 999)
        complexity_ok = complexity <= self.target_complexity
        nesting_ok = nesting <= 3
        print(f"   Complexity: {complexity} (target: ≤{self.target_complexity}) {'✓' if complexity_ok else '✗'}")
        print(f"   Nesting: {nesting} levels (target: ≤3) {'✓' if nesting_ok else '✗'}")
        print(f"   Helpers: {len(refactored.get('helpers', []))}")
        return complexity_ok and nesting_ok
    def show_diff_and_confirm(self, context: Dict, baseline: Dict, refactored: Dict) -> bool:
        """Show diff and get user confirmation."""
        print(f"\n{'='*60}")
        print("REFACTORING SUMMARY")
        print(f"{'='*60}")
        print(f"\n📁 File: {context['file']}")
        print(f"📍 Function: {context['function']} (lines {context['line_start']}-{context['line_end']})")
        print(f"\n📊 Before:")
        print(f"   Complexity: {baseline['complexity']} ({baseline['grade']}-grade)")
        print(f"   Nesting: {baseline['nesting']} levels")
        print(f"\n📊 After:")
        print(f"   Complexity: {refactored['complexity']}")
        print(f"   Nesting: {refactored['nesting']} levels")
        if refactored.get('helpers'):
            print(f"\n🔧 Extracted Helpers:")
            for helper in refactored['helpers']:
                print(f"   - {helper['name']}")
        if baseline['complexity'] > refactored['complexity']:
            improvement = ((baseline['complexity'] - refactored['complexity']) / baseline['complexity']) * 100
            print(f"\n📈 Improvement: -{improvement:.1f}% complexity")
        if self.dry_run:
            print(f"\n{'='*60}")
            print("[DRY RUN] No changes applied")
            print(f"{'='*60}")
            return False
        print(f"\n{'='*60}")
        try:
            response = input("Apply changes? [y/n]: ").strip().lower()
            return response == 'y'
        except (EOFError, KeyboardInterrupt):
            print("\n❌ Cancelled")
            return False
    def apply_changes(self, context: Dict, refactored: Dict) -> bool:
        """Apply refactored code using Edit tool."""
        print(f"\n📝 Applying changes to {context['file']}...")
        if not refactored.get('applied', False):
            # Would use Edit tool here in production
            print(f"   ⚠️ Edit tool integration not implemented in MVP")
            print(f"   ⚠️ In production, would replace lines {context['line_start']}-{context['line_end']}")
            return False
        print(f"   ✓ Updated {context['function']}")
        for helper in refactored.get('helpers', []):
            print(f"   ✓ Added {helper['name']}")
        return True
    def create_commit(self, context: Dict, baseline: Dict, refactored: Dict) -> bool:
        """Create git commit using Bash tool."""
        print(f"\n💾 Create commit?")
        complexity_reduction = baseline['complexity'] - refactored['complexity']
        pct_reduction = (complexity_reduction / baseline['complexity']) * 100
        message = f"""refactor: Reduce complexity in {context['function']}

- Complexity: {baseline['complexity']}→{refactored['complexity']} (-{pct_reduction:.0f}%)
- Nesting: {baseline['nesting']}→{refactored['nesting']} levels
- Extracted {len(refactored.get('helpers', []))} helpers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>"""
        print(f"\nProposed commit message:")
        print("-" * 60)
        print(message)
        print("-" * 60)
        try:
            response = input("\nCreate commit? [y/n]: ").strip().lower()
            if response == 'y':
                # Would use Bash tool here in production
                print(f"   ⚠️ Git integration not implemented in MVP")
                return False
        except (EOFError, KeyboardInterrupt):
            print("\n❌ Cancelled")
            return False
        return False
    def run(self, file_path: Optional[str] = None, function_name: Optional[str] = None):
        """Execute the refactoring workflow."""
        print("🔄 /refactor-function - Automated Complexity Reduction\n")

        # 1. Get function context
        if not file_path or not function_name:
            print("❌ Usage: --file <path> --function <name>")
            return 1
        context = self.extract_function_from_file(file_path, function_name)
        if not context:
            return 1

        # 2. Baseline analysis
        baseline = self.run_baseline_analysis(context)
        if baseline['complexity'] <= self.target_complexity:
            print(f"\n✓ Function already meets target (C={baseline['complexity']})")
            return 0

        # 3. Refactoring
        refactored = self.run_refactoring(context, baseline)
        if not refactored:
            print("\n❌ Refactoring failed")
            return 1

        # 4. Validation
        if not self.validate_refactoring(refactored):
            print("\n⚠️ Refactoring didn't meet all targets")
            if not self.dry_run:
                response = input("Continue anyway? [y/n]: ").strip().lower()
                if response != 'y':
                    return 1

        # 5. User confirmation
        if not self.show_diff_and_confirm(context, baseline, refactored):
            print("\n❌ Changes not applied")
            return 0

        # 6. Apply changes
        if not self.apply_changes(context, refactored):
            print("\n⚠️ Changes not applied (MVP limitation)")
            if not self.dry_run:
                print("\nTo complete this refactoring:")
                print("1. Use amp-bridge agent directly")
                print("2. Or refactor manually using the metrics above")
            return 0

        # 7. Optional commit
        self.create_commit(context, baseline, refactored)
        print("\n✅ Refactoring complete!")
        return 0
def main():
    """CLI entry point."""
    import argparse
    parser = argparse.ArgumentParser(
        description="/refactor-function - Automated function complexity reduction",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Analyze and refactor a function
  %(prog)s --file claude-hooks/install_hooks.py --function detect_claude_mcp_configuration

  # Dry run (show plan without applying)
  %(prog)s --file path/to/file.py --function my_function --dry-run

  # Custom complexity target
  %(prog)s --file path/to/file.py --function my_function --target-complexity 5
"""
    )
    parser.add_argument(
        '--file',
        type=str,
        help='Path to file containing function'
    )
    parser.add_argument(
        '--function',
        type=str,
        help='Name of function to refactor'
    )
    parser.add_argument(
        '--target-complexity',
        type=int,
        default=6,
        help='Target cyclomatic complexity (default: 6)'
    )
    parser.add_argument(
        '--dry-run',
        action='store_true',
        help='Show plan without applying changes'
    )
    args = parser.parse_args()

    command = RefactorCommand(
        target_complexity=args.target_complexity,
        dry_run=args.dry_run
    )
    sys.exit(command.run(
        file_path=args.file,
        function_name=args.function
    ))


if __name__ == '__main__':
    main()

/refactor-function Command

Automated function complexity reduction using multi-agent workflow.

Usage

# In editor, select a function and run:
/refactor-function

# With options:
/refactor-function --target-complexity 6 --max-helpers 3
/refactor-function --pattern extract-method
/refactor-function --dry-run  # Show plan without applying

Workflow

1. Context Extraction

  • Detect selected function or function at cursor
  • Extract function signature, body, surrounding context
  • Parse imports and dependencies

2. Baseline Analysis (code-quality-guard)

- Measure current complexity
- Identify nesting depth
- Count boolean operators
- Calculate maintainability index
- Generate refactoring targets (see the radon sketch below)
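
For illustration, these baseline metrics can be measured with radon (listed under requirements in command.json below); a minimal sketch, using the file from the CLI example:

from radon.complexity import cc_visit, cc_rank

with open("claude-hooks/install_hooks.py") as f:
    source = f.read()

# One entry per function/method: cyclomatic complexity plus letter grade
for block in cc_visit(source):
    print(f"{block.name}: C={block.complexity} ({cc_rank(block.complexity)}-grade)")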

3. Refactoring (amp-bridge)

- Apply Extract Method pattern
- Create helper functions
- Reduce nesting with early returns
- Replace if-elif chains with dictionaries
- Extract magic strings to constants

4. Validation (code-quality-guard)

- Measure new complexity
- Verify targets met
- Check for regressions
- Generate diff report

5. User Review

Before:
  Function: parse_output
  Complexity: 12 (C-grade)
  Nesting: 5 levels

After:
  Main: parse_output (C=6, B-grade)
  Helpers:
    - _parse_field (C=4, A-grade)
    - _validate_output (C=3, A-grade)

Apply changes? [y/n]

6. Optional Commit

If applied, offer to commit:
  "refactor: Reduce complexity in <function_name>

  - Complexity: C=12→6 (-50%)
  - Extracted N helpers (avg C=X)
  - Pattern: Extract Method"

Options

Option                Description                       Default
--target-complexity   Target cyclomatic complexity      6
--max-helpers         Max helper functions to create    5
--pattern             Refactoring pattern               extract-method
--dry-run             Show plan without applying        false
--auto-commit         Commit if successful              false
--scope               Scope (function|class|file)       function

Patterns Supported

  1. extract-method (default)

    • Extract complex logic to helpers
    • Single Responsibility Principle
  2. early-return (sketched below)

    • Replace nested if-else with early returns
    • Reduce nesting depth
  3. dictionary-mapping (sketched below)

    • Replace if-elif chains with dict lookups
    • Reduce cyclomatic complexity
  4. strategy-pattern

    • Extract validation/processing strategies
    • Improve testability
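
For illustration, minimal sketches of the early-return and dictionary-mapping patterns (function and handler names are hypothetical):

# early-return: guard clauses replace nested if-else
def process(item):
    if item is None:
        return None
    if not item.get("valid"):
        return None
    return item["value"]

# dictionary-mapping: a dict lookup replaces an if-elif chain
HANDLERS = {
    "json": lambda data: ("parsed-json", data),
    "yaml": lambda data: ("parsed-yaml", data),
}

def parse(fmt, data):
    handler = HANDLERS.get(fmt)
    if handler is None:
        raise ValueError(f"unsupported format: {fmt}")
    return handler(data)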

Examples

Example 1: Single Function

# Before
def validate_config(config):
    if not config:
        return False
    if 'host' not in config:
        return False
    if 'port' not in config:
        return False
    # ... 20 more checks
    return True

# After running: /refactor-function --pattern extract-method
def validate_config(config):
    if not config:
        return False

    required_fields = ['host', 'port', ...]
    if not _has_required_fields(config, required_fields):
        return False

    if not _validate_field_types(config):
        return False

    return True

def _has_required_fields(config, fields):
    return all(field in config for field in fields)

def _validate_field_types(config):
    # Extracted validation logic
    ...

Example 2: Class Method

# Select method in class
/refactor-function --scope method --target-complexity 5

Example 3: Entire File

/refactor-function --scope file --max-helpers 10 --dry-run

Technical Implementation

Agent Pipeline

User Selection → Parse Context →
code-quality-guard (baseline) →
amp-bridge (refactor) →
code-quality-guard (validate) →
Show Diff → User Approval → Apply
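
The PoC script above already wires these stages together; condensed, and assuming RefactorCommand is loaded as shown in the Testing section:

# Condensed orchestration (methods as defined in the PoC script)
cmd = RefactorCommand(target_complexity=6, dry_run=False)

context = cmd.extract_function_from_file(            # Parse Context
    "claude-hooks/install_hooks.py", "detect_claude_mcp_configuration")
baseline = cmd.run_baseline_analysis(context)        # code-quality-guard (baseline)
refactored = cmd.run_refactoring(context, baseline)  # amp-bridge (refactor)

if cmd.validate_refactoring(refactored):             # code-quality-guard (validate)
    if cmd.show_diff_and_confirm(context, baseline, refactored):  # Diff → Approval
        cmd.apply_changes(context, refactored)       # Apply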

File Structure

.claude/commands/
  refactor-function/
    command.json          # Command metadata
    workflow.js           # Orchestration logic
    templates/
      commit-message.txt  # Commit template
      diff-report.md      # Diff template

command.json

{
  "name": "refactor-function",
  "description": "Reduce function complexity using multi-agent refactoring",
  "version": "1.0.0",
  "agents": {
    "baseline": "code-quality-guard",
    "refactor": "amp-bridge",
    "validate": "code-quality-guard"
  },
  "options": {
    "targetComplexity": {
      "type": "number",
      "default": 6,
      "description": "Target cyclomatic complexity"
    },
    "maxHelpers": {
      "type": "number",
      "default": 5,
      "description": "Maximum helper functions to create"
    },
    "pattern": {
      "type": "string",
      "default": "extract-method",
      "enum": ["extract-method", "early-return", "dictionary-mapping", "strategy-pattern"]
    },
    "dryRun": {
      "type": "boolean",
      "default": false,
      "description": "Show plan without applying changes"
    },
    "autoCommit": {
      "type": "boolean",
      "default": false,
      "description": "Automatically commit if successful"
    }
  },
  "requirements": {
    "agents": ["amp-bridge", "code-quality-guard"],
    "tools": ["radon", "git"]
  }
}

Success Metrics

Track command effectiveness; a computation sketch follows this list:

  • Functions refactored: Count
  • Average complexity reduction: Percentage
  • Success rate: Ratio (applied / attempted)
  • Time saved: Estimated hours
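
A minimal sketch of computing these from a simple attempt log (record fields are hypothetical):

def summarize(attempts):
    """attempts: list of {"applied": bool, "before": int, "after": int}."""
    applied = [a for a in attempts if a["applied"]]
    reductions = [(a["before"] - a["after"]) / a["before"] * 100 for a in applied]
    return {
        "functions_refactored": len(applied),
        "avg_reduction_pct": sum(reductions) / len(reductions) if reductions else 0.0,
        "success_rate": len(applied) / len(attempts) if attempts else 0.0,
    }

# Example: three attempts, two applied
print(summarize([
    {"applied": True, "before": 12, "after": 6},
    {"applied": True, "before": 10, "after": 5},
    {"applied": False, "before": 15, "after": 15},
]))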

Future Enhancements

  1. Multi-function batch mode

    /refactor-function --scope file --threshold 10
    # Refactor all functions with C>10
  2. Interactive helper naming

    Proposed helper: _validate_field
    Rename? [Enter to keep, or type new name]
    
  3. Test generation

    /refactor-function --with-tests
    # Generate unit tests for extracted helpers
  4. Refactoring history

    /refactor-function --history
    # Show past refactorings and their impact

Related Commands

  • /code-review - Review code quality before refactoring
  • /complexity-report - Analyze file/module complexity
  • /extract-method - Manual Extract Method refactoring (no agents)

Notes

  • Preserves behavior: refactorings must maintain the same functionality (checked during validation)
  • Version control safe: Creates commit only if user approves
  • Reversible: Easily revert with git revert if needed
  • Language support: Python, JavaScript, TypeScript (extensible)

Credits

Based on proven workflow from Issue #340 (mcp-memory-service):

  • 3 functions refactored in 2 hours
  • 45.5% complexity reduction
  • 100% validation success rate