A systematic approach for transforming basic documentation into polished, professional technical docs.
- Bullets are for information - Use for actionable items, features, or concepts that need to be scannable
- Tables are for data - Use for structured information with clear relationships between values
- Paragraphs are for context - Use when information doesn't need to be scannable
- Avoid redundant nesting - if you have only one item under a heading, eliminate the intermediate level
- Keep heading levels consistent across similar content types
- Don't create wrapper sections that add no semantic value
- All similar content should follow identical structural patterns
- Table headers should be descriptive and consistent across the document
- Code examples should have uniform presentation
Before:

```markdown
### Dependencies
- **Runtime**: Node.js (built-in modules only)
- **Modules**: `fs`, `path` (Node.js core)
- **External**: None - completely self-contained
```

After:

```markdown
### Dependencies
Uses plain Node.js with no external dependencies - completely self-contained using only built-in modules (`fs`, `path`).
```

Before:

```markdown
### Command Distribution
- **Git dominance**: 28.9% of all commands
- **File operations**: 24% (ls, cd, rm, etc.)
- **Development tools**: 4.2% (npm, node, python)
```

After:

```markdown
#### Command Distribution

| Metric | Value | Details |
|:-------|:------|:--------|
| Git commands | 28.9% | Version control dominance |
| File operations | 24.0% | ls, cd, rm, cp, mv, mkdir |
| Development tools | 4.2% | npm, node, python, make |
```

When a single table becomes too large or contains disparate data types, split it into multiple focused tables with descriptive subheadings.
Before:
```markdown
| Metric | Value | Details |
|:-------|:------|:--------|
| Git commands | 28.9% | Version control dominance |
| File operations | 24.0% | ls, cd, rm, cp, mv, mkdir |
| Total commands | 16,006 | Complete history archive |
| Unique commands | 15,660 | 97.8% unique (low repetition) |
| Primary patterns | Git operations | Commit, push, pull sequences |
| Secondary patterns | File navigation | cd chains, ls exploration |
```

After:

```markdown
#### Command Distribution

| Metric | Value | Details |
|:-------|:------|:--------|
| Git commands | 28.9% | Version control dominance |
| File operations | 24.0% | ls, cd, rm, cp, mv, mkdir |

#### Usage Patterns

| Metric | Value | Details |
|:-------|:------|:--------|
| Total commands | 16,006 | Complete history archive |
| Unique commands | 15,660 | 97.8% unique (low repetition) |

#### Workflows

| Metric | Value | Details |
|:-------|:------|:--------|
| Primary patterns | Git operations | Commit, push, pull sequences |
| Secondary patterns | File navigation | cd chains, ls exploration |
```

Before:

### Generate Full Analysis

```bash
node analyze-history.js fish-history-sample.txt
# Creates fish-history-analysis.md with comprehensive report

node query-history.js fish-history-sample.txt search "docker"
# Returns all commands containing "docker"
```

After:

```markdown
| Task | Command | Description |
|:-----|:--------|:------------|
| Generate Full Analysis | `node analyze-history.js fish-history-sample.txt` | Creates fish-history-analysis.md with comprehensive report |
| Find Docker Usage | `node query-history.js fish-history-sample.txt search "docker"` | Returns all commands containing "docker" |
```
Before:
**Purpose**: Comprehensive statistical analysis and report generation
**Usage**:

```bash
node analyze-history.js [path-to-fish-history-sample.txt]
```

Features:
- Session Detection: Identifies shell session boundaries
- Command Categorization: Groups commands into categories

After:

Comprehensive statistical analysis and report generation.

```bash
node analyze-history.js [path-to-fish-history-sample.txt]
```

Identifies shell session boundaries using exit commands and behavioral patterns, groups commands into categories, and generates comprehensive analysis reports.
### Eliminate Redundant Hierarchy
Before:

```markdown
## Analysis Tools
### Shell Utility
#### fish-query.sh
Fast command-line queries for common operations.
### JavaScript Tools
#### analyze-history.js
Comprehensive statistical analysis and report generation.
```

After:

```markdown
## Analysis Tools
### fish-query.sh
Fast command-line queries for common operations.
### analyze-history.js
Comprehensive statistical analysis and report generation.
```

Don't create top-level sections for data files, sample datasets, or other supporting materials unless they are primary deliverables. Supporting files should be mentioned briefly in context rather than given prominent placement.
LLM configuration files should not be used as source material when creating documentation, as they are written in a specific style for AI systems, not human readers. Common LLM configuration files include:

- `CLAUDE.md` - Claude AI instructions
- `GEMINI.md` - Google Gemini instructions
- `CHATGPT.md` - ChatGPT instructions
- `GPT.md` - OpenAI GPT instructions
- `AI.md` - Generic AI instructions
- `.claude` files - Claude-specific configuration
- `.aider` files - Aider AI assistant configuration
Before:

```markdown
## Core Components
### Data File
`fish-history-sample.txt` - Anonymized fish shell history archive (459KB, 16,006 lines)
## Analysis Tools
```

After:

```markdown
## Overview
This project provides analysis tools for fish shell command history. It includes an anonymized sample dataset (`fish-history-sample.txt`) containing 16,006 command entries.
## Analysis Tools
```

- Place technical details near the tools that implement them
- Group related information together rather than spreading across sections
- Put sample data insights in context, not mixed with architectural details
- Replace step-by-step bullet lists with ASCII flow diagrams where appropriate (see the sketch after this list)
- Show data flow and relationships visually rather than just listing steps
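
For instance, the analysis pipeline described in the examples above could be sketched as a flow rather than listed as numbered steps (the stage names are illustrative, not taken from the actual scripts):

```
fish-history-sample.txt
        |
        v
  parse entries ----> detect sessions ----> usage statistics
        |
        v
  categorize commands
        |
        v
  generate report (fish-history-analysis.md)
```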
- Always use left-aligned headers: `|:-------|:--------|`
- Use descriptive column headers (e.g., "Command" not "Syntax")
- Group related data with section breaks in tables using bold headers, as in the example below
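
A sketch of these conventions combined (the commands shown are generic examples, not figures from the sample dataset):

```markdown
| Command | Purpose |
|:--------|:--------|
| **Version control** | |
| `git status` | Show working tree state |
| `git push` | Upload local commits to a remote |
| **File operations** | |
| `ls` | List directory contents |
| `rm` | Remove files |
```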
- Replace "leverage" with "use"
- Avoid buzzwords like "sophisticated", "innovative", "powerful", "elegant"
- Focus on what code does, not how impressive it is
- Describe technical mechanisms rather than making value judgments
- Use specific, measurable terms when possible
- Be direct and factual, as in the rewrite below
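
For example, a hypothetical sentence about an imaginary tool, rewritten under these rules:

```markdown
Before: "The tool leverages a sophisticated caching engine to deliver powerful performance."
After: "The tool caches parsed entries in memory, so repeated queries skip re-reading the file."
```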
- Use parallel structure for similar content types (see the sketch after this list)
- Maintain consistent terminology throughout
- Apply formatting rules uniformly
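
For instance, two tool descriptions written in parallel, each following the same name-description-usage pattern (the `fish-query.sh` usage line is a hypothetical placeholder; the other details reuse wording from the examples above):

```markdown
### fish-query.sh
Fast command-line queries for common operations.
**Usage**: `./fish-query.sh [path-to-history-file] [query]`

### analyze-history.js
Comprehensive statistical analysis and report generation.
**Usage**: `node analyze-history.js [path-to-fish-history-sample.txt]`
```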
- Identify redundant hierarchy levels
- Find bullet-heavy sections that hide data
- Locate inconsistent formatting patterns
- Note pseudo-heading abuse
- Convert appropriate bullets to tables
- Replace pseudo-headings with natural language
- Eliminate unnecessary nesting
- Consolidate verbose examples
- Standardize table formatting
- Ensure parallel structure across similar sections
- Verify terminology consistency
- Check that all similar content follows identical patterns
- Remove corporate jargon
- Improve technical precision
- Ensure clarity over cleverness
- Make content scannable where appropriate
Good documentation has:
- Clear information hierarchy without redundant nesting
- Tables for structured data, bullets for actionable information
- Consistent formatting across similar content types
- Natural language flow without pseudo-headings
- Technical precision without corporate buzzwords
Bad documentation shows:
- Excessive bullet points hiding actual data
- Redundant heading levels that add no value
- Inconsistent structure across similar sections
- Corporate jargon and unnecessary superlatives
- Verbose examples that could be tabular
Apply these principles systematically to transform basic documentation into professional, scannable, and useful technical docs.