Two implementations of the exact same React component — a table with resizable columns — written in two radically different styles. Both optimized not just for reading, but for changing.
- `human-friendly-ResizableTable.tsx` (~155 lines) — optimized for human readability and human changeability
- `llm-friendly-ResizableTable.tsx` (~240 lines) — optimized for LLM comprehension and LLM changeability
Both produce identical runtime behavior. The difference is entirely in how the code communicates its intent and how easy it is to modify.
| Dimension | Human-Friendly | LLM-Friendly | Why It Matters for LLMs |
|---|---|---|---|
| Variable names | `col`, `widths`, `delta` | `columnDefinition`, `columnWidthsByColumnKey`, `horizontalDragDeltaInPixels` | LLMs don't have "muscle memory" for abbreviations. `delta` could mean anything — time, position, diff. The verbose name eliminates the ambiguity entirely. |
| Magic numbers | `50`, `150`, `6` inline | `tableStyleConfig.columnMinimumWidthInPixels`, etc. | An LLM can't "see" that `50` in a `Math.max` is a minimum-width constraint. The constant name is the documentation. |
| Type names | `Column<T>`, `Props<T>` | `ResizableTableColumnDefinition<TRowData>` | `Column` is ambiguous across any codebase. The full name tells the LLM exactly what domain this type belongs to without needing file-level context. |
| JSDoc | None | On every type, function, and component | LLMs process doc comments as first-class semantic context. The `@property` and `@example` annotations create an explicit mapping between field names and their purpose that an LLM can reference during reasoning. |
| Function extraction | Logic inline in hooks | Named pure functions like `buildHeaderCellStyle`, `createMouseDownHandlerForColumnResize` | Named functions give the LLM a grep-friendly handle. If asked "how is width clamped?", the LLM can locate the logic by name rather than reasoning about anonymous arrow functions. |
| Callback naming | `onMouseDown`, `onMouseMove` | `handleMouseMoveDuringDrag`, `handleMouseUpToEndDrag` | The human version relies on closure scope to know which mousedown handler this is. The LLM version encodes the full context chain in the name itself. |
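The naming and magic-number rows can be sketched side by side with the width-clamp logic both files need. This is an illustrative reconstruction, not code copied from either file; only the identifier names in the table above are taken from the article.

```typescript
// Human-friendly: terse names, relies on the reader's pattern recognition.
// A senior dev infers that 50 is a minimum width; nothing in the code says so.
const clamp = (w: number, delta: number): number => Math.max(50, w + delta);

// LLM-friendly: the constant name and parameter names are the documentation.
const tableStyleConfig = {
  columnMinimumWidthInPixels: 50,
} as const;

/** Clamps a column width so dragging can never shrink it below the minimum. */
function clampColumnWidthToMinimum(
  currentColumnWidthInPixels: number,
  horizontalDragDeltaInPixels: number
): number {
  return Math.max(
    tableStyleConfig.columnMinimumWidthInPixels,
    currentColumnWidthInPixels + horizontalDragDeltaInPixels
  );
}
```

Both functions compute the same value; the second one just leaves nothing to infer.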
Changeability is the more interesting axis: "easy to read" and "easy to change" optimize for different things.
The human-friendly version uses small, composable pieces with shared abstractions:

- Shared `styles` object — change a color once and it propagates everywhere
- Extracted sub-components (`ResizeHandle`, `HeaderCell`, `DataCell`) — swap one implementation without touching others
- Extracted hook (`useColumnWidths`) — change state management without touching rendering
- Narrow prop interfaces — each component declares exactly what it needs
When a human wants to change the header background, they update `styles.th.background`. When they want a different resize behavior, they modify `ResizeHandle` in isolation. Their IDE helps them navigate between pieces.
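A minimal sketch of that shared-abstraction shape (the `styles` object keys and prop shapes are assumptions for illustration, not the article's actual file):

```typescript
// One shared styles object that every piece reads from.
// Changing styles.th.background in the source restyles every header cell at
// once — "change in one place", at the cost of an indirection to resolve.
const styles = {
  th: { background: "#f8fafc", padding: "8px 12px" },
  td: { padding: "8px 12px" },
} as const;

// Narrow prop interface: the component declares exactly what it needs.
interface HeaderCellProps {
  label: string;
  widthInPx: number;
}

// The header cell's style is composed from the shared object plus local data.
function headerCellStyle({ widthInPx }: HeaderCellProps) {
  return { ...styles.th, width: `${widthInPx}px` };
}
```

Every consumer of `styles` is coupled to it, which is exactly the trade the human-friendly version makes on purpose.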
This works for humans because:
- Humans navigate code spatially with IDE features (go-to-definition, find-references)
- Humans build mental maps of component relationships over time
- Humans benefit from DRY because "change in one place" aligns with how they think
The LLM-friendly version uses the opposite approach:

- Single flat component, no sub-components — the LLM never has to reason across component boundaries or trace props through a tree
- Top-level `tableStyleConfig` object — every visual tunable in one named, documented block. "Change the header color" becomes "find `headerCellBackgroundColor` in the config object"
- Section markers (`// --- STATE: column widths ---`, `// --- RENDER: header row ---`) — explicit anchors the LLM can search for, acting as a table of contents
- No shared style abstractions — `buildHeaderCellStyle` and `buildBodyCellStyle` are separate functions that each read from the config independently. Changing how body cells look can never accidentally break headers.
- Named intermediates at every step — when the LLM needs to insert logic between steps A and B, having `const headerCellStyle = ...` and `const onMouseDownForThisColumn = ...` as explicit variables makes insertion trivial
- `@example` blocks on types — show the LLM exactly what a valid modification looks like. A training signal for what the shape of a change should be.
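A sketch of what that layout might look like. The config key names come from the article; the padding values and exact function bodies are assumed:

```typescript
/** Every visual tunable for the table, colocated in one documented block. */
const tableStyleConfig = {
  headerCellBackgroundColor: "#f8fafc",
  cellPaddingVertical: 8,
  cellPaddingHorizontal: 12,
  columnMinimumWidthInPixels: 50,
} as const;

// --- STYLE BUILDERS: header and body are intentionally independent ---

/** Builds the inline style for one header cell. Reads only from the config. */
function buildHeaderCellStyle(columnWidthInPixels: number) {
  return {
    background: tableStyleConfig.headerCellBackgroundColor,
    padding: `${tableStyleConfig.cellPaddingVertical}px ${tableStyleConfig.cellPaddingHorizontal}px`,
    width: `${columnWidthInPixels}px`,
  };
}

/**
 * Builds the inline style for one body cell. The padding logic is duplicated
 * on purpose: editing body cells can never accidentally restyle headers.
 */
function buildBodyCellStyle(columnWidthInPixels: number) {
  return {
    padding: `${tableStyleConfig.cellPaddingVertical}px ${tableStyleConfig.cellPaddingHorizontal}px`,
    width: `${columnWidthInPixels}px`,
  };
}
```

The duplication between the two builders is the point: each function is a complete, self-contained answer to "how is this cell styled?".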
This works for LLMs because:
- LLMs process files linearly with no IDE features, no go-to-definition, no spatial memory
- LLMs reason better with fewer files and less indirection — colocating everything removes an entire class of errors (wrong file, wrong component, missed a reference)
- LLMs benefit from duplication because isolated code means changes have a local-only blast radius. DRY is an enemy here: a shared `styles.td` used in two places means the LLM must reason about whether changing it breaks the other consumer.
- Section markers function like `Ctrl+F` for an LLM — they're semantic anchors in a flat token stream
Consider a concrete change: darken the header background.

Human version — 2 edits, needs a mental model of where styles live:

- Find the `styles` object (could be in this file, could be imported)
- Change `styles.th.background` from `"#f8fafc"` to `"#1e293b"`
LLM version — 1 edit, self-evident from the config block:

- Change `tableStyleConfig.headerCellBackgroundColor` from `"#f8fafc"` to `"#1e293b"`
A second change: highlight a row on hover.

Human version — navigate the component tree:

- Understand that `DataCell` renders individual cells, not rows
- Realize the `<tr>` is in the parent `ResizableTable`, not in `DataCell`
- Add hover state to the right component at the right level
LLM version — search for the section marker:

- Find `// --- RENDER: body rows ---`
- Add `onMouseEnter`/`onMouseLeave` to the `<tr>` that's right there
- Add a hover color to `tableStyleConfig`
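The core of that hover edit might look like the sketch below. The `bodyRowHoverBackgroundColor` key is hypothetical, and React's `useState`/`onMouseEnter` wiring is omitted so the helper stays framework-free:

```typescript
const tableStyleConfig = {
  // Hypothetical new key added for the hover change.
  bodyRowHoverBackgroundColor: "#f1f5f9",
} as const;

/**
 * Picks the background for a body row. In the component, hoveredRowIndex
 * would be state set by onMouseEnter/onMouseLeave on each <tr>.
 */
function buildBodyRowStyle(rowIndex: number, hoveredRowIndex: number | null) {
  return {
    background:
      rowIndex === hoveredRowIndex
        ? tableStyleConfig.bodyRowHoverBackgroundColor
        : "transparent",
  };
}
```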
Humans read code spatially — they scan structure, recognize shapes, and fill in gaps from experience. A senior React dev sees `useRef(0)`, `onMouseDown`, and `delta`, and instantly knows "ah, this is a drag handler tracking horizontal offset." Concise code respects their limited visual bandwidth and rewards pattern recognition. Composition respects how they navigate with IDE tools.

LLMs read code linearly and literally — every token passes through the same attention machinery, with nothing skimmed and nothing privileged by layout. They have no spatial intuition, no IDE, no pattern library built from years of coding, and no ability to "just know" that `delta` in this particular closure means horizontal pixel offset from the last mouse event. Flat, colocated, explicitly named code means less reasoning, fewer inference steps, and a smaller blast radius for any change.
The principles that make code easy for humans to change — DRY, composition, abstraction — make it harder for LLMs to change, because every abstraction is an indirection the LLM must resolve.
The principles that make code easy for LLMs to change — duplication, colocation, flat structure — make it harder for humans to change, because humans see boilerplate and inconsistency risk.
- Configuration objects are edit magnets. The `tableStyleConfig` block is an explicit, documented "here's what you can change" surface. An LLM asked to "make the table more compact" can scan the config keys and adjust `cellPaddingVertical` and `cellPaddingHorizontal` without reading any JSX.
- Section markers are grep targets. `// --- RENDER: header row ---` lets the LLM jump to exactly the right spot in a flat file. Sub-components require the LLM to first figure out which component to edit.
- Duplication eliminates cross-reference reasoning. When header and body cell styles are independent functions, the LLM can modify one without loading the other into its context window. Shared styles require reading all consumers.
- `@example` blocks are change templates. When an LLM sees `@example { columnKey: "status", renderCellContent: ... }`, it has a concrete template for how to add a new custom-rendered column. It doesn't need to infer the pattern from the implementation.
- Flat components mean single-file edits. The LLM version requires editing exactly one function in one file for any change. The human version might require coordinating edits across `ResizeHandle`, `HeaderCell`, `DataCell`, and the parent component.
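The `@example`-as-template point can be made concrete. The field names below are assumed from the identifiers quoted in the article; the real `ResizableTableColumnDefinition<TRowData>` may differ:

```typescript
/**
 * Describes one column of the resizable table.
 *
 * @example
 * // Adding a custom-rendered "status" column:
 * const statusColumn: ResizableTableColumnDefinition<User> = {
 *   columnKey: "status",
 *   headerLabel: "Status",
 *   initialWidthInPixels: 150,
 *   renderCellContent: (row) => (row.isActive ? "Active" : "Inactive"),
 * };
 */
interface ResizableTableColumnDefinition<TRowData> {
  /** Unique key identifying this column. */
  columnKey: string;
  /** Text shown in the header cell. */
  headerLabel: string;
  /** Starting width before any user resize. */
  initialWidthInPixels: number;
  /** Optional custom renderer for this column's body cells. */
  renderCellContent?: (row: TRowData) => unknown;
}

// The @example doubles as a change template — here it is applied verbatim:
interface User {
  name: string;
  isActive: boolean;
}

const statusColumn: ResizableTableColumnDefinition<User> = {
  columnKey: "status",
  headerLabel: "Status",
  initialWidthInPixels: 150,
  renderCellContent: (row) => (row.isActive ? "Active" : "Inactive"),
};
```

An LLM asked to "add a priority column" can copy this shape mechanically instead of reverse-engineering the table's render loop.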
The LLM-friendly version is ~1.5x longer, but it contains virtually no ambiguity, has a single edit target for any change, and makes every tunable value discoverable from the config block.
Human-friendly code trusts the reader's brain and IDE to fill gaps. LLM-friendly code fills them all in advance.
As AI-assisted development becomes the norm, the question isn't whether this trade-off exists — it's where the balance point should be.