Patterns stolen from API documentation and translated into React components. Each one solved a real problem I kept running into.
Fork and adapt as needed.
Stripe uses idempotency keys to prevent duplicate charges when requests are retried. The component equivalent is duplicate prevention at the source, blocking rapid clicks before they trigger multiple submissions.
The problem: Users double-click submit buttons. Networks are slow. Requests get retried. You end up with duplicate form submissions, duplicate API calls, duplicate database entries.
function SubmitButton({ onSubmit, children }) {
  const [status, setStatus] = useState('idle');
  const isSubmitting = useRef(false);

  const handleClick = async () => {
    // Ref check prevents race condition between clicks
    if (isSubmitting.current) return;
    isSubmitting.current = true;
    setStatus('submitting');
    try {
      await onSubmit();
      setStatus('success');
    } catch (error) {
      setStatus('error');
    } finally {
      isSubmitting.current = false;
    }
  };

  return (
    <button
      onClick={handleClick}
      disabled={status === 'submitting'}
      aria-busy={status === 'submitting'}
    >
      {status === 'submitting' ? 'Submitting...' : children}
    </button>
  );
}

Why both ref and state? State updates are batched and asynchronous. If a user clicks twice before React re-renders, the state check alone won't catch it. The ref provides synchronous blocking.
Source: Stripe idempotent requests
Stripe's error handling documentation recommends exponential backoff for transient failures. The same pattern works for component-level API calls.
The problem: Network requests fail. Retrying immediately often fails again. Retrying too aggressively overwhelms the server. You need a strategy that backs off gracefully.
async function fetchWithRetry(
  fn,
  { maxRetries = 3, baseDelayMs = 1000, maxDelayMs = 30000 } = {}
) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Don't retry client errors (4xx) except rate limits (429)
      if (error.status >= 400 && error.status < 500 && error.status !== 429) {
        throw error;
      }
      if (attempt < maxRetries) {
        // Exponential backoff with jitter
        const delay = Math.min(
          baseDelayMs * Math.pow(2, attempt) + Math.random() * 1000,
          maxDelayMs
        );
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}

// Usage in a component
function useDataWithRetry(fetchFn) {
  const [data, setData] = useState(null);
  const [error, setError] = useState(null);
  const [isLoading, setIsLoading] = useState(true);

  useEffect(() => {
    fetchWithRetry(fetchFn)
      .then(setData)
      .catch(setError)
      .finally(() => setIsLoading(false));
  }, [fetchFn]);

  return { data, error, isLoading };
}

Why jitter? If many clients retry at exactly the same intervals, they create thundering herd problems. Random jitter spreads the load.
Why skip 4xx errors? Client errors won't succeed on retry. The request itself is wrong. Only retry transient failures like network errors, timeouts, and 5xx responses.
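The retry decision inside fetchWithRetry can be pulled out into a small predicate, which makes the policy testable on its own (a sketch; the function name is mine, not from the original):

```javascript
// Decide whether an HTTP status is worth retrying.
// A missing status means the request never completed (network error,
// timeout), which is exactly the transient case retries exist for.
function isRetryableStatus(status) {
  if (status === undefined) return true;           // network error or timeout
  if (status === 429) return true;                 // rate limited: back off, then retry
  if (status >= 400 && status < 500) return false; // the request itself is wrong
  return status >= 500;                            // server errors are usually transient
}
```

Inside fetchWithRetry, the 4xx guard then becomes `if (!isRetryableStatus(error.status)) throw error;`.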
Source: Stripe error handling
Stripe returns errors with consistent structure: type, code, message, and relevant parameters. Components should expose errors the same way.
The problem: Your component throws an error or sets an error state. The consuming code has no way to distinguish between error types or respond appropriately.
// Define error types as constants
const ErrorCodes = {
  VALIDATION_FAILED: 'validation_failed',
  NETWORK_ERROR: 'network_error',
  RATE_LIMITED: 'rate_limited',
  NOT_FOUND: 'not_found',
  PERMISSION_DENIED: 'permission_denied',
  UNKNOWN: 'unknown'
} as const;

// Structured error object
interface ComponentError {
  code: keyof typeof ErrorCodes;
  message: string;
  field?: string;
  retryable: boolean;
  details?: Record<string, unknown>;
}

// Factory functions for common errors
function validationError(field: string, message: string): ComponentError {
  return {
    code: 'VALIDATION_FAILED',
    message,
    field,
    retryable: false
  };
}

function networkError(originalError: Error): ComponentError {
  return {
    code: 'NETWORK_ERROR',
    message: 'Unable to connect. Check your internet connection.',
    retryable: true,
    details: { originalMessage: originalError.message }
  };
}

// Usage in a component
function DataLoader({ onError }) {
  const handleError = (error) => {
    if (error.name === 'TypeError' && error.message.includes('fetch')) {
      onError(networkError(error));
    } else if (error.status === 404) {
      onError({
        code: 'NOT_FOUND',
        message: 'The requested resource does not exist.',
        retryable: false
      });
    } else {
      onError({
        code: 'UNKNOWN',
        message: 'Something went wrong.',
        retryable: true,
        details: { originalError: error }
      });
    }
  };
  // ...
}

Why retryable? Consuming code can automatically retry or show a retry button for transient errors, while displaying final error states for permanent failures.
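A small adapter can centralise the status-to-error mapping that DataLoader does inline. This sketch targets the ComponentError shape above (the function name and messages are illustrative):

```javascript
// Map an HTTP status code onto the structured ComponentError shape.
function errorFromStatus(status) {
  switch (status) {
    case 404:
      return { code: 'NOT_FOUND', message: 'The requested resource does not exist.', retryable: false };
    case 403:
      return { code: 'PERMISSION_DENIED', message: 'You do not have access to this resource.', retryable: false };
    case 429:
      return { code: 'RATE_LIMITED', message: 'Too many requests. Try again shortly.', retryable: true };
    default:
      // 5xx is transient and worth retrying; anything else unexpected is unknown
      return { code: 'UNKNOWN', message: 'Something went wrong.', retryable: status >= 500 };
  }
}
```

DataLoader's handleError then collapses to a single `onError(errorFromStatus(error.status))` for HTTP failures.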
Source: Stripe API errors
GitHub returns validation errors with resource, field, and code properties. Your form components can do the same.
The problem: A form field turns red. Maybe "Error" appears. The user guesses what went wrong. The developer consuming the component has no programmatic way to respond to specific error types.
function TextField({
  name,
  value,
  onChange,
  validation,
  ...props
}) {
  const [error, setError] = useState(null);

  const validate = (inputValue) => {
    const result = validation?.(inputValue);
    if (result?.valid === false) {
      setError({
        field: name,
        code: result.code,             // 'required', 'format', 'length'
        message: result.message,       // Human-readable explanation
        constraint: result.constraint  // { min: 3, max: 100 }
      });
      return false;
    }
    setError(null);
    return true;
  };

  return (
    <div>
      <input
        name={name}
        value={value}
        onChange={(e) => onChange(e.target.value)}
        onBlur={(e) => validate(e.target.value)}
        aria-invalid={!!error}
        aria-describedby={error ? `${name}-error` : undefined}
        {...props}
      />
      {error && (
        <span
          id={`${name}-error`}
          role="alert"
          data-error-code={error.code}
        >
          {error.message}
        </span>
      )}
    </div>
  );
}

Why validate on blur? The role="alert" causes screen readers to announce errors immediately. Validating on every keystroke would fire announcements constantly. Blur validation is more accessible.
Why data-error-code? Gives consuming applications a hook to respond programmatically. Show different help text for required vs format errors. Track which validations fail most often.
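For reference, here is what a validation function matching TextField's contract might look like (the rules themselves are made up for illustration):

```javascript
// Returns { valid: true } on success, or a structured failure carrying
// a machine-readable code, a human-readable message, and the violated
// constraint, matching what TextField's validate() expects.
function validateUsername(value) {
  if (value.length === 0) {
    return { valid: false, code: 'required', message: 'Username is required.' };
  }
  if (value.length < 3 || value.length > 100) {
    return {
      valid: false,
      code: 'length',
      message: 'Username must be between 3 and 100 characters.',
      constraint: { min: 3, max: 100 },
    };
  }
  return { valid: true };
}
```

It plugs straight into the component: `<TextField name="username" validation={validateUsername} ... />`.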
Source: GitHub API troubleshooting
GitHub's API uses cursor-based pagination with Link headers. The same pattern works for component data loading.
The problem: You're loading a list that might have thousands of items. Loading everything at once is slow and wasteful. You need to paginate, but offset-based pagination breaks when items are added or removed.
function usePaginatedData(fetchPage) {
  const [items, setItems] = useState([]);
  const [cursor, setCursor] = useState(null);
  const [hasMore, setHasMore] = useState(true);
  const [isLoading, setIsLoading] = useState(false);

  const loadMore = useCallback(async () => {
    if (isLoading || !hasMore) return;
    setIsLoading(true);
    try {
      const response = await fetchPage(cursor);
      setItems(prev => [...prev, ...response.items]);
      setCursor(response.nextCursor);
      setHasMore(response.nextCursor !== null);
    } finally {
      setIsLoading(false);
    }
  }, [cursor, hasMore, isLoading, fetchPage]);

  const reset = useCallback(() => {
    setItems([]);
    setCursor(null);
    setHasMore(true);
  }, []);

  return { items, loadMore, hasMore, isLoading, reset };
}

// The fetch function returns cursor-based responses
async function fetchUsers(cursor) {
  // Encode the cursor in case it contains reserved URL characters
  const params = cursor ? `?after=${encodeURIComponent(cursor)}&limit=20` : '?limit=20';
  const response = await fetch(`/api/users${params}`);
  const data = await response.json();
  return {
    items: data.users,
    nextCursor: data.nextCursor // null when no more pages
  };
}

// Usage
function UserList() {
  const { items, loadMore, hasMore, isLoading } = usePaginatedData(fetchUsers);
  return (
    <div>
      {items.map(user => <UserCard key={user.id} user={user} />)}
      {hasMore && (
        <button onClick={loadMore} disabled={isLoading}>
          {isLoading ? 'Loading...' : 'Load more'}
        </button>
      )}
    </div>
  );
}

Why cursor-based? Offset pagination breaks when items are inserted or deleted. If you're on page 3 and someone adds an item to page 1, you'll see duplicates or miss items. Cursors are stable.
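The stability claim is easy to demonstrate with an in-memory sketch. Here item ids double as cursors (an assumption for the demo; real APIs usually hand back opaque cursors), and an insertion at the top of the list doesn't shift later pages:

```javascript
// A toy cursor-paginated store: fetchPage(cursor) resumes after the item
// whose id matches the cursor, so insertions before it can't shift the page.
function makePaginatedStore(initialItems) {
  return {
    items: [...initialItems],
    fetchPage(cursor, limit = 2) {
      const start = cursor === null
        ? 0
        : this.items.findIndex(item => item.id === cursor) + 1;
      const page = this.items.slice(start, start + limit);
      const nextCursor =
        start + limit < this.items.length ? page[page.length - 1].id : null;
      return { items: page, nextCursor };
    },
  };
}
```

Fetch the first page, insert a new item at the front, then fetch with the saved cursor: the second page still starts right after the cursor item. An offset-based slice would have re-served an item you already rendered.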
Source: GitHub pagination
GitHub returns 404 for private resources you can't access, rather than 403, to avoid leaking information about what exists. Components can apply the same principle.
The problem: Your component displays different UI for "not found" vs "not authorized". But revealing that a resource exists when the user can't access it leaks information.
function ResourceLoader({ resourceId, children }) {
  const [state, setState] = useState({ status: 'loading', data: null });

  useEffect(() => {
    async function load() {
      try {
        const response = await fetch(`/api/resources/${resourceId}`);
        if (response.status === 404) {
          // Could be truly not found, or could be forbidden
          // We don't distinguish to avoid leaking info
          setState({
            status: 'not_found',
            message: 'This resource doesn\'t exist or you don\'t have access.'
          });
          return;
        }
        if (!response.ok) {
          setState({ status: 'error', message: 'Something went wrong.' });
          return;
        }
        const data = await response.json();
        setState({ status: 'success', data });
      } catch (error) {
        setState({ status: 'error', message: 'Unable to load resource.' });
      }
    }
    load();
  }, [resourceId]);

  if (state.status === 'loading') return <Spinner />;
  if (state.status === 'not_found') return <NotFoundMessage message={state.message} />;
  if (state.status === 'error') return <ErrorMessage message={state.message} />;
  return children(state.data);
}

When to use this: Any time your application has private or tenant-scoped resources. Confirming that something exists, even without showing it, can be a security or privacy issue.
Source: GitHub 404 for private repos
APIs use Sunset headers to give consumers time to migrate. Components can do the same with console warnings.
The problem: You change a prop name or retire a component. Teams discover the breakage months later when they update dependencies. By then, nobody remembers what changed or why.
let hasWarnedOldButton = false;

function OldButton(props) {
  if (process.env.NODE_ENV === 'development' && !hasWarnedOldButton) {
    hasWarnedOldButton = true;
    console.warn(
      '[DEPRECATED] OldButton will be removed in v3.0.0 (March 2026). ' +
      'Use Button with variant prop instead. ' +
      'Migration guide: https://your-system.com/migration/button'
    );
  }
  return <Button {...mapOldPropsToNew(props)} />;
}

// For prop deprecation within a component
function Card({
  type,    // deprecated
  variant, // new prop
  children
}) {
  const hasWarnedRef = useRef(false);
  if (process.env.NODE_ENV === 'development' && type && !hasWarnedRef.current) {
    hasWarnedRef.current = true;
    console.warn(
      '[DEPRECATED] Card prop "type" will be removed in v3.0.0. ' +
      'Use "variant" instead.'
    );
  }
  const resolvedVariant = variant || type || 'default';
  return <div className={`card-${resolvedVariant}`}>{children}</div>;
}

Why the warning flag? Without it, the warning fires on every render. In a component that re-renders frequently, you'd flood the console. One warning per session is enough.
Why wrap the new component? The old component becomes a thin adapter. Behaviour stays consistent during migration. Teams can upgrade incrementally.
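The flag pattern generalises to a small helper keyed by deprecation, so each distinct warning fires at most once per session (a sketch; the helper name is mine):

```javascript
// Fire each distinct deprecation warning at most once per session.
// Returns true when the warning was actually emitted, which also makes
// the helper easy to test.
const warnedKeys = new Set();

function deprecationWarning(key, message) {
  if (warnedKeys.has(key)) return false;
  warnedKeys.add(key);
  console.warn(`[DEPRECATED] ${message}`);
  return true;
}
```

Inside Card, `deprecationWarning('Card.type', 'Card prop "type" will be removed in v3.0.0. Use "variant" instead.')` replaces the per-component ref, still guarded by the same NODE_ENV check.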
Source: RFC 8594 (Sunset header)
APIs use 429 responses and RateLimit headers to communicate limits. Components can expose performance characteristics through props and documentation.
The problem: Scroll events fire 100 times per second. Keystroke events come in bursts. Each event triggers state updates and re-renders. The application grinds to a halt.
// Debounce: execute after activity stops
function debounce(fn, delay) {
  let timeoutId;
  return (...args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), delay);
  };
}

// Throttle: execute at most once per interval, with trailing call
function throttle(fn, interval) {
  let lastExecution = 0;
  let timeoutId = null;
  let latestArgs = null;
  return (...args) => {
    latestArgs = args;
    const now = Date.now();
    const remaining = interval - (now - lastExecution);
    if (remaining <= 0) {
      clearTimeout(timeoutId);
      timeoutId = null;
      lastExecution = now;
      fn(...latestArgs);
    } else if (!timeoutId) {
      timeoutId = setTimeout(() => {
        lastExecution = Date.now();
        timeoutId = null;
        fn(...latestArgs);
      }, remaining);
    }
  };
}

Why capture latestArgs? Without it, calls that arrive during the wait are dropped. If calls arrive at 0ms, 50ms, and 80ms with a 100ms interval, a naive throttle executes only the 0ms call; this version schedules a trailing call that fires with the 80ms call's arguments, so the handler always sees the most recent data.
A complete search input with rate limiting:
function SearchInput({
  onSearch,
  debounceMs = 300,
  minQueryLength = 2
}) {
  const [query, setQuery] = useState('');
  const onSearchRef = useRef(onSearch);
  const timeoutRef = useRef(null);

  useEffect(() => {
    onSearchRef.current = onSearch;
  }, [onSearch]);

  useEffect(() => {
    return () => clearTimeout(timeoutRef.current);
  }, []);

  const debouncedSearch = useCallback((value) => {
    clearTimeout(timeoutRef.current);
    timeoutRef.current = setTimeout(() => {
      if (value.length >= minQueryLength) {
        onSearchRef.current(value);
      }
    }, debounceMs);
  }, [debounceMs, minQueryLength]);

  return (
    <input
      type="search"
      value={query}
      onChange={(e) => {
        setQuery(e.target.value);
        debouncedSearch(e.target.value);
      }}
      aria-label="Search"
    />
  );
}

Why the ref for onSearch? If the parent passes an inline function, it changes on every render. Without the ref pattern, you'd recreate the debounced function constantly, defeating the debounce entirely.
Source: IETF RateLimit headers
APIs version through URLs, headers, or query parameters. Component libraries can adopt similar strategies for breaking changes.
The problem: You need to make a breaking change to a component API, but teams can't upgrade all at once. You need both versions to coexist.
// Strategy 1: Versioned exports
// components/Button/index.js
export { Button } from './Button';
export { Button as ButtonV2 } from './ButtonV2';

// Strategy 2: Version prop with different behavior
function Button({ version = 1, ...props }) {
  if (version === 2) {
    return <ButtonV2 {...props} />;
  }
  return <ButtonV1 {...props} />;
}

// Strategy 3: Feature flags for gradual rollout
function Button(props) {
  const { useNewButton } = useFeatureFlags();
  if (useNewButton) {
    return <ButtonNext {...props} />;
  }
  return <ButtonLegacy {...props} />;
}

// Strategy 4: Codemods for automated migration
// Provide a codemod that transforms:
//   <Button type="primary" />
// into:
//   <Button variant="primary" />
// Teams run it once and the migration is done.

Which to use? Versioned exports work for big changes. Version props work when behaviour differs significantly. Feature flags work for A/B testing or gradual rollout. Codemods work when the change is mechanical and can be automated.
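To make Strategy 4 concrete, here is a deliberately simplified string-level rename. Real codemods (jscodeshift and friends) operate on the AST rather than on raw text; this regex version only shows that the transformation is mechanical:

```javascript
// Rename the deprecated `type` prop to `variant`, but only on <Button> tags,
// leaving other elements (like <input type="...">) untouched.
function renameButtonProp(source) {
  return source.replace(/(<Button\b[^>]*?)\btype=/g, '$1variant=');
}
```

Because the change is mechanical, teams run the transform once across their codebase and commit the result; no dual-version period is needed.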
Source: Microsoft Azure API versioning
GraphQL schemas declare which fields can be null. Component props can do the same with TypeScript and default values.
The problem: A component receives undefined for a prop and crashes. Or it renders incorrectly because it assumed a value would exist. Prop contracts are unclear.
// Explicit about what's required vs optional
interface CardProps {
  // Required: component won't render correctly without these
  title: string;

  // Optional with explicit defaults
  variant?: 'default' | 'highlighted' | 'muted';

  // Optional, truly nullable (absence is meaningful)
  badge?: string | null;

  // Children are required for this component
  children: React.ReactNode;
}

function Card({
  title,
  variant = 'default', // Default makes the contract clear
  badge,               // No default = nullable
  children
}: CardProps) {
  return (
    <div className={`card card-${variant}`}>
      <h3>{title}</h3>
      {badge !== undefined && badge !== null && (
        <span className="badge">{badge}</span>
      )}
      {children}
    </div>
  );
}

// The contract is now clear:
// - title and children are required
// - variant defaults to 'default' if not provided
// - badge is only shown when explicitly provided

Why distinguish undefined from null? undefined typically means "not provided". null can mean "explicitly empty". Some components need to distinguish between "user didn't set this" and "user set this to nothing".
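The distinction can be isolated in a tiny resolver (a sketch; the function is illustrative and not part of the Card above):

```javascript
// undefined = "not provided": fall back to a default.
// null      = "explicitly empty": render nothing.
// anything else = "provided": use it.
function resolveBadge(badge, fallback) {
  if (badge === undefined) return fallback;
  if (badge === null) return null;
  return badge;
}
```

A naive `badge || fallback` would collapse both cases, silently showing the fallback when the caller explicitly asked for no badge.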
Source: GraphQL nullability
GraphQL uses union types for fields that can return different shapes. Components can use discriminated unions for complex state.
The problem: Your component has multiple states (loading, error, success, empty) and the data shape differs for each. Checking for properties is error-prone.
// Discriminated union for component state
type DataState<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'error'; error: Error; retryable: boolean }
  | { status: 'success'; data: T }
  | { status: 'empty'; message: string };

function useData<T>(fetchFn: () => Promise<T>) {
  const [state, setState] = useState<DataState<T>>({ status: 'idle' });

  const load = useCallback(async () => {
    setState({ status: 'loading' });
    try {
      const data = await fetchFn();
      if (Array.isArray(data) && data.length === 0) {
        setState({ status: 'empty', message: 'No results found.' });
      } else {
        setState({ status: 'success', data });
      }
    } catch (error) {
      setState({
        status: 'error',
        error: error as Error,
        retryable: isRetryable(error)
      });
    }
  }, [fetchFn]);

  // Callers trigger load() (from an effect or a retry button) and render from state
  return { state, load };
}

// Usage with exhaustive checking
function DataDisplay<T>({ state, render }: {
  state: DataState<T>;
  render: (data: T) => React.ReactNode
}) {
  switch (state.status) {
    case 'idle':
      return null;
    case 'loading':
      return <Spinner />;
    case 'error':
      return (
        <ErrorMessage
          message={state.error.message}
          showRetry={state.retryable}
        />
      );
    case 'empty':
      return <EmptyState message={state.message} />;
    case 'success':
      return <>{render(state.data)}</>;
  }
}

Why discriminated unions? TypeScript narrows the type based on the status field. Inside case 'success', TypeScript knows state.data exists. Inside case 'error', it knows state.error exists. No more optional chaining everywhere.
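The exhaustive switch has a runtime analogue too. In TypeScript, a default branch calling a function typed `(value: never) => never` turns a missing case into a compile error; this plain-JavaScript sketch shows the runtime half (the names are mine):

```javascript
// Throws if a state slips past every case in the switch, so an
// unhandled status fails loudly instead of rendering nothing.
function assertNever(value) {
  throw new Error(`Unhandled state: ${JSON.stringify(value)}`);
}

function describeState(state) {
  switch (state.status) {
    case 'idle': return 'waiting';
    case 'loading': return 'loading';
    case 'error': return state.retryable ? 'failed (retryable)' : 'failed';
    case 'empty': return state.message;
    case 'success': return `loaded ${state.data.length} items`;
    default: return assertNever(state);
  }
}
```

When a new status is added to the union, every switch that forgot to handle it throws immediately in tests rather than silently falling through.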
Source: GraphQL union types
GraphQL validates inputs against the schema before resolvers run. Components can validate props before rendering.
The problem: Invalid props cause runtime errors or weird behaviour deep inside the component. The error message doesn't help the developer understand what they did wrong.
// Validation functions that return structured errors
function validateProps<T>(
  props: T,
  validators: Record<keyof T, (value: unknown) => string | null>
): { valid: true } | { valid: false; errors: Record<string, string> } {
  const errors: Record<string, string> = {};
  for (const [key, validator] of Object.entries(validators)) {
    const error = validator(props[key as keyof T]);
    if (error) {
      errors[key] = error;
    }
  }
  if (Object.keys(errors).length > 0) {
    return { valid: false, errors };
  }
  return { valid: true };
}

// Usage in a component
interface DataGridProps {
  columns: Column[];
  data: Row[];
  pageSize?: number;
}

const dataGridValidators = {
  columns: (v: unknown) =>
    !Array.isArray(v) || v.length === 0
      ? 'columns must be a non-empty array'
      : null,
  data: (v: unknown) =>
    !Array.isArray(v)
      ? 'data must be an array'
      : null,
  pageSize: (v: unknown) =>
    v !== undefined && (typeof v !== 'number' || v < 1 || v > 100)
      ? 'pageSize must be between 1 and 100'
      : null,
};

function DataGrid(props: DataGridProps) {
  if (process.env.NODE_ENV === 'development') {
    const validation = validateProps(props, dataGridValidators);
    if (!validation.valid) {
      console.error('DataGrid received invalid props:', validation.errors);
    }
  }
  // Render logic...
}

Why validate in development only? Validation has a runtime cost. In production, TypeScript has already caught type errors at build time. Development validation catches mistakes during local development with helpful messages.
Source: Apollo errors as data
- Stripe API documentation
- Stripe error handling
- GitHub REST API
- GitHub pagination
- Microsoft Azure API design guidelines
- GraphQL best practices
- RFC 8594 (Sunset header)
- IETF RateLimit headers
From the article What I stole from API documentation