Analysis of implementing the AI Text Transform feature for Highspot, including streaming capabilities.
Based on the original technical design gist.
For teams wanting full control over the UI, the `useAITextTransform` hook exposes the transform logic directly.
Returns:
- `transform(text, options)` - Transform with a preset or custom `promptKey`
- `result` - The AI-generated text
- `busy` - Loading state
- `errors` - Error messages
- `cancel` - Cancel the in-flight request
- `reset` - Reset state
Built-in Presets:
| Preset | Prompt Key |
|---|---|
| `shorten` | `ai_refine_shorten` |
| `elaborate` | `ai_refine_elaborate` |
| `professional` | `ai_refine_professional` |
The `AIRefinePanel` component provides a full UI out of the box with:
- Preset buttons (Shorten, Elaborate, Make it professional)
- Action buttons (Regenerate, Replace, Add below)
- Result display with label
- Error handling & loading state
The `/api/v1/ai/llm/general` endpoint already supports streaming via query parameters:

```
POST /api/v1/ai/llm/general?streaming=true&stream_format=sse
```

Location: `ai-services/py/llmproxy/llmproxy/api/endpoints/general.py`
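As a sketch, a client request to this endpoint might be assembled like so (the payload fields mirror the `requestProps`/`args` used by the hooks in this document; the concrete values are illustrative):

```typescript
// Illustrative request construction for the streaming endpoint.
// Field names follow the hook code elsewhere in this doc; confirm
// against the endpoint before relying on them.
const url = '/api/v1/ai/llm/general?streaming=true&stream_format=sse';
const init = {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Accept: 'text/event-stream',
  },
  body: JSON.stringify({
    prompt_key: 'ai_refine_shorten',
    text: 'Some long text to shorten...',
    args: { temperature: 0.3 },
  }),
};
// fetch(url, init) would then return a response whose body is read as an SSE stream.
```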
SSE Event Format:
| Event | Data |
|---|---|
| `stream_start` | `{"message": "Stream started", "feedback_id": "...", "domain_id": "...", "model": "..."}` |
| `token` | `{"content": "token text"}` |
| `stream_end` | `{"message": "Stream ended", "token_count": N, "feedback_id": "..."}` |
| `stream_error` | `{"error": "message"}` |
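A worked example of this wire format (the `feedback_id` and model values are made up; the parsing here is a simplified version of the SSE helper shown next):

```typescript
// Hypothetical sample of the SSE stream: events are blank-line-separated
// blocks, each with an "event:" name line and a "data:" JSON payload line.
const sampleStream = [
  'event: stream_start\ndata: {"message": "Stream started", "feedback_id": "fb-1", "domain_id": "d-1", "model": "gpt-4o"}',
  'event: token\ndata: {"content": "Hello"}',
  'event: token\ndata: {"content": " world"}',
  'event: stream_end\ndata: {"message": "Stream ended", "token_count": 2, "feedback_id": "fb-1"}',
].join('\n\n');

// Split into event blocks, then pull out the event name and JSON data.
const events = sampleStream.split('\n\n').map((block) => {
  const lines = block.split('\n');
  const event = lines.find((l) => l.startsWith('event: '))!.slice(7);
  const data = JSON.parse(lines.find((l) => l.startsWith('data: '))!.slice(6));
  return { event, data };
});

// Reassemble the generated text from the token events.
const text = events
  .filter((e) => e.event === 'token')
  .map((e) => e.data.content)
  .join('');
// text is now "Hello world"
```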
1. SSE Parsing Helper (see `ChatUIStreamClient.ts`):

```typescript
function parseSSEEvent(data: string): { event?: string; data: string } | null {
  const lines = data.split('\n');
  let eventName: string | undefined;
  let eventData = '';
  for (const line of lines) {
    if (line.startsWith('event: ')) {
      eventName = line.substring(7).trim();
    } else if (line.startsWith('data: ')) {
      eventData = line.substring(6).trim();
    }
  }
  return eventData ? { event: eventName, data: eventData } : null;
}
```

2. Stream Reading Pattern:

```typescript
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const chunks = buffer.split('\n\n');
  buffer = chunks.pop() || ''; // Keep the incomplete event in the buffer
  for (const chunk of chunks) {
    const parsed = parseSSEEvent(chunk);
    // ... handle event
  }
}
```

| Task | Effort |
|---|---|
| Backend changes | None needed - streaming already supported |
| Frontend streaming hook | ~1 day |
| Testing | ~0.5 day |
| Total | ~1.5 days |
Use the existing `useGeneralAIRequest` hook:

```typescript
import { useCallback, useState } from 'react';
import useGeneralAIRequest from '~/features/shared/components/ai/RoutesAIServices/hooks/useGeneralAIRequest';

const BUILT_IN_PROMPT_KEYS: Record<string, string> = {
  shorten: 'ai_refine_shorten',
  elaborate: 'ai_refine_elaborate',
  professional: 'ai_refine_professional',
};

export const useAITextTransform = () => {
  // getCurrentUser comes from the app's session utilities (import omitted here).
  const currentUser = getCurrentUser();
  const [result, setResult] = useState<string | null>(null);
  const [activePreset, setActivePreset] = useState<string | null>(null);

  const { onSubmit, busy, errors, onCancel, setErrors } = useGeneralAIRequest({
    onSuccess: (response) => {
      setResult(response?.choices?.[0]?.text ?? null);
    },
  });

  const transform = useCallback(
    (text: string, options: { preset?: string; promptKey?: string }) => {
      const promptKey = options.promptKey ?? BUILT_IN_PROMPT_KEYS[options.preset!];
      if (!promptKey) {
        setErrors(['Invalid preset or missing promptKey']);
        return;
      }
      setResult(null);
      setActivePreset(options.preset ?? 'custom');
      onSubmit({
        domainId: currentUser.get('domain_id'),
        userId: currentUser.get('id'),
        requestProps: { prompt_key: promptKey, text },
        args: { temperature: 0.3, enable_event_logging: true },
      });
    },
    [currentUser, onSubmit, setErrors]
  );

  const reset = useCallback(() => {
    setResult(null);
    setActivePreset(null);
    setErrors([]);
  }, [setErrors]);

  return { transform, result, activePreset, busy, errors, cancel: onCancel, reset };
};
```

Same hook API, with a streaming implementation internally:
```typescript
export const useAITextTransform = () => {
  const isStreamingEnabled = useFeatureFlag('ai_refine_streaming');
  // ... shared state ...

  const transform = useCallback((text: string, options) => {
    if (isStreamingEnabled) {
      transformStreaming(text, options);
    } else {
      transformNonStreaming(text, options);
    }
  }, [isStreamingEnabled]);

  // Consumers don't need to change!
  return { transform, result, busy, errors, cancel, reset };
};
```

The streaming path:

```typescript
const transformStreaming = async (text: string, promptKey: string) => {
  setResult('');
  setStreamingBusy(true);
  abortRef.current = new AbortController();

  const url = `${RoutesAIServices.general()}?streaming=true&stream_format=sse`;
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Accept: 'text/event-stream',
        'HS-CSRF': window.hs_csrf || '',
      },
      credentials: 'include',
      body: JSON.stringify({
        domain_id: currentUser.get('domain_id'),
        user_id: currentUser.get('id'),
        prompt_key: promptKey,
        text,
        args: { temperature: 0.3 },
      }),
      signal: abortRef.current.signal,
    });
    if (!response.ok) {
      throw new Error(`Streaming request failed with status ${response.status}`);
    }

    const reader = response.body?.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (reader) {
      const { done, value } = await reader.read();
      if (done) break;
      buffer += decoder.decode(value, { stream: true });
      const chunks = buffer.split('\n\n');
      buffer = chunks.pop() || '';
      for (const chunk of chunks) {
        const parsed = parseSSEEvent(chunk);
        if (!parsed) continue;
        const eventData = JSON.parse(parsed.data);
        if (parsed.event === 'token' && eventData.content) {
          setResult(prev => prev + eventData.content);
        } else if (parsed.event === 'stream_error') {
          setStreamingErrors([eventData.error]);
        }
      }
    }
  } catch (err: any) {
    if (err.name !== 'AbortError') {
      setStreamingErrors([err.message]);
    }
  } finally {
    setStreamingBusy(false);
  }
};
```

Package structure:

```
packages/polar-ui/src/AIRefine/
├── index.ts                     # Exports
├── useAITextTransform.ts        # The hook
├── AIRefinePanel.tsx            # The component
├── AIRefinePanel.module.scss    # Styles
├── types.ts                     # TypeScript types
├── __tests__/
│   ├── useAITextTransform.test.ts
│   └── AIRefinePanel.test.tsx
└── stories/
    └── AIRefinePanel.stories.tsx
```
Backend:
- `py/llmproxy/llmproxy/api/endpoints/general.py` - Streaming endpoint implementation
- `py/shared/shared/handlers/streaming_callback_handler.py` - SSE event formatting

Frontend:
- `features/shared/components/ai/RoutesAIServices/hooks/useGeneralAIRequest.js` - Existing non-streaming hook
- `features/copilot/components/CopilotCenter/agent-platform/chatui/ChatUIStreamClient.ts` - SSE streaming patterns
- `features/copilot/utility/streaming/streamCopilotAnswer.ts` - AG-UI streaming example
- `features/training/trainingShared/modals/RolePlayScenarioModal/hooks/useGenerateScenario.ts` - Example general API usage
| Ticket | Description | Team |
|---|---|---|
| 1 | Register prompts in GrowthBook & deploy to SUs | AI Services |
| 2 | Create `useAITextTransform` hook | Polar UI |
| 3 | Create `AIRefinePanel` component | Polar UI |
| 4 | Add Storybook stories | Polar UI |
| 5 | Create Lexical `AIRefinePlugin` | SmartPages |
| 6 | Feature flag setup | SmartPages |
| 7 | E2E tests | SmartPages |
- Should the panel support a "compact" mode for smaller spaces?
- Do we need analytics/tracking built in?
- Should we add more default presets? (summarize, translate, etc.)
- When to enable streaming by default?
Generated from conversation analysis on Feb 5, 2026