@azizpunjani
Created February 5, 2026 14:46

AI Text Transform - Implementation Analysis

Overview

An analysis of how to implement the AI Text Transform feature for Highspot, including streaming support.

Based on the original technical design gist.


Key Components

1. useAITextTransform (Hook)

For teams that want full control over the UI.

Returns:

  • transform(text, options) - Transform with preset or custom promptKey
  • result - The AI-generated text
  • busy - Loading state
  • errors - Error messages
  • cancel - Cancel in-flight request
  • reset - Reset state

Built-in Presets:

| Preset | Prompt Key |
| --- | --- |
| shorten | ai_refine_shorten |
| elaborate | ai_refine_elaborate |
| professional | ai_refine_professional |
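
The preset-to-prompt-key resolution implied by the table can be sketched as a small pure helper. This is illustrative only; the function name is an assumption, not existing code:

```typescript
// Built-in preset mapping, mirroring the table above.
const BUILT_IN_PROMPT_KEYS: Record<string, string> = {
  shorten: 'ai_refine_shorten',
  elaborate: 'ai_refine_elaborate',
  professional: 'ai_refine_professional',
};

// Resolve the prompt key from transform options: an explicit promptKey
// wins, otherwise fall back to the built-in preset mapping.
function resolvePromptKey(options: { preset?: string; promptKey?: string }): string | undefined {
  return options.promptKey ?? (options.preset ? BUILT_IN_PROMPT_KEYS[options.preset] : undefined);
}
```

A caller that gets `undefined` back knows the options were invalid and can surface an error instead of issuing a request.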

2. AIRefinePanel (Component)

Full UI out of the box with:

  • Preset buttons (Shorten, Elaborate, Make it professional)
  • Action buttons (Regenerate, Replace, Add below)
  • Result display with label
  • Error handling & loading state

Streaming Analysis

Backend Status: ✅ Already Supported

The /api/v1/ai/llm/general endpoint already supports streaming via query parameters:

POST /api/v1/ai/llm/general?streaming=true&stream_format=sse

Location: ai-services/py/llmproxy/llmproxy/api/endpoints/general.py

SSE Event Format:

| Event | Data |
| --- | --- |
| stream_start | {"message": "Stream started", "feedback_id": "...", "domain_id": "...", "model": "..."} |
| token | {"content": "token text"} |
| stream_end | {"message": "Stream ended", "token_count": N, "feedback_id": "..."} |
| stream_error | {"error": "message"} |
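
One way to consume these events is to fold them into a small accumulator. A minimal sketch, where the event shapes come from the table above but the state shape and function name are assumptions:

```typescript
// Accumulated state for one streaming transform.
interface StreamState {
  text: string;
  done: boolean;
  error?: string;
}

// Apply one parsed SSE event to the accumulated state.
function applyStreamEvent(state: StreamState, event: string, data: any): StreamState {
  switch (event) {
    case 'token':
      return { ...state, text: state.text + (data.content ?? '') };
    case 'stream_end':
      return { ...state, done: true };
    case 'stream_error':
      return { ...state, done: true, error: data.error };
    default:
      return state; // stream_start and unknown events carry no text
  }
}
```

Keeping this as a pure function makes the token-accumulation logic trivially unit-testable, independent of fetch and React state.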

Frontend Patterns Found

1. SSE Parsing Helper (ChatUIStreamClient.ts):

function parseSSEEvent(data: string): { event?: string; data: string } | null {
  const lines = data.split('\n');
  let eventName: string | undefined;
  let eventData = '';

  for (const line of lines) {
    if (line.startsWith('event: ')) {
      eventName = line.substring(7).trim();
    } else if (line.startsWith('data: ')) {
      eventData = line.substring(6).trim();
    }
  }
  return eventData ? { event: eventName, data: eventData } : null;
}
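
For reference, here is how that parser behaves on a sample frame (the helper is repeated verbatim so the snippet stands alone):

```typescript
// Repeated from the ChatUIStreamClient.ts helper above, for self-containment.
function parseSSEEvent(data: string): { event?: string; data: string } | null {
  const lines = data.split('\n');
  let eventName: string | undefined;
  let eventData = '';

  for (const line of lines) {
    if (line.startsWith('event: ')) {
      eventName = line.substring(7).trim();
    } else if (line.startsWith('data: ')) {
      eventData = line.substring(6).trim();
    }
  }
  return eventData ? { event: eventName, data: eventData } : null;
}

const frame = 'event: token\ndata: {"content":"Hi"}';
const parsed = parseSSEEvent(frame);
// parsed → { event: 'token', data: '{"content":"Hi"}' }
```

Note that frames with no data line (such as a bare comment or keep-alive) return null and are skipped by callers.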

2. Stream Reading Pattern:

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  buffer += decoder.decode(value, { stream: true });
  const chunks = buffer.split('\n\n');
  buffer = chunks.pop() || ''; // Keep the incomplete trailing event in the buffer

  for (const chunk of chunks) {
    const parsed = parseSSEEvent(chunk);
    // ... handle event
  }
}
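
The buffer handling in that loop can be factored into a pure helper, which makes the "keep the incomplete event" step easy to unit test. The helper name is illustrative, not existing code:

```typescript
// Split an SSE buffer into complete event frames plus the trailing
// incomplete remainder. Frames are separated by a blank line ("\n\n").
function splitSSEBuffer(buffer: string): { frames: string[]; rest: string } {
  const parts = buffer.split('\n\n');
  const rest = parts.pop() ?? ''; // last part may be a partial frame
  return { frames: parts, rest };
}
```

Each read-loop iteration would then append the decoded chunk, call splitSSEBuffer, process frames, and carry rest forward as the new buffer.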

Effort Estimate

| Task | Effort |
| --- | --- |
| Backend changes | None needed (streaming already supported) |
| Frontend streaming hook | ~1 day |
| Testing | ~0.5 day |
| Total | ~1.5 days |

Recommended Implementation Approach

Phase 1: Ship Without Streaming (v1)

Use existing useGeneralAIRequest hook:

import useGeneralAIRequest from '~/features/shared/components/ai/RoutesAIServices/hooks/useGeneralAIRequest';

const BUILT_IN_PROMPT_KEYS: Record<string, string> = {
  shorten: 'ai_refine_shorten',
  elaborate: 'ai_refine_elaborate',
  professional: 'ai_refine_professional',
};

export const useAITextTransform = () => {
  const currentUser = getCurrentUser();
  const [result, setResult] = useState<string | null>(null);
  const [activePreset, setActivePreset] = useState<string | null>(null);

  const { onSubmit, busy, errors, onCancel, setErrors } = useGeneralAIRequest({
    onSuccess: (response) => {
      setResult(response?.choices?.[0]?.text ?? null);
    },
  });

  const transform = useCallback(
    (text: string, options: { preset?: string; promptKey?: string }) => {
      const promptKey = options.promptKey ?? BUILT_IN_PROMPT_KEYS[options.preset!];
      if (!promptKey) {
        setErrors(['Invalid preset or missing promptKey']);
        return;
      }

      setResult(null);
      setActivePreset(options.preset ?? 'custom');

      onSubmit({
        domainId: currentUser.get('domain_id'),
        userId: currentUser.get('id'),
        requestProps: { prompt_key: promptKey, text },
        args: { temperature: 0.3, enable_event_logging: true },
      });
    },
    [currentUser, onSubmit, setErrors]
  );

  const reset = useCallback(() => {
    setResult(null);
    setActivePreset(null);
    setErrors([]);
  }, [setErrors]);

  return { transform, result, activePreset, busy, errors, cancel: onCancel, reset };
};

Phase 2: Add Streaming Behind Feature Flag

Same hook API, streaming implementation internally:

export const useAITextTransform = () => {
  const isStreamingEnabled = useFeatureFlag('ai_refine_streaming');
  
  // ... shared state ...

  const transform = useCallback((text: string, options: { preset?: string; promptKey?: string }) => {
    const promptKey = options.promptKey ?? BUILT_IN_PROMPT_KEYS[options.preset!];
    if (isStreamingEnabled) {
      transformStreaming(text, promptKey);
    } else {
      transformNonStreaming(text, promptKey);
    }
  }, [isStreamingEnabled]);

  // Consumers don't need to change!
  return { transform, result, busy, errors, cancel, reset };
};

Streaming Implementation

const transformStreaming = async (text: string, promptKey: string) => {
  setResult('');
  setStreamingBusy(true);
  abortRef.current = new AbortController();

  const url = `${RoutesAIServices.general()}?streaming=true&stream_format=sse`;

  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Accept: 'text/event-stream',
        'HS-CSRF': window.hs_csrf || '',
      },
      credentials: 'include',
      body: JSON.stringify({
        domain_id: currentUser.get('domain_id'),
        user_id: currentUser.get('id'),
        prompt_key: promptKey,
        text,
        args: { temperature: 0.3 },
      }),
      signal: abortRef.current.signal,
    });

    if (!response.ok) {
      throw new Error(`Transform request failed with status ${response.status}`);
    }

    const reader = response.body?.getReader();
    const decoder = new TextDecoder();
    let buffer = '';

    while (reader) {
      const { done, value } = await reader.read();
      if (done) break;

      buffer += decoder.decode(value, { stream: true });
      const chunks = buffer.split('\n\n');
      buffer = chunks.pop() || '';

      for (const chunk of chunks) {
        const parsed = parseSSEEvent(chunk);
        if (!parsed) continue;

        const eventData = JSON.parse(parsed.data);
        
        if (parsed.event === 'token' && eventData.content) {
          setResult(prev => prev + eventData.content);
        } else if (parsed.event === 'stream_error') {
          setStreamingErrors([eventData.error]);
        }
      }
    }
  } catch (err: any) {
    if (err.name !== 'AbortError') {
      setStreamingErrors([err.message]);
    }
  } finally {
    setStreamingBusy(false);
  }
};
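
Cancellation falls out of the AbortController held in abortRef. A minimal sketch of the cancel function the hook would return (assumed wiring, not existing code):

```typescript
// Ref holding the controller for the in-flight streaming request.
const abortRef: { current: AbortController | null } = { current: null };

// Abort the in-flight request, if any; the fetch in transformStreaming
// then rejects with an AbortError, which the catch block deliberately ignores.
function cancel(): void {
  abortRef.current?.abort();
  abortRef.current = null;
}
```

In the React hook this ref would come from useRef, and cancel would also reset the busy flag.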

File Structure

packages/polar-ui/src/AIRefine/
├── index.ts                    # Exports
├── useAITextTransform.ts       # The hook
├── AIRefinePanel.tsx           # The component
├── AIRefinePanel.module.scss   # Styles
├── types.ts                    # TypeScript types
├── __tests__/
│   ├── useAITextTransform.test.ts
│   └── AIRefinePanel.test.tsx
└── stories/
    └── AIRefinePanel.stories.tsx
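
types.ts would hold the shared shapes. A sketch under the assumption that the hook API stays as described above; all names here are suggestions:

```typescript
// Suggested contents for types.ts, derived from the hook API described earlier.
export type BuiltInPreset = 'shorten' | 'elaborate' | 'professional';

export interface TransformOptions {
  preset?: BuiltInPreset;
  promptKey?: string; // custom prompt key registered in GrowthBook
}

export interface AITextTransformResult {
  transform: (text: string, options: TransformOptions) => void;
  result: string | null;
  activePreset: string | null;
  busy: boolean;
  errors: string[];
  cancel: () => void;
  reset: () => void;
}
```

Both useAITextTransform and AIRefinePanel would import from this file so the hook and component stay in sync.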

Key Files Referenced

Backend (ai-services)

  • py/llmproxy/llmproxy/api/endpoints/general.py - Streaming endpoint implementation
  • py/shared/shared/handlers/streaming_callback_handler.py - SSE event formatting

Frontend (web/client)

  • features/shared/components/ai/RoutesAIServices/hooks/useGeneralAIRequest.js - Existing non-streaming hook
  • features/copilot/components/CopilotCenter/agent-platform/chatui/ChatUIStreamClient.ts - SSE streaming patterns
  • features/copilot/utility/streaming/streamCopilotAnswer.ts - AG-UI streaming example
  • features/training/trainingShared/modals/RolePlayScenarioModal/hooks/useGenerateScenario.ts - Example general API usage

JIRA Breakdown (from original design)

| Ticket | Description | Team |
| --- | --- | --- |
| 1 | Register prompts in GrowthBook & deploy to SUs | AI Services |
| 2 | Create useAITextTransform hook | Polar UI |
| 3 | Create AIRefinePanel component | Polar UI |
| 4 | Add Storybook stories | Polar UI |
| 5 | Create Lexical AIRefinePlugin | SmartPages |
| 6 | Feature flag setup | SmartPages |
| 7 | E2E tests | SmartPages |

Open Questions

  1. Should the panel support a "compact" mode for smaller spaces?
  2. Do we need analytics/tracking built in?
  3. Should we add more default presets? (summarize, translate, etc.)
  4. When to enable streaming by default?

Generated from conversation analysis on Feb 5, 2026
