# AI Assistant Toolkit

Stream ghost-text suggestions over the canvas with `AiToolkit`.
`AiToolkit` adds a streaming ghost-text writing assistant to Scrivr. Suggestions are rendered
as a canvas overlay, so the document is not modified during streaming. When the stream ends, the
content is inserted as a tracked change that the user can review before accepting.
## How It Works

`AiToolkit` bundles three sub-extensions:

- `UniqueId` — assigns a stable `nodeId` to every block. Used to anchor ghost text to the right position.
- `GhostText` — renders streaming text as a canvas overlay below the anchor block.
- `AiCaret` — animates a pulsing cursor during streaming.
The core flow:
1. Add `AiToolkit` to your extensions.
2. Get a handle: `const ai = getAiToolkit(editor)`.
3. Read context: `ai.getContext()` or `ai.getBlocks()`.
4. Call `await ai.generateSuggestion(nodeId, stream)` — it handles streaming, the overlay, and insertion.
## Setup

```tsx
import { useScrivrEditor, Scrivr, StarterKit } from '@scrivr/react';
import { AiToolkit } from '@scrivr/plugins';

export function AiEditor() {
  const editor = useScrivrEditor({
    extensions: [
      StarterKit,
      AiToolkit,
    ],
  });

  return <Scrivr editor={editor} style={{ height: '100vh' }} />;
}
```

## Reading Document Context
```ts
import { getAiToolkit } from '@scrivr/plugins';

const ai = getAiToolkit(editor);

// Text context around the cursor — use this to build your LLM prompt
const context = ai.getContext({ beforeChars: 1000, afterChars: 200 });
// { before: string, after: string, selection: string, cursorPos: number, totalLength: number }

// All blocks with stable nodeIds — use nodeId to anchor ghost text
const blocks = ai.getBlocks();
// [{ nodeId: 'abc123', text: 'First paragraph...' }, ...]

// Markdown of a selection range
const { from, to } = editor.getState().selection;
const selectedMd = ai.getMarkdownRange(from, to);

// Document split into chunks for LLM context windows
const chunks = ai.getTextChunks(4000);
```

## Streaming a Suggestion
`generateSuggestion` takes a `nodeId` and an `AsyncIterable<string>`. It streams the text cosmetically
as ghost text, then inserts the result after the anchor block as a tracked change.
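Because the second argument is any `AsyncIterable<string>`, you can exercise the flow with a hand-written mock stream before wiring up a backend. A minimal sketch (the token list and delay are arbitrary, not part of the toolkit):

```typescript
// Mock token stream for trying generateSuggestion without a backend.
async function* mockStream(): AsyncIterable<string> {
  const tokens = ['Ghost ', 'text ', 'streams ', 'word ', 'by ', 'word.'];
  for (const token of tokens) {
    // Simulate network latency between chunks
    await new Promise((resolve) => setTimeout(resolve, 50));
    yield token;
  }
}
```

You would then pass it directly: `await ai.generateSuggestion(anchorBlock.nodeId, mockStream())`.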
```ts
import { getAiToolkit } from '@scrivr/plugins';

async function runSuggestion(editor: Editor) {
  const ai = getAiToolkit(editor);

  // Build context for the LLM
  const context = ai.getContext({ beforeChars: 1000 });

  // Get the nodeId of the block to insert after
  const blocks = ai.getBlocks();
  const anchorBlock = blocks[blocks.length - 1]; // last block
  if (!anchorBlock) return;

  // Fetch a streaming response from your backend
  const response = await fetch('/api/ai/complete', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ before: context.before }),
  });

  // Convert ReadableStream to AsyncIterable<string>
  async function* toTextStream(stream: ReadableStream<Uint8Array>): AsyncIterable<string> {
    const reader = stream.getReader();
    const decoder = new TextDecoder();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // stream: true avoids splitting multi-byte characters across chunks
        yield decoder.decode(value, { stream: true });
      }
    } finally {
      reader.releaseLock();
    }
  }

  // Stream ghost text cosmetically, then insert as a tracked change
  await ai.generateSuggestion(anchorBlock.nodeId, toTextStream(response.body!));
}
```

## Backend API Route
```ts
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

export async function POST(req: Request) {
  const { before } = await req.json();

  const result = streamText({
    model: anthropic('claude-sonnet-4-6'),
    system: 'You are a writing assistant. Continue the document naturally.',
    prompt: `Continue after:\n\n${before}`,
  });

  return result.toTextStreamResponse();
}
```

`ANTHROPIC_API_KEY` must be set in your backend environment.
## Low-Level Streaming

If you need more control — for example to clear ghost text early or handle errors — use
`streamGhostText` directly instead of `generateSuggestion`:
```ts
const ai = getAiToolkit(editor);

try {
  // Streams cosmetically only — document is NOT modified
  const generated = await ai.streamGhostText(anchorBlock.nodeId, stream);

  // Clear the overlays
  ai.clearGhostText();
  ai.clearAiCaret();

  // Now decide what to do with `generated`...
} catch (err) {
  ai.clearGhostText();
  ai.clearAiCaret();
  throw err;
}
```
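`streamGhostText` consumes whatever iterable you pass it, so one way to support user-initiated cancellation is to wrap the source stream with an `AbortSignal` check. This is a sketch, not part of the toolkit API — the `abortable` helper is hypothetical:

```typescript
// Wrap an AsyncIterable so iteration stops once the signal aborts.
async function* abortable<T>(
  source: AsyncIterable<T>,
  signal: AbortSignal,
): AsyncIterable<T> {
  for await (const chunk of source) {
    if (signal.aborted) return; // stop yielding after abort
    yield chunk;
  }
}
```

You would pass `abortable(stream, controller.signal)` to `streamGhostText`, then call `controller.abort()` followed by `clearGhostText()` and `clearAiCaret()` to stop early.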