API Reference (TypeScript)
The complete TypeScript API for LLM Context Forge. The API surface is intentionally identical in behavior to the Python SDK.
TokenCounter
Provides exact token counting for specific model encodings.
```typescript
import { TokenCounter } from 'llm-context-forge';

const counter = new TokenCounter("gpt-4o");
```
Methods
count(text: string): number
Returns the exact token count of text under the model's encoding.
fitsInWindow(text: string, reserveOutput: number = 0): boolean
Returns true if the token count of text, plus reserveOutput tokens set aside for the model's response, fits within the model's context limit.
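The two methods compose naturally: fitsInWindow is equivalent to counting the text and comparing against the model limit minus the reserved output budget. A minimal sketch of that semantics, using a hypothetical 4-characters-per-token approximation and an assumed 128k limit in place of the library's exact encoder:

```typescript
// Sketch of TokenCounter semantics only — NOT the real encoder.
// The actual library counts tokens with the model's exact encoding.
const MODEL_LIMIT = 128_000; // assumed context limit, for illustration

function approxTokens(text: string): number {
  // Rough heuristic: ~4 characters per token.
  return Math.ceil(text.length / 4);
}

function fitsInWindow(text: string, reserveOutput = 0): boolean {
  return approxTokens(text) + reserveOutput <= MODEL_LIMIT;
}
```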
DocumentChunker
Splits text into token-safe portions using specified strategies.
```typescript
import { DocumentChunker, ChunkStrategy } from 'llm-context-forge';

const chunker = new DocumentChunker("claude-3-5-sonnet");
```
Options Interface
```typescript
interface ChunkOptions {
  maxTokens: number;
  overlapTokens?: number;
}
```
Methods
chunk(text: string, strategy: ChunkStrategy, options: ChunkOptions): string[]
Splits the string according to the strategy, ensuring no chunk exceeds maxTokens.
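To make the maxTokens/overlapTokens interaction concrete, here is an illustrative sketch of sliding-window chunking with overlap. It treats whitespace-separated words as "tokens" purely for demonstration; the real DocumentChunker uses exact token counts and strategy-aware split points:

```typescript
// Illustrative sketch, not the library's implementation.
// Each chunk holds at most maxTokens "tokens"; consecutive chunks
// share overlapTokens of trailing/leading context.
function chunkByWords(
  text: string,
  maxTokens: number,
  overlapTokens = 0
): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const step = maxTokens - overlapTokens;
  if (step <= 0) throw new Error("overlapTokens must be smaller than maxTokens");
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += step) {
    chunks.push(words.slice(i, i + maxTokens).join(" "));
    if (i + maxTokens >= words.length) break; // last window reached
  }
  return chunks;
}
```

Note the step size: each new chunk starts maxTokens − overlapTokens words after the previous one, which is what produces the shared overlap region.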
ContextWindow
Provides a priority-based packing mechanism for assembling RAG prompts.
```typescript
import { ContextWindow, Priority } from 'llm-context-forge';

const window = new ContextWindow("gpt-4o");
```
Methods
addBlock(content: string, priority: Priority, blockId?: string): void
Enqueues content for packing. A priority of 0 (Priority.CRITICAL) guarantees the content is included; if a critical block cannot fit, a ContextOverflowError is thrown.
assemble(options?: { maxTokens?: number }): string
Packs the prompt, honoring exact limits. If maxTokens is omitted, it defaults to the model's context window.
usage(): UsageStats
Returns { tokensUsed: number, included: string[], excluded: string[] }.
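The packing behavior described above can be sketched as a greedy fill in priority order. This is an assumption about the mechanism, shown only to make the documented guarantees (critical blocks always included, others dropped when over budget) concrete; token counting is approximated by word count here:

```typescript
// Sketch of priority-based packing semantics, not the library's code.
// Lower priority number = more important; 0 mirrors Priority.CRITICAL.
interface Block { id: string; content: string; priority: number; }

function assembleSketch(
  blocks: Block[],
  maxTokens: number
): { prompt: string; tokensUsed: number; included: string[]; excluded: string[] } {
  const tokens = (s: string) => s.split(/\s+/).filter(Boolean).length;
  const sorted = [...blocks].sort((a, b) => a.priority - b.priority);
  let used = 0;
  const kept: Block[] = [];
  const excluded: string[] = [];
  for (const b of sorted) {
    const t = tokens(b.content);
    if (used + t <= maxTokens) {
      kept.push(b);
      used += t;
    } else if (b.priority === 0) {
      // Mirrors the documented ContextOverflowError guarantee.
      throw new Error("ContextOverflowError: critical block does not fit");
    } else {
      excluded.push(b.id);
    }
  }
  return {
    prompt: kept.map(b => b.content).join("\n"),
    tokensUsed: used,
    included: kept.map(b => b.id),
    excluded,
  };
}
```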
CostCalculator
Calculates pricing estimates for prompts and completions.
```typescript
import { CostCalculator } from 'llm-context-forge';

const calc = new CostCalculator("gpt-4o");
```
Methods
estimatePrompt(text: string): CostEstimate
Returns { usd: number } for the given prompt based on exact token counting.
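The estimate reduces to simple per-token arithmetic. The rate below is an assumed placeholder, not the library's pricing table; the real CostCalculator looks up current model pricing and uses exact token counts:

```typescript
// Sketch of prompt-cost arithmetic only.
// ASSUMED_INPUT_USD_PER_MTOK is a hypothetical rate for illustration.
const ASSUMED_INPUT_USD_PER_MTOK = 2.5;

function estimatePromptUsd(promptTokens: number): number {
  return (promptTokens / 1_000_000) * ASSUMED_INPUT_USD_PER_MTOK;
}
```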