AST-aware code chunking for semantic search and RAG pipelines.
Uses tree-sitter to split source code at semantic boundaries (functions, classes, methods) rather than arbitrary character limits. Each chunk includes rich context: scope chain, imports, siblings, and entity signatures.
- AST-aware: Splits at semantic boundaries, never mid-function
- Rich context: Scope chain, imports, siblings, entity signatures
- Contextualized text: Pre-formatted for embedding models
- Multi-language: TypeScript, JavaScript, Python, Rust, Go, Java
- Streaming: Process large files incrementally
- Effect support: First-class Effect integration
- Edge-ready: Works in Cloudflare Workers and other edge runtimes via WASM
Traditional text splitters chunk code by character count or line breaks, often cutting functions in half or separating related code. code-chunk takes a different approach:
Source code is parsed into an Abstract Syntax Tree (AST) using tree-sitter. This gives us a structured representation of the code that understands language grammar.
We traverse the AST to extract semantic entities: functions, methods, classes, interfaces, types, and imports. For each entity, we capture:
- Name and type
- Full signature (e.g., `async getUser(id: string): Promise<User>`)
- Docstring/comments if present
- Byte and line ranges
Entities are organized into a hierarchical scope tree that captures nesting relationships. A method inside a class knows its parent; a nested function knows its containing function. This enables us to provide scope context like `UserService > getUser`.
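To make the idea concrete, here is a minimal sketch of a scope tree and the chain it yields. The types and traversal below are illustrative only, not the library's internal representation:

```typescript
// Hypothetical scope-tree node; the library's internal types may differ.
interface ScopeNode {
  name: string
  type: 'class' | 'function' | 'method'
  children: ScopeNode[]
}

// Walk from the root toward a named entity, collecting ancestors
// into a "UserService > getUser"-style scope chain.
function scopePath(root: ScopeNode, target: string, trail: string[] = []): string | null {
  const next = [...trail, root.name]
  if (root.name === target) return next.join(' > ')
  for (const child of root.children) {
    const found = scopePath(child, target, next)
    if (found) return found
  }
  return null
}

const tree: ScopeNode = {
  name: 'UserService',
  type: 'class',
  children: [{ name: 'getUser', type: 'method', children: [] }],
}

scopePath(tree, 'getUser') // → 'UserService > getUser'
```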
Code is split at semantic boundaries while respecting the maxChunkSize limit. The chunker:
- Prefers to keep complete entities together
- Splits oversized entities at logical points (statement boundaries)
- Never cuts mid-expression or mid-statement
- Merges small adjacent chunks to reduce fragmentation
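The merge step can be sketched as a greedy pass over adjacent chunk sizes (illustrative only; this assumes byte sizes and is not the library's actual implementation):

```typescript
// Greedy merge: combine a chunk into its predecessor while the
// merged size stays within maxChunkSize; otherwise start a new chunk.
// Illustrative sketch, not code-chunk's internal algorithm.
function mergeSmallChunks(sizes: number[], maxChunkSize: number): number[] {
  const merged: number[] = []
  for (const size of sizes) {
    if (merged.length > 0 && merged[merged.length - 1] + size <= maxChunkSize) {
      merged[merged.length - 1] += size
    } else {
      merged.push(size)
    }
  }
  return merged
}

mergeSmallChunks([200, 300, 1200, 100], 1500) // → [500, 1300]
```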
Each chunk is enriched with contextual metadata:
- Scope chain: Where this code lives (e.g., inside which class/function)
- Entities: What's defined in this chunk
- Siblings: What comes before/after (for continuity)
- Imports: What dependencies are used
This context is formatted into `contextualizedText`, optimized for embedding models to understand semantic relationships.
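As a sketch, the context header could be assembled from a chunk's metadata roughly like this (the field names here are illustrative, not the exact `Chunk` shape):

```typescript
// Hypothetical context fields; the real Chunk object may differ.
interface ChunkContextSketch {
  filepath: string
  scope: string[]
  defines: string[]
  uses: string[]
  after?: string
}

// Render the "#"-prefixed header that precedes the raw code
// in contextualizedText.
function contextHeader(ctx: ChunkContextSketch): string {
  const lines = [`# ${ctx.filepath}`]
  if (ctx.scope.length) lines.push(`# Scope: ${ctx.scope.join(' > ')}`)
  if (ctx.defines.length) lines.push(`# Defines: ${ctx.defines.join(', ')}`)
  if (ctx.uses.length) lines.push(`# Uses: ${ctx.uses.join(', ')}`)
  if (ctx.after) lines.push(`# After: ${ctx.after}`)
  return lines.join('\n')
}
```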
```sh
bun add code-chunk
# or
npm install code-chunk
```

```typescript
import { chunk } from 'code-chunk'

const chunks = await chunk('src/user.ts', sourceCode)

for (const c of chunks) {
  console.log(c.text)
  console.log(c.context.scope) // [{ name: 'UserService', type: 'class' }]
  console.log(c.context.entities) // [{ name: 'getUser', type: 'method', ... }]
}
```

Use `contextualizedText` for better embedding quality in RAG systems:
```typescript
for (const c of chunks) {
  const embedding = await embed(c.contextualizedText)
  await vectorDB.upsert({
    id: `${filepath}:${c.index}`,
    embedding,
    metadata: { filepath, lines: c.lineRange }
  })
}
```

The `contextualizedText` prepends semantic context to the raw code:
```
# src/services/user.ts
# Scope: UserService
# Defines: async getUser(id: string): Promise<User>
# Uses: Database
# After: constructor
async getUser(id: string): Promise<User> {
  return this.db.query('SELECT * FROM users WHERE id = ?', [id])
}
```
Process chunks incrementally without loading everything into memory:
```typescript
import { chunkStream } from 'code-chunk'

for await (const c of chunkStream('src/large.ts', code)) {
  await process(c)
}
```

Create a chunker instance when processing multiple files with the same config:
```typescript
import { createChunker } from 'code-chunk'

const chunker = createChunker({
  maxChunkSize: 2048,
  contextMode: 'full',
  siblingDetail: 'signatures',
})

for (const file of files) {
  const chunks = await chunker.chunk(file.path, file.content)
}
```
}For Effect-based pipelines:
```typescript
import { chunkStreamEffect } from 'code-chunk'
import { Effect, Stream } from 'effect'

const program = Stream.runForEach(
  chunkStreamEffect('src/utils.ts', code),
  (chunk) => Effect.log(chunk.text)
)

await Effect.runPromise(program)
```

The default entry point uses Node.js APIs to load tree-sitter WASM files from the filesystem. For edge runtimes, use the `code-chunk/wasm` entry point, which accepts pre-loaded WASM binaries.
```typescript
import { createChunker } from 'code-chunk/wasm'
import treeSitterWasm from 'web-tree-sitter/tree-sitter.wasm'
import typescriptWasm from 'tree-sitter-typescript/tree-sitter-typescript.wasm'
import javascriptWasm from 'tree-sitter-javascript/tree-sitter-javascript.wasm'

export default {
  async fetch(request: Request): Promise<Response> {
    const chunker = await createChunker({
      treeSitter: treeSitterWasm,
      languages: {
        typescript: typescriptWasm,
        javascript: javascriptWasm,
      },
    })

    const code = await request.text()
    const chunks = await chunker.chunk('input.ts', code)
    return Response.json(chunks)
  },
}
```

The `createChunker` function from `code-chunk/wasm` accepts a `WasmConfig` object:
```typescript
interface WasmConfig {
  treeSitter: WasmBinary
  languages: Partial<Record<Language, WasmBinary>>
}

type WasmBinary = Uint8Array | ArrayBuffer | Response | string
```

- `treeSitter`: The `web-tree-sitter` runtime WASM binary
- `languages`: Map of language names to their grammar WASM binaries
Only include the languages you need to minimize bundle size.
The WASM entry point throws specific errors:
- `WasmParserError`: Parser initialization or parsing failed
- `WasmGrammarError`: No WASM binary provided for the requested language
- `WasmChunkingError`: Chunking process failed
- `UnsupportedLanguageError`: File extension not recognized
```typescript
import {
  WasmParserError,
  WasmGrammarError,
  WasmChunkingError,
  UnsupportedLanguageError
} from 'code-chunk/wasm'

try {
  const chunks = await chunker.chunk('input.ts', code)
} catch (error) {
  if (error instanceof WasmGrammarError) {
    console.error(`Language not loaded: ${error.language}`)
  }
}
```

Chunk source code into semantic pieces with context.
Parameters:
- `filepath`: File path (used for language detection)
- `code`: Source code string
- `options`: Optional configuration
Returns: `Promise<Chunk[]>`

Throws: `ChunkingError`, `UnsupportedLanguageError`
Stream chunks as they're generated. Useful for large files.
Returns: `AsyncGenerator<Chunk>`

Note: `chunk.totalChunks` is `-1` in streaming mode (the total is unknown upfront).
Effect-native streaming API for composable pipelines.
Returns: `Stream.Stream<Chunk, ChunkingError | UnsupportedLanguageError>`
Create a reusable chunker instance with default options.
Returns: `Chunker` with `chunk()` and `stream()` methods
Create a chunker for edge runtimes with pre-loaded WASM binaries.
```typescript
import { createChunker } from 'code-chunk/wasm'
```

Parameters:

- `config`: `WasmConfig` with `treeSitter` and `languages` WASM binaries
- `options`: Optional `ChunkOptions`

Returns: `Promise<Chunker>`

Throws: `WasmParserError`, `WasmGrammarError`, `WasmChunkingError`, `UnsupportedLanguageError`
Low-level parser class for edge runtimes. Use this when you need direct access to parsing without chunking.
```typescript
import { WasmParser } from 'code-chunk/wasm'

const parser = new WasmParser(config)
await parser.init()

const result = await parser.parse(code, 'typescript')
console.log(result.tree.rootNode)
```

Format chunk text with semantic context prepended. Useful for custom embedding pipelines.
Returns: `string`

Detect programming language from file extension.

Returns: `Language | null`
| Option | Type | Default | Description |
|---|---|---|---|
| `maxChunkSize` | `number` | `1500` | Maximum chunk size in bytes |
| `contextMode` | `'none' \| 'minimal' \| 'full'` | `'full'` | How much context to include |
| `siblingDetail` | `'none' \| 'names' \| 'signatures'` | `'signatures'` | Level of sibling detail |
| `filterImports` | `boolean` | `false` | Filter out import statements |
| `language` | `Language` | auto | Override language detection |
| `overlapLines` | `number` | `10` | Lines from previous chunk to include in `contextualizedText` |
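The options in the table above can be sketched as a TypeScript type with the documented defaults applied. This is illustrative; the package's exported `ChunkOptions` type may differ in detail:

```typescript
// Illustrative shape of the options table; not the package's exact export.
type Language = 'typescript' | 'javascript' | 'python' | 'rust' | 'go' | 'java'

interface ChunkOptions {
  maxChunkSize?: number
  contextMode?: 'none' | 'minimal' | 'full'
  siblingDetail?: 'none' | 'names' | 'signatures'
  filterImports?: boolean
  language?: Language // defaults to auto-detection from the file extension
  overlapLines?: number
}

// Apply the documented defaults to a partial options object.
function withDefaults(opts: ChunkOptions = {}): ChunkOptions {
  return {
    maxChunkSize: 1500,
    contextMode: 'full',
    siblingDetail: 'signatures',
    filterImports: false,
    overlapLines: 10,
    ...opts,
  }
}
```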
| Language | Extensions |
|---|---|
| TypeScript | .ts, .tsx, .mts, .cts |
| JavaScript | .js, .jsx, .mjs, .cjs |
| Python | .py, .pyi |
| Rust | .rs |
| Go | .go |
| Java | .java |
- `ChunkingError`: Thrown when chunking fails (parsing error, extraction error, etc.)
- `UnsupportedLanguageError`: Thrown when the file extension is not supported

Both errors have a `_tag` property for Effect-style error handling.
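A minimal sketch of `_tag`-based discrimination. Only the `_tag` discriminant is taken from the docs; the error class bodies below are assumed for illustration:

```typescript
// Assumed error shapes: the real classes are exported by 'code-chunk';
// here we rely only on the documented string `_tag` discriminant.
class ChunkingError extends Error {
  readonly _tag = 'ChunkingError'
}
class UnsupportedLanguageError extends Error {
  readonly _tag = 'UnsupportedLanguageError'
}

type ChunkError = ChunkingError | UnsupportedLanguageError

// Exhaustive switch on the discriminant, Effect-style.
function describe(error: ChunkError): string {
  switch (error._tag) {
    case 'ChunkingError':
      return 'chunking failed'
    case 'UnsupportedLanguageError':
      return 'file extension not supported'
  }
}

describe(new UnsupportedLanguageError()) // → 'file extension not supported'
```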
MIT