v0.1 · Open source · MIT

A queryable graph of your codebase, for AI agents.

larkx indexes any project into a compact graph that Claude Code, Cursor, GitHub Copilot, and any MCP-compatible AI client can read in seconds. Typical queries use around 60-85% fewer tokens than reading source files directly, with savings of up to ~90% on focused tasks.

$ npm install -g larkx
up to 90% fewer AI tokens* · 6 MCP tools · 11 languages · 8 frameworks
* Ballpark estimate, not a measured benchmark; assumes ~3.5 chars/token and average file sizes. Actual savings depend on your AI's tokenizer, your project, and the task. Run larkx stats on your project for indexed estimates, or open the calculator.
From zero to indexed in three commands
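Assuming the indexing subcommand is larkx index (the steps are named here only as install, init, index), the full flow is:

$ npm install -g larkx
$ larkx init
$ larkx index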
What's included

Nine focused features. No marketing fluff.

Tiered context

Four detail levels: paths, symbols, signatures, AI summaries. The AI picks the cheapest level per task.
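As a rough mental model, each level adds one layer of detail per file. The shape below is a hypothetical sketch, not larkx's actual schema:

```ts
// Hypothetical sketch of one indexed file at each detail level.
// Field names are illustrative assumptions, not larkx's real schema.
interface IndexedFile {
  path: string;           // L1: file path
  language: string;       // L1: detected language
  symbols?: string[];     // L2: function & class names
  imports?: string[];     // L2: imports
  signatures?: string[];  // L3: full function signatures
  summary?: string;       // L4: one-line AI summary
}
```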

Folder scoping

Restrict any query to a subtree. Token cost drops roughly in proportion to folder size: scoping to a folder that holds ~10% of the code costs roughly ~10% of the tokens, which is why focused tasks see the largest savings.

AI summaries

One-line summaries per file via Claude Haiku. Cached forever, pennies to generate.
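For a sense of how such summaries could be produced, here is a minimal sketch using the official @anthropic-ai/sdk; the prompt wording and model alias are assumptions, not larkx internals:

```ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Sketch: one-line summary for a single file via Claude Haiku.
async function summarize(path: string, source: string): Promise<string> {
  const msg = await client.messages.create({
    model: "claude-3-5-haiku-latest",
    max_tokens: 60,
    messages: [{
      role: "user",
      content: `Summarize this file in one line.\n\n// ${path}\n${source.slice(0, 4000)}`,
    }],
  });
  return msg.content[0].type === "text" ? msg.content[0].text.trim() : "";
}
```

Caching each summary by content hash (see incremental indexing below) would make it a one-time cost, which is what "cached forever" implies.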

MCP-native

Works with Claude Code, Cursor, Continue, Cline, and any Model Context Protocol client.

Dead code detection

Reachability BFS from entry points. Auto-detects Next.js routes, NestJS modules, and more.
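In outline, the technique works like the sketch below: files are nodes, imports are edges, and any file BFS never reaches from an entry point is a dead-code candidate. This is a minimal illustration of the idea, not larkx's implementation:

```ts
// Sketch: BFS over an import graph; unreached files are dead-code candidates.
function findDeadFiles(
  imports: Map<string, string[]>, // file -> files it imports
  entryPoints: string[],          // e.g. detected Next.js routes
): string[] {
  const reached = new Set<string>(entryPoints);
  const queue = [...entryPoints];
  while (queue.length > 0) {
    const file = queue.shift()!;
    for (const dep of imports.get(file) ?? []) {
      if (!reached.has(dep)) {
        reached.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...imports.keys()].filter((file) => !reached.has(file));
}
```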

Incremental indexing

SHA-256 file hashing. Reindexing is near-instant after the initial build.
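The underlying trick, sketched under assumptions (how larkx persists its hashes is not documented here): hash each file's bytes and reindex only files whose hash changed since the last run:

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Sketch: return only the files whose content hash changed since the last run.
function changedFiles(
  files: string[],
  previousHashes: Map<string, string>, // hashes persisted by the last run
): string[] {
  return files.filter((file) => {
    const hash = createHash("sha256").update(readFileSync(file)).digest("hex");
    if (previousHashes.get(file) === hash) return false; // unchanged: skip
    previousHashes.set(file, hash);
    return true;
  });
}
```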

Visual graph UI

Browser explorer with Module view and Graph view. Zoom, search, VS Code deep links.

Security defaults

Excludes .env, *.key, *.pem, and credentials* by default, so secrets never enter AI context.
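In spirit, the defaults behave like the sketch below; the patterns come from the list above, while the matcher itself is a hypothetical stand-in for whatever larkx actually uses:

```ts
// Sketch: the default exclusions above, expressed as filename checks.
const EXCLUDED = [/^\.env$/, /\.key$/, /\.pem$/, /^credentials/];

const isExcluded = (filename: string): boolean =>
  EXCLUDED.some((pattern) => pattern.test(filename));

isExcluded(".env");             // true
isExcluded("server.key");       // true
isExcluded("credentials.json"); // true
isExcluded("app.ts");           // false
```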

Zero config

larkx init asks two questions and sets up MCP, agent files, and hooks.

Estimate

How many tokens would you save?

The interactive calculator lets you adjust these numbers to match your project; the defaults are below.

| Level | Includes | Tokens | Savings |
|---|---|---|---|
| L1 Paths | Just file paths + language | ~1.6K | 99% |
| L2 Symbols | + function & class names + imports | ~16K | 87% |
| L3 Signatures | + full function signatures | ~30K | 75% |
| L4 Summaries | + one-line AI summary per file | ~50K | 58% |
| — | Reading every file directly | ~120K | baseline |

Estimated best case: 99% reduction (level 1 + scoping)
Heads up: this is an estimate, not a measurement. The numbers assume ~3.5 characters per token, an average file size of ~2 KB, and average symbol counts. Your actual savings depend on your AI's tokenizer (Claude, GPT, and Gemini differ), your file size distribution, and how the agent uses the tools. Treat these figures as ballpark; run larkx stats in your project for indexed estimates that better match your codebase.
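The table's figures can be reproduced under exactly those assumptions. The project size below is hypothetical, chosen to land near the ~120K baseline; table values are rounded:

```ts
// Back-of-envelope check: ~3.5 chars/token, ~2 KB average file.
const files = 210;          // hypothetical project size
const avgFileChars = 2048;  // ~2 KB per file
const charsPerToken = 3.5;

const baseline = (files * avgFileChars) / charsPerToken; // ≈ 123K tokens
for (const [level, savings] of [["L2", 0.87], ["L3", 0.75], ["L4", 0.58]] as const) {
  console.log(level, Math.round(baseline * (1 - savings)));
}
// L2 ≈ 16K, L3 ≈ 31K, L4 ≈ 52K — in line with the table above
```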

Three commands. Five minutes.

Install, init, index, done. No accounts. No telemetry. Runs entirely on your machine.

Read the setup guide