# Token Optimization

## Best practices

Prompt patterns that keep token usage low.

### 1. Always seed the conversation with an MCP-first instruction

If you used `larkx init`, this is already done via the `UserPromptSubmit` hook. If not, add a short instruction to your `CLAUDE.md` / `.cursorrules`:

```text
Always use larkx MCP tools first for any code navigation:
get_project_index, search_symbol, get_file_summary, get_impact, get_call_chain, get_dead_code.
Only fall back to reading files when MCP returns no result.
```
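If you are wiring the hook yourself rather than relying on `larkx init`, a Claude Code `UserPromptSubmit` hook in `.claude/settings.json` might look like the sketch below. The exact command `larkx init` installs is an assumption; the only guaranteed behavior shown is that the hook's stdout is injected into the prompt context.

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Always use larkx MCP tools first for code navigation.'"
          }
        ]
      }
    ]
  }
}
```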

### 2. Tell the AI to start cheap

Add to your agent prompt: "Begin with `get_project_index` at level 1. Only request higher levels if the specific task needs them."
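The "start cheap" pattern can be sketched in code: request the index at level 1 and escalate only while the result is insufficient. The `call_tool` callable and the crude sufficiency check are assumptions for illustration; the tool name `get_project_index` and its level parameter come from the larkx docs.

```python
def fetch_index(call_tool, query, max_level=3):
    """Start at level 1 and escalate only while the result is insufficient."""
    for level in range(1, max_level + 1):
        result = call_tool("get_project_index", {"level": level})
        if query in result:  # crude sufficiency check, just for this sketch
            return level, result
    return max_level, result

# Stub standing in for a real MCP tool call: higher levels return more detail.
def fake_call_tool(name, args):
    detail = {
        1: "src/ tests/",
        2: "src/auth.py src/db.py",
        3: "src/auth.py:login() src/db.py:connect()",
    }
    return detail[args["level"]]

level, result = fetch_index(fake_call_tool, "login()")
```

With the stub above, the loop stops at the first level whose output actually mentions `login()`, so the expensive level-3 listing is only fetched when levels 1 and 2 fail to answer the question.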

### 3. Use folder scoping aggressively

When you know which subtree matters, tell the AI:

```bash
# Bad
"Refactor my authentication code"

# Better
"Refactor my auth code in src/auth. Use folder scoping when calling get_project_index."
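In terms of the underlying tool call, folder scoping amounts to one extra argument that shrinks the returned index. The `folder` argument name is an assumption here; the tool name and the idea of scoping to a subtree come from the larkx docs.

```python
def index_request(level, folder=None):
    """Build a get_project_index call, scoping to a folder when one is known."""
    args = {"level": level}
    if folder is not None:
        args["folder"] = folder  # assumed parameter name: limit to one subtree
    return {"name": "get_project_index", "arguments": args}

broad = index_request(2)                       # whole project: more tokens
scoped = index_request(2, folder="src/auth")   # one subtree: fewer tokens
```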

### 4. Prefer specific tools over broad ones

| If asking…             | Use this tool      | Not this            |
| ---------------------- | ------------------ | ------------------- |
| "Where is X?"          | `search_symbol`    | `get_project_index` |
| "What's in this file?" | `get_file_summary` | Reading the file    |
| "What if I change Y?"  | `get_impact`       | Grep across project |
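The mapping above can be expressed as a small router that picks the cheapest tool for a question. The routing heuristic itself is an assumption for illustration; only the tool names come from larkx.

```python
def pick_tool(question: str) -> str:
    """Route a question to the cheapest larkx tool that can answer it."""
    q = question.lower()
    if q.startswith("where is"):
        return "search_symbol"       # targeted symbol lookup
    if "in this file" in q:
        return "get_file_summary"    # summary beats reading the whole file
    if "if i change" in q:
        return "get_impact"          # impact analysis beats project-wide grep
    return "get_project_index"       # broad fallback, start at level 1
```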

### 5. Avoid re-reading after compaction

When context auto-compacts, the AI often re-explores the project from scratch. Keep an `llms.txt` file or a pinned message with the key file map so the AI doesn't call `get_project_index` repeatedly.
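As a sketch, such a pinned `llms.txt` needs no more than a compact file map; the paths and annotations below are purely illustrative, not a required format.

```text
# llms.txt — pinned file map (illustrative example project)
src/auth/    login, session handling
src/api/     HTTP route handlers
src/db/      persistence layer
tests/       pytest suite; mirrors src/ layout
```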

### 6. Re-index regularly

A stale index means the AI tries paths that no longer exist. Run `larkx index --watch` during active development to keep the index fresh.