InfiniLoom Documentation
InfiniLoom is a high-performance repository context generator for large language models. It transforms your codebase into context formats optimized for Claude, GPT-4, Gemini, Llama, Mistral, DeepSeek, Qwen, and other AI models.
Installation
# Using Cargo (Rust)
cargo install infiniloom
# Using Homebrew
brew tap Topos-Labs/infiniloom
brew install infiniloom
# Using pip (Python)
pip install infiniloom
# Using npm (Node.js)
npm install -g infiniloom
Quick Start
# Pack a repository for Claude
infiniloom pack ./my-project --model claude --output context.xml
# Generate a repository map
infiniloom map ./my-project
# Scan for secrets
infiniloom scan ./my-project --security-check
Commands
- pack — Transform a repository into LLM-optimized context
- map — Generate a PageRank-ranked symbol map
- scan — Analyze repository statistics and security
- index — Build a persistent symbol index
- diff — Generate context for code changes
- impact — Analyze the impact of changes to files
- chunk — Split a repository into multiple contexts
- init — Create a configuration file
- info — Show version and configuration
Output Formats
InfiniLoom supports multiple output formats optimized for different use cases:
- XML — Claude-optimized structured format
- Markdown — GPT-optimized human-readable format
- JSON — Machine-readable structured data
- YAML — Gemini-optimized long context format
- TOON — Most token-efficient compressed format
- Plain — Simple text output
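Much of the token-efficiency difference between formats comes from structural overhead: JSON repeats every key for every record, while a tabular layout states keys once. As a rough, hypothetical illustration (using character count as a crude proxy for tokens; real tokenizers behave differently), compare the same records in both shapes:

```python
import json

records = [
    {"file": "src/main.rs", "lines": 120},
    {"file": "src/lib.rs", "lines": 300},
]

# Verbose: every key is repeated for every record.
as_json = json.dumps(records)

# Compact tabular layout: keys appear once in the header.
header = "file,lines"
rows = "\n".join(f'{r["file"]},{r["lines"]}' for r in records)
as_table = header + "\n" + rows

# Character counts as a crude stand-in for token counts.
print(len(as_json), len(as_table))
```

The gap widens with more records, which is why compressed tabular formats tend to win for large repositories.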
Supported Models
Token counting and optimization for:
- Anthropic Claude (Claude 3, Claude 4)
- OpenAI GPT (GPT-4, GPT-4o)
- Google Gemini
- Meta Llama
- Mistral
- DeepSeek
- Qwen
- Cohere
- Grok
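Token counts differ across these models because each uses its own tokenizer. A common rule of thumb for English text and code is roughly four characters per token; the following is a hedged sketch of that heuristic only (InfiniLoom's actual counting presumably uses per-model tokenizers, which this does not replicate):

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate via the ~4-characters-per-token heuristic.

    Accurate counts require the target model's own tokenizer (BPE
    vocabularies differ between Claude, GPT-4, Gemini, etc.), so treat
    this as a ballpark figure only.
    """
    return max(1, round(len(text) / chars_per_token))

sample = 'fn main() { println!("hello, world"); }'
print(estimate_tokens(sample))
```

Such a heuristic is useful for quick budgeting before a precise per-model count is run.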