pack

Transforms repository content into LLM-optimized context. The primary command for generating AI-ready code context.

Usage

infiniloom pack [PATH] [OPTIONS]

Description

The pack command scans a repository, detects programming languages, processes file contents, counts tokens, and generates optimized output for large language models. It supports multiple output formats and target models for token counting.

Options

-f, --format <FORMAT>

Output format: xml, markdown, json, yaml, toon, plain (default: xml)

-m, --model <MODEL>

Target model for token counting: claude, gpt4, gpt4o, gemini, llama, mistral, deepseek, qwen, cohere, grok

-o, --output <PATH>

Output file path (default: stdout)

--max-tokens <NUM>

Maximum token budget for output

--compression <LEVEL>

Compression level: none, minimal, balanced, focused, aggressive, semantic, extreme
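
For example, a compression level can be combined with a token budget (pairing these two flags is illustrative; either can be used on its own):

# Balanced compression while staying under a token budget
infiniloom pack --compression balanced --max-tokens 50000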

--security-check

Scan for secrets and sensitive data

--redact-secrets

Redact detected secrets in output
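
Scanning and redaction are typically used together (this pairing is a suggested usage, and the output filename is just a placeholder):

# Scan for secrets and redact them in the packed output
infiniloom pack --security-check --redact-secrets -o context.xml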

--hidden

Include hidden files in output

--symbols

Enable AST-based symbol extraction
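
For example (combining symbol extraction with a specific output format here is illustrative, not required):

# Pack with AST-based symbol extraction, emitting JSON
infiniloom pack --symbols -f json -o context.json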

-i, --include <PATTERN>

Include only files matching glob pattern (repeatable)

-e, --exclude <PATTERN>

Exclude files/directories matching pattern (repeatable)
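
Both flags can be repeated; the glob patterns below are placeholders for your own project layout:

# Keep only Rust sources, skipping build artifacts and vendored code
infiniloom pack -i "src/**/*.rs" -i "crates/**/*.rs" -e "target/**" -e "vendor/**"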

--include-tests

Include test files in output

-v, --verbose

Show detailed progress

Examples

# Basic pack of current directory
infiniloom pack

# Pack with specific format and model
infiniloom pack -f markdown -m gpt4o -o context.md

# Limit to 50,000 tokens
infiniloom pack --max-tokens 50000

# Pack with security scanning
infiniloom pack --security-check -o context.xml
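
Additional illustrative invocation (the glob pattern and token budget are placeholders, not defaults):

# Focused pack: only source files, tests included, capped at 100,000 tokens
infiniloom pack -i "src/**" --include-tests --max-tokens 100000 -v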

Supported Models