
CLI Commands


The CLI Commands reference provides comprehensive documentation for all available commands in the docs-mcp tool. These commands enable automated documentation generation, interactive AI assistance, and repository analysis through a unified command-line interface powered by Claude AI models and the Bunli framework.

The docs-mcp CLI includes five core commands designed for documentation generation and AI-assisted code exploration:

  • ask: Quick question answering from the command line
  • build-repo-docs: Comprehensive automated documentation generation for code repositories
  • chat: Interactive conversational interface with LLM assistance
  • create-description: AI-powered description generation for documentation files
  • create-llms-txt: Project guidance file generation for AI coding assistants

All commands require the ANTHROPIC_API_KEY environment variable to be set for API access.

ask

Ask a question to an LLM and receive a streamed response directly to your terminal.

Usage:

docs-mcp ask -q "Your question here"

Options:

  • -q, --question (required, default: "What is the meaning of life?"): The question to ask

This command streams responses from Anthropic’s Claude Haiku model, displaying the query and piping the response directly to stdout with colored terminal output.

import { streamText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

handler: async ({ flags, colors }) => {
  const { textStream } = streamText({
    model: anthropic("claude-haiku-4-5-20251001"),
    prompt: flags.question,
  });
  for await (const textPart of textStream) {
    process.stdout.write(textPart);
  }
},

build-repo-docs

Walk a code repository and build comprehensive documentation using AI agents through a five-stage pipeline.

Usage:

docs-mcp build-repo-docs -r ./my-repo -o ./docs

Options:

  • -r, --repo (required): Path to the repository root
  • -o, --output (default: "./docs"): Output directory for generated docs

This command executes a five-stage pipeline:

  1. Building file tree: Scans the repository structure
  2. Extracting structure: Analyzes code organization and relationships
  3. Summarising: Creates summaries of code components
  4. Planning topics: Determines documentation organization and topics
  5. Writing docs: Generates final documentation files

Results are persisted in a local SQLite database (.docs-mcp.db) and token usage is tracked throughout the process.

await runStep("Building file tree", () => buildFileTree(flags.repo, db));
await runStep("Extracting structure", () => extractStructure(db, usage));
await runStep("Summarising", () => summarise(db, usage));
await runStep("Planning topics", () => planTopics(db, usage));
await runStep("Writing docs", () => writeDocs(db, flags.repo, flags.output, usage));
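
The snippet above relies on a runStep helper whose implementation isn't shown here. A minimal sketch of what such a helper might look like, assuming it only labels, times, and reports each stage:

```typescript
// Hypothetical sketch of a runStep-style helper: print the stage label,
// time the work, and rethrow failures so the pipeline stops with context.
async function runStep<T>(label: string, fn: () => Promise<T> | T): Promise<T> {
  const start = Date.now();
  process.stdout.write(`${label}... `);
  try {
    const result = await fn();
    console.log(`done (${Date.now() - start}ms)`);
    return result;
  } catch (err) {
    console.log("failed");
    throw err;
  }
}
```

Returning the step result lets later stages consume earlier output while keeping the progress reporting in one place.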

chat

Launch an interactive terminal user interface (TUI) for conversational chat with an LLM.

Usage:

docs-mcp chat

Options: None

This command provides an interactive interface for exploring code documentation and asking questions conversationally, rendered by the Chat TUI component.

export default defineCommand({
  name: 'chat',
  description: 'Interactive TUI chat with an LLM',
  tui: { renderer: { bufferMode: 'alternate' } },
  render: () => <Chat />,
})

create-description

Generate a description summary from documentation sources through a four-stage AI pipeline. The output is written to the description frontmatter field.

Usage:

docs-mcp create-description -f ./docs/guide.mdx

Options:

  • -f, --file (required): Path to the .mdx or .md file to process

This command orchestrates a four-step pipeline for transforming documentation:

  1. Analyze: Extracts key information from the article
  2. Compose: Creates a structured description with purpose, context, approach, and findings
  3. Polish: Refines the description for style compliance
  4. Insert: Writes the description into the file’s frontmatter description field

The pipeline supports .mdx and .md file formats with format-specific syntax handling.
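
The Insert step in stage 4 writes into YAML frontmatter. As a rough illustration only (insertDescription is a hypothetical helper, not the tool's actual code), the update might look like:

```typescript
// Hypothetical sketch of the "Insert" step: write the generated description
// into a Markdown/MDX file's frontmatter, replacing any existing field.
function insertDescription(source: string, description: string): string {
  const fm = /^---\n([\s\S]*?)\n---/;
  const match = source.match(fm);
  if (!match) {
    // No frontmatter yet: create a new block at the top of the file.
    return `---\ndescription: ${JSON.stringify(description)}\n---\n\n${source}`;
  }
  // Drop any existing description line, then append the new one.
  const body = match[1].replace(/^description:.*$/m, "").trimEnd();
  return source.replace(fm, `---\n${body}\ndescription: ${JSON.stringify(description)}\n---`);
}
```

A file whose frontmatter contains only a title would gain a description: line inside the same frontmatter block, leaving the article body untouched.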

create-llms-txt

Generate a project guidance file (CLAUDE.md / llms.txt) from documentation descriptions to optimize AI coding assistants.

Usage:

docs-mcp create-llms-txt -d ./src/content/docs -o ./CLAUDE.md

Options:

  • -o, --output (optional, default: "./CLAUDE.md"): Output file path
  • -d, --content-dir (optional, default: "./src/content/docs"): Content directory to scan

This command implements a three-stage pipeline:

  1. Collect: Extracts descriptions from Markdown files in the content directory
  2. Synthesize: Transforms collected descriptions into structured guidance
  3. Write: Persists the final guidance file to disk

The generated file serves as a comprehensive project overview for AI coding assistants like Claude.
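
The Collect stage reads the same description frontmatter field that create-description writes. A hedged sketch of that extraction (collectDescription is illustrative, not the actual implementation):

```typescript
// Hypothetical sketch of the "Collect" stage: pull the description field
// out of a Markdown file's frontmatter, stripping surrounding quotes.
function collectDescription(source: string): string | null {
  const match = source.match(/^---\n[\s\S]*?^description:\s*(.+)$[\s\S]*?\n---/m);
  return match ? match[1].trim().replace(/^["']|["']$/g, "") : null;
}
```

Running this over every .md/.mdx file in the content directory would yield the raw material that the Synthesize stage turns into structured guidance.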

Global Behavior

All commands share the following behavior, provided by the Bunli framework:

  • Environment Variables: All commands require ANTHROPIC_API_KEY to be set in your environment
  • Colored Output: Commands automatically use colored terminal output when available
  • Error Handling: Failed steps are reported with error context and suggestions

Setting the API Key:

export ANTHROPIC_API_KEY=your-anthropic-api-key

Shell Completion: The CLI provides shell completion support through the Bunli plugin system for improved terminal experience across bash, zsh, and fish shells.