Load Llms Auto
Automatically fetch and cache external LLM context files (llms.txt) for enriching AI-assisted development sessions.
When to Use This Command
Run this command when you need to:
- Load project-specific AI context from a remote llms.txt file into your development session
- Cache external documentation and context files for offline AI-assisted coding
- Fetch and merge multiple llms.txt sources into a unified context document
Consider alternatives when:
- You already have a local CLAUDE.md or context file that covers your needs
- The external documentation is available as a standard API reference you can search on demand
Quick Start
Configuration
name: load-llms-auto
type: command
category: documentation
Example Invocation
claude command:run load-llms-auto --source https://raw.githubusercontent.com/org/repo/main/llms.txt
Example Output
Source: https://raw.githubusercontent.com/org/repo/main/llms.txt
Status: 200 OK
Content-Length: 14,832 bytes
Processing:
[+] Downloaded llms.txt (478 lines)
[+] Validated format: standard llms.txt structure
[+] Extracted sections: 12 context blocks
[+] Cached to .claude/context/org-repo-llms.txt
[+] Merged with existing project context
Context Summary:
- Project overview and architecture (2,400 words)
- API reference with 34 endpoints
- Data model with 18 entities
- Coding conventions and patterns
- Known limitations and workarounds
Context loaded. AI assistant now has project-specific knowledge.
Cache expires: 2026-03-22 (7 days)
Core Concepts
LLMs Context Loading Overview
| Aspect | Details |
|---|---|
| Source Formats | llms.txt, llms-full.txt, CLAUDE.md, and plain markdown |
| Network Fetch | HTTP/HTTPS download with timeout and retry logic |
| Validation | Checks structure, encoding, and content size before caching |
| Caching | Local file cache with configurable TTL to avoid repeated fetches |
| Merging | Combines multiple context sources into a unified document |
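The network-fetch row above (timeout plus retry logic) can be sketched as a small retry helper. The backoff schedule and exception handling here are illustrative assumptions, not the command's actual implementation; `urllib.error.URLError` subclasses `OSError`, so catching `OSError` covers transient network failures.

```python
import time
import urllib.request

def retry(fn, retries=3, base_delay=1.0):
    """Call fn, retrying on network errors with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except OSError:
            if attempt == retries - 1:
                raise  # all attempts exhausted; surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

def fetch_context(url, timeout=10.0):
    """Download a context file as bytes, retrying transient failures."""
    def get():
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    return retry(get)
```

Separating the retry policy from the fetch itself keeps the backoff logic testable without a network connection.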
Context Loading Workflow
Source URL
|
v
+------------------+
| Fetch Content |---> HTTP GET with timeout
+------------------+
|
v
+------------------+
| Validate Format |---> Structure, encoding, size
+------------------+
|
v
+------------------+
| Cache Locally |---> .claude/context/ directory
+------------------+
|
v
+------------------+
| Merge Context |---> Combine with existing context
+------------------+
|
v
+------------------+
| Confirm Loaded |---> Summary of available context
+------------------+
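The validate step in the workflow above can be sketched as follows. The specific checks (size cap, UTF-8 decoding, a leading `# Title` heading per the llms.txt convention) are assumptions about what this command enforces, shown for illustration.

```python
def validate_context(raw: bytes, max_size_kb: int = 500) -> str:
    """Validate fetched content before caching: size, encoding, basic structure."""
    if len(raw) > max_size_kb * 1024:
        raise ValueError(f"file exceeds {max_size_kb} KB limit")
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError as e:
        raise ValueError("content is not valid UTF-8") from e
    if not text.strip():
        raise ValueError("file is empty")
    # llms.txt convention: the first non-blank line is an H1 project title
    first_line = text.lstrip().splitlines()[0]
    if not first_line.startswith("# "):
        raise ValueError("missing top-level '# Title' heading")
    return text
```

Rejecting oversized or malformed files before they reach the cache is what prevents a corrupted download from degrading later sessions.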
Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| source | string | required | URL or local path to the llms.txt file to load |
| cache_dir | string | .claude/context | Directory to store cached context files |
| ttl_days | integer | 7 | Cache time-to-live in days before re-fetching |
| merge | boolean | true | Merge with existing context rather than replacing it |
| max_size_kb | integer | 500 | Maximum file size in kilobytes to accept |
Best Practices
- Pin to Specific Commits - Use raw GitHub URLs with a specific commit hash rather than a branch name. Branch content can change unexpectedly, causing your AI context to drift without notice.
- Set Reasonable Cache TTLs - A 7-day cache TTL balances freshness with network efficiency. For rapidly evolving projects, reduce to 1 day. For stable reference documentation, extend to 30 days.
- Validate Before Trusting - Always verify that the fetched content is well-formed and within expected size limits. A corrupted or unexpectedly large file can degrade AI assistant performance.
- Keep Context Focused - Load only the context relevant to your current task. Loading a 500KB context file about an entire organization when you only need one library's API wastes context window space.
- Version Your Context Files - When authoring llms.txt for your own projects, include a version or last-updated field. Consumers can check whether their cached version is current without downloading the full file.
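The commit-pinning practice above can be made mechanical with a small URL builder (a hypothetical helper, shown for illustration):

```python
def raw_github_url(owner: str, repo: str, ref: str, path: str) -> str:
    """Build a raw.githubusercontent.com URL.

    Pass a full commit SHA as ref to pin the content exactly; a branch
    name like 'main' resolves to whatever that branch currently holds.
    """
    return f"https://raw.githubusercontent.com/{owner}/{repo}/{ref}/{path}"
```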
Common Issues
- Fetch Fails With 404 - The URL path is incorrect or the file was moved. Verify the raw content URL by opening it in a browser. GitHub raw URLs follow the pattern raw.githubusercontent.com/owner/repo/branch/path.
- Context Too Large for Session - The llms.txt file exceeds the AI context window. Use the max_size_kb parameter to limit what is loaded, or split large context files into topic-specific modules and load only what is needed.
- Cached Content Is Stale - The remote file was updated but the local cache has not expired. Delete the cached file manually or set ttl_days to 0 to force a fresh fetch on the next invocation.