
Load Llms Auto

Automatically fetch and cache external LLM context files (llms.txt) for enriching AI-assisted development sessions.

When to Use This Command

Run this command when you need to:

  • Load project-specific AI context from a remote llms.txt file into your development session
  • Cache external documentation and context files for offline AI-assisted coding
  • Fetch and merge multiple llms.txt sources into a unified context document

Consider alternatives when:

  • You already have a local CLAUDE.md or context file that covers your needs
  • The external documentation is available as a standard API reference you can search on demand

Quick Start

Configuration

name: load-llms-auto
type: command
category: documentation

Example Invocation

claude command:run load-llms-auto --source https://raw.githubusercontent.com/org/repo/main/llms.txt

Example Output

Source: https://raw.githubusercontent.com/org/repo/main/llms.txt
Status: 200 OK
Content-Length: 14,832 bytes

Processing:
  [+] Downloaded llms.txt (478 lines)
  [+] Validated format: standard llms.txt structure
  [+] Extracted sections: 12 context blocks
  [+] Cached to .claude/context/org-repo-llms.txt
  [+] Merged with existing project context

Context Summary:
  - Project overview and architecture (2,400 words)
  - API reference with 34 endpoints
  - Data model with 18 entities
  - Coding conventions and patterns
  - Known limitations and workarounds

Context loaded. AI assistant now has project-specific knowledge.
Cache expires: 2026-03-22 (7 days)

Core Concepts

LLMs Context Loading Overview

  Aspect            Details
  Source Formats    llms.txt, llms-full.txt, CLAUDE.md, and plain markdown
  Network Fetch     HTTP/HTTPS download with timeout and retry logic
  Validation        Checks structure, encoding, and content size before caching
  Caching           Local file cache with configurable TTL to avoid repeated fetches
  Merging           Combines multiple context sources into a unified document

Context Loading Workflow

  Source URL
       |
       v
  +------------------+
  | Fetch Content    |---> HTTP GET with timeout
  +------------------+
       |
       v
  +------------------+
  | Validate Format  |---> Structure, encoding, size
  +------------------+
       |
       v
  +------------------+
  | Cache Locally    |---> .claude/context/ directory
  +------------------+
       |
       v
  +------------------+
  | Merge Context    |---> Combine with existing context
  +------------------+
       |
       v
  +------------------+
  | Confirm Loaded   |---> Summary of available context
  +------------------+
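
The workflow above can be sketched in Python. This is a minimal illustration, not the command's actual implementation: the function name, cache filename scheme, and constants are assumptions based on the diagram and the configuration table.

```python
import hashlib
import os
import urllib.request

CACHE_DIR = ".claude/context"
MAX_SIZE_KB = 500

def load_llms_context(source: str, timeout: int = 10) -> str:
    """Fetch, validate, and cache one llms.txt source; return the cache path."""
    # Fetch: HTTP(S) GET with a timeout, or read a local file directly.
    if source.startswith(("http://", "https://")):
        with urllib.request.urlopen(source, timeout=timeout) as resp:
            raw = resp.read()
    else:
        with open(source, "rb") as f:
            raw = f.read()

    # Validate: enforce the size limit and UTF-8 encoding before caching.
    if len(raw) > MAX_SIZE_KB * 1024:
        raise ValueError(f"context exceeds {MAX_SIZE_KB} KB limit")
    text = raw.decode("utf-8")  # raises UnicodeDecodeError on bad input

    # Cache: derive a stable filename from the source identifier.
    name = hashlib.sha256(source.encode()).hexdigest()[:12] + "-llms.txt"
    os.makedirs(CACHE_DIR, exist_ok=True)
    cache_path = os.path.join(CACHE_DIR, name)
    with open(cache_path, "w", encoding="utf-8") as f:
        f.write(text)
    return cache_path
```

The merge and summary steps are omitted; in practice the cached file would then be combined with any existing project context.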

Configuration

  Parameter     Type     Default          Description
  source        string   required         URL or local path to the llms.txt file to load
  cache_dir     string   .claude/context  Directory to store cached context files
  ttl_days      integer  7                Cache time-to-live in days before re-fetching
  merge         boolean  true             Merge with existing context rather than replacing it
  max_size_kb   integer  500              Maximum file size in kilobytes to accept
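
A configuration object mirroring these parameters could look like the following. The class and field layout are illustrative assumptions; only the parameter names and defaults come from the table above.

```python
from dataclasses import dataclass

@dataclass
class LoadConfig:
    source: str                         # required: URL or local path to llms.txt
    cache_dir: str = ".claude/context"  # where cached context files are stored
    ttl_days: int = 7                   # cache time-to-live before re-fetching
    merge: bool = True                  # merge with existing context vs. replace
    max_size_kb: int = 500              # reject files larger than this
```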

Best Practices

  1. Pin to Specific Commits - Use raw GitHub URLs with a specific commit hash rather than a branch name. Branch content can change unexpectedly, causing your AI context to drift without notice.

  2. Set Reasonable Cache TTLs - A 7-day cache TTL balances freshness with network efficiency. For rapidly evolving projects, reduce to 1 day. For stable reference documentation, extend to 30 days.

  3. Validate Before Trusting - Always verify that the fetched content is well-formed and within expected size limits. A corrupted or unexpectedly large file can degrade AI assistant performance.

  4. Keep Context Focused - Load only the context relevant to your current task. Loading a 500KB context file about an entire organization when you only need one library's API wastes context window space.

  5. Version Your Context Files - When authoring llms.txt for your own projects, include a version or last-updated field. Consumers can check whether their cached version is current without downloading the full file.
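
The TTL check from practice 2 can be sketched as a freshness test on the cached file's modification time. The function name is hypothetical; note that ttl_days set to 0 always reports stale, forcing a re-fetch.

```python
import os
import time

def cache_is_fresh(path: str, ttl_days: int) -> bool:
    """Return True if the cached file exists and is younger than ttl_days."""
    if ttl_days <= 0 or not os.path.exists(path):
        return False
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds < ttl_days * 86400
```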

Common Issues

  1. Fetch Fails With 404 - The URL path is incorrect or the file was moved. Verify the raw content URL by opening it in a browser. GitHub raw URLs follow the pattern raw.githubusercontent.com/owner/repo/branch/path.

  2. Context Too Large for Session - The llms.txt file exceeds the AI context window. Use the max_size_kb parameter to limit what is loaded, or split large context files into topic-specific modules and load only what is needed.

  3. Cached Content Is Stale - The remote file was updated but the local cache has not expired. Delete the cached file manually or set ttl_days to 0 to force a fresh fetch on the next invocation.
