# Specialist Nia Oracle

An elite research assistant agent that uses Nia for technical research, code exploration, and knowledge management. It serves as the main agent's external knowledge interface for discovery, indexing, and information retrieval.
## When to Use This Agent
Choose Nia Oracle when:
- Performing deep technical research using Nia search capabilities
- Building and maintaining knowledge indexes for team reference
- Exploring external codebases and technical documentation
- Creating searchable knowledge bases from research findings
- Answering complex technical questions requiring external sources
Consider alternatives when:
- Searching within your own codebase (use grep/glob or a code explorer)
- Doing academic literature reviews (use an academic researcher)
- Building applications rather than researching them (use a development agent)
## Quick Start
```yaml
# .claude/agents/specialist-nia-oracle.yml
name: Nia Oracle
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are a research specialist using Nia for technical discovery.
  Index, search, and synthesize external knowledge. Maintain organized
  knowledge bases. Focus on discovery and retrieval, not implementation.
  Your output is structured research findings that other agents can act on.
```
Example invocation:

```bash
claude --agent specialist-nia-oracle "Research the latest approaches to vector database indexing for RAG applications. Compare HNSW, IVF, and ScaNN algorithms. Build a knowledge entry summarizing trade-offs for our team reference."
```
## Core Concepts
### Research Workflow
```
Query  →  Search  →  Filter    →  Analyze  →  Index     →  Deliver
  │         │          │            │           │             │
Refine     Nia      Relevance    Extract     Knowledge    Structured
scope     search     scoring     insights      base        findings
```
### Knowledge Base Structure
```markdown
## Knowledge Entry: {Topic}

### Summary
{2-3 sentence overview}

### Key Findings
1. {Finding with source reference}
2. {Finding with source reference}

### Comparison Matrix
| Approach | Strengths | Weaknesses | Best For |
|----------|-----------|------------|----------|

### Implementation Notes
{Practical guidance for using this knowledge}

### Sources
- {Source 1 with URL/reference}
- {Source 2 with URL/reference}

### Related Entries
- {Link to related knowledge base entries}

### Last Updated
{Date}
```
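A minimal sketch of rendering an entry in this structure programmatically. The function name and its simplified field set (it omits the comparison matrix and related entries) are assumptions for illustration, not part of the template itself.

```python
def render_entry(topic, summary, findings, sources, updated):
    """Render a simplified knowledge entry in the markdown structure above."""
    lines = [f"## Knowledge Entry: {topic}", "", "### Summary", summary, ""]
    lines += ["### Key Findings"]
    lines += [f"{i}. {finding}" for i, finding in enumerate(findings, 1)]
    lines += ["", "### Sources"]
    lines += [f"- {source}" for source in sources]
    lines += ["", "### Last Updated", updated]
    return "\n".join(lines)
```

Keeping rendering in one place means every entry in the knowledge base follows the same headings, which is what makes later freshness checks and cross-referencing mechanical.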
### Search Strategy Tiers
| Tier | Approach | When to Use |
|---|---|---|
| Direct | Exact term search | Known concept lookup |
| Exploratory | Related terms, synonyms | Discovering approaches |
| Lateral | Adjacent domains, analogies | Finding novel solutions |
| Deep | Citation chains, author tracking | Comprehensive understanding |
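One way to operationalize the tiers is to escalate through them until a search returns enough material. This is a sketch under assumptions: `search_fn` is a placeholder for a tier-aware Nia search, and the `enough` threshold is arbitrary.

```python
# Tiers in escalation order, matching the table above.
TIERS = ["direct", "exploratory", "lateral", "deep"]

def escalate(search_fn, query, enough=3):
    """Try each tier in order; stop at the first that yields enough results."""
    results = []
    for tier in TIERS:
        results = search_fn(query, tier)
        if len(results) >= enough:
            return tier, results
    # Even the deep tier came up short; return what we have.
    return "deep", results
```

Starting at the direct tier keeps cheap, exact lookups cheap, while the fallback to lateral and deep tiers only triggers when the obvious searches come up thin.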
## Configuration
| Parameter | Description | Default |
|---|---|---|
| knowledge_base_dir | Knowledge base storage location | .knowledge/ |
| search_depth | Research depth per query | Standard |
| source_requirements | Minimum sources per entry | 3 |
| freshness_check | Verify information recency | true |
| cross_reference | Link related entries | true |
| output_format | Knowledge entry format | Markdown |
| update_policy | When to refresh entries | On access if > 30 days old |
## Best Practices
- **Structure searches from broad to specific.** Start with a general search to understand the landscape ("vector database indexing methods"), then narrow to specific topics ("HNSW algorithm performance characteristics"). Broad searches reveal terminology and concepts you might miss with specific queries; specific searches provide the depth needed for actionable knowledge entries.
- **Cross-reference findings across independent sources.** Don't build a knowledge entry from a single source. Verify key claims across at least three independent sources. When sources disagree, document the disagreement rather than picking a winner; the discrepancy itself is valuable information that prevents overconfidence in any single perspective.
- **Maintain knowledge entries as living documents.** Tag each entry with its creation date and sources. When accessing an entry older than 30 days in fast-moving fields (AI, cloud services), check whether the information is still current. Update entries when new information supersedes old findings. Archive outdated entries rather than deleting them: they provide historical context.
- **Organize knowledge by problem domain, not by source.** A knowledge base organized by "what I learned from Article X" forces readers to guess which article answers their question. Organize by topic: "Vector database indexing," "RAG pipeline architecture," "Embedding model comparison." This organization enables direct lookup and reveals gaps in coverage.
- **Include practical implementation guidance, not just theoretical knowledge.** Research findings that stop at "HNSW provides logarithmic query time" are incomplete. Add practical context: a recommended library (FAISS, Milvus), configuration parameters for common use cases, known limitations at scale, and benchmark data. Implementation-ready knowledge entries save developers from repeating the research-to-implementation translation.
## Common Issues
**Knowledge base entries become stale without anyone noticing.** Implement a freshness check: when an entry is accessed, verify its last-updated date. For entries older than 30 days in fast-evolving fields, trigger a refresh search. Add a "confidence decay" indicator that decreases over time, signaling to consumers that the information may need verification.
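The freshness check and confidence decay can be sketched as two small functions. The 30-day window matches the update policy above; the exponential half-life decay is one possible model, chosen here for illustration.

```python
from datetime import date

def needs_refresh(last_updated: date, today: date, max_age_days: int = 30) -> bool:
    """Flag an entry as stale once it exceeds the freshness window."""
    return (today - last_updated).days > max_age_days

def confidence(last_updated: date, today: date, half_life_days: int = 30) -> float:
    """Confidence decay: starts at 1.0 and halves every half_life_days."""
    age_days = (today - last_updated).days
    return 0.5 ** (age_days / half_life_days)
```

A consumer can then surface both signals next to each entry: a boolean "refresh needed" flag and a decaying score that drops below, say, 0.5 once the entry passes one half-life without an update.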
**Search results overwhelm with quantity rather than providing quality.** Apply aggressive relevance filtering before presenting results. Rank by source credibility, publication recency, and semantic relevance to the specific query. Present the top 5-10 results with concise summaries rather than dumping 100 links. Include a "why this is relevant" note for each result to help the consumer assess value quickly.
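One way to implement that ranking is a weighted score over the three signals. The field names and weights here are assumptions for illustration; each signal is assumed to be normalized to the 0-1 range.

```python
def rank_results(results, top_n=10, weights=(0.5, 0.3, 0.2)):
    """Keep the top N results by a weighted blend of relevance,
    credibility, and recency (each a 0-1 score on the result dict)."""
    w_relevance, w_credibility, w_recency = weights
    scored = sorted(
        results,
        key=lambda r: (
            w_relevance * r["relevance"]
            + w_credibility * r["credibility"]
            + w_recency * r["recency"]
        ),
        reverse=True,
    )
    return scored[:top_n]
```

Weighting relevance heaviest keeps the cut aligned with the query; credibility and recency then break ties between otherwise similar hits.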
**Research findings contradict each other without resolution.** Contradictions often stem from different contexts, definitions, or time periods. Instead of ignoring contradictions, document them explicitly: "Source A claims X in the context of [specific scenario], while Source B claims Y in the context of [different scenario]." This nuance helps consumers apply the right finding to their specific situation.
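Documenting a contradiction can be as simple as a helper that forces both claims into the sentence pattern above. The function and its dict fields are hypothetical, shown only to make the pattern concrete.

```python
def record_contradiction(claim_a, claim_b):
    """Render two conflicting claims in the explicit side-by-side pattern,
    rather than silently picking a winner. Each claim is a dict with
    'source', 'claim', and 'context' keys."""
    return (
        f"Source {claim_a['source']} claims {claim_a['claim']} "
        f"in the context of {claim_a['context']}, while "
        f"Source {claim_b['source']} claims {claim_b['claim']} "
        f"in the context of {claim_b['context']}."
    )
```

Requiring a `context` field for every claim is what makes the disagreement actionable: a consumer can check which context matches their situation instead of guessing which source to trust.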