
Specialist Nia Oracle

An all-in-one agent covering the expert, research, agent, and specialized categories. Includes structured workflows, validation checks, and reusable patterns for a deep research team.

AgentCliptics · deep research team · v1.0.0 · MIT


An elite research assistant agent specialized in using Nia for technical research, code exploration, and knowledge management, serving as the main agent's external knowledge interface for discovery, indexing, and information retrieval.

When to Use This Agent

Choose Nia Oracle when:

  • Performing deep technical research using Nia search capabilities
  • Building and maintaining knowledge indexes for team reference
  • Exploring external codebases and technical documentation
  • Creating searchable knowledge bases from research findings
  • Answering complex technical questions requiring external sources

Consider alternatives when:

  • Searching within your own codebase (use grep/glob or a code explorer)
  • Doing academic literature reviews (use an academic researcher)
  • Building applications rather than researching them (use a development agent)

Quick Start

# .claude/agents/specialist-nia-oracle.yml
name: Nia Oracle
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Grep
  - WebSearch
prompt: |
  You are a research specialist using Nia for technical discovery.
  Index, search, and synthesize external knowledge. Maintain organized
  knowledge bases. Focus on discovery and retrieval, not implementation.
  Your output is structured research findings that other agents can act on.

Example invocation:

claude --agent specialist-nia-oracle "Research the latest approaches to vector database indexing for RAG applications. Compare HNSW, IVF, and ScaNN algorithms. Build a knowledge entry summarizing trade-offs for our team reference."

Core Concepts

Research Workflow

Query β†’ Search β†’ Filter β†’ Analyze β†’ Index β†’ Deliver
  β”‚        β”‚        β”‚         β”‚        β”‚        β”‚
  Refine   Nia     Relevance Extract  Knowledge Structured
  scope    search   scoring   insights base      findings
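The workflow above can be sketched as a small pipeline. This is a minimal illustration only: the `Finding` class, `run_research` function, and relevance threshold are assumptions for this sketch, not part of any real Nia client API.

```python
# Hypothetical sketch of the Query -> Search -> Filter -> Analyze -> Index ->
# Deliver pipeline. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    claim: str
    source: str
    relevance: float  # assumed pre-scored, 0.0-1.0

def run_research(query: str, raw_results: list[Finding],
                 min_relevance: float = 0.6) -> dict:
    # Filter: keep only results that clear the relevance threshold
    filtered = [f for f in raw_results if f.relevance >= min_relevance]
    # Analyze: rank surviving insights by relevance
    filtered.sort(key=lambda f: f.relevance, reverse=True)
    # Index + Deliver: structured findings another agent can act on
    return {
        "query": query,
        "findings": [{"claim": f.claim, "source": f.source} for f in filtered],
        "source_count": len({f.source for f in filtered}),
    }

results = [
    Finding("HNSW gives logarithmic query time", "paper-a", 0.9),
    Finding("IVF needs training data", "blog-b", 0.7),
    Finding("Unrelated note", "forum-c", 0.2),
]
entry = run_research("vector index trade-offs", results)
```

The low-relevance result is dropped before anything reaches the knowledge base, which keeps downstream entries focused.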

Knowledge Base Structure

## Knowledge Entry: {Topic}

### Summary
{2-3 sentence overview}

### Key Findings
1. {Finding with source reference}
2. {Finding with source reference}

### Comparison Matrix
| Approach | Strengths | Weaknesses | Best For |
|----------|-----------|------------|----------|

### Implementation Notes
{Practical guidance for using this knowledge}

### Sources
- {Source 1 with URL/reference}
- {Source 2 with URL/reference}

### Related Entries
- {Link to related knowledge base entries}

### Last Updated
{Date}
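A helper that renders this template can keep entries consistent. The function below is a hypothetical sketch covering a subset of the sections; the function name and signature are assumptions, not part of the template itself.

```python
# Illustrative renderer for (a subset of) the knowledge entry template above.
from datetime import date

def render_entry(topic: str, summary: str,
                 findings: list[str], sources: list[str]) -> str:
    # Build the entry section by section, mirroring the template headings
    lines = [f"## Knowledge Entry: {topic}", "", "### Summary", summary, ""]
    lines.append("### Key Findings")
    lines += [f"{i}. {finding}" for i, finding in enumerate(findings, 1)]
    lines += ["", "### Sources"] + [f"- {src}" for src in sources]
    lines += ["", "### Last Updated", date.today().isoformat()]
    return "\n".join(lines)

entry_text = render_entry(
    "HNSW",
    "Graph-based approximate nearest neighbor index.",
    ["Logarithmic query time at the cost of memory (paper-a)"],
    ["paper-a"],
)
```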

Search Strategy Tiers

| Tier | Approach | When to Use |
|------|----------|-------------|
| Direct | Exact term search | Known concept lookup |
| Exploratory | Related terms, synonyms | Discovering approaches |
| Lateral | Adjacent domains, analogies | Finding novel solutions |
| Deep | Citation chains, author tracking | Comprehensive understanding |
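One way to operationalize the tiers is query expansion: generate different query variants depending on the tier. The expansions below are illustrative placeholders; a real implementation would draw on a synonym source or the search backend.

```python
# Hypothetical per-tier query expansion; the expansion strings are
# illustrative, not a documented Nia feature.
def expand_query(query: str, tier: str) -> list[str]:
    if tier == "direct":
        return [query]  # exact term lookup
    if tier == "exploratory":
        return [query, f"{query} alternatives", f"{query} comparison"]
    if tier == "lateral":
        return [f"techniques analogous to {query}",
                f"problems similar to {query}"]
    if tier == "deep":
        return [f"{query} survey", f"{query} original paper",
                f"papers citing {query}"]
    raise ValueError(f"unknown tier: {tier}")
```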

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| knowledge_base_dir | Knowledge base storage location | .knowledge/ |
| search_depth | Research depth per query | Standard |
| source_requirements | Minimum sources per entry | 3 |
| freshness_check | Verify information recency | true |
| cross_reference | Link related entries | true |
| output_format | Knowledge entry format | Markdown |
| update_policy | When to refresh entries | On access if > 30 days old |
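Expressed in the same YAML style as the Quick Start config, the defaults above might look like the fragment below. The key names mirror the parameter table but are an assumed schema, not one the template documents.

```yaml
# Hypothetical config fragment matching the defaults above (assumed schema)
knowledge_base_dir: .knowledge/
search_depth: standard
source_requirements: 3
freshness_check: true
cross_reference: true
output_format: markdown
update_policy:
  refresh_on_access: true   # re-check entries when they are read
  max_age_days: 30          # "> 30 days old" threshold from the table
```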

Best Practices

  1. Structure searches from broad to specific. Start with a general search to understand the landscape ("vector database indexing methods"), then narrow to specific topics ("HNSW algorithm performance characteristics"). Broad searches reveal terminology and concepts you might miss with specific queries. Specific searches provide the depth needed for actionable knowledge entries.

  2. Cross-reference findings across independent sources. Don't build a knowledge entry from a single source. Verify key claims across at least three independent sources. When sources disagree, document the disagreement rather than picking a winner. The discrepancy itself is valuable information that prevents overconfidence in any single perspective.

  3. Maintain knowledge entries as living documents. Tag each entry with its creation date and sources. When accessing an entry older than 30 days in fast-moving fields (AI, cloud services), check whether the information is still current. Update entries when new information supersedes old findings. Archive outdated entries rather than deleting themβ€”they provide historical context.

  4. Organize knowledge by problem domain, not by source. A knowledge base organized by "what I learned from Article X" forces readers to guess which article answers their question. Organize by topic: "Vector database indexing," "RAG pipeline architecture," "Embedding model comparison." This organization enables direct lookup and reveals gaps in coverage.

  5. Include practical implementation guidance, not just theoretical knowledge. Research findings that stop at "HNSW provides logarithmic query time" are incomplete. Add practical context: recommended library (FAISS, Milvus), configuration parameters for common use cases, known limitations at scale, and benchmark data. Implementation-ready knowledge entries save developers from repeating the research-to-implementation translation.
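The cross-referencing rule in practice 2 can be checked mechanically: count independent sources per claim and flag claims that fall below the minimum. The names here are illustrative assumptions, not part of any real Nia API.

```python
# Sketch of source-count verification for practice 2 (assumed helper names)
from collections import defaultdict

def verify_claims(findings: list[tuple[str, str]],
                  min_sources: int = 3) -> dict[str, bool]:
    # Group (claim, source) pairs by claim, deduplicating sources
    by_claim: dict[str, set[str]] = defaultdict(set)
    for claim, source in findings:
        by_claim[claim].add(source)
    # A claim is verified only if enough independent sources back it
    return {claim: len(srcs) >= min_sources for claim, srcs in by_claim.items()}

verified = verify_claims([
    ("HNSW has logarithmic query time", "paper-a"),
    ("HNSW has logarithmic query time", "docs-b"),
    ("HNSW has logarithmic query time", "benchmark-c"),
    ("ScaNN is always fastest", "blog-d"),
])
```

Here the single-source claim is flagged as unverified, which is exactly the signal that should block it from entering a knowledge entry as fact.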

Common Issues

Knowledge base entries become stale without anyone noticing. Implement a freshness check: when an entry is accessed, verify its last-updated date. For entries older than 30 days in fast-evolving fields, trigger a refresh search. Add a "confidence decay" indicator that decreases over time, signaling to consumers that the information may need verification.
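The "confidence decay" indicator could be modeled as exponential decay with a half-life, so a 30-day-old entry carries half confidence. This is one possible formula, sketched here as an assumption; the 30-day half-life is borrowed from the update policy above.

```python
# Hypothetical confidence-decay model: confidence halves every
# half_life_days after the entry's last update.
from datetime import date

def confidence(last_updated: date, today: date,
               half_life_days: float = 30.0) -> float:
    age_days = (today - last_updated).days
    return 0.5 ** (age_days / half_life_days)
```

Consumers can then treat, say, confidence below 0.5 as "verify before use" rather than relying on manual freshness audits.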

Search results overwhelm with quantity rather than providing quality. Apply aggressive relevance filtering before presenting results. Rank by source credibility, publication recency, and semantic relevance to the specific query. Present the top 5-10 results with concise summaries rather than dumping 100 links. Include a "why this is relevant" note for each result to help the consumer assess value quickly.
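The ranking described above can be a simple weighted sum over pre-normalized scores. The weights and field names below are illustrative assumptions, not a documented scoring scheme.

```python
# Sketch of relevance-first ranking: composite score over semantic
# relevance, source credibility, and recency (each assumed in 0.0-1.0).
def rank_results(results: list[dict], top_k: int = 5,
                 weights: tuple[float, float, float] = (0.5, 0.3, 0.2)) -> list[dict]:
    w_rel, w_cred, w_rec = weights  # relevance weighted highest
    scored = [
        (w_rel * r["relevance"] + w_cred * r["credibility"] + w_rec * r["recency"], r)
        for r in results
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [r for _, r in scored[:top_k]]

candidates = [
    {"id": 1, "relevance": 0.9, "credibility": 0.8, "recency": 0.5},
    {"id": 2, "relevance": 0.2, "credibility": 0.9, "recency": 0.9},
    {"id": 3, "relevance": 0.95, "credibility": 0.9, "recency": 0.9},
]
top = rank_results(candidates, top_k=2)
```

Weighting relevance above credibility and recency keeps a fresh but off-topic result from crowding out the answer the query actually needs.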

Research findings contradict each other without resolution. Contradictions often stem from different contexts, definitions, or time periods. Instead of ignoring contradictions, document them explicitly: "Source A claims X in the context of [specific scenario], while Source B claims Y in the context of [different scenario]." This nuance helps consumers apply the right finding to their specific situation.
