# Denario Elite

A scientific computing skill for automated research workflows using Denario, a multiagent AI system designed to orchestrate scientific research from data analysis through publication, with agents for literature review, data processing, statistical analysis, figure generation, and manuscript drafting.
## When to Use This Skill
Choose Denario Elite when:
- Automating multi-step scientific analysis workflows
- Orchestrating literature review, data processing, and visualization
- Building reproducible research pipelines with AI agent coordination
- Generating publication-ready figures and preliminary manuscript drafts
Consider alternatives when:
- You need a specific analysis tool (use the dedicated tool directly)
- You're doing manual, exploratory data analysis (use Jupyter notebooks)
- You need real-time collaboration on analysis (use shared notebooks)
- You need domain-specific pipelines (use Nextflow, Snakemake, etc.)
## Quick Start
```bash
claude "Set up a Denario workflow to analyze my gene expression dataset"
```

```python
from denario import ResearchWorkflow, Agent

# Define research workflow
workflow = ResearchWorkflow(
    title="Differential Gene Expression Analysis",
    data_sources=["counts_matrix.csv", "sample_metadata.csv"]
)

# Configure agents
workflow.add_agent(Agent.LITERATURE_REVIEW, {
    "query": "RNA-seq differential expression best practices 2024",
    "max_papers": 20
})
workflow.add_agent(Agent.DATA_PROCESSOR, {
    "pipeline": "deseq2",
    "normalization": "median_of_ratios",
    "filters": {"min_count": 10, "min_samples": 3}
})
workflow.add_agent(Agent.STATISTICIAN, {
    "test": "wald",
    "correction": "BH",
    "alpha": 0.05,
    "log2fc_threshold": 1.0
})
workflow.add_agent(Agent.VISUALIZER, {
    "plots": ["volcano", "heatmap", "pca", "ma_plot"],
    "format": "publication_ready"
})

# Execute workflow
results = workflow.run()
print(results.summary)
```
## Core Concepts

### Denario Agent Types
| Agent | Role | Output |
|---|---|---|
| Literature Review | Search and summarize relevant papers | Literature summary |
| Data Processor | Clean, normalize, transform data | Processed datasets |
| Statistician | Run statistical tests and modeling | Test results, p-values |
| Visualizer | Generate figures and plots | Publication-ready figures |
| Writer | Draft manuscript sections | Text drafts |
| Reviewer | Check analysis validity | Quality report |
### Workflow Orchestration
```python
# Agents communicate through shared context
workflow = ResearchWorkflow(title="My Study")

# Sequential pipeline
workflow.pipeline([
    ("data_ingestion", Agent.DATA_PROCESSOR),
    ("quality_control", Agent.REVIEWER),
    ("analysis", Agent.STATISTICIAN),
    ("visualization", Agent.VISUALIZER),
    ("manuscript", Agent.WRITER)
])

# Each agent receives output from the previous step
results = workflow.run()

# Access individual agent outputs
figures = results.get_agent_output("visualization")
manuscript = results.get_agent_output("manuscript")
```
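Under the hood, a sequential pipeline like this amounts to each step reading what earlier steps wrote to a shared context and adding its own output. A minimal standard-library sketch of that pattern (the step functions and `run_pipeline` helper are illustrative, not Denario's implementation):

```python
# Minimal sketch of sequential agents sharing context.
# Each named step receives the accumulated context and its
# output is stored under the step's name for later steps.

def run_pipeline(steps, context=None):
    """Run (name, step) pairs in order; each step updates the shared context."""
    context = dict(context or {})
    for name, step in steps:
        context[name] = step(context)
    return context

# Illustrative step functions standing in for agents
def ingest(ctx):
    return {"rows": 100}

def quality_control(ctx):
    return {"passed": ctx["data_ingestion"]["rows"] > 0}

def analyze(ctx):
    return {"significant_genes": 42} if ctx["quality_control"]["passed"] else {}

results = run_pipeline([
    ("data_ingestion", ingest),
    ("quality_control", quality_control),
    ("analysis", analyze),
])
```

Any step's output remains retrievable by name afterward, which is the behavior `get_agent_output` exposes.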
### Reproducibility Framework
```python
# All workflow steps are logged and reproducible
workflow.export_provenance("provenance.json")

# Provenance includes:
# - Input data checksums
# - Parameter configurations
# - Software versions
# - Intermediate results
# - Execution timestamps

# Reproduce from provenance
reproduced = ResearchWorkflow.from_provenance("provenance.json")
reproduced.run()
```
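The checksum-plus-parameters idea behind a provenance record can be sketched with the standard library alone. This mirrors the bullet list above; the record layout is an assumption for illustration, not Denario's actual schema:

```python
import hashlib
import sys
from datetime import datetime, timezone

def data_checksum(data: bytes) -> str:
    """SHA-256 of the raw input bytes, so any change to the data is detectable."""
    return hashlib.sha256(data).hexdigest()

def build_provenance(inputs: dict, params: dict) -> dict:
    """Assemble a minimal provenance record: checksums, parameters, versions, timestamp."""
    return {
        "input_checksums": {name: data_checksum(blob) for name, blob in inputs.items()},
        "parameters": params,
        "software_versions": {"python": sys.version.split()[0]},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = build_provenance(
    inputs={"counts_matrix.csv": b"gene,s1,s2\nTP53,10,12\n"},
    params={"test": "wald", "correction": "BH", "alpha": 0.05},
)
```

Serializing `record` to JSON gives a file that, like `provenance.json`, pins both the exact inputs and the exact settings of a run.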
## Configuration
| Parameter | Description | Default |
|---|---|---|
| `agents` | List of agents in the pipeline | Required |
| `data_sources` | Input data file paths | Required |
| `output_dir` | Directory for results | `./results` |
| `figure_format` | Output figure format | `pdf` |
| `reproducibility` | Record full provenance | `true` |
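The required/default behavior in the table can be enforced up front with a plain dict merge. A sketch (the `make_config` helper is illustrative, not part of Denario's API):

```python
# Defaults and required keys taken from the configuration table
DEFAULTS = {"output_dir": "./results", "figure_format": "pdf", "reproducibility": True}
REQUIRED = ("agents", "data_sources")

def make_config(**options):
    """Merge user options over defaults and fail fast on missing required keys."""
    missing = [key for key in REQUIRED if key not in options]
    if missing:
        raise ValueError(f"missing required config keys: {missing}")
    return {**DEFAULTS, **options}

config = make_config(agents=["data_processor", "statistician"],
                     data_sources=["counts_matrix.csv"])
```

Failing fast on `agents` and `data_sources` keeps a misconfigured workflow from dying halfway through an expensive run.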
## Best Practices
- **Define clear analysis objectives upfront.** Vague research questions produce vague results. Specify exactly what you want to test, which comparisons to make, and what significance thresholds to use before configuring the workflow.
- **Include a quality control agent.** Always add a Reviewer agent that validates data quality, checks statistical assumptions, and flags potential issues. Automated analysis without quality checks can produce misleading results.
- **Export provenance for reproducibility.** Every workflow should generate a complete provenance record: input data hashes, parameters, software versions, and intermediate results. This enables exact reproduction and peer review.
- **Validate agent outputs manually.** AI-generated analysis can contain errors. Verify statistical results against manual calculations for a subset of data, check that figures accurately represent the underlying data, and review any generated text for accuracy.
- **Version your workflows.** Store workflow configurations in version control alongside your data analysis scripts. This tracks how your analysis evolved and enables colleagues to review or reproduce your work.
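Spot-checking an agent's multiple-testing correction is one concrete way to validate outputs manually. The Quick Start configures Benjamini-Hochberg (`"correction": "BH"`) adjustment, which is simple enough to re-implement from scratch and compare against the agent's reported adjusted p-values:

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg adjusted p-values, for spot-checking an agent's output."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone adjusted values
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        value = min(pvalues[i] * n / rank, 1.0)
        running_min = min(running_min, value)
        adjusted[i] = running_min
    return adjusted

adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.20])
```

If the hand-computed values disagree with the workflow's output for even a small subset of genes, that is a signal to inspect the Statistician agent's configuration before trusting the full result table.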
## Common Issues
**Agent produces unexpected output format.** Verify the agent configuration matches your data type. A statistical agent configured for continuous data won't work correctly on categorical variables. Check the agent's expected input format against your actual data.

**Workflow fails at intermediate step.** Check agent logs for the specific error. Common causes: data format mismatch between agents, missing dependencies, or insufficient memory for large datasets. Fix the failing step and re-run from that point.

**Generated figures don't meet journal requirements.** Configure the Visualizer agent with specific journal requirements: DPI (300+), dimensions (column/page width), font sizes, and color accessibility. Most agents default to screen-quality output; publication requirements must be specified explicitly.
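A cheap pre-flight check before handing a column to a statistical agent catches the continuous-versus-categorical mismatch described above. A sketch with the standard library (the helper name is illustrative):

```python
def is_numeric_column(values):
    """Return True if every non-empty value parses as a float."""
    try:
        for value in values:
            if value != "":
                float(value)
    except (TypeError, ValueError):
        return False
    return True

# A counts column passes; a condition-label column does not
ok = is_numeric_column(["10", "12.5", "0"])
bad = is_numeric_column(["treated", "control"])
```

Running a check like this per column before the Statistician step turns a confusing downstream failure into an immediate, named error.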