Denario Elite

Production-ready skill for multiagent scientific research workflows. Includes structured workflows, validation checks, and reusable patterns for scientific computing.

A scientific computing skill for automated research workflows using Denario — the multiagent AI system designed to orchestrate scientific research from data analysis through publication, with agents for literature review, data processing, statistical analysis, figure generation, and manuscript drafting.

When to Use This Skill

Choose Denario Elite when:

  • Automating multi-step scientific analysis workflows
  • Orchestrating literature review, data processing, and visualization
  • Building reproducible research pipelines with AI agent coordination
  • Generating publication-ready figures and preliminary manuscript drafts

Consider alternatives when:

  • You need a specific analysis tool (use the dedicated tool directly)
  • You're doing manual, exploratory data analysis (use Jupyter notebooks)
  • You need real-time collaboration on analysis (use shared notebooks)
  • You need domain-specific pipelines (use Nextflow, Snakemake, etc.)

Quick Start

```bash
claude "Set up a Denario workflow to analyze my gene expression dataset"
```

```python
from denario import ResearchWorkflow, Agent

# Define research workflow
workflow = ResearchWorkflow(
    title="Differential Gene Expression Analysis",
    data_sources=["counts_matrix.csv", "sample_metadata.csv"]
)

# Configure agents
workflow.add_agent(Agent.LITERATURE_REVIEW, {
    "query": "RNA-seq differential expression best practices 2024",
    "max_papers": 20
})
workflow.add_agent(Agent.DATA_PROCESSOR, {
    "pipeline": "deseq2",
    "normalization": "median_of_ratios",
    "filters": {"min_count": 10, "min_samples": 3}
})
workflow.add_agent(Agent.STATISTICIAN, {
    "test": "wald",
    "correction": "BH",
    "alpha": 0.05,
    "log2fc_threshold": 1.0
})
workflow.add_agent(Agent.VISUALIZER, {
    "plots": ["volcano", "heatmap", "pca", "ma_plot"],
    "format": "publication_ready"
})

# Execute workflow
results = workflow.run()
print(results.summary)
```

Core Concepts

Denario Agent Types

| Agent | Role | Output |
| --- | --- | --- |
| Literature Review | Search and summarize relevant papers | Literature summary |
| Data Processor | Clean, normalize, transform data | Processed datasets |
| Statistician | Run statistical tests and modeling | Test results, p-values |
| Visualizer | Generate figures and plots | Publication-ready figures |
| Writer | Draft manuscript sections | Text drafts |
| Reviewer | Check analysis validity | Quality report |

Workflow Orchestration

```python
from denario import ResearchWorkflow, Agent

# Agents communicate through shared context
workflow = ResearchWorkflow(title="My Study")

# Sequential pipeline
workflow.pipeline([
    ("data_ingestion", Agent.DATA_PROCESSOR),
    ("quality_control", Agent.REVIEWER),
    ("analysis", Agent.STATISTICIAN),
    ("visualization", Agent.VISUALIZER),
    ("manuscript", Agent.WRITER)
])

# Each agent receives output from the previous step
results = workflow.run()

# Access individual agent outputs
figures = results.get_agent_output("visualization")
manuscript = results.get_agent_output("manuscript")
```
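The handoff pattern behind this pipeline can be sketched in plain Python. The `MiniWorkflow` class and the toy steps below are illustrative only, not the Denario API — they show how a shared context passes from one agent to the next and how step outputs are retained for later retrieval:

```python
# Minimal sketch of sequential agent orchestration with shared context.
# Step names and callables are hypothetical stand-ins for agents.

class MiniWorkflow:
    def __init__(self, title):
        self.title = title
        self.steps = []      # (name, fn) pairs, run in order
        self.outputs = {}    # name -> output, for later retrieval

    def pipeline(self, steps):
        self.steps = list(steps)

    def run(self):
        context = None       # each step receives the previous step's output
        for name, fn in self.steps:
            context = fn(context)
            self.outputs[name] = context
        return self

    def get_agent_output(self, name):
        return self.outputs[name]


wf = MiniWorkflow("My Study")
wf.pipeline([
    ("data_ingestion", lambda _: {"rows": 100}),
    ("analysis", lambda ctx: {"tested": ctx["rows"], "hits": 7}),
])
results = wf.run()
analysis = results.get_agent_output("analysis")
```

A linear pipeline like this is the simplest design; real orchestrators may also support branching or retries, but the shared-context handoff is the core idea.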

Reproducibility Framework

```python
# All workflow steps are logged and reproducible
workflow.export_provenance("provenance.json")

# Provenance includes:
# - Input data checksums
# - Parameter configurations
# - Software versions
# - Intermediate results
# - Execution timestamps

# Reproduce from provenance
reproduced = ResearchWorkflow.from_provenance("provenance.json")
reproduced.run()
```
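A provenance record of this kind can be built with only the standard library. The sketch below is an assumption about what such a record might contain — the field names (`inputs`, `parameters`, and so on) are illustrative, not Denario's actual schema:

```python
# Hypothetical provenance record: input checksums, parameters,
# interpreter version, and a timestamp. Not the Denario schema.
import hashlib
import json
import sys
from datetime import datetime, timezone


def file_checksum(path):
    """SHA-256 of a file's contents, for input-data integrity checks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def build_provenance(inputs, params):
    return {
        "inputs": {p: file_checksum(p) for p in inputs},
        "parameters": params,
        "python_version": sys.version.split()[0],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


# Example: checksum a small input file and serialize the record
with open("counts.csv", "w") as f:
    f.write("gene,s1,s2\nA,10,12\n")
record = build_provenance(["counts.csv"], {"test": "wald", "alpha": 0.05})
serialized = json.dumps(record)  # ready to write to provenance.json
```

Hashing inputs rather than copying them keeps the record small while still detecting any change to the data between the original run and an attempted reproduction.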

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| `agents` | List of agents in the pipeline | Required |
| `data_sources` | Input data file paths | Required |
| `output_dir` | Directory for results | `./results` |
| `figure_format` | Output figure format | `pdf` |
| `reproducibility` | Record full provenance | `true` |
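One way to picture how required keys and defaults from this table interact is a simple merge, sketched below. The merge logic is generic Python, not the Denario implementation:

```python
# Hypothetical config resolution: required keys must be supplied,
# optional keys fall back to the documented defaults.

DEFAULTS = {
    "output_dir": "./results",
    "figure_format": "pdf",
    "reproducibility": True,
}
REQUIRED = ("agents", "data_sources")


def resolve_config(user):
    missing = [k for k in REQUIRED if k not in user]
    if missing:
        raise ValueError(f"missing required keys: {missing}")
    return {**DEFAULTS, **user}  # user-supplied values override defaults


cfg = resolve_config({
    "agents": ["data_processor", "statistician"],
    "data_sources": ["counts_matrix.csv"],
    "figure_format": "svg",  # explicit override wins over the default
})
```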

Best Practices

  1. Define clear analysis objectives upfront. Vague research questions produce vague results. Specify exactly what you want to test, which comparisons to make, and what significance thresholds to use before configuring the workflow.

  2. Include a quality control agent. Always add a Reviewer agent that validates data quality, checks statistical assumptions, and flags potential issues. Automated analysis without quality checks can produce misleading results.

  3. Export provenance for reproducibility. Every workflow should generate a complete provenance record — input data hashes, parameters, software versions, and intermediate results. This enables exact reproduction and peer review.

  4. Validate agent outputs manually. AI-generated analysis can contain errors — verify statistical results against manual calculations for a subset of data, check that figures accurately represent the underlying data, and review any generated text for accuracy.

  5. Version your workflows. Store workflow configurations in version control alongside your data analysis scripts. This tracks how your analysis evolved and enables colleagues to review or reproduce your work.
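As a concrete instance of practice 4, the Benjamini-Hochberg ("BH") correction used in the Quick Start can be recomputed by hand from raw p-values and compared against the workflow's reported values. This is a standard textbook formulation, independent of Denario:

```python
# Benjamini-Hochberg adjusted p-values for spot-checking FDR output.

def bh_adjust(pvals):
    """Return BH-adjusted p-values in the original order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        adjusted[i] = prev
    return adjusted


raw = [0.001, 0.008, 0.039, 0.041, 0.60]
adj = bh_adjust(raw)
```

If the workflow's adjusted p-values for a handful of genes disagree with a recomputation like this, that is a strong signal the agent's configuration (or your understanding of it) is off.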

Common Issues

Agent produces unexpected output format. Verify the agent configuration matches your data type. A statistical agent configured for continuous data won't work correctly on categorical variables. Check the agent's expected input format against your actual data.

Workflow fails at intermediate step. Check agent logs for the specific error. Common causes: data format mismatch between agents, missing dependencies, or insufficient memory for large datasets. Fix the failing step and re-run from that point.

Generated figures don't meet journal requirements. Configure the Visualizer agent with specific journal requirements: DPI (300+), dimensions (column/page width), font sizes, and color accessibility. Most agents default to screen-quality output — publication requirements must be specified explicitly.
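Translating a journal's physical specs into pixel dimensions is a common source of confusion here. The sketch below does the arithmetic; the 89 mm column width and 300 DPI are typical print requirements used as example values — always check your target journal's author guidelines for the actual numbers:

```python
# Convert a physical figure size (mm) plus print resolution (DPI)
# into the pixel dimensions an exported raster figure needs.

MM_PER_INCH = 25.4


def figure_pixels(width_mm, height_mm, dpi=300):
    """Pixel dimensions needed to print at the given physical size."""
    w_in = width_mm / MM_PER_INCH
    h_in = height_mm / MM_PER_INCH
    return round(w_in * dpi), round(h_in * dpi)


# Example: a single-column figure at 89 mm wide, 60 mm tall, 300 DPI
px = figure_pixels(89, 60)
```

Vector formats (PDF, SVG) sidestep the DPI question for line art, but embedded raster elements and any rasterized export still need this calculation.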
