
Advisor Data Champion

An agent that champions data-driven decision making within organizations, helping teams establish data governance practices, quality standards, data literacy programs, and an analytics culture that turns raw data into reliable business intelligence.

When to Use This Agent

Choose Data Champion when:

  • Establishing data governance frameworks and data ownership models
  • Defining data quality standards and monitoring processes
  • Building data literacy programs for non-technical stakeholders
  • Creating data dictionaries and metadata documentation
  • Advising on data strategy alignment with business objectives

Consider alternatives when:

  • Building data pipelines (use a data engineering agent)
  • Performing statistical analysis (use a data science agent)
  • Designing database schemas (use a database architect agent)

Quick Start

```yaml
# .claude/agents/advisor-data-champion.yml
name: Data Champion
model: claude-sonnet-4-20250514
tools:
  - Read
  - Write
  - Bash
  - Glob
  - Grep
prompt: |
  You are a data champion who advocates for data-driven decision making.
  Help organizations establish data governance, quality standards, and
  analytics culture. Focus on making data accessible, trustworthy, and
  actionable for all stakeholders.
```

Example invocation:

```shell
claude --agent advisor-data-champion "Help us establish a data governance framework for our growing startup. We have data in PostgreSQL, Snowflake, and various SaaS tools with no consistent definitions or ownership model."
```

Core Concepts

Data Governance Framework

```
Strategy β†’ Standards β†’ Ownership β†’ Quality β†’ Access β†’ Compliance
   β”‚          β”‚           β”‚           β”‚         β”‚          β”‚
  Vision    Naming      Stewards   Monitoring  Policies  Regulations
  Goals     Definitions Councils   SLAs        Roles     Audit
  Metrics   Formats     Escalation Profiling   Catalog   Retention
```

Data Quality Dimensions

| Dimension | Definition | Example Check |
| --- | --- | --- |
| Accuracy | Data correctly represents reality | Email format validation |
| Completeness | Required fields are populated | Null rate < 5% |
| Consistency | Same value across systems | Customer name matches in CRM and billing |
| Timeliness | Data is current enough for use | Updated within last 24 hours |
| Uniqueness | No unintended duplicates | Deduplicated customer records |
| Validity | Data conforms to business rules | Age between 0 and 150 |
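The dimensions above can be expressed as automated checks rather than manual reviews. A minimal sketch in Python, where the `customers` records, field names, and thresholds are illustrative assumptions, not part of the agent:

```python
import re
from datetime import datetime, timedelta, timezone

def null_rate(records, field):
    """Completeness: fraction of records missing a required field."""
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

def valid_age(record):
    """Validity: age conforms to the business rule 0-150."""
    return record.get("age") is not None and 0 <= record["age"] <= 150

def valid_email(record):
    """Accuracy (proxy): email matches a basic format check."""
    return bool(re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.get("email") or ""))

def is_fresh(record, hours=24):
    """Timeliness: record updated within the last `hours` hours."""
    return datetime.now(timezone.utc) - record["updated_at"] <= timedelta(hours=hours)

def duplicate_keys(records, key):
    """Uniqueness: return keys that appear more than once."""
    seen, dupes = set(), set()
    for r in records:
        k = r[key]
        (dupes if k in seen else seen).add(k)
    return dupes

# Illustrative sample data: one clean record, one failing record.
customers = [
    {"id": 1, "email": "a@example.com", "age": 34,
     "updated_at": datetime.now(timezone.utc)},
    {"id": 1, "email": None, "age": 200,
     "updated_at": datetime.now(timezone.utc) - timedelta(days=3)},
]

print(null_rate(customers, "email"))    # 0.5 -> fails a < 5% SLA
print(valid_age(customers[1]))          # False
print(duplicate_keys(customers, "id"))  # {1}
```

Each check returns a plain value, so the same functions can feed dashboards, alerts, or CI gates without modification.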

Data Maturity Model

| Level | Characteristics | Focus Area |
| --- | --- | --- |
| 1 - Ad Hoc | No standards, tribal knowledge | Awareness and inventory |
| 2 - Managed | Basic documentation, some owners | Governance framework |
| 3 - Defined | Consistent standards, quality monitoring | Automation and tooling |
| 4 - Measured | Quality SLAs, data contracts | Continuous improvement |
| 5 - Optimized | Self-service analytics, data mesh | Innovation and democratization |

Configuration

| Parameter | Description | Default |
| --- | --- | --- |
| `maturity_target` | Target maturity level | Level 3 |
| `governance_scope` | Governance coverage | Critical datasets first |
| `quality_monitoring` | Quality check automation | Automated with alerts |
| `catalog_tool` | Data catalog platform | Open-source (DataHub) |
| `ownership_model` | Data ownership structure | Domain-based stewardship |
| `compliance_reqs` | Regulatory requirements | None specified |
| `documentation_standard` | Documentation template | Data dictionary format |
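One way to handle these parameters is to merge user overrides onto the defaults and reject unknown keys. A sketch, assuming the defaults from the table above (the `resolve_config` helper is illustrative, not part of the agent):

```python
# Defaults taken from the configuration table above.
DEFAULTS = {
    "maturity_target": "Level 3",
    "governance_scope": "Critical datasets first",
    "quality_monitoring": "Automated with alerts",
    "catalog_tool": "DataHub",
    "ownership_model": "Domain-based stewardship",
    "compliance_reqs": None,
    "documentation_standard": "Data dictionary format",
}

def resolve_config(overrides=None):
    """Merge user overrides onto defaults, rejecting unknown parameters."""
    overrides = overrides or {}
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

config = resolve_config({"maturity_target": "Level 4"})
print(config["maturity_target"])  # Level 4
```

Rejecting unknown keys catches typos like `maturity_tagret` at startup instead of silently falling back to a default.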

Best Practices

  1. Start with the most critical datasets, not all data. Trying to govern everything at once paralyzes the initiative. Identify the 5-10 datasets that drive the most important business decisions (revenue metrics, customer records, product usage data) and establish governance for those first. Success with critical data builds credibility and momentum for expanding to other datasets.

  2. Assign data owners from the business side, not just IT. Data ownership belongs to the teams who generate and use the data. The marketing team should own marketing campaign data because they understand what the fields mean, what quality looks like, and how the data should be used. IT teams maintain the infrastructure; business teams define the standards and resolve quality issues.

  3. Create a data dictionary with business definitions, not just technical schemas. A column called status could mean order status, account status, or approval status. The data dictionary should define each field in business language: "Order Status: The current fulfillment state of a customer order. Valid values: pending, processing, shipped, delivered, cancelled. Set by the fulfillment system when state changes." Technical stakeholders can read schemas; business stakeholders need human language.

  4. Implement data quality monitoring as automated checks, not manual reviews. Define quality rules as code that runs automatically: null rate thresholds, valid value ranges, cross-system consistency checks, and freshness requirements. Alert data owners when quality drops below SLAs. Manual quality reviews catch issues days or weeks late; automated checks catch them within hours.

  5. Measure data quality with business-meaningful metrics. Instead of tracking abstract quality scores, measure impact: "How many support tickets were caused by incorrect customer data? How many reports were delayed by missing data? What revenue was affected by stale pricing data?" Business-impact metrics get executive attention and budget that abstract quality scores never will.
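The data dictionary practice above can be given concrete shape by storing each entry as a structured record. A sketch of the "Order Status" example from practice 3; the field layout is an assumption, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """One business-language entry in a data dictionary."""
    name: str
    definition: str                 # business meaning, not the column type
    valid_values: list = field(default_factory=list)
    owner: str = ""                 # business-side steward, not IT
    set_by: str = ""                # which system writes this field

order_status = DictionaryEntry(
    name="Order Status",
    definition="The current fulfillment state of a customer order.",
    valid_values=["pending", "processing", "shipped", "delivered", "cancelled"],
    owner="Fulfillment team",
    set_by="Fulfillment system, when state changes",
)

print(order_status.valid_values)
```

Keeping entries structured (rather than free text) means the same records can also drive validity checks, since `valid_values` doubles as the allowed-value list.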

Common Issues

Data governance initiative stalls due to lack of executive sponsorship. Governance requires cross-team coordination and sometimes conflicts with team autonomy. Without executive backing, teams can ignore governance standards. Secure a C-level sponsor by framing governance in business terms: regulatory risk, decision quality, and operational efficiency. A data incident (wrong report to the board, compliance violation) often creates the urgency that launches governance programs.

Teams resist data documentation because it feels like overhead. Make documentation part of the workflow, not a separate task. Integrate data dictionary updates into the pull request process for schema changes. Use tools that auto-generate technical documentation and only require business descriptions to be added manually. When documentation prevents a data incident or saves onboarding time, share that story to demonstrate value.
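One way to integrate dictionary updates into the pull-request process is a CI check that fails when schema files change without a matching dictionary update. A minimal sketch; the directory and file paths are illustrative assumptions:

```python
def dictionary_check(changed_files,
                     schema_dir="migrations/",          # assumed layout
                     dictionary_path="docs/data_dictionary.md"):
    """Return an error message if schema files changed in a pull request
    but the data dictionary was not updated alongside them."""
    schema_changes = [f for f in changed_files if f.startswith(schema_dir)]
    if schema_changes and dictionary_path not in changed_files:
        return (f"Schema files changed ({', '.join(schema_changes)}) "
                f"but {dictionary_path} was not updated.")
    return None

# A PR that adds a migration without touching the dictionary fails the check:
print(dictionary_check(["migrations/0042_add_status.sql", "app/models.py"]))
```

In CI, a non-`None` result would fail the build, making the documentation update part of the same review as the schema change.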

Data quality issues persist despite monitoring. Monitoring identifies problems; it doesn't fix them. Establish clear escalation paths: who gets notified? Who is responsible for fixing? What's the SLA for resolution? Without accountability, quality alerts become noise. Assign data quality incidents the same urgency and tracking as production incidents, with root cause analysis and prevention measures.
