# SE Responsible AI Companion

Enterprise-grade agent for responsible AI auditing. Includes structured workflows, validation checks, and reusable patterns for web tools.

Audit, detect, and remediate bias, accessibility gaps, and ethical risks across AI-powered systems and user-facing applications.
## When to Use This Agent
Choose this agent when you need to:
- Evaluate ML pipelines and decision systems for demographic bias, fairness drift, and disparate impact across protected groups
- Audit user-facing interfaces for inclusive language patterns and culturally sensitive content delivery
- Establish responsible AI governance frameworks including privacy impact assessments and algorithmic transparency documentation
Consider alternatives when:
- Your task is purely technical SEO or search optimization without ethical AI implications
- You need full-stack accessibility remediation with code-level fixes rather than policy-level auditing
## Quick Start

### Configuration

```yaml
name: se-responsible-ai-companion
type: agent
category: web-tools
```

### Example Invocation

```shell
claude agent:invoke se-responsible-ai-companion "Audit our recommendation engine for demographic bias across age, gender, and ethnicity"
```
### Example Output

```text
Responsible AI Audit Report
===========================
System: Product Recommendation Engine v3.2

BIAS ANALYSIS
- Gender: Disparity ratio 0.87 (PASS, threshold > 0.80)
- Age: Disparity ratio 0.62 (FAIL, users 65+ receive 38% fewer results)
- Ethnicity: Name-encoding bias detected in 3 input features

REMEDIATION PRIORITY
1. [CRITICAL] Remove name-derived features from model pipeline
2. [HIGH] Retrain age-segmented model with balanced sampling
3. [MEDIUM] Add ARIA labels to recommendation card grid

Overall Fairness Score: 64/100 (Needs Improvement)
```
## Core Concepts

### Responsible AI Framework Overview
| Aspect | Details |
|---|---|
| Fairness Metrics | Statistical parity, equalized odds, demographic parity ratio, disparate impact analysis |
| Bias Categories | Selection bias, measurement bias, representation bias, aggregation bias |
| Privacy Controls | Data minimization, purpose limitation, consent verification, PII masking |
| Explainability | SHAP values, LIME explanations, counterfactual reasoning, feature attribution |
| Compliance | EU AI Act risk tiers, NIST AI RMF, IEEE 7000 series, ISO/IEC 42001 |
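As a concrete example of the fairness metrics above, the demographic parity ratio can be computed directly from binary outcomes and group labels. This is a minimal sketch; the data, group names, and the 0.80 cutoff (the familiar four-fifths rule) are illustrative:

```python
from collections import Counter

def disparity_ratio(outcomes, groups):
    """Demographic parity ratio: minimum positive-outcome rate divided
    by the maximum positive-outcome rate across groups. Values below
    0.80 are commonly flagged under the four-fifths rule used in
    disparate-impact analysis."""
    totals, positives = Counter(), Counter()
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Hypothetical binary outcomes (1 = item recommended) for two groups
outcomes = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups = ["a"] * 5 + ["b"] * 5
print(round(disparity_ratio(outcomes, groups), 2))  # → 0.25
```

With group "a" receiving recommendations at a rate of 0.8 and group "b" at 0.2, the 0.25 ratio falls well below the 0.80 threshold and would be flagged.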
### Ethical AI Audit Architecture

```text
+-----------------+     +------------------+     +-----------------+
|  Data Pipeline  |---->|  Bias Detection  |---->|    Fairness     |
|   Assessment    |     |      Engine      |     |    Scorecard    |
+-----------------+     +------------------+     +-----------------+
        |                        |                        |
        v                        v                        v
+-----------------+     +------------------+     +-----------------+
|  Privacy Impact |     |  Explainability  |     |   Remediation   |
|     Analysis    |     |  Report Builder  |     |   Plan Builder  |
+-----------------+     +------------------+     +-----------------+
```
## Configuration

| Parameter | Type | Default | Description |
|---|---|---|---|
| bias_dimensions | array | ["gender","age","ethnicity"] | Protected attributes to evaluate for disparate impact |
| fairness_threshold | float | 0.80 | Minimum disparity ratio before flagging a violation |
| accessibility_level | string | "AA" | WCAG conformance target (A, AA, or AAA) |
| privacy_framework | string | "gdpr" | Regulatory framework for privacy assessment |
| explainability_depth | string | "standard" | Explanation detail level: minimal, standard, comprehensive |
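Putting the parameters together, a configuration that overrides the defaults might look like this (values illustrative):

```yaml
bias_dimensions: ["gender", "age", "ethnicity", "disability"]
fairness_threshold: 0.85
accessibility_level: "AAA"
privacy_framework: "gdpr"
explainability_depth: "comprehensive"
```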
## Best Practices

- **Test with Diverse Synthetic Populations** - Generate test inputs spanning multiple demographic intersections rather than single-axis testing. Intersectional bias often surfaces failures invisible to univariate analysis. Use culturally diverse name sets and edge-case characters in every evaluation cycle.
- **Establish Continuous Fairness Monitoring** - Model drift and shifting data distributions reintroduce bias over time. Implement automated fairness dashboards that trigger alerts when disparity ratios cross thresholds in production.
- **Document Explainability at Decision Points** - Every automated decision affecting users should carry a human-readable explanation. Log feature attributions and counterfactual explanations for regulatory compliance and audit trails.
- **Apply the Principle of Least Data** - Collect only data strictly necessary for the stated purpose. Audit feature stores for proxy variables that encode protected attributes indirectly, such as zip codes encoding race.
- **Engage Diverse Stakeholders in Review** - Include domain experts, affected community representatives, and accessibility specialists in review processes. Their lived experience reveals blind spots statistical analysis cannot detect.
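The continuous-monitoring practice above reduces to a threshold check that can run on each production batch. A minimal sketch, assuming a disparity ratio is already computed per batch; the alert-record shape is a hypothetical:

```python
def check_fairness_drift(current_ratio, threshold=0.80):
    """Emit an alert record when a production disparity ratio
    drops below the configured fairness threshold."""
    if current_ratio < threshold:
        return {"level": "ALERT",
                "message": f"disparity ratio {current_ratio:.2f} "
                           f"below threshold {threshold:.2f}"}
    return {"level": "OK", "message": "within threshold"}

print(check_fairness_drift(0.62))  # triggers an ALERT record
```

In practice the alert would feed a dashboard or paging system rather than stdout.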
## Common Issues

- **Proxy Variable Leakage** - Features like geographic location or device type can serve as proxies for protected attributes. Run correlation analysis between all inputs and protected dimensions, removing features with coefficients above 0.3.
- **Evaluation Dataset Skew** - Test datasets that do not reflect real demographics produce misleading fairness scores. Stratify evaluation sets to ensure minimum representation per group and report per-group metrics.
- **Accessibility Regression After Updates** - UI updates frequently break previously compliant features. Integrate automated WCAG testing into CI/CD pipelines and maintain regression suites for keyboard navigation and screen reader compatibility.