Advanced Security Platform
Battle-tested skill for repository-grounded threat modeling. Includes structured workflows, validation checks, and reusable patterns for security.
Build comprehensive application security programs with automated testing pipelines, vulnerability management workflows, security monitoring, and compliance tracking. This skill covers SAST/DAST integration, CI/CD security gates, vulnerability triage, risk scoring, and security program metrics.
When to Use This Skill
Choose Advanced Security Platform when you need to:
- Integrate security testing into CI/CD pipelines (SAST, DAST, SCA)
- Build vulnerability management workflows with triage and tracking
- Establish security metrics and KPIs for program effectiveness
- Implement security gates that prevent vulnerable code from reaching production
Consider alternatives when:
- You need manual penetration testing (use pentest-specific skills)
- You need specific tool configuration (use tool-specific skills)
- You need compliance-specific frameworks (use SOC 2, ISO 27001 guides)
Quick Start
```python
from dataclasses import dataclass, field
from typing import Dict, List
from datetime import datetime, timedelta
from enum import Enum


class VulnSeverity(Enum):
    CRITICAL = 4
    HIGH = 3
    MEDIUM = 2
    LOW = 1
    INFO = 0


class VulnStatus(Enum):
    NEW = "new"
    TRIAGED = "triaged"
    IN_PROGRESS = "in_progress"
    FIXED = "fixed"
    ACCEPTED_RISK = "accepted_risk"
    FALSE_POSITIVE = "false_positive"


@dataclass
class Vulnerability:
    id: str
    title: str
    severity: VulnSeverity
    source: str  # sast, dast, sca, pentest, bug_bounty
    component: str
    description: str
    status: VulnStatus = VulnStatus.NEW
    assignee: str = ""
    discovered: datetime = field(default_factory=datetime.now)
    sla_days: int = 0
    cve: str = ""

    def __post_init__(self):
        # SLA targets by severity: critical 7d, high 30d, medium 90d, low 180d
        sla_map = {4: 7, 3: 30, 2: 90, 1: 180, 0: 365}
        self.sla_days = sla_map.get(self.severity.value, 365)

    @property
    def sla_remaining(self) -> int:
        deadline = self.discovered + timedelta(days=self.sla_days)
        return (deadline - datetime.now()).days

    @property
    def overdue(self) -> bool:
        return self.sla_remaining < 0 and self.status not in (
            VulnStatus.FIXED, VulnStatus.FALSE_POSITIVE, VulnStatus.ACCEPTED_RISK
        )


class SecurityDashboard:
    """Track and report on security program metrics."""

    def __init__(self):
        self.vulnerabilities: List[Vulnerability] = []

    def add_vulnerability(self, vuln: Vulnerability):
        self.vulnerabilities.append(vuln)

    def metrics(self) -> Dict:
        """Calculate security program metrics."""
        open_vulns = [v for v in self.vulnerabilities
                      if v.status not in (VulnStatus.FIXED, VulnStatus.FALSE_POSITIVE)]
        overdue = [v for v in open_vulns if v.overdue]
        fixed = [v for v in self.vulnerabilities if v.status == VulnStatus.FIXED]

        # Mean time to remediate
        mttr_days = []
        for v in fixed:
            # Simplified — in production, track fix_date
            mttr_days.append(v.sla_days * 0.7)  # Placeholder

        return {
            'total_vulns': len(self.vulnerabilities),
            'open_vulns': len(open_vulns),
            'overdue': len(overdue),
            'by_severity': {s.name: len([v for v in open_vulns if v.severity == s])
                            for s in VulnSeverity},
            'mttr_avg_days': sum(mttr_days) / max(len(mttr_days), 1),
            'fix_rate': len(fixed) / max(len(self.vulnerabilities), 1) * 100,
        }

    def report(self):
        m = self.metrics()
        print("=== SECURITY PROGRAM METRICS ===")
        print(f"Total vulnerabilities: {m['total_vulns']}")
        print(f"Open: {m['open_vulns']} | Overdue: {m['overdue']}")
        print(f"Fix rate: {m['fix_rate']:.0f}%")
        print(f"Avg time to remediate: {m['mttr_avg_days']:.0f} days")
        print("\nOpen by severity:")
        for sev, count in m['by_severity'].items():
            if count:
                print(f"  {sev}: {count}")


# dashboard = SecurityDashboard()
# dashboard.add_vulnerability(Vulnerability(
#     "VULN-001", "SQL Injection in login", VulnSeverity.CRITICAL,
#     "dast", "auth-service", "Login endpoint vulnerable to SQLi"
# ))
# dashboard.report()
```
Core Concepts
Security Testing Pipeline
| Stage | Tool Type | When | Blocking? |
|---|---|---|---|
| Pre-commit | Secret scanner (gitleaks) | Before git push | Yes |
| CI Build | SAST (Semgrep, SonarQube) | On PR/merge | Critical only |
| CI Build | SCA (Snyk, npm audit) | On PR/merge | Critical/High |
| Pre-deploy | DAST (ZAP, Nuclei) | Staging environment | Critical only |
| Post-deploy | Runtime monitoring (WAF logs) | Production | Alert |
| Periodic | Penetration testing | Quarterly | Report |
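The blocking column of the pipeline table can be expressed as a small policy lookup. This is an illustrative sketch: the stage names and `BLOCKING_THRESHOLDS` map are hypothetical, not part of any scanner's API.

```python
# Hypothetical mapping of the pipeline table: for each stage, the minimum
# severity (4=critical ... 0=info) at which a finding blocks the build.
BLOCKING_THRESHOLDS = {
    "pre_commit": 0,       # secret scanner: any finding blocks
    "ci_sast": 4,          # SAST: critical only
    "ci_sca": 3,           # SCA: critical/high
    "pre_deploy_dast": 4,  # DAST: critical only
}


def should_block(stage: str, severity: int) -> bool:
    """Return True if a finding of this severity blocks the given stage.

    Stages not in the map (runtime monitoring, periodic pentests) never
    block; they alert or produce a report instead.
    """
    threshold = BLOCKING_THRESHOLDS.get(stage)
    if threshold is None:
        return False
    return severity >= threshold
```

Keeping the policy in data rather than scattered `if` statements makes it easy to audit and to tighten thresholds as the backlog shrinks.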
Configuration
| Parameter | Description | Default |
|---|---|---|
| `sla_critical` | Days to fix critical vulnerabilities | 7 |
| `sla_high` | Days to fix high vulnerabilities | 30 |
| `sla_medium` | Days to fix medium vulnerabilities | 90 |
| `sla_low` | Days to fix low vulnerabilities | 180 |
| `block_on_critical` | Block deployments for critical findings | true |
| `scan_frequency` | How often to run periodic scans | "weekly" |
| `reporting_cadence` | How often to generate metrics reports | "monthly" |
| `risk_acceptance_approver` | Who can accept risk for unpatched vulns | "CISO" |
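The configuration table maps naturally onto a typed container. This is a minimal sketch; the `SecurityConfig` class and its `sla_for` helper are illustrative names, with defaults taken from the table above.

```python
from dataclasses import dataclass


@dataclass
class SecurityConfig:
    """Illustrative config container; defaults mirror the table above."""
    sla_critical: int = 7
    sla_high: int = 30
    sla_medium: int = 90
    sla_low: int = 180
    block_on_critical: bool = True
    scan_frequency: str = "weekly"
    reporting_cadence: str = "monthly"
    risk_acceptance_approver: str = "CISO"

    def sla_for(self, severity: str) -> int:
        """Look up the SLA in days for a lowercase severity name."""
        return getattr(self, f"sla_{severity}")
```

A dataclass gives you validation-friendly defaults and lets callers override individual parameters, e.g. `SecurityConfig(sla_critical=3)` for a stricter program.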
Best Practices
- Gate deployments on critical and high severity findings — Integrate security scanners into CI/CD and block merges or deployments when critical vulnerabilities are found. This shifts security left and prevents known vulnerabilities from reaching production. Allow exceptions only with documented risk acceptance.
- Define SLA targets by severity and measure compliance — Critical: 7 days, High: 30 days, Medium: 90 days, Low: 180 days. Track SLA compliance as a key security metric. Overdue vulnerabilities indicate process problems that need management attention.
- Triage all findings before assigning to developers — Security scanners produce false positives. A security team member should triage findings before routing to development teams. False positives erode developer trust in the security program and waste engineering time.
- Track mean time to remediate (MTTR) as the primary metric — MTTR measures how quickly the organization fixes vulnerabilities. It's more actionable than vulnerability counts. A decreasing MTTR indicates improving security maturity. Track separately by severity level.
- Combine automated scanning with periodic manual testing — Automated tools catch known vulnerability patterns. Manual penetration testing finds business logic flaws, complex attack chains, and creative attacks that scanners miss. Both are necessary for comprehensive security coverage.
Common Issues
Security gates block all deployments during initial rollout — When first adding security scanning, existing codebases often have many findings. Start with monitoring mode (report but don't block), then gradually enable blocking as the backlog is reduced. Set a baseline and block only new findings initially.
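The "baseline then block only new findings" approach can be sketched as a fingerprint filter. The `rule_id`/`path` field names are hypothetical; adapt them to whatever your scanner emits.

```python
def fingerprint(finding: dict) -> tuple:
    # Key on rule id + file path, not line number: line numbers shift as
    # code moves, which would make old findings look new.
    return (finding["rule_id"], finding["path"])


def new_findings(current: list, baseline: set) -> list:
    """Return only findings whose fingerprint is absent from the baseline."""
    return [f for f in current if fingerprint(f) not in baseline]
```

The baseline set is captured once from the first full scan and stored; the CI gate then blocks only on the filtered list while the historical backlog is burned down separately.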
Developer teams ignore security findings — Integrate findings into the team's existing issue tracker (Jira, GitHub Issues) rather than a separate security tool. Assign findings to specific developers with clear remediation guidance. Make security fix SLAs part of team performance metrics.
Multiple scanners report the same vulnerability differently — Deduplicate across scanners by matching on file path, line number, and vulnerability type. Use a vulnerability management platform that normalizes findings from multiple sources. Without deduplication, teams waste time triaging duplicate reports.
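The deduplication rule above (match on file path, line, and vulnerability type) can be sketched as a keyed merge that keeps the highest-severity report. Field names are illustrative.

```python
def deduplicate(findings: list) -> list:
    """Collapse duplicate reports across scanners.

    Findings are keyed on (path, line, type); when two scanners report the
    same key, the higher-severity report wins so nothing is under-triaged.
    """
    best = {}
    for f in findings:
        key = (f["path"], f["line"], f["type"])
        if key not in best or f["severity"] > best[key]["severity"]:
            best[key] = f
    return list(best.values())
```

In practice you would also normalize severity scales (e.g. CVSS vs. scanner-native levels) before comparing, since scanners rarely agree on scoring.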