Cloud Penetration Testing Studio
Battle-tested skill for multi-cloud security assessment. Includes structured workflows, validation checks, and reusable patterns for security testing.
Conduct authorized security assessments across multi-cloud environments including AWS, Azure, and GCP. This skill covers cloud-specific attack techniques, identity and access misconfigurations, storage exposure testing, serverless security assessment, and cloud-native privilege escalation paths.
When to Use This Skill
Choose Cloud Penetration Testing Studio when you need to:
- Perform authorized security testing across multiple cloud providers
- Identify IAM misconfigurations, overprivileged roles, and access policy issues
- Test cloud storage (S3, Azure Blob, GCS) for data exposure
- Assess serverless function security and container escape risks
Consider alternatives when:
- You need AWS-specific deep testing (use AWS Penetration Expert)
- You need cloud security posture management (use Prowler, ScoutSuite)
- You need compliance auditing (use cloud-native compliance tools)
Quick Start
```bash
pip install boto3 azure-identity azure-mgmt-resource google-cloud-storage
pip install scoutsuite prowler
```
```python
import json
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class CloudFinding:
    provider: str
    service: str
    resource: str
    severity: str
    title: str
    description: str
    evidence: str = ""
    remediation: str = ""

class MultiCloudAssessor:
    """Multi-cloud security assessment framework."""

    def __init__(self):
        self.findings: List[CloudFinding] = []

    def assess_aws(self, session):
        """Run AWS security checks."""
        # Check S3 public access
        s3 = session.client('s3')
        try:
            buckets = s3.list_buckets()['Buckets']
            for bucket in buckets:
                try:
                    pub_access = s3.get_public_access_block(Bucket=bucket['Name'])
                    config = pub_access['PublicAccessBlockConfiguration']
                    if not all([
                        config['BlockPublicAcls'],
                        config['IgnorePublicAcls'],
                        config['BlockPublicPolicy'],
                        config['RestrictPublicBuckets']
                    ]):
                        self.findings.append(CloudFinding(
                            provider='AWS', service='S3',
                            resource=bucket['Name'], severity='HIGH',
                            title='S3 bucket public access block not fully enabled',
                            description='Public access block settings are partially disabled',
                            remediation='Enable all four public access block settings'
                        ))
                except Exception:
                    self.findings.append(CloudFinding(
                        provider='AWS', service='S3',
                        resource=bucket['Name'], severity='MEDIUM',
                        title='S3 public access block not configured',
                        description='No public access block configuration found',
                        remediation='Configure public access block on the bucket'
                    ))
        except Exception as e:
            print(f"AWS S3 check failed: {e}")

        # Check for IMDSv1
        ec2 = session.client('ec2')
        try:
            instances = ec2.describe_instances()
            for reservation in instances['Reservations']:
                for instance in reservation['Instances']:
                    metadata_options = instance.get('MetadataOptions', {})
                    if metadata_options.get('HttpTokens') != 'required':
                        self.findings.append(CloudFinding(
                            provider='AWS', service='EC2',
                            resource=instance['InstanceId'], severity='HIGH',
                            title='IMDSv1 enabled (SSRF risk)',
                            description='Instance allows IMDSv1 which is exploitable via SSRF',
                            remediation='Set HttpTokens to "required" to enforce IMDSv2'
                        ))
        except Exception as e:
            print(f"AWS EC2 check failed: {e}")

    def generate_report(self):
        """Generate assessment report."""
        by_severity = {}
        for f in self.findings:
            by_severity.setdefault(f.severity, []).append(f)
        print(f"\n{'='*60}")
        print("CLOUD SECURITY ASSESSMENT REPORT")
        print(f"{'='*60}")
        print(f"Total findings: {len(self.findings)}")
        for sev in ['CRITICAL', 'HIGH', 'MEDIUM', 'LOW', 'INFO']:
            count = len(by_severity.get(sev, []))
            if count:
                print(f"  {sev}: {count}")
        for sev in ['CRITICAL', 'HIGH', 'MEDIUM']:
            findings = by_severity.get(sev, [])
            if findings:
                print(f"\n--- {sev} FINDINGS ---")
                for f in findings:
                    print(f"\n[{f.provider}/{f.service}] {f.title}")
                    print(f"  Resource: {f.resource}")
                    print(f"  {f.description}")
                    if f.remediation:
                        print(f"  Fix: {f.remediation}")

# assessor = MultiCloudAssessor()
# assessor.assess_aws(boto3.Session(profile_name='pentest'))
# assessor.generate_report()
```
Core Concepts
Cloud Attack Surface Comparison
| Attack Vector | AWS | Azure | GCP |
|---|---|---|---|
| Metadata endpoint | 169.254.169.254 (IMDSv1/v2) | 169.254.169.254 (IMDS) | metadata.google.internal |
| Identity service | IAM (users, roles, policies) | Entra ID (RBAC, PIM) | IAM (service accounts) |
| Storage exposure | S3 (ACLs, policies, public) | Blob (SAS tokens, public) | GCS (ACLs, IAM) |
| Serverless | Lambda (env vars, roles) | Functions (managed identity) | Cloud Functions (SA) |
| Key management | KMS, Secrets Manager | Key Vault | KMS, Secret Manager |
| Logging | CloudTrail, CloudWatch | Monitor, Sentinel | Cloud Audit Logs |
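The metadata-endpoint row of the table can be encoded as request shapes for tooling. A minimal sketch using the documented endpoints and headers (no requests are sent here; the `ssrf_resilient` helper is an illustrative heuristic, not an official check):

```python
# Request shapes for each provider's instance metadata service.
# AWS IMDSv2 needs a token fetched via PUT; Azure and GCP require a
# custom header on every request.
IMDS_REQUESTS = {
    'aws_imdsv1': {  # legacy path: plain GET, no headers (SSRF-friendly)
        'method': 'GET',
        'url': 'http://169.254.169.254/latest/meta-data/',
        'headers': {},
    },
    'aws_imdsv2_token': {
        'method': 'PUT',
        'url': 'http://169.254.169.254/latest/api/token',
        'headers': {'X-aws-ec2-metadata-token-ttl-seconds': '21600'},
    },
    'azure': {
        'method': 'GET',
        'url': 'http://169.254.169.254/metadata/instance?api-version=2021-02-01',
        'headers': {'Metadata': 'true'},
    },
    'gcp': {
        'method': 'GET',
        'url': 'http://metadata.google.internal/computeMetadata/v1/?recursive=true',
        'headers': {'Metadata-Flavor': 'Google'},
    },
}

def ssrf_resilient(req: dict) -> bool:
    """Rough heuristic: endpoints that demand a non-GET verb or a custom
    header are harder to reach through naive GET-based SSRF."""
    return req['method'] != 'GET' or bool(req['headers'])
```

This is why IMDSv1 findings are rated HIGH in the Quick Start code: it is the only shape above reachable with a bare GET.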
Configuration
| Parameter | Description | Default |
|---|---|---|
| providers | Cloud providers to assess (aws, azure, gcp) | All configured |
| regions | Regions to test | All available |
| check_categories | Security check categories | All |
| severity_threshold | Minimum severity to report | "LOW" |
| output_format | Report format (json, html, pdf) | "json" |
| parallel_checks | Run checks in parallel | true |
| skip_destructive | Skip tests that modify resources | true |
| evidence_collection | Capture evidence screenshots/outputs | true |
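The parameters above can be mirrored in a small config object so checks share one source of truth. A sketch with hypothetical names (`AssessmentConfig` and `passes_threshold` are illustrations, not part of any published API):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AssessmentConfig:
    providers: Optional[List[str]] = None        # None = all configured
    regions: Optional[List[str]] = None          # None = all available
    check_categories: Optional[List[str]] = None # None = all
    severity_threshold: str = "LOW"
    output_format: str = "json"
    parallel_checks: bool = True
    skip_destructive: bool = True
    evidence_collection: bool = True

# Ordered low-to-high so threshold comparison is a simple index check
SEVERITY_ORDER = ['INFO', 'LOW', 'MEDIUM', 'HIGH', 'CRITICAL']

def passes_threshold(severity: str, cfg: AssessmentConfig) -> bool:
    """Return True if a finding at `severity` should appear in the report."""
    return SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(cfg.severity_threshold)
```

With the default threshold of "LOW", INFO findings are suppressed and everything else is reported.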
Best Practices
- Map the full cloud attack surface before testing — Use ScoutSuite or Prowler for automated inventory of all cloud resources, identities, and configurations. This provides a comprehensive view of what needs testing and prevents blind spots in regions or services you didn't know existed.
- Test cross-cloud trust boundaries — Many organizations have trust relationships between cloud providers (AWS roles that trust Azure AD, GCP service accounts linked to AWS). These trust boundaries are often the weakest link and enable lateral movement between clouds.
- Focus on identity and access management first — IAM misconfigurations are the most common and impactful cloud security issues. Overprivileged roles, unused credentials, and excessive trust policies enable privilege escalation. Start every cloud pentest with IAM enumeration.
- Check for secrets in environment variables and metadata — Serverless functions, containers, and VMs often have secrets (API keys, database credentials, tokens) exposed through environment variables, instance metadata, or configuration files. These are low-hanging fruit in cloud assessments.
- Coordinate with cloud provider notification requirements — AWS, Azure, and GCP each publish penetration testing policies. AWS no longer requires pre-approval for most services but prohibits testing shared infrastructure. Azure and GCP have similar policies. Review and comply with each provider's rules before testing.
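For the IAM-first practice, the highest-value quick check is spotting wildcard grants in policy documents. A minimal sketch (the `audit_policy` helper is hypothetical; real engagements would combine this with IAM policy simulation or Prowler's IAM checks):

```python
import json

def audit_policy(policy_doc: dict) -> list:
    """Flag Allow statements that pair wildcard actions with wildcard
    resources, the classic overprivileged-role pattern."""
    issues = []
    statements = policy_doc.get('Statement', [])
    if isinstance(statements, dict):  # single-statement policies are legal
        statements = [statements]
    for stmt in statements:
        if stmt.get('Effect') != 'Allow':
            continue
        actions = stmt.get('Action', [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get('Resource', [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == '*' or a.endswith(':*') for a in actions) and '*' in resources:
            issues.append('wildcard Action on wildcard Resource')
    return issues

admin_policy = json.loads('''{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}]
}''')
scoped_policy = json.loads('''{
  "Version": "2012-10-17",
  "Statement": [{"Effect": "Allow", "Action": "s3:GetObject",
                 "Resource": "arn:aws:s3:::corp-logs/*"}]
}''')
```

`audit_policy(admin_policy)` flags the statement, while the scoped policy passes clean.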
Common Issues
- Test credentials lack sufficient permissions for comprehensive assessment — Cloud pentesting requires read access to IAM, storage, compute, networking, and logging services. Provide a checklist of required permissions to the client before the engagement and verify access before starting.
- Automated scanners generate false positives on shared responsibility items — Cloud scanners may flag infrastructure-level controls that are the cloud provider's responsibility (e.g., physical security, hypervisor patching). Understand the shared responsibility model for each provider and filter findings accordingly.
- Cloud resources change during the testing window — Cloud environments are dynamic; resources are created and destroyed constantly. Take snapshots of the environment state at the start of testing and note any changes. Coordinate with DevOps teams to avoid testing resources that are being actively modified.
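The drift issue above can be handled by snapshotting resource inventories at the start and end of the window and diffing them, so findings can be matched to resources that still exist. A sketch with a hypothetical helper and made-up resource IDs:

```python
def diff_inventory(start: set, end: set) -> dict:
    """Compare resource inventories captured at the start and end of the
    testing window; resources that vanished mid-test need re-verification."""
    return {
        'created': sorted(end - start),
        'destroyed': sorted(start - end),
        'stable': sorted(start & end),
    }

# Inventory at engagement start vs. engagement end (fabricated IDs)
drift = diff_inventory(
    {'i-0abc', 'i-0def', 'bucket/corp-logs'},
    {'i-0abc', 'bucket/corp-logs', 'i-0new'},
)
print(drift)
# → {'created': ['i-0new'], 'destroyed': ['i-0def'],
#    'stable': ['bucket/corp-logs', 'i-0abc']}
```

Findings against destroyed resources should be re-tested or annotated in the report rather than dropped silently.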