
Smart Terraform Code Generation Suite

Boost productivity by intelligently generating and validating infrastructure as code. Built for Claude Code with best practices and real-world patterns.

Skill · Community · devops · v1.0.0 · MIT

Terraform Code Generation Suite

Automated Terraform infrastructure-as-code generation toolkit covering module creation, provider configuration, state management, and cloud resource provisioning patterns for AWS, GCP, and Azure.

When to Use This Skill

Choose Terraform Code Generation when:

  • Provisioning cloud infrastructure with reproducible configurations
  • Creating reusable Terraform modules for team standards
  • Migrating manual cloud setups to infrastructure-as-code
  • Setting up multi-environment deployments (dev/staging/prod)
  • Generating Terraform from existing cloud resources
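For multi-environment deployments, a common layout is one root module per environment calling shared modules. This tree is a sketch; the directory and file names are illustrative:

```text
project/
├── modules/
│   └── vpc/                  # reusable module code
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── envs/
    ├── dev/
    │   ├── main.tf           # calls ../../modules/vpc
    │   └── dev.tfvars
    ├── staging/
    └── production/
        ├── main.tf
        └── production.tfvars
```

Each environment directory gets its own state backend configuration, so applies in dev can never touch production.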

Consider alternatives when:

  • Need container orchestration — use Kubernetes manifests
  • AWS-only with native tools — consider AWS CDK
  • Prefer writing infrastructure in general-purpose programming languages — consider Pulumi

Quick Start

```bash
# Activate Terraform generation
claude skill activate smart-terraform-code-generation-suite

# Generate infrastructure
claude "Generate Terraform for a production VPC with public/private subnets on AWS"

# Create module
claude "Create a reusable Terraform module for an ECS Fargate service"
```

Example: AWS Infrastructure

```hcl
# main.tf — Production VPC Setup
terraform {
  required_version = ">= 1.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "myapp-terraform-state"
    key            = "production/vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment = var.environment
      ManagedBy   = "terraform"
      Project     = var.project_name
    }
  }
}

# data.tf — availability zones referenced by the VPC module
data "aws_availability_zones" "available" {
  state = "available"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  name = "${var.project_name}-${var.environment}"
  cidr = var.vpc_cidr

  azs             = data.aws_availability_zones.available.names
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway   = true
  single_nat_gateway   = var.environment != "production"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "${var.project_name}-vpc"
  }
}

# variables.tf
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "environment" {
  type        = string
  description = "Environment name (dev, staging, production)"

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}

variable "project_name" {
  type = string
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "private_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
```

Core Concepts

Terraform Project Structure

| File | Purpose | Contents |
|------|---------|----------|
| main.tf | Primary resource definitions | Providers, modules, resources |
| variables.tf | Input variable declarations | Variable blocks with types and defaults |
| outputs.tf | Output value definitions | Exported resource attributes |
| terraform.tf | Backend and provider versions | Required providers, state backend |
| locals.tf | Local computed values | Derived values, tag maps |
| data.tf | Data source lookups | AMIs, AZs, account info |
| *.tfvars | Variable values per environment | Environment-specific settings |
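For the VPC example above, outputs.tf and locals.tf might look like this (a sketch; the output attributes follow the terraform-aws-modules/vpc module):

```hcl
# outputs.tf — export attributes other configurations need
output "vpc_id" {
  value       = module.vpc.vpc_id
  description = "ID of the provisioned VPC"
}

output "private_subnet_ids" {
  value       = module.vpc.private_subnets
  description = "IDs of the private subnets"
}

# locals.tf — derived values and shared tag maps
locals {
  name_prefix = "${var.project_name}-${var.environment}"

  common_tags = {
    Environment = var.environment
    Project     = var.project_name
    ManagedBy   = "terraform"
  }
}
```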

State Management

| Strategy | Description | Best For |
|----------|-------------|----------|
| S3 + DynamoDB | Remote state with locking | AWS teams |
| GCS | Google Cloud Storage backend | GCP teams |
| Terraform Cloud | Managed state + runs | Enterprise teams |
| Local | File-based state (dev only) | Learning, prototyping |
```bash
# Common Terraform workflows

# Initialize project
terraform init

# Preview changes
terraform plan -var-file=production.tfvars -out=plan.tfplan

# Apply changes
terraform apply plan.tfplan

# Import existing resource
terraform import aws_s3_bucket.existing my-bucket-name

# Generate config for resources declared in import blocks (Terraform >= 1.5)
terraform plan -generate-config-out=generated.tf

# State management
terraform state list
terraform state show aws_instance.web
terraform state mv aws_instance.old aws_instance.new
```
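For GCP teams, the GCS backend from the table above is configured similarly. This is a sketch with an illustrative bucket name; GCS provides state locking natively, so no separate lock table is needed:

```hcl
terraform {
  backend "gcs" {
    bucket = "myapp-terraform-state"  # hypothetical bucket name
    prefix = "production/vpc"         # state object path within the bucket
  }
}
```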

Configuration

| Parameter | Description | Default |
|-----------|-------------|---------|
| cloud_provider | Target cloud: aws, gcp, azure | aws |
| state_backend | State storage: s3, gcs, terraform-cloud, local | s3 |
| environment_strategy | Multi-env: workspaces, directories, terragrunt | directories |
| module_style | Module organization: monorepo, registry, local | local |
| naming_convention | Resource naming pattern | {project}-{env}-{resource} |
| lock_provider_versions | Pin provider versions | true |

Best Practices

  1. Use remote state with locking from day one — Local state files can be lost, conflict in team settings, and have no concurrent access protection. Set up S3+DynamoDB (AWS) or equivalent backend before writing any resources.

  2. Pin provider and module versions explicitly — Use version = "~> 5.0" for providers and exact versions for modules. Unpinned providers can break your infrastructure when new versions introduce breaking changes during terraform init.

  3. Separate state files by blast radius — Don't put your VPC, database, and application in the same state file. Separate by infrastructure layer (networking, data, compute) so a bad apply to the application layer can't accidentally destroy the database.

  4. Use variable validation blocks — Add validation blocks to variables to catch invalid inputs before apply. Validate CIDR ranges, environment names, instance types, and any constrained values at plan time.

  5. Tag every resource consistently using default_tags — Use provider-level default_tags to ensure every resource gets Environment, Project, ManagedBy, and Team tags automatically. This enables cost allocation, security auditing, and resource cleanup.
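Separating state by blast radius (practice 3) usually means one root module per layer, each with its own backend key; layers read each other's outputs through the terraform_remote_state data source. The keys below are illustrative:

```hcl
# Each layer is a separate root module with its own state key,
# so an apply in compute/ can never touch the database:
#   networking/ → key = "production/networking/terraform.tfstate"
#   data/       → key = "production/data/terraform.tfstate"
#   compute/    → key = "production/compute/terraform.tfstate"

# compute/data.tf — read the networking layer's exported outputs
data "terraform_remote_state" "networking" {
  backend = "s3"
  config = {
    bucket = "myapp-terraform-state"
    key    = "production/networking/terraform.tfstate"
    region = "us-east-1"
  }
}

# Referenced as: data.terraform_remote_state.networking.outputs.private_subnet_ids
```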

Common Issues

State file gets corrupted or out of sync with actual infrastructure. Use terraform state list and terraform state show to inspect state. For drift, run terraform plan to detect differences. Use terraform import to bring existing resources under management. Never manually edit state files — use terraform state mv and terraform state rm.

Terraform plan shows changes on every run despite no code changes. This happens when resources have attributes set by the provider that aren't in your code (like default security group rules). Use lifecycle { ignore_changes } for provider-managed attributes, or explicitly set them in code to match the actual state.
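A sketch of the ignore_changes fix (the resource and attribute names are illustrative):

```hcl
resource "aws_instance" "web" {
  # ... instance configuration ...

  lifecycle {
    # Ignore attributes managed outside Terraform, e.g. an AMI
    # rotated by a patching pipeline or a tag set by another tool
    ignore_changes = [ami, tags["LastPatched"]]
  }
}
```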

Applying changes destroys and recreates resources that should be updated in-place. Some attribute changes force resource replacement (for example, changing an EC2 instance's AMI forces recreation, while changing its instance type is an in-place update via stop/start). Review the plan carefully — ~ means update in-place, -/+ means destroy and recreate. Use lifecycle { prevent_destroy = true } on critical resources like databases.
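Guarding a critical resource with prevent_destroy looks like this (resource name is illustrative):

```hcl
resource "aws_db_instance" "primary" {
  # ... database configuration ...

  lifecycle {
    # Any plan that would destroy this resource fails instead of applying
    prevent_destroy = true
  }
}
```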
