Smart Terraform Code Generation Suite
Boost productivity by intelligently generating and validating infrastructure as code. Built for Claude Code with best practices and real-world patterns.
Terraform Code Generation Suite
Automated Terraform infrastructure-as-code generation toolkit covering module creation, provider configuration, state management, and cloud resource provisioning patterns for AWS, GCP, and Azure.
When to Use This Skill
Choose Terraform Code Generation when:
- Provisioning cloud infrastructure with reproducible configurations
- Creating reusable Terraform modules for team standards
- Migrating manual cloud setups to infrastructure-as-code
- Setting up multi-environment deployments (dev/staging/prod)
- Generating Terraform from existing cloud resources
Consider alternatives when:
- You need container orchestration — use Kubernetes manifests
- You're AWS-only and prefer native tools — consider AWS CDK
- You want to define infrastructure in a general-purpose language — consider Pulumi
Quick Start
```bash
# Activate Terraform generation
claude skill activate smart-terraform-code-generation-suite

# Generate infrastructure
claude "Generate Terraform for a production VPC with public/private subnets on AWS"

# Create module
claude "Create a reusable Terraform module for an ECS Fargate service"
```
Example: AWS Infrastructure
```hcl
# main.tf — Production VPC Setup

terraform {
  required_version = ">= 1.7"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "myapp-terraform-state"
    key            = "production/vpc/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment = var.environment
      ManagedBy   = "terraform"
      Project     = var.project_name
    }
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.5.0"

  name = "${var.project_name}-${var.environment}"
  cidr = var.vpc_cidr

  azs             = data.aws_availability_zones.available.names
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway   = true
  single_nat_gateway   = var.environment != "production"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "${var.project_name}-vpc"
  }
}

# data.tf — referenced by the module above
data "aws_availability_zones" "available" {
  state = "available"
}

# variables.tf

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "environment" {
  type        = string
  description = "Environment name (dev, staging, production)"

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}

variable "project_name" {
  type = string
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

# Subnet CIDRs referenced above; defaults are illustrative
variable "private_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
```
Core Concepts
Terraform Project Structure
| File | Purpose | Contents |
|---|---|---|
| `main.tf` | Primary resource definitions | Providers, modules, resources |
| `variables.tf` | Input variable declarations | Variable blocks with types and defaults |
| `outputs.tf` | Output value definitions | Exported resource attributes |
| `terraform.tf` | Backend and provider versions | Required providers, state backend |
| `locals.tf` | Local computed values | Derived values, tag maps |
| `data.tf` | Data source lookups | AMIs, AZs, account info |
| `*.tfvars` | Variable values per environment | Environment-specific settings |
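To illustrate the split above, a minimal `outputs.tf` for the VPC example might look like this (the output names are illustrative; the module attributes come from `terraform-aws-modules/vpc/aws`):

```hcl
# outputs.tf — export attributes for other stacks and tooling to consume
output "vpc_id" {
  description = "ID of the VPC"
  value       = module.vpc.vpc_id
}

output "private_subnet_ids" {
  description = "IDs of the private subnets"
  value       = module.vpc.private_subnets
}
```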
State Management
| Strategy | Description | Best For |
|---|---|---|
| S3 + DynamoDB | Remote state with locking | AWS teams |
| GCS | Google Cloud Storage backend | GCP teams |
| Terraform Cloud | Managed state + runs | Enterprise teams |
| Local | File-based state (dev only) | Learning, prototyping |
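For GCP teams, the equivalent remote backend is a GCS bucket; GCS provides state locking natively, so no separate lock table is needed. A minimal sketch (bucket name and prefix are placeholders):

```hcl
terraform {
  backend "gcs" {
    # Bucket must already exist; enable object versioning for state recovery
    bucket = "myapp-terraform-state"
    prefix = "production/vpc"
  }
}
```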
```bash
# Common Terraform workflows

# Initialize project
terraform init

# Preview changes
terraform plan -var-file=production.tfvars -out=plan.tfplan

# Apply changes
terraform apply plan.tfplan

# Import existing resource
terraform import aws_s3_bucket.existing my-bucket-name

# Generate from existing infrastructure
terraform plan -generate-config-out=generated.tf

# State management
terraform state list
terraform state show aws_instance.web
terraform state mv aws_instance.old aws_instance.new
```
Configuration
| Parameter | Description | Default |
|---|---|---|
| `cloud_provider` | Target cloud: `aws`, `gcp`, `azure` | `aws` |
| `state_backend` | State storage: `s3`, `gcs`, `terraform-cloud`, `local` | `s3` |
| `environment_strategy` | Multi-env: `workspaces`, `directories`, `terragrunt` | `directories` |
| `module_style` | Module organization: `monorepo`, `registry`, `local` | `local` |
| `naming_convention` | Resource naming pattern | `{project}-{env}-{resource}` |
| `lock_provider_versions` | Pin provider versions | `true` |
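The `{project}-{env}-{resource}` naming convention is easiest to enforce by centralizing the prefix in a `locals` block so every resource derives its name the same way. A sketch, reusing the variable names from the earlier example (the `assets` bucket is hypothetical):

```hcl
locals {
  # Shared prefix: e.g. "myapp-production"
  name_prefix = "${var.project_name}-${var.environment}"
}

resource "aws_s3_bucket" "assets" {
  # Yields e.g. "myapp-production-assets"
  bucket = "${local.name_prefix}-assets"
}
```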
Best Practices
- **Use remote state with locking from day one** — Local state files can be lost, conflict in team settings, and have no concurrent access protection. Set up S3 + DynamoDB (AWS) or an equivalent backend before writing any resources.
- **Pin provider and module versions explicitly** — Use `version = "~> 5.0"` for providers and exact versions for modules. Unpinned providers can break your infrastructure when new versions introduce breaking changes during `terraform init`.
- **Separate state files by blast radius** — Don't put your VPC, database, and application in the same state file. Separate by infrastructure layer (networking, data, compute) so a bad apply to the application layer can't accidentally destroy the database.
- **Use variable validation blocks** — Add `validation` blocks to variables to catch invalid inputs before apply. Validate CIDR ranges, environment names, instance types, and any constrained values at plan time.
- **Tag every resource consistently using `default_tags`** — Use provider-level `default_tags` to ensure every resource gets Environment, Project, ManagedBy, and Team tags automatically. This enables cost allocation, security auditing, and resource cleanup.
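Splitting state by blast radius usually means the compute layer reads the networking layer's outputs through a `terraform_remote_state` data source instead of sharing a state file. A sketch, assuming the S3 backend and `vpc_id` output from the examples above:

```hcl
# In the compute layer: read the networking layer's state (read-only)
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "myapp-terraform-state"
    key    = "production/vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# Reference an output the networking layer exports
# (requires the networking layer to declare `output "vpc_id"`)
locals {
  vpc_id = data.terraform_remote_state.network.outputs.vpc_id
}
```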
Common Issues
**State file gets corrupted or out of sync with actual infrastructure.** Use `terraform state list` and `terraform state show` to inspect state. For drift, run `terraform plan` to detect differences. Use `terraform import` to bring existing resources under management. Never manually edit state files — use `terraform state mv` and `terraform state rm` instead.
**Terraform plan shows changes on every run despite no code changes.** This happens when resources have attributes set by the provider that aren't in your code (such as default security group rules). Use `lifecycle { ignore_changes }` for provider-managed attributes, or explicitly set them in code to match the actual state.
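For example, when an autoscaling policy (not Terraform) manages capacity, ignoring that attribute stops the perpetual diff. A sketch with required arguments elided:

```hcl
resource "aws_autoscaling_group" "web" {
  # ... min_size, max_size, etc.

  lifecycle {
    # desired_capacity is driven by scaling policies at runtime,
    # so don't treat external changes to it as drift
    ignore_changes = [desired_capacity]
  }
}
```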
**Applying changes destroys and recreates resources that should be updated in place.** Some attribute changes force resource replacement (for example, changing an EC2 instance's `ami` forces replacement, while changing `instance_type` is an in-place update requiring a stop/start). Review the plan carefully — `~` means update in place, `-/+` means destroy and recreate. Use `lifecycle { prevent_destroy = true }` on critical resources like databases.
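Guarding a database this way is a one-line lifecycle setting; any plan that would destroy the resource then fails instead of proceeding. A sketch with required arguments elided:

```hcl
resource "aws_db_instance" "primary" {
  # ... engine, instance_class, allocated_storage, etc.

  lifecycle {
    # terraform plan/apply errors out if this resource would be destroyed
    prevent_destroy = true
  }
}
```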