
Setup Kubernetes Deployment Processor

A command for generating comprehensive Kubernetes deployment configurations. Includes structured workflows, validation checks, and reusable deployment patterns.

Command · Cliptics · deployment · v1.0.0 · MIT


Generate complete Kubernetes deployment manifests, services, and ingress configurations for your application.

When to Use This Command

Run this command when you need to:

  • Create production-grade Kubernetes manifests for a containerized application from scratch
  • Configure auto-scaling, health probes, and resource limits for a Kubernetes deployment
  • Set up ingress, TLS, and service mesh configurations for cluster networking

Consider alternatives when:

  • You are deploying to a serverless platform or PaaS that abstracts Kubernetes away
  • You already use Helm charts or Kustomize overlays and need to modify existing templates

Quick Start

Configuration

name: setup-kubernetes-deployment-processor
type: command
category: deployment

Example Invocation

claude command:run setup-kubernetes-deployment-processor --app api-server --namespace production

Example Output

Application: api-server
Namespace: production
Detected: Dockerfile present, port 8080 exposed

Generated Kubernetes Manifests:
  [+] deployment.yaml (3 replicas, resource limits set)
  [+] service.yaml (ClusterIP, port 8080)
  [+] ingress.yaml (TLS termination, host: api.example.com)
  [+] hpa.yaml (min: 2, max: 10, target CPU: 70%)
  [+] configmap.yaml (12 configuration entries)
  [+] secret.yaml (template with 4 secret references)
  [+] networkpolicy.yaml (ingress from frontend namespace only)
  [+] pdb.yaml (minAvailable: 1)

Resource Estimates:
  CPU request/limit: 250m / 1000m per pod
  Memory request/limit: 256Mi / 512Mi per pod
  Total cluster overhead: 750m CPU, 768Mi memory (3 replicas)

Apply with: kubectl apply -f k8s/ -n production

Core Concepts

Kubernetes Manifest Overview

  Deployment: Pod template, replica count, update strategy, and resource limits
  Service: ClusterIP or LoadBalancer exposing pods internally or externally
  Ingress: HTTP routing, TLS termination, and host-based virtual hosting
  HPA: Horizontal Pod Autoscaler scaling on CPU, memory, or custom metrics
  Security: NetworkPolicy, Pod Security Standards, RBAC, and secrets management
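
As a reference for the shapes these manifests take, here is a minimal sketch of the Deployment and Service pairing, using the api-server example from the output above (the image name is illustrative):

```yaml
# Minimal Deployment: 3 replicas with resource requests and limits set.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: registry.example.com/api-server:1.0.0  # illustrative image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 1000m
              memory: 512Mi
---
# ClusterIP Service routing port 8080 to pods matching the app label.
apiVersion: v1
kind: Service
metadata:
  name: api-server
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-server
  ports:
    - port: 8080
      targetPort: 8080
```

The Service selector must match the pod template labels exactly; that chain is what the 502 troubleshooting entry below checks.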

Manifest Generation Workflow

  Application Analysis
        |
        v
  +--------------------+
  | Detect Container   |---> Dockerfile, ports, env vars
  +--------------------+
        |
        v
  +--------------------+
  | Resource Estimation|---> CPU, memory based on runtime
  +--------------------+
        |
        v
  +--------------------+
  | Generate Manifests |---> Deployment, Service, Ingress
  +--------------------+
        |
        v
  +--------------------+
  | Security Policies  |---> NetworkPolicy, RBAC, PDB
  +--------------------+
        |
        v
  +--------------------+
  | Validation         |---> kubeval / kubeconform check
  +--------------------+
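
The final validation step can also be run by hand. Assuming kubeconform is installed, a sketch of checking the generated k8s/ directory:

```shell
# Validate all generated manifests against the Kubernetes schemas.
# -strict rejects unknown fields; -summary prints a pass/fail count.
kubeconform -strict -summary k8s/
```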

Configuration

  app (string, required): Application name used for labels, selectors, and resource naming
  namespace (string, default: default): Kubernetes namespace to deploy into
  replicas (integer, default: 3): Initial replica count for the deployment
  ingress_host (string, default: none): Hostname for ingress routing and the TLS certificate
  storage (string, default: none): Persistent volume size if the application requires storage
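
A hypothetical invocation exercising every parameter (the host and storage values are illustrative):

```shell
claude command:run setup-kubernetes-deployment-processor \
  --app api-server \
  --namespace production \
  --replicas 3 \
  --ingress_host api.example.com \
  --storage 10Gi
```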

Best Practices

  1. Set Resource Requests and Limits - Always define CPU and memory requests and limits. Without them, the scheduler cannot make informed placement decisions and pods risk being OOMKilled or starving neighbors.

  2. Use Readiness and Liveness Probes - Configure readiness probes to prevent traffic from reaching unready pods, and liveness probes to restart stuck processes. Use startup probes for applications with slow initialization.

  3. Apply Pod Disruption Budgets - Define a PodDisruptionBudget to ensure at least one pod remains available during voluntary disruptions like node upgrades. Without a PDB, cluster maintenance can take down all replicas simultaneously.

  4. Separate Configuration from Code - Store all configuration in ConfigMaps and secrets rather than baking values into container images. This allows the same image to run across development, staging, and production environments.

  5. Label Everything Consistently - Use standardized labels (app, version, component, managed-by) on all resources. Consistent labeling enables efficient querying, monitoring, and policy enforcement across the cluster.
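
Several of the practices above land in the pod template. A sketch combining resource requests and limits, all three probe types, a PodDisruptionBudget, and the standardized labels (probe paths and thresholds are illustrative):

```yaml
# Pod template fragment: consistent labels, resources, and probes.
metadata:
  labels:
    app: api-server
    version: "1.0.0"
    component: backend
    managed-by: setup-kubernetes-deployment-processor
spec:
  containers:
    - name: api-server
      image: registry.example.com/api-server:1.0.0  # illustrative
      resources:
        requests: { cpu: 250m, memory: 256Mi }
        limits: { cpu: 1000m, memory: 512Mi }
      startupProbe:            # tolerates slow initialization
        httpGet: { path: /healthz, port: 8080 }
        failureThreshold: 30
        periodSeconds: 2
      readinessProbe:          # gates traffic until the pod is ready
        httpGet: { path: /readyz, port: 8080 }
        periodSeconds: 5
      livenessProbe:           # restarts a stuck process
        httpGet: { path: /healthz, port: 8080 }
        periodSeconds: 10
---
# PodDisruptionBudget: keep at least one pod up during node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-server
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: api-server
```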

Common Issues

  1. Pods Stuck in CrashLoopBackOff - The application is crashing on startup. Check container logs with kubectl logs, verify environment variables are set correctly, and ensure the health check endpoint is responding before the probe timeout.

  2. Ingress Returns 502 Bad Gateway - The service selector does not match pod labels, or the target port does not match the container port. Verify that the service port, target port, and container port form a consistent chain.

  3. HPA Not Scaling - The metrics-server is not installed or the deployment does not have resource requests defined. HPA requires resource requests to calculate utilization percentages for scaling decisions.
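
For each of the issues above, these read-only kubectl commands are the usual starting points (resource names follow the api-server example and are illustrative):

```shell
# 1. CrashLoopBackOff: inspect logs from the current and previous container runs.
kubectl logs deploy/api-server -n production
kubectl logs deploy/api-server -n production --previous

# 2. 502 Bad Gateway: confirm the service actually selects ready endpoints.
kubectl get endpoints api-server -n production
kubectl describe service api-server -n production

# 3. HPA not scaling: check that metrics are flowing and targets resolve.
kubectl get hpa -n production
kubectl top pods -n production
```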
