# Get Available Dynamic
A scientific computing skill for detecting available computational resources and generating optimization recommendations. Get Available Dynamic helps you inventory CPU, GPU, memory, and storage resources to make informed decisions about parallel computing, batch sizes, and resource allocation for scientific workloads.
## When to Use This Skill
Choose Get Available Dynamic when:
- Assessing available compute resources before running large analyses
- Optimizing batch sizes and parallelism for your hardware
- Detecting GPU availability for deep learning workloads
- Generating resource allocation strategies for pipeline execution
Consider alternatives when:
- You need cluster job scheduling (use SLURM, PBS, or Kubernetes)
- You need cloud resource provisioning (use Terraform or CloudFormation)
- You need system monitoring over time (use Prometheus or Grafana)
- You need simple system info (use `psutil` directly)
## Quick Start

```bash
claude "Detect my available resources and recommend settings for a large analysis"
```
```python
import platform

import psutil


def get_system_resources():
    """Detect available computational resources."""
    resources = {
        "platform": {
            "system": platform.system(),
            "machine": platform.machine(),
            "python": platform.python_version(),
        },
        "cpu": {
            "physical_cores": psutil.cpu_count(logical=False),
            "logical_cores": psutil.cpu_count(logical=True),
            "frequency_mhz": psutil.cpu_freq().current if psutil.cpu_freq() else None,
        },
        "memory": {
            "total_gb": psutil.virtual_memory().total / (1024**3),
            "available_gb": psutil.virtual_memory().available / (1024**3),
            "used_percent": psutil.virtual_memory().percent,
        },
        "disk": {
            "total_gb": psutil.disk_usage("/").total / (1024**3),
            "free_gb": psutil.disk_usage("/").free / (1024**3),
        },
    }

    # Check GPU availability (optional: requires PyTorch)
    try:
        import torch

        available = torch.cuda.is_available()
        count = torch.cuda.device_count() if available else 0
        resources["gpu"] = {
            "available": available,
            "count": count,
            "names": [torch.cuda.get_device_name(i) for i in range(count)],
            "memory_gb": [
                torch.cuda.get_device_properties(i).total_memory / (1024**3)
                for i in range(count)
            ],
        }
    except ImportError:
        resources["gpu"] = {"available": False, "note": "PyTorch not installed"}

    return resources


resources = get_system_resources()
for category, info in resources.items():
    print(f"\n{category.upper()}:")
    for key, val in info.items():
        print(f"  {key}: {val}")
```
## Core Concepts

### Resource Detection
| Resource | Detection Method | Key Metric |
|---|---|---|
| CPU cores | `psutil.cpu_count()` | Physical vs. logical cores |
| Memory | `psutil.virtual_memory()` | Available (not total) GB |
| GPU | `torch.cuda` / `nvidia-smi` | VRAM and compute capability |
| Disk | `psutil.disk_usage()` | Free space for temp files |
| Network | `psutil.net_if_stats()` | Bandwidth for distributed computing |
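When PyTorch is not installed, GPU presence can still be probed by shelling out to `nvidia-smi`. A minimal sketch (the function name is illustrative; it returns an empty list when the tool is missing or fails):

```python
import subprocess


def detect_gpus_via_nvidia_smi():
    """Probe GPUs with nvidia-smi; return [] if the tool is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    # Each line looks like: "NVIDIA A100-SXM4-40GB, 40960 MiB"
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]


print(detect_gpus_via_nvidia_smi())
```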
### Recommendation Engine
```python
def recommend_settings(resources):
    """Generate optimization recommendations."""
    recs = {}
    cores = resources["cpu"]["physical_cores"]
    mem_gb = resources["memory"]["available_gb"]

    # Parallel workers: leave one core for the OS
    recs["n_workers"] = max(1, cores - 1)
    recs["threads_per_worker"] = 2 if cores >= 8 else 1

    # Memory-based batch sizing: leave 30% headroom
    recs["max_batch_memory_gb"] = mem_gb * 0.7

    # Dask/pandas recommendation
    if mem_gb < 8:
        recs["data_engine"] = "pandas (data fits in memory)"
    elif mem_gb < 32:
        recs["data_engine"] = "pandas with chunking or Dask"
    else:
        recs["data_engine"] = "Dask or Vaex for out-of-core"

    # GPU recommendations
    if resources.get("gpu", {}).get("available"):
        gpu_mem = resources["gpu"]["memory_gb"][0]
        recs["gpu_batch_size"] = int(gpu_mem * 128)  # ~128 samples per GB of VRAM
        recs["model_precision"] = "fp16" if gpu_mem < 16 else "fp32"
    else:
        recs["gpu_note"] = "No GPU — use CPU-optimized algorithms"

    return recs
```
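The two core rules above (reserve one core for the OS, keep 30% memory headroom) can be exercised standalone against live `psutil` readings; a quick sketch:

```python
import psutil

# Same worker rule as recommend_settings: all physical cores minus one
cores = psutil.cpu_count(logical=False) or psutil.cpu_count()
n_workers = max(1, cores - 1)

# Same memory rule: budget 70% of *available* (not total) memory
avail_gb = psutil.virtual_memory().available / (1024**3)
batch_budget_gb = avail_gb * 0.7

print(f"{n_workers} workers, {batch_budget_gb:.1f} GB batch budget")
```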
### Pipeline Configuration
```python
def configure_pipeline(resources, data_size_gb):
    """Auto-configure pipeline based on resources and data size."""
    config = {}
    mem = resources["memory"]["available_gb"]
    cores = resources["cpu"]["physical_cores"]

    if data_size_gb < mem * 0.5:
        config["strategy"] = "in-memory"
        config["chunk_size"] = None
    elif data_size_gb < mem * 2:
        config["strategy"] = "chunked"
        config["chunk_size_gb"] = mem * 0.3
    else:
        config["strategy"] = "out-of-core"
        config["chunk_size_gb"] = mem * 0.2

    config["parallel_jobs"] = min(cores - 1, 8)
    config["temp_dir"] = (
        "/tmp" if resources["disk"]["free_gb"] > data_size_gb * 2 else "./temp"
    )
    return config
```
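To make the thresholds concrete, here is the strategy selection re-sketched as a standalone helper (it mirrors, not replaces, `configure_pipeline`) evaluated for a hypothetical machine with 32 GB of available memory:

```python
def pick_strategy(mem_gb, data_size_gb):
    """Mirror the strategy thresholds used by configure_pipeline."""
    if data_size_gb < mem_gb * 0.5:
        return "in-memory"
    if data_size_gb < mem_gb * 2:
        return "chunked"
    return "out-of-core"


print(pick_strategy(32, 10))   # data < 16 GB  -> in-memory
print(pick_strategy(32, 50))   # 16..64 GB     -> chunked
print(pick_strategy(32, 100))  # data >= 64 GB -> out-of-core
```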
## Configuration
| Parameter | Description | Default |
|---|---|---|
| `memory_headroom` | Percentage of memory reserved for the OS | 30% |
| `cpu_reserve` | Cores to leave for the OS | 1 |
| `gpu_memory_fraction` | Maximum fraction of GPU memory to use | 0.9 |
| `disk_headroom_gb` | Minimum free disk space to maintain (GB) | 10 |
| `check_interval` | Resource check frequency | On demand |
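One way to carry these parameters through a pipeline is a small dataclass; the field names below simply mirror the table and are an illustrative sketch, not a fixed API:

```python
from dataclasses import dataclass


@dataclass
class ResourceConfig:
    memory_headroom: float = 0.30      # fraction of memory reserved for the OS
    cpu_reserve: int = 1               # cores left for the OS
    gpu_memory_fraction: float = 0.9   # max fraction of GPU memory to use
    disk_headroom_gb: float = 10.0     # minimum free disk to maintain
    check_interval: str = "on-demand"  # when to re-check resources


cfg = ResourceConfig()
print(cfg)
```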
## Best Practices
- **Check available memory, not total memory.** Total memory includes memory used by other processes. Base batch sizes and partition counts on `available` memory to avoid OOM errors during execution.
- **Leave CPU cores for system processes.** Use `physical_cores - 1` for parallel workers. Using all cores can make the system unresponsive and may slow overall throughput due to context switching.
- **Profile actual memory usage before scaling.** Run your pipeline on a small data subset and measure peak memory with `psutil.Process().memory_info().rss`. Use this measurement to calculate the maximum data size for full-scale runs.
- **Use GPU memory for batch size calculations.** Deep learning batch sizes should be calculated from GPU VRAM, not system RAM. A rough guide: `batch_size = gpu_memory_gb * 128` for typical CNN models, adjusted for model size and input dimensions.
- **Log resource utilization during execution.** Monitor CPU, memory, and GPU usage during pipeline runs. This data helps optimize future runs: underutilized resources indicate room for larger batches or more parallelism.
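The profiling advice above can be sketched as a simple before/after RSS measurement (the list allocation is a stand-in for a real workload; for true peak tracking, `resource.getrusage` or `tracemalloc` may be better fits):

```python
import psutil

proc = psutil.Process()

rss_before = proc.memory_info().rss
payload = [0] * 5_000_000  # stand-in for the memory-hungry step being profiled
rss_after = proc.memory_info().rss

print(f"RSS before: {rss_before / 1e6:.0f} MB, after: {rss_after / 1e6:.0f} MB")
del payload
```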
## Common Issues
**Pipeline crashes with OOM despite checking resources.** Memory usage can spike during intermediate operations (sorting, joining, model loading). Set memory headroom to 30-40% instead of using all available memory, and monitor peak usage, not average usage.

**GPU detected but CUDA operations fail.** The GPU may be in use by another process or have insufficient free memory. Check `nvidia-smi` for current GPU utilization, and set `CUDA_VISIBLE_DEVICES` to select a specific GPU.

**Resource detection differs between local and cluster environments.** Cluster nodes may have resource limits imposed by the job scheduler (SLURM, PBS) that differ from the physical resources. Check the `SLURM_CPUS_PER_TASK` and `SLURM_MEM_PER_NODE` environment variables on clusters.
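On a cluster the scheduler's allocation should win over the physical counts; a minimal sketch that prefers `SLURM_CPUS_PER_TASK` when it is set and falls back to `psutil` otherwise:

```python
import os

import psutil


def effective_cpu_count():
    """Prefer the scheduler's CPU allocation over the physical core count."""
    slurm = os.environ.get("SLURM_CPUS_PER_TASK")
    if slurm and slurm.isdigit():
        return int(slurm)
    return psutil.cpu_count(logical=False) or psutil.cpu_count() or 1


print(effective_cpu_count())
```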