
Dynamic Programmatic Video Creation Workshop

Professional-grade skill for creating videos programmatically with React components. Built for Claude Code with best practices and real-world patterns.

Skill · Community · creative · v1.0.0 · MIT

Programmatic Video Creation Workshop

Automated video generation toolkit for creating videos programmatically using FFmpeg, Remotion, and code-based approaches for data visualizations, social media content, and dynamic video templates.

When to Use This Skill

Choose Programmatic Video Creation when:

  • Generating data-driven videos from datasets or APIs
  • Creating templated social media video content at scale
  • Building automated video rendering pipelines
  • Making code-based animated explanations or tutorials
  • Producing dynamic video variants from templates

Consider alternatives when:

  • Need manual video editing — use Premiere, DaVinci Resolve
  • Need live streaming — use OBS or streaming tools
  • Need AI-generated video — use AI video generation models

Quick Start

```bash
# Activate video creation toolkit
claude skill activate dynamic-programmatic-video-creation-workshop

# Create a data visualization video
claude "Create an animated bar chart video from sales_data.csv using Remotion"

# Generate social media content
claude "Generate 10 quote card videos for Instagram Reels from quotes.json"
```

Example: FFmpeg Video Assembly

```bash
# Create video from image sequence
ffmpeg -framerate 30 -pattern_type glob -i 'frames/*.png' \
  -c:v libx264 -pix_fmt yuv420p -crf 23 output.mp4

# Add audio to video
ffmpeg -i video.mp4 -i audio.mp3 \
  -c:v copy -c:a aac -shortest output.mp4

# Create slideshow with transitions
ffmpeg -loop 1 -t 3 -i slide1.jpg \
  -loop 1 -t 3 -i slide2.jpg \
  -loop 1 -t 3 -i slide3.jpg \
  -filter_complex \
  "[0:v]fade=t=out:st=2.5:d=0.5[v0]; \
   [1:v]fade=t=in:st=0:d=0.5,fade=t=out:st=2.5:d=0.5[v1]; \
   [2:v]fade=t=in:st=0:d=0.5[v2]; \
   [v0][v1][v2]concat=n=3:v=1:a=0" \
  -c:v libx264 -pix_fmt yuv420p slideshow.mp4

# Add animated text overlay
ffmpeg -i input.mp4 -vf \
  "drawtext=text='Hello World':fontcolor=white:fontsize=48:\
x=(w-text_w)/2:y=h-th-50:\
enable='between(t,1,4)':\
fontfile=/path/to/font.ttf" \
  output.mp4

# Create picture-in-picture
ffmpeg -i main.mp4 -i overlay.mp4 \
  -filter_complex \
  "[1:v]scale=320:240[pip]; \
   [0:v][pip]overlay=W-w-10:H-h-10" \
  pip_output.mp4
```

Example: Remotion Video Component

```tsx
// Remotion - React-based video creation
import React from 'react';
import { useCurrentFrame, useVideoConfig, spring, interpolate } from 'remotion';

interface DataPoint {
  label: string;
  value: number;
}

export const DataBarChart: React.FC<{ data: DataPoint[] }> = ({ data }) => {
  const frame = useCurrentFrame();
  const { fps } = useVideoConfig();
  const maxValue = Math.max(...data.map(d => d.value));

  return (
    <div style={{ padding: 60, background: '#0f172a', height: '100%' }}>
      <h1 style={{ color: '#f1f5f9', fontSize: 48, marginBottom: 40 }}>
        Sales by Region
      </h1>
      <div style={{ display: 'flex', alignItems: 'flex-end', gap: 20, height: 400 }}>
        {data.map((item, i) => {
          // Stagger each bar's entrance by 5 frames
          const progress = spring({
            frame: frame - i * 5,
            fps,
            config: { damping: 200 },
          });
          // Animate bar height via frame-driven interpolation
          // (CSS transitions have no effect in frame-by-frame rendering)
          const height = interpolate(progress, [0, 1], [0, (item.value / maxValue) * 350]);
          return (
            <div key={item.label} style={{ textAlign: 'center', flex: 1 }}>
              <div
                style={{
                  height,
                  background: `hsl(${220 + i * 30}, 70%, 60%)`,
                  borderRadius: '8px 8px 0 0',
                }}
              />
              <span style={{ color: '#94a3b8', fontSize: 14, marginTop: 8 }}>
                {item.label}
              </span>
            </div>
          );
        })}
      </div>
    </div>
  );
};
```

Core Concepts

Video Creation Approaches

| Approach | Technology | Best For |
|---|---|---|
| FFmpeg CLI | Command-line video processing | Combining media, transcoding, filters |
| Remotion | React components rendered as video | Data viz, branded content, templates |
| Motion Canvas | TypeScript animation engine | Technical animations, diagrams |
| Manim | Python mathematical animations | Math/science explainers |
| Puppeteer + FFmpeg | Browser screenshots to video | Web content recording |
| Canvas API + Encoder | HTML5 Canvas frames | Custom rendering pipelines |

Video Specifications

| Platform | Resolution | Duration | Aspect Ratio | Format |
|---|---|---|---|---|
| YouTube | 1920x1080 | Any | 16:9 | MP4 (H.264) |
| Instagram Reels | 1080x1920 | 15-90s | 9:16 | MP4 |
| TikTok | 1080x1920 | 15-180s | 9:16 | MP4 |
| Twitter/X | 1280x720+ | <140s | 16:9 or 1:1 | MP4 |
| LinkedIn | 1920x1080 | <10min | 16:9 | MP4 |
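The platform specifications above can drive an FFmpeg filter automatically — scale the input to fit the target canvas, then pad to the exact dimensions. A minimal sketch; the `platformFilter` helper and preset names are illustrative, not part of the toolkit:

```typescript
// Map a target platform to an FFmpeg -vf filter string that scales the
// input to fit, then pads to the exact canvas (letterbox/pillarbox).
interface PlatformPreset {
  width: number;
  height: number;
}

const PRESETS: Record<string, PlatformPreset> = {
  youtube: { width: 1920, height: 1080 },
  reels: { width: 1080, height: 1920 },
  tiktok: { width: 1080, height: 1920 },
};

function platformFilter(platform: string): string {
  const p = PRESETS[platform];
  if (!p) throw new Error(`Unknown platform: ${platform}`);
  return (
    `scale=${p.width}:${p.height}:force_original_aspect_ratio=decrease,` +
    `pad=${p.width}:${p.height}:(ow-iw)/2:(oh-ih)/2`
  );
}

// Usage: ffmpeg -i input.mp4 -vf "${platformFilter('reels')}" reels.mp4
```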
Example: Batch Video Generation

```typescript
// Batch video generation from templates
import { exec as execCb } from 'child_process';
import { promisify } from 'util';

const exec = promisify(execCb);

interface VideoTemplate {
  id: string;
  composition: string;
  defaultProps: Record<string, any>;
}

async function batchRender(
  template: VideoTemplate,
  variants: Record<string, any>[],
  outputDir: string
) {
  for (const [i, props] of variants.entries()) {
    const mergedProps = { ...template.defaultProps, ...props };
    // Remotion CLI render
    await exec(`npx remotion render ${template.composition} \
      ${outputDir}/video_${i}.mp4 \
      --props='${JSON.stringify(mergedProps)}'`);
  }
}

// Generate 50 personalized videos
const variants = customers.map(c => ({
  name: c.name,
  stats: c.yearlyStats,
  highlights: c.topMoments,
}));
await batchRender(yearInReviewTemplate, variants, './output/');
```

Configuration

| Parameter | Description | Default |
|---|---|---|
| resolution | Output resolution (width x height) | 1920x1080 |
| fps | Frames per second | 30 |
| codec | Video codec: h264, h265, vp9 | h264 |
| crf | Constant Rate Factor (quality, 0-51) | 23 |
| audio_codec | Audio codec: aac, mp3, opus | aac |
| pixel_format | Pixel format | yuv420p |
| duration | Video duration in seconds | Template-dependent |
| output_format | Container format: mp4, webm, mov | mp4 |
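These parameters map directly onto FFmpeg CLI flags. A hedged sketch of a builder — the `RenderConfig` shape mirrors the table above, and the flag mapping assumes the standard FFmpeg options and encoder names:

```typescript
interface RenderConfig {
  resolution?: string;                  // e.g. "1920x1080"
  fps?: number;                         // frames per second
  codec?: 'h264' | 'h265' | 'vp9';      // video codec
  crf?: number;                         // quality, 0-51
  audioCodec?: 'aac' | 'mp3' | 'opus';  // audio codec
  pixelFormat?: string;                 // e.g. "yuv420p"
}

// Standard FFmpeg encoder names for each codec.
const ENCODERS = { h264: 'libx264', h265: 'libx265', vp9: 'libvpx-vp9' };

// Build an FFmpeg argument list from a config, falling back to the
// table's defaults for anything unspecified.
function ffmpegArgs(input: string, output: string, cfg: RenderConfig = {}): string[] {
  const {
    resolution = '1920x1080',
    fps = 30,
    codec = 'h264',
    crf = 23,
    audioCodec = 'aac',
    pixelFormat = 'yuv420p',
  } = cfg;
  return [
    '-i', input,
    '-s', resolution,
    '-r', String(fps),
    '-c:v', ENCODERS[codec],
    '-crf', String(crf),
    '-c:a', audioCodec,
    '-pix_fmt', pixelFormat,
    output,
  ];
}
```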

Best Practices

  1. Use CRF mode for consistent quality across scenes — Constant Rate Factor adjusts bitrate per scene to maintain consistent visual quality. CRF 23 is a good default for H.264 — lower values increase quality and file size, higher values decrease both.

  2. Render at target resolution from the start — Don't render at 4K and downscale. This wastes render time and can introduce scaling artifacts. Match your render resolution to the delivery platform's optimal resolution.

  3. Separate rendering from encoding for flexibility — Render frames as PNG sequences, then encode to video. This allows re-encoding with different settings without re-rendering, makes it easy to resume interrupted renders, and enables quality inspection of individual frames.

  4. Use hardware acceleration when available — FFmpeg supports NVIDIA (nvenc), AMD (amf), and Apple (videotoolbox) hardware encoding. Hardware encoding is often 5-10x faster than software encoding, at a modest cost in compression efficiency: -c:v h264_videotoolbox on macOS.

  5. Template everything for scale — Design video components as parameterized templates with typed inputs. This enables batch generation of hundreds of personalized videos from data files without manual intervention.
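Practice 3 above reduces to two separate commands: render frames however you like, then encode the sequence. A minimal sketch that only builds the encode command string, so it can be re-run with different settings without re-rendering; the paths and the 5-digit frame pattern are assumptions:

```typescript
// Step 1 (elsewhere): render frames as PNGs, e.g. frames/frame_00001.png.
// Step 2: build the encode command for the sequence. Re-invoke with a
// different crf or fps without touching the rendered frames.
function encodeFramesCommand(
  framesDir: string,
  output: string,
  fps = 30,
  crf = 23
): string {
  return [
    'ffmpeg',
    `-framerate ${fps}`,
    `-i ${framesDir}/frame_%05d.png`,
    '-c:v libx264',
    `-crf ${crf}`,
    '-pix_fmt yuv420p',
    output,
  ].join(' ');
}
```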

Common Issues

Rendered video has color banding in gradients. This is caused by 8-bit color depth limitations. Use the yuv420p10le pixel format for 10-bit color, which largely eliminates banding (note that 10-bit H.264 has limited hardware playback support; H.265 and VP9 handle 10-bit more broadly). Alternatively, add subtle grain/noise to mask banding in 8-bit output.

Audio and video go out of sync in longer videos. Audio drift occurs when video and audio have slightly different durations or frame rates. Use -async 1 in FFmpeg to resync, or ensure both streams share the same timebase. For Remotion, set explicit audio duration matching video frames.

Batch rendering is too slow for large-scale video generation. Parallelize rendering across CPU cores using worker pools. For Remotion, use renderFrames with concurrent rendering. For FFmpeg, split work across multiple processes on different segments and concatenate. Cloud rendering on GPU instances dramatically accelerates large batches.
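Splitting batch work across workers mostly reduces to chunking the job list. A minimal sketch of the round-robin split; the chunk count would typically match the available cores, and `spawnWorker` in the usage comment is hypothetical:

```typescript
// Split an array of render jobs into n roughly equal chunks,
// one per worker process, distributing round-robin.
function chunk<T>(items: T[], n: number): T[][] {
  const out: T[][] = Array.from({ length: n }, () => [] as T[]);
  items.forEach((item, i) => out[i % n].push(item));
  return out;
}

// Each chunk can then be rendered by an independent process, e.g.:
// chunk(variants, os.cpus().length).map(jobs => spawnWorker(jobs));
```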
