DSPy-style framework for optimizing text-to-video prompts via VBench metric feedback.
ViDSPy brings the power of DSPy's declarative programming paradigm to text-to-video generation. Optimize your prompts for any text-to-video API (Runway, Pika, Replicate, etc.) using VBench quality metrics and VLM-based feedback.
Note: ViDSPy optimizes how you prompt video generation models. It does not generate videos itself - you bring your own text-to-video API.
- DSPy-style Optimization: Tune instructions (prompt templates) and demonstrations (few-shot examples) using canonical DSPy optimizers
- VBench Integration: Full support for all 10 CORE_METRICS from VBench evaluation
- Multiple Optimizers: BootstrapFewShot, LabeledFewShot, MIPROv2, COPRO, and GEPA
- Flexible VLM Backends: OpenRouter (cloud) and HuggingFace (local) support
- Composite Metrics: Weighted combination of video quality (60%) and text-video alignment (40%) (see the sketch below)
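For intuition, the composite metric reduces to simple weighted arithmetic. Here is a minimal sketch with made-up scores; the dictionaries are illustrative values, not ViDSPy API objects:

```python
# Illustrative only: average the per-metric scores, then blend 60/40.
quality = {"motion_smoothness": 0.92, "aesthetic_quality": 0.61}
alignment = {"object_class": 0.88, "overall_consistency": 0.79}

quality_score = sum(quality.values()) / len(quality)        # 0.765
alignment_score = sum(alignment.values()) / len(alignment)  # 0.835

composite = 0.6 * quality_score + 0.4 * alignment_score
print(f"composite reward: {composite:.3f}")  # 0.793
```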
Install the base package:

```bash
pip install vidspy
```

For full functionality with VBench evaluation:

```bash
pip install vidspy[vbench]
```

For development:

```bash
pip install vidspy[all]
```

ViDSPy can be configured in two ways:
Option 1: Pass settings directly:

```python
from vidspy import ViDSPy

vidspy = ViDSPy(
    vlm_backend="openrouter",
    vlm_model="google/gemini-2.5-flash",
    api_key="your-api-key",
    device="auto"
)
```

Option 2: Use a config file. Create a `vidspy_config.yaml` file from the template:

```bash
cp vidspy_config.yaml.example vidspy_config.yaml
```

Edit the configuration file:
```yaml
# vidspy_config.yaml

# VLM Provider Settings
vlm:
  backend: openrouter  # or "huggingface"
  model: google/gemini-2.5-flash
  # api_key: your-api-key-here  # Or set OPENROUTER_API_KEY env var

# Optimizer LLM Settings
# This LLM is used by DSPy optimizers (MIPROv2, COPRO, GEPA)
# to generate instruction variations
optimizer:
  lm: openai/gpt-4o-mini  # Any model from OpenRouter
  # api_key: your-api-key-here  # Or set OPENROUTER_OPTIMIZER_API_KEY env var

# Optimization Settings
optimization:
  default_optimizer: mipro_v2
  max_bootstrapped_demos: 4
  max_labeled_demos: 4

# Metric Settings
metrics:
  quality_weight: 0.6
  alignment_weight: 0.4
  quality_metrics:
    - subject_consistency
    - motion_smoothness
    - temporal_flickering
    - human_anatomy
    - aesthetic_quality
    - imaging_quality
  alignment_metrics:
    - object_class
    - human_action
    - spatial_relationship
    - overall_consistency

# Target Thresholds
targets:
  human_anatomy: 0.85
  alignment: 0.80

# Cache Settings
cache:
  dir: ~/.cache/vidspy
  vbench_models: ~/.cache/vbench

# Hardware Settings
hardware:
  device: auto  # "cuda", "cpu", or "auto"
  dtype: float16
```

Then use ViDSPy without any arguments; it automatically loads the config:
```python
from vidspy import ViDSPy, VideoChainOfThought, Example

# ViDSPy automatically finds and loads vidspy_config.yaml
vidspy = ViDSPy()

# All settings from the config file are now applied
trainset = [Example(prompt="a cat jumping", video_path="cat.mp4")]
optimized = vidspy.optimize(VideoChainOfThought("prompt -> video"), trainset)
```

Config file search order:
ViDSPy automatically searches for `vidspy_config.yaml` in the following locations (a sketch of this search follows the list):

- Current working directory: `./vidspy_config.yaml`
- User config directory: `~/.vidspy/config.yaml`
- User home directory: `~/vidspy_config.yaml`
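For illustration, that search order could be implemented like this hypothetical sketch; `find_config` is not a ViDSPy API, and the real loader may differ:

```python
from pathlib import Path
from typing import Optional

def find_config() -> Optional[Path]:
    """Return the first existing config path, following the order above."""
    candidates = [
        Path.cwd() / "vidspy_config.yaml",        # current working directory
        Path.home() / ".vidspy" / "config.yaml",  # user config directory
        Path.home() / "vidspy_config.yaml",       # user home directory
    ]
    for path in candidates:
        if path.is_file():
            return path
    return None
```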
Custom config path:

You can also specify a custom config file location:

```python
vidspy = ViDSPy(config_path="/path/to/custom_config.yaml")
```

Important:

- Arguments passed directly to `ViDSPy()` always override config file values
- API keys should be in environment variables or a `.env` file (not in the config file):

```bash
# .env
OPENROUTER_API_KEY=your-api-key-here
OPENROUTER_OPTIMIZER_API_KEY=your-api-key-here  # For the optimizer LLM
```

ViDSPy uses two different LLMs:
- VLM (Vision Language Model): Analyzes videos and enhances prompts during inference
- Optimizer LLM: Used by MIPROv2, COPRO, and GEPA optimizers to generate instruction variations during optimization
Note: BootstrapFewShot and LabeledFewShot optimizers do NOT require an optimizer LLM - they work without this configuration.
Configure the Optimizer LLM:
```python
from vidspy import ViDSPy

vidspy = ViDSPy(
    vlm_backend="openrouter",
    vlm_model="google/gemini-2.5-flash",  # For video analysis
    optimizer_lm="openai/gpt-4o-mini",    # For optimization
    optimizer_api_key="your-openrouter-api-key"  # Or set OPENROUTER_OPTIMIZER_API_KEY env var
)
```

Or in `vidspy_config.yaml`:
```yaml
vlm:
  backend: openrouter
  model: google/gemini-2.5-flash

optimizer:
  lm: openai/gpt-4o-mini  # Any OpenRouter model
```

Choosing an Optimizer LLM:
- For most users: `openai/gpt-4o-mini` (fast and cost-effective)
- For better quality: `openai/gpt-4o` or `anthropic/claude-3-5-sonnet`
- Any model from OpenRouter is supported
Note: If `optimizer_api_key` is not specified, the VLM API key is used as a fallback.
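In other words (an illustrative snippet, not ViDSPy internals):

```python
import os

# Prefer the dedicated optimizer key; fall back to the VLM key.
optimizer_api_key = (
    os.getenv("OPENROUTER_OPTIMIZER_API_KEY")
    or os.getenv("OPENROUTER_API_KEY")
)
```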
Important: ViDSPy is a prompt optimization framework that sits on top of existing text-to-video models. It does NOT generate videos itself. Instead, it optimizes how you prompt external video generation services.
```
User Prompt → ViDSPy (optimize prompt) → Text-to-Video API → Generated Video
                                                                    ↓
                                              VBench + VLM (evaluate quality)
                                                                    ↓
                                             Learn better prompting strategies
```
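Conceptually, the feedback loop looks like the sketch below. The `propose`, `generate`, and `score` callables are placeholders standing in for the optimizer LLM, your text-to-video API, and the VBench/VLM metrics; this is not ViDSPy's actual implementation:

```python
def optimization_loop(seed_prompt, propose, generate, score, rounds=3):
    """Greedy sketch: keep whichever candidate prompt scores best."""
    best_prompt, best_score = seed_prompt, float("-inf")
    for _ in range(rounds):
        candidate = propose(best_prompt)   # LLM rewrites the prompt
        video_path = generate(candidate)   # your text-to-video API
        s = score(candidate, video_path)   # VBench + VLM feedback
        if s > best_score:
            best_prompt, best_score = candidate, s
    return best_prompt
```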
You need to provide a `video_generator` function that connects to your preferred text-to-video service.

Runway example:

```python
import requests

from vidspy import VideoChainOfThought

def runway_generator(prompt: str, **kwargs) -> str:
    """Generate video using the Runway Gen-3 API."""
    response = requests.post(
        "https://api.runwayml.com/v1/generate",
        headers={"Authorization": f"Bearer {RUNWAY_API_KEY}"},
        json={
            "prompt": prompt,
            "model": "gen3",
            "duration": kwargs.get("duration", 5)
        }
    )
    video_url = response.json()["output"]["url"]

    # Download and save the video locally
    video_path = f"outputs/{response.json()['id']}.mp4"
    # ... download logic ...
    return video_path

# Create a module with your generator
module = VideoChainOfThought(
    "prompt -> video",
    video_generator=runway_generator
)
```

Replicate example:

```python
import uuid

import replicate

def replicate_generator(prompt: str, **kwargs) -> str:
    """Generate video using the Replicate API."""
    output = replicate.run(
        "stability-ai/stable-video-diffusion",
        input={"prompt": prompt}
    )

    # Save the video from the output URL
    video_path = f"outputs/{uuid.uuid4()}.mp4"
    # ... download logic ...
    return video_path

module = VideoChainOfThought(
    "prompt -> video",
    video_generator=replicate_generator
)
```

Pika Labs example:

```python
from pika import PikaClient

def pika_generator(prompt: str, **kwargs) -> str:
    """Generate video using the Pika Labs API."""
    client = PikaClient(api_key=PIKA_API_KEY)
    video = client.generate_video(
        prompt=prompt,
        aspect_ratio="16:9",
        duration=3
    )
    return video.download_path

module = VideoChainOfThought(
    "prompt -> video",
    video_generator=pika_generator
)
```

ViDSPy works with any text-to-video API. Popular options include:
| Service | API Available | Notes |
|---|---|---|
| Runway Gen-3 | ✅ Yes | High quality, good motion |
| Pika Labs | ✅ Yes | Creative effects, good for social media |
| Stability AI Video | ✅ Yes | Open weights available |
| Replicate | ✅ Yes | Multiple models (CogVideo, SVD, etc.) |
| LumaAI Dream Machine | ✅ Yes | Cinematic quality |
| HaiperAI | ✅ Yes | Fast generation |
| Morph Studio | ✅ Yes | Style control |
```python
def my_video_generator(prompt: str, **kwargs) -> str:
    """
    Your custom video generation function.

    Args:
        prompt: Enhanced prompt from ViDSPy
        **kwargs: Additional parameters (duration, aspect_ratio, etc.)

    Returns:
        Local path to the generated video file
    """
    # 1. Call your text-to-video API
    # 2. Download the generated video
    # 3. Save it locally
    # 4. Return the file path
    video_path = "path/to/generated/video.mp4"
    return video_path
```
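All of the generator examples above elide the download step. One way to fill it in is with streamed HTTP; `download_video` here is a hypothetical helper, not part of ViDSPy:

```python
import os

import requests

def download_video(url: str, video_path: str) -> str:
    """Stream a generated video from `url` to `video_path` on disk."""
    os.makedirs(os.path.dirname(video_path) or ".", exist_ok=True)
    with requests.get(url, stream=True, timeout=300) as response:
        response.raise_for_status()
        with open(video_path, "wb") as f:
            for chunk in response.iter_content(chunk_size=1 << 20):
                f.write(chunk)
    return video_path
```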
A minimal end-to-end workflow:

```python
from vidspy import ViDSPy, VideoChainOfThought, Example

# Step 1: Define your video generator (connect to your text-to-video API)
def my_video_generator(prompt: str, **kwargs) -> str:
    """Your text-to-video API integration."""
    # Example: Runway, Pika, Replicate, etc.
    import my_video_api
    video = my_video_api.generate(prompt=prompt)
    return video.save_path()

# Step 2: Initialize ViDSPy with the OpenRouter VLM backend
# Note: the optimizer LLM defaults to "openai/gpt-4o-mini" via OpenRouter.
# You can customize: ViDSPy(vlm_backend="openrouter", optimizer_lm="openai/gpt-4o")
vidspy = ViDSPy(vlm_backend="openrouter")

# Step 3: Create training examples (use videos you've already generated)
trainset = [
    Example(prompt="a cat jumping over a fence", video_path="cat_jump.mp4"),
    Example(prompt="a dog running in a park", video_path="dog_run.mp4"),
    Example(prompt="a bird flying through clouds", video_path="bird_fly.mp4"),
]

# Step 4: Create a module with your video generator
module = VideoChainOfThought(
    "prompt -> video",
    video_generator=my_video_generator  # Connect your generator!
)

# Step 5: Optimize the prompting strategy
optimized = vidspy.optimize(
    module,
    trainset,
    optimizer="mipro_v2"  # Multi-stage instruction + demo optimization
)

# Step 6: Generate videos with optimized prompts
result = optimized("a dolphin swimming in the ocean")
print(f"Generated video: {result.video_path}")
print(f"Optimized prompt used: {result.enhanced_prompt}")
```

ViDSPy uses VBench's 10 CORE_METRICS, split into two categories:
Quality metrics (60% of the default composite):

| Metric | Description |
|---|---|
| `subject_consistency` | Temporal stability of subjects |
| `motion_smoothness` | Natural motion quality |
| `temporal_flickering` | Absence of temporal jitter |
| `human_anatomy` | Correct hands/faces/torso rendering |
| `aesthetic_quality` | Artistic/visual beauty |
| `imaging_quality` | Technical clarity and sharpness |
Alignment metrics (40% of the default composite):

| Metric | Description |
|---|---|
| `object_class` | Prompt objects appear correctly |
| `human_action` | Prompt actions performed correctly |
| `spatial_relationship` | Correct spatial layout |
| `overall_consistency` | Holistic text-video alignment |
```python
from vidspy.metrics import composite_reward, quality_score, alignment_score

# Default composite metric (60% quality + 40% alignment)
score = composite_reward(example, prediction)

# Quality-only score
q_score = quality_score(example, prediction)

# Alignment-only score
a_score = alignment_score(example, prediction)

# Custom metric configuration
from vidspy.metrics import VBenchMetric

custom_metric = VBenchMetric(
    quality_weight=0.5,
    alignment_weight=0.5,
    quality_metrics=["motion_smoothness", "aesthetic_quality"],
    alignment_metrics=["object_class", "overall_consistency"]
)
```

ViDSPy provides 5 DSPy-compatible optimizers:
| Optimizer | Description | Key Parameters |
|---|---|---|
| `VidBootstrapFewShot` | Auto-generate/select few-shots | `max_bootstrapped_demos=4` |
| `VidLabeledFewShot` | Static few-shot assignment | `k=3` |
| `VidMIPROv2` | Multi-stage instruction + demo optimization | `num_candidates=10`, `auto="light"` |
| `VidCOPRO` | Cooperative multi-LM instruction optimization | `breadth=5`, `depth=3` |
| `VidGEPA` | Generate + Evaluate + Propose + Accept | `auto="light"` |
```python
# Bootstrap few-shot
optimized = vidspy.optimize(
    module, trainset,
    optimizer="bootstrap",
    max_bootstrapped_demos=4
)

# MIPROv2 with more candidates
optimized = vidspy.optimize(
    module, trainset,
    optimizer="mipro_v2",
    num_candidates=15,
    auto="medium"
)

# COPRO with custom search
optimized = vidspy.optimize(
    module, trainset,
    optimizer="copro",
    breadth=10,
    depth=5
)
```

Vision Language Models (VLMs) in ViDSPy are used for:
- Prompt enhancement - improving user prompts before generation
- Video analysis - understanding generated video content
- Quality assessment - analyzing text-video alignment
- Chain-of-thought reasoning - planning video generation strategies
Note: VLMs do NOT generate videos. They help optimize the prompts you send to your text-to-video API.
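To make "prompt enhancement" concrete: OpenRouter exposes an OpenAI-compatible endpoint, so a raw enhancement call might look like the sketch below. This is illustrative only; ViDSPy wraps such calls behind its provider classes:

```python
import os

from openai import OpenAI

# OpenRouter speaks the OpenAI chat-completions protocol.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
completion = client.chat.completions.create(
    model="google/gemini-2.5-flash",
    messages=[{
        "role": "user",
        "content": "Rewrite this text-to-video prompt with explicit subject, "
                   "motion, and style details: 'a cat jumping'",
    }],
)
print(completion.choices[0].message.content)
```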
Cloud-based multimodal VLMs via a unified API:

```python
vidspy = ViDSPy(
    vlm_backend="openrouter",
    vlm_model="google/gemini-2.5-flash",
    api_key="your-api-key"  # Or set OPENROUTER_API_KEY env var
)
```

Example models:
- `google/gemini-2.5-flash`
- `google/gemini-1.5-pro`
- `anthropic/claude-opus-4.5`
- `openai/gpt-4o`
Local video VLMs for offline usage:
```python
vidspy = ViDSPy(
    vlm_backend="huggingface",
    vlm_model="llava-hf/llava-v1.6-mistral-7b-hf",
    device="cuda"
)
```

ViDSPy provides several module types for different prompting strategies.
When creating video modules, use simple signature strings like "prompt -> video" or "prompt -> video_path". The library handles the internal reasoning steps (scene analysis, motion planning, etc.) based on the module type you choose, so you don't need to worry about internal signature details; use these standard formats and ViDSPy takes care of the rest.
```python
from vidspy import VideoPredict, VideoChainOfThought, VideoReAct, VideoEnsemble
from vidspy import composite_reward

# Define your video generator once
def my_video_gen(prompt, **kwargs):
    # Your text-to-video API call
    return video_path

# Simple prediction with prompt enhancement
# Just use "prompt -> video_path" - ViDSPy handles the internal complexity
predictor = VideoPredict(
    "prompt -> video_path",
    video_generator=my_video_gen
)

# Chain-of-thought reasoning (analyzes scene, motion, style before generating)
# The simple "prompt -> video" signature is all you need
cot = VideoChainOfThought(
    "prompt -> video",
    video_generator=my_video_gen
)

# ReAct-style iterative refinement (generates, evaluates, refines)
# Same simple signature - internal reasoning is handled automatically
react = VideoReAct(
    "prompt -> video",
    video_generator=my_video_gen,
    max_iterations=3
)

# Ensemble multiple approaches (tries different strategies, picks the best)
ensemble = VideoEnsemble([
    VideoPredict(video_generator=my_video_gen),
    VideoChainOfThought(video_generator=my_video_gen),
], selection_metric=composite_reward)
```

Download the VBench evaluation models:

```bash
# Via CLI
vidspy setup
```

```python
# Via Python
from vidspy import setup_vbench_models

setup_vbench_models()  # Downloads to ~/.cache/vbench
```
A complete example combining configuration, training data, custom metrics, optimization, and evaluation:

```python
import os

import replicate

from vidspy import (
    ViDSPy,
    VideoChainOfThought,
    Example,
    composite_reward,
    VBenchMetric,
)

# Set API keys (or pass directly to ViDSPy)
os.environ["OPENROUTER_API_KEY"] = "your-vlm-api-key"
os.environ["OPENROUTER_OPTIMIZER_API_KEY"] = "your-optimizer-api-key"

# Define video generator (using Replicate as an example)
def replicate_video_generator(prompt: str, **kwargs) -> str:
    """Generate video using the Replicate API."""
    output = replicate.run(
        "stability-ai/stable-video-diffusion",
        input={"prompt": prompt}
    )
    # Download and save video
    video_path = f"outputs/{hash(prompt)}.mp4"
    # ... download logic ...
    return video_path

# Initialize ViDSPy with all available parameters
vidspy = ViDSPy(
    # VLM Provider Settings (corresponds to the vlm: section in the config)
    vlm_backend="openrouter",             # or "huggingface"
    vlm_model="google/gemini-2.5-flash",  # VLM for video analysis
    api_key="your-vlm-api-key",           # Or set OPENROUTER_API_KEY env var

    # Optimizer LLM Settings (corresponds to the optimizer: section in the config)
    optimizer_lm="openai/gpt-4o-mini",           # LLM for optimizers (MIPROv2, COPRO, GEPA)
    optimizer_api_key="your-optimizer-api-key",  # Or set OPENROUTER_OPTIMIZER_API_KEY env var

    # Cache Settings (corresponds to cache.dir in the config)
    cache_dir="~/.cache/vidspy",
    # Note: the VBench models cache (cache.vbench_models) is set via setup_vbench_models()

    # Hardware Settings (corresponds to the hardware: section in the config)
    device="auto",  # "cuda", "cpu", or "auto"
    # Note: hardware.dtype from the config is used internally by the HuggingFace VLM

    # Config File (optional)
    # config_path="vidspy_config.yaml",  # Load all settings from a config file
)

# Prepare training data (videos you've already generated)
trainset = [
    Example(
        prompt="a person walking through a forest",
        video_path="data/walk_forest.mp4"
    ),
    Example(
        prompt="a car driving on a highway",
        video_path="data/car_highway.mp4"
    ),
    # ... more examples
]

# Split for validation
valset = trainset[-2:]
trainset = trainset[:-2]

# Create a module with your video generator
module = VideoChainOfThought(
    "prompt -> video",
    video_generator=replicate_video_generator
)

# Optional: create a custom metric with specific settings
# (corresponds to the metrics: section in the config file)
custom_metric = VBenchMetric(
    quality_weight=0.6,    # metrics.quality_weight in the config
    alignment_weight=0.4,  # metrics.alignment_weight in the config
    quality_metrics=[      # metrics.quality_metrics in the config
        "subject_consistency",
        "motion_smoothness",
        "temporal_flickering",
        "human_anatomy",
        "aesthetic_quality",
        "imaging_quality"
    ],
    alignment_metrics=[    # metrics.alignment_metrics in the config
        "object_class",
        "human_action",
        "spatial_relationship",
        "overall_consistency"
    ]
)
# Note: target thresholds (targets: section in the config) are for reference/documentation

# Optimize the prompting strategy
# (these correspond to the optimization: section in the config file)
optimized = vidspy.optimize(
    module,
    trainset,
    valset=valset,
    metric=custom_metric,      # Or use composite_reward for the defaults (60% quality, 40% alignment)
    optimizer="mipro_v2",      # optimization.default_optimizer in the config
    num_candidates=10,
    max_bootstrapped_demos=4,  # optimization.max_bootstrapped_demos in the config
    max_labeled_demos=4,       # optimization.max_labeled_demos in the config
)

# Evaluate on a test set
testset = [Example(prompt="a boat on a lake", video_path="data/boat.mp4")]
results = vidspy.evaluate(optimized, testset)
print(f"Mean Score: {results['mean_score']:.4f}")
print(f"Quality: {results['details'][0].get('quality_score', 'N/A')}")
print(f"Alignment: {results['details'][0].get('alignment_score', 'N/A')}")

# Generate new videos with optimized prompts
result = optimized("a butterfly landing on a flower")
print(f"Generated: {result.video_path}")
print(f"Enhanced prompt: {result.enhanced_prompt}")
```
```bash
# Show help
vidspy --help

# Setup VBench models
vidspy setup
vidspy setup --cache-dir /path/to/cache

# Check dependencies
vidspy setup --check-only

# Optimize a module
vidspy optimize trainset.json --optimizer mipro_v2 --output optimized_model

# Evaluate a module
vidspy evaluate testset.json --module optimized_model --output results.json

# Show information
vidspy info
```
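The CLI's `trainset.json` schema isn't documented here; a plausible shape, inferred from the `Example(prompt=..., video_path=...)` fields used in the Python API, is sketched below (treat it as an assumption):

```python
import json

# Assumed schema: a list of {"prompt", "video_path"} records.
trainset = [
    {"prompt": "a cat jumping over a fence", "video_path": "cat_jump.mp4"},
    {"prompt": "a dog running in a park", "video_path": "dog_run.mp4"},
]
with open("trainset.json", "w") as f:
    json.dump(trainset, f, indent=2)
```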
Project layout:

```
vidspy/
├── pyproject.toml                # Package configuration
├── README.md                     # This file
├── .env.example                  # Environment variables template
├── vidspy_config.yaml.example    # Configuration file template
├── vidspy/
│   ├── __init__.py               # Main exports
│   ├── core.py                   # ViDSPy main class, Example
│   ├── signatures.py             # VideoSignature, etc.
│   ├── modules.py                # VideoPredict, VideoChainOfThought
│   ├── optimizers.py             # VidBootstrapFewShot, VidMIPROv2, etc.
│   ├── metrics.py                # VBench wrappers, composite_reward
│   ├── providers.py              # OpenRouterVLM, HuggingFaceVLM
│   ├── setup.py                  # Setup utilities
│   └── cli.py                    # Command-line interface
├── examples/
│   └── basic_usage.py
└── tests/
    └── test_basic.py
```
For production-quality videos, aim for the thresholds below (a quick gating sketch follows):

- Human Anatomy: ≥ 0.85
- Text-Video Alignment: ≥ 0.80
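```python
# Hypothetical gating check against the targets above; the score keys are
# assumed names, not a documented ViDSPy result format.
TARGETS = {"human_anatomy": 0.85, "alignment": 0.80}

def meets_targets(scores: dict) -> bool:
    return all(scores.get(name, 0.0) >= threshold
               for name, threshold in TARGETS.items())

print(meets_targets({"human_anatomy": 0.91, "alignment": 0.83}))  # True
```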
- DSPy - Declarative Self-improving Python
- VBench - Video generation benchmark
- OpenRouter - Unified AI API
MIT License - see LICENSE for details.
Contributions welcome! Please read our contributing guide first.
If you find ViDSPy useful, please consider giving it a star!