Thin Python wrappers around current video generation HTTP APIs and official SDKs. Model names and paths follow vendor docs as of early 2026 (verify in each provider’s console if a call fails).
Note: the code examples below import from the modules in this repo. This is not official vendor SDK syntax; it applies only if you use these wrapper files.
| Module | Provider | Notes |
|---|---|---|
| openai_video.py | OpenAI | Videos / Sora-class API — videos.create / create_and_poll (no model argument in wrapper; account default applies). |
| runway_video.py | Runway | Developer API — text_to_video / image_to_video, X-Runway-Version: 2024-11-06. |
| google_veo_video.py | Google | Gemini Veo 3.1 — google-genai, poll operations.get. |
| luma_video.py | Luma | Dream Machine API — POST .../generations/video, requires model (ray-2, ray-flash-2). |
| bytedance_video.py | BytePlus / Volcengine | ModelArk video tasks — POST .../contents/generations/tasks, Bearer ARK_API_KEY. |
| meta_video.py | Meta | No public Movie Gen API — raises NotImplementedError. |
Use the project virtualenv (recommended):

```shell
cd /path/to/videogen
python3 -m venv .venv
source .venv/bin/activate
pip install openai python-dotenv requests google-genai
```

Copy .env.example to .env and fill in keys.
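Before running any example, it can help to sanity-check that your keys actually made it into the environment. A minimal sketch using python-dotenv plus a stdlib check — the variable names are illustrative (ARK_API_KEY appears in the table above, OPENAI_API_KEY is the openai SDK default); match the list to your .env.example:

```python
import os

try:
    from dotenv import load_dotenv  # python-dotenv, installed above
    load_dotenv()  # reads key=value pairs from ./.env into os.environ
except ImportError:
    pass  # fall back to whatever is already exported in the shell

def missing_keys(required):
    """Return the subset of required env var names that are unset or empty."""
    return [k for k in required if not os.environ.get(k)]

# Illustrative names — check .env.example for the exact list your setup uses.
missing = missing_keys(["OPENAI_API_KEY", "ARK_API_KEY"])
if missing:
    print("Missing keys in .env:", ", ".join(missing))
```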
```python
from openai_video import generate_video_openai

job = generate_video_openai(
    "A calico cat playing a piano on stage",
    seconds="8",
    size="1280x720",
    wait=False,  # True → create_and_poll until completed/failed
)
print(job.id, job.status)
```

```python
from runway_video import generate_video_runway

result = generate_video_runway(
    "A serene landscape with mountains and a river at sunset",
    turbo=True,  # gen4_turbo vs gen4.5
    ratio="1280:720",
    duration=5,
)
```

```python
from google_veo_video import generate_video_veo

video = generate_video_veo(
    "A bustling city street during a rainy night",
    aspect_ratio="16:9",
    duration_seconds=8,
    resolution="1080p",  # optional: 720p, 1080p, 4k
    # reference_image_paths=["./ref1.png", "./ref2.png"],  # optional, local files
)
print(getattr(video, "uri", video))
```

```python
from luma_video import generate_video_luma

video = generate_video_luma(
    "A futuristic cityscape with flying cars",
    model="ray-2",
    aspect_ratio="16:9",
    resolution="720p",
    duration="5s",
)
```

Set BYTEDANCE_ARK_MODEL (or the model= argument) to the model id or endpoint id from the Ark console.

```python
from bytedance_video import create_bytedance_video_task, generate_video_bytedance

task = create_bytedance_video_task("A cartoon character dancing in a park")
# or poll until a terminal status:
# result = generate_video_bytedance("...", character_reference="https://...")
```

generate_video_meta is intentionally not implemented — there is no public Movie Gen REST API for arbitrary keys.
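create_bytedance_video_task returns immediately, and generate_video_bytedance handles the polling for you. If you prefer to poll yourself, the loop is just repeated status checks with a sleep — a stdlib-only sketch, where get_status is a hypothetical callable standing in for however you read the task's status, and the terminal status names are assumptions (check the Ark console docs for the real ones):

```python
import time

# Assumed terminal statuses — verify against the provider's documentation.
TERMINAL = {"succeeded", "failed", "cancelled"}

def poll_until_done(get_status, interval=5.0, timeout=600.0):
    """Call get_status() every `interval` seconds until it returns a
    status in TERMINAL, or raise TimeoutError after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)
    raise TimeoutError("video task did not reach a terminal status in time")
```

With a task object in hand, get_status could be as simple as a lambda that re-fetches the task and returns its status field (the exact attribute depends on the wrapper's return type).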