diff --git a/docs.json b/docs.json index 1110d90b..c9fb6887 100644 --- a/docs.json +++ b/docs.json @@ -525,7 +525,8 @@ "public-endpoints/models/wan-2-6-t2i", "public-endpoints/models/z-image-turbo", "public-endpoints/models/nano-banana-edit", - "public-endpoints/models/nano-banana-pro-edit" + "public-endpoints/models/nano-banana-pro-edit", + "public-endpoints/models/nano-banana-2-edit" ] }, { @@ -545,7 +546,11 @@ "public-endpoints/models/wan-2-2-i2v", "public-endpoints/models/wan-2-2-t2v", "public-endpoints/models/wan-2-1-i2v", - "public-endpoints/models/wan-2-1-t2v" + "public-endpoints/models/wan-2-1-t2v", + "public-endpoints/models/wan-2-6-i2v", + "public-endpoints/models/p-video", + "public-endpoints/models/vidu-q3-i2v", + "public-endpoints/models/vidu-q3-t2v" ] }, { diff --git a/public-endpoints/models/nano-banana-2-edit.mdx b/public-endpoints/models/nano-banana-2-edit.mdx new file mode 100644 index 00000000..36fbd666 --- /dev/null +++ b/public-endpoints/models/nano-banana-2-edit.mdx @@ -0,0 +1,187 @@ +--- +title: "Nano Banana 2 Edit" +sidebarTitle: "Nano Banana 2" +description: "Google's latest multi-image editing model with resolution options up to 4K." +--- + +Nano Banana 2 Edit is Google's latest image editing model for combining and editing multiple reference images. It supports up to 14 input images and offers resolution options from 1K to 4K output. For best results, use 1-3 reference images. + + + Test Nano Banana 2 Edit in the Runpod Hub playground. + + +| | | +|---|---| +| **Endpoint** | `https://api.runpod.ai/v2/google-nano-banana-2-edit/runsync` | +| **Pricing** | \$0.0875 (1K), \$0.13 (2K), \$0.175 (4K) | +| **Type** | Image editing | + +## Request + +All parameters are passed within the `input` object in the request body. + + + Array of reference image URLs to edit or combine. Supports up to 14 images, but 1-3 images is recommended for best stability. + + + + Text description of the desired edit or how to combine the images. 
+ + + + Output resolution. Options: `1k`, `2k`, `4k`. + + + + Output aspect ratio (e.g., `1:1`, `3:2`, `16:9`, `9:16`, `4:5`). + + + + Output format. Options: `png`, `jpeg`. + + + + Return result as base64 string instead of URL. + + + +```bash cURL +curl -X POST "https://api.runpod.ai/v2/google-nano-banana-2-edit/runsync" \ + -H "Authorization: Bearer $RUNPOD_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "input": { + "images": [ + "https://example.com/subject.jpg", + "https://example.com/style-reference.jpg" + ], + "prompt": "Apply the artistic style from the second image to the subject in the first image", + "resolution": "1k", + "aspect_ratio": "1:1", + "output_format": "png" + } + }' +``` + +```python Python +import requests + +response = requests.post( + "https://api.runpod.ai/v2/google-nano-banana-2-edit/runsync", + headers={ + "Authorization": f"Bearer {RUNPOD_API_KEY}", + "Content-Type": "application/json", + }, + json={ + "input": { + "images": [ + "https://example.com/subject.jpg", + "https://example.com/style-reference.jpg", + ], + "prompt": "Apply the artistic style from the second image to the subject in the first image", + "resolution": "1k", + "aspect_ratio": "1:1", + "output_format": "png", + } + }, +) + +result = response.json() +print(result["output"]["image_url"]) +``` + +```javascript JavaScript +const response = await fetch( + "https://api.runpod.ai/v2/google-nano-banana-2-edit/runsync", + { + method: "POST", + headers: { + Authorization: `Bearer ${RUNPOD_API_KEY}`, + "Content-Type": "application/json", + }, + body: JSON.stringify({ + input: { + images: [ + "https://example.com/subject.jpg", + "https://example.com/style-reference.jpg", + ], + prompt: "Apply the artistic style from the second image to the subject in the first image", + resolution: "1k", + aspect_ratio: "1:1", + output_format: "png", + }, + }), + } +); + +const result = await response.json(); +console.log(result.output.image_url); +``` + + +## Response + + + Unique 
identifier for the request. + + + + Request status. Returns `COMPLETED` on success, `FAILED` on error. + + + + Time in milliseconds the request spent in queue before processing began. + + + + Time in milliseconds the model took to edit the images. + + + + The generation result containing the image URL and cost. + + + URL of the edited image. This URL expires after 7 days. + + + + Cost of the generation in USD. + + + + +```json 200 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "COMPLETED", + "delayTime": 21, + "executionTime": 6234, + "output": { + "image_url": "https://image.runpod.ai/abc123/output.png", + "cost": 0.0875 + } +} +``` + +```json 400 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "FAILED", + "error": "Invalid image URL in images array" +} +``` + + + +Image URLs expire after 7 days. Download and store edited images immediately if you need to keep them. + + +## Cost calculation + +Nano Banana 2 Edit pricing varies by output resolution: + +| Resolution | Price per image | +|------------|-----------------| +| 1K | \$0.0875 | +| 2K | \$0.13 | +| 4K | \$0.175 | diff --git a/public-endpoints/models/p-video.mdx b/public-endpoints/models/p-video.mdx new file mode 100644 index 00000000..2beac9f0 --- /dev/null +++ b/public-endpoints/models/p-video.mdx @@ -0,0 +1,201 @@ +--- +title: "Pruna Video" +sidebarTitle: "Pruna Video" +description: "Premium AI video generation from text, images, and audio with fast generation times." +--- + +Pruna Video is a premium AI video generation model that creates videos from text prompts, images, or audio in under 10 seconds. It supports multiple resolutions up to 1080p, various aspect ratios, and optional audio conditioning for synchronized video generation. + + + Test Pruna Video in the Runpod Hub playground. 
+ + +| | | +|---|---| +| **Endpoint** | `https://api.runpod.ai/v2/p-video/runsync` | +| **Pricing** | \$0.02/s (720p), \$0.04/s (1080p) | +| **Type** | Video generation | + +## Request + +All parameters are passed within the `input` object in the request body. + + + Text description of the desired video content. + + + + URL of an input image for image-to-video generation. Supports jpg, jpeg, png, webp. When provided, `aspect_ratio` is ignored. + + + + URL of an audio file for audio-conditioned generation. Supports flac, mp3, wav. When provided, `duration` is ignored and the video matches audio length. + + + + Video duration in seconds (1-10). Ignored when audio is provided. + + + + Video resolution. Options: `720p`, `1080p`. + + + + Frames per second. Options: `24`, `48`. + + + + Output aspect ratio. Options: `16:9`, `9:16`, `4:3`, `3:4`, `3:2`, `2:3`, `1:1`. Ignored when image is provided. + + + + Seed for reproducible generation. + + + + Enable draft mode for faster, lower-quality preview. Reduces cost by 75%. + + + + Include audio in the output video. + + + + Automatically enhance the prompt for better results. 
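Note that two of these parameters override others: when `audio` is provided the endpoint ignores `duration`, and when `image` is provided it ignores `aspect_ratio`. A minimal client-side sketch that builds an `input` payload and omits the ignored fields (the helper name is illustrative, not part of the API):

```python
def build_pvideo_input(prompt, image=None, audio=None, duration=5,
                       resolution="720p", aspect_ratio="16:9"):
    """Build a Pruna Video `input` payload, dropping documented no-op fields."""
    payload = {"prompt": prompt, "resolution": resolution}
    if image is not None:
        payload["image"] = image  # aspect_ratio is ignored when an image is provided
    else:
        payload["aspect_ratio"] = aspect_ratio
    if audio is not None:
        payload["audio"] = audio  # duration is ignored; video matches audio length
    else:
        payload["duration"] = duration
    return payload
```

Sending only the fields the endpoint will actually use keeps requests unambiguous and makes the override behavior explicit in your own code.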
+ + + +```bash cURL +curl -X POST "https://api.runpod.ai/v2/p-video/runsync" \ + -H "Authorization: Bearer $RUNPOD_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "input": { + "prompt": "A timelapse of clouds moving over a mountain range at sunset", + "duration": 5, + "resolution": "720p", + "aspect_ratio": "16:9" + } + }' +``` + +```python Python +import requests + +response = requests.post( + "https://api.runpod.ai/v2/p-video/runsync", + headers={ + "Authorization": f"Bearer {RUNPOD_API_KEY}", + "Content-Type": "application/json", + }, + json={ + "input": { + "prompt": "A timelapse of clouds moving over a mountain range at sunset", + "duration": 5, + "resolution": "720p", + "aspect_ratio": "16:9", + } + }, +) + +result = response.json() +print(result["output"]["video_url"]) +``` + +```javascript JavaScript +const response = await fetch( + "https://api.runpod.ai/v2/p-video/runsync", + { + method: "POST", + headers: { + Authorization: `Bearer ${RUNPOD_API_KEY}`, + "Content-Type": "application/json", + }, + body: JSON.stringify({ + input: { + prompt: "A timelapse of clouds moving over a mountain range at sunset", + duration: 5, + resolution: "720p", + aspect_ratio: "16:9", + }, + }), + } +); + +const result = await response.json(); +console.log(result.output.video_url); +``` + + +## Response + + + Unique identifier for the request. + + + + Request status. Returns `COMPLETED` on success, `FAILED` on error. + + + + Time in milliseconds the request spent in queue before processing began. + + + + Time in milliseconds the model took to generate the video. + + + + The generation result containing the video URL and cost. + + + URL of the generated video. This URL expires after 7 days. + + + + Cost of the generation in USD. 
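Because a `FAILED` response carries an `error` field instead of `output`, check the status before reading the result. A minimal sketch (the helper name is illustrative, not part of the API):

```python
def video_url_from(result):
    """Return the video URL from a completed /runsync response, or raise."""
    if result.get("status") != "COMPLETED":
        raise RuntimeError(result.get("error", "request did not complete"))
    return result["output"]["video_url"]
```

Download the file promptly after extracting the URL, since it expires after 7 days.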
+ + + + +```json 200 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "COMPLETED", + "delayTime": 15, + "executionTime": 8542, + "output": { + "video_url": "https://video.runpod.ai/abc123/output.mp4", + "cost": 0.10 + } +} +``` + +```json 400 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "FAILED", + "error": "Invalid audio format" +} +``` + + + +Video URLs expire after 7 days. Download and store generated videos immediately if you need to keep them. + + +## Cost calculation + +Pruna Video pricing varies by resolution and draft mode: + +| Resolution | Standard | Draft mode | +|------------|----------|------------| +| 720p | \$0.02 per second | \$0.005 per second | +| 1080p | \$0.04 per second | \$0.01 per second | + +Example costs (standard mode): + +| Resolution | 5 seconds | 10 seconds | +|------------|-----------|------------| +| 720p | \$0.10 | \$0.20 | +| 1080p | \$0.20 | \$0.40 | diff --git a/public-endpoints/models/vidu-q3-i2v.mdx b/public-endpoints/models/vidu-q3-i2v.mdx new file mode 100644 index 00000000..2ff6b67c --- /dev/null +++ b/public-endpoints/models/vidu-q3-i2v.mdx @@ -0,0 +1,195 @@ +--- +title: "Vidu Q3 I2V" +sidebarTitle: "Vidu Q3 I2V" +description: "Animate reference images into videos with text-driven motion and optional audio generation." +--- + +Vidu Q3 Image-to-Video animates a reference image into a video with motion driven by a text prompt. It supports multiple resolutions up to 1080p, adjustable duration up to 16 seconds, and optional synchronized audio generation with background music. + + + Test Vidu Q3 I2V in the Runpod Hub playground. + + +| | | +|---|---| +| **Endpoint** | `https://api.runpod.ai/v2/vidu-q3-i2v/runsync` | +| **Pricing** | \$0.15 per second | +| **Type** | Video generation | + +## Request + +All parameters are passed within the `input` object in the request body. + + + URL of the reference image to animate. + + + + Text description of the desired motion and action. 
+ + + + Output video resolution. Options: `540p`, `720p`, `1080p`. + + + + Video length in seconds (1-16). + + + + Motion intensity control. Options: `auto`, `small`, `medium`, `large`. + + + + Enable synchronized audio generation. + + + + Enable background music. + + + + Random seed for reproducibility. Set to -1 for random. + + + +```bash cURL +curl -X POST "https://api.runpod.ai/v2/vidu-q3-i2v/runsync" \ + -H "Authorization: Bearer $RUNPOD_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "input": { + "image": "https://example.com/portrait.jpg", + "prompt": "Person turns head slowly and smiles at camera", + "resolution": "720p", + "duration": 5, + "movement_amplitude": "medium", + "generate_audio": true, + "bgm": true, + "seed": -1 + } + }' +``` + +```python Python +import requests + +response = requests.post( + "https://api.runpod.ai/v2/vidu-q3-i2v/runsync", + headers={ + "Authorization": f"Bearer {RUNPOD_API_KEY}", + "Content-Type": "application/json", + }, + json={ + "input": { + "image": "https://example.com/portrait.jpg", + "prompt": "Person turns head slowly and smiles at camera", + "resolution": "720p", + "duration": 5, + "movement_amplitude": "medium", + "generate_audio": True, + "bgm": True, + "seed": -1, + } + }, +) + +result = response.json() +print(result["output"]["video_url"]) +``` + +```javascript JavaScript +const response = await fetch( + "https://api.runpod.ai/v2/vidu-q3-i2v/runsync", + { + method: "POST", + headers: { + Authorization: `Bearer ${RUNPOD_API_KEY}`, + "Content-Type": "application/json", + }, + body: JSON.stringify({ + input: { + image: "https://example.com/portrait.jpg", + prompt: "Person turns head slowly and smiles at camera", + resolution: "720p", + duration: 5, + movement_amplitude: "medium", + generate_audio: true, + bgm: true, + seed: -1, + }, + }), + } +); + +const result = await response.json(); +console.log(result.output.video_url); +``` + + +## Response + + + Unique identifier for the request. 
+ + + + Request status. Returns `COMPLETED` on success, `FAILED` on error. + + + + Time in milliseconds the request spent in queue before processing began. + + + + Time in milliseconds the model took to generate the video. + + + + The generation result containing the video URL and cost. + + + URL of the generated video. This URL expires after 7 days. + + + + Cost of the generation in USD. + + + + +```json 200 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "COMPLETED", + "delayTime": 32, + "executionTime": 45678, + "output": { + "video_url": "https://video.runpod.ai/abc123/output.mp4", + "cost": 0.75 + } +} +``` + +```json 400 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "FAILED", + "error": "Invalid image URL" +} +``` + + + +Video URLs expire after 7 days. Download and store generated videos immediately if you need to keep them. + + +## Cost calculation + +Vidu Q3 I2V charges \$0.15 per second of video generated. + +| Duration | Cost | +|----------|------| +| 5 seconds | \$0.75 | +| 10 seconds | \$1.50 | +| 16 seconds | \$2.40 | diff --git a/public-endpoints/models/vidu-q3-t2v.mdx b/public-endpoints/models/vidu-q3-t2v.mdx new file mode 100644 index 00000000..5739a372 --- /dev/null +++ b/public-endpoints/models/vidu-q3-t2v.mdx @@ -0,0 +1,202 @@ +--- +title: "Vidu Q3 T2V" +sidebarTitle: "Vidu Q3 T2V" +description: "Generate high-quality videos from text descriptions with multiple resolutions and audio support." +--- + +Vidu Q3 Text-to-Video generates high-quality videos from text descriptions with support for multiple resolutions up to 1080p, various aspect ratios, and optional audio generation. It includes style options for general or anime aesthetics. + + + Test Vidu Q3 T2V in the Runpod Hub playground. 
+ + +| | | +|---|---| +| **Endpoint** | `https://api.runpod.ai/v2/vidu-q3-t2v/runsync` | +| **Pricing** | \$0.15 per second | +| **Type** | Video generation | + +## Request + +All parameters are passed within the `input` object in the request body. + + + Text description of the video scene and action. + + + + Visual aesthetic style. Options: `general`, `anime`. + + + + Output video resolution. Options: `540p`, `720p`, `1080p`. + + + + Video length in seconds (1-16). + + + + Output aspect ratio. Options: `16:9`, `9:16`, `4:3`, `3:4`, `1:1`. + + + + Motion intensity control. Options: `auto`, `small`, `medium`, `large`. + + + + Enable synchronized audio generation. + + + + Enable background music. + + + + Random seed for reproducibility. Set to -1 for random. + + + +```bash cURL +curl -X POST "https://api.runpod.ai/v2/vidu-q3-t2v/runsync" \ + -H "Authorization: Bearer $RUNPOD_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "input": { + "prompt": "A futuristic city at night with flying cars and neon lights", + "style": "general", + "resolution": "720p", + "duration": 5, + "aspect_ratio": "16:9", + "movement_amplitude": "auto", + "generate_audio": true, + "bgm": true, + "seed": -1 + } + }' +``` + +```python Python +import requests + +response = requests.post( + "https://api.runpod.ai/v2/vidu-q3-t2v/runsync", + headers={ + "Authorization": f"Bearer {RUNPOD_API_KEY}", + "Content-Type": "application/json", + }, + json={ + "input": { + "prompt": "A futuristic city at night with flying cars and neon lights", + "style": "general", + "resolution": "720p", + "duration": 5, + "aspect_ratio": "16:9", + "movement_amplitude": "auto", + "generate_audio": True, + "bgm": True, + "seed": -1, + } + }, +) + +result = response.json() +print(result["output"]["video_url"]) +``` + +```javascript JavaScript +const response = await fetch( + "https://api.runpod.ai/v2/vidu-q3-t2v/runsync", + { + method: "POST", + headers: { + Authorization: `Bearer ${RUNPOD_API_KEY}`, + 
"Content-Type": "application/json", + }, + body: JSON.stringify({ + input: { + prompt: "A futuristic city at night with flying cars and neon lights", + style: "general", + resolution: "720p", + duration: 5, + aspect_ratio: "16:9", + movement_amplitude: "auto", + generate_audio: true, + bgm: true, + seed: -1, + }, + }), + } +); + +const result = await response.json(); +console.log(result.output.video_url); +``` + + +## Response + + + Unique identifier for the request. + + + + Request status. Returns `COMPLETED` on success, `FAILED` on error. + + + + Time in milliseconds the request spent in queue before processing began. + + + + Time in milliseconds the model took to generate the video. + + + + The generation result containing the video URL and cost. + + + URL of the generated video. This URL expires after 7 days. + + + + Cost of the generation in USD. + + + + +```json 200 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "COMPLETED", + "delayTime": 28, + "executionTime": 52341, + "output": { + "video_url": "https://video.runpod.ai/abc123/output.mp4", + "cost": 0.75 + } +} +``` + +```json 400 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "FAILED", + "error": "Invalid prompt" +} +``` + + + +Video URLs expire after 7 days. Download and store generated videos immediately if you need to keep them. + + +## Cost calculation + +Vidu Q3 T2V charges \$0.15 per second of video generated. + +| Duration | Cost | +|----------|------| +| 5 seconds | \$0.75 | +| 10 seconds | \$1.50 | +| 16 seconds | \$2.40 | diff --git a/public-endpoints/models/wan-2-6-i2v.mdx b/public-endpoints/models/wan-2-6-i2v.mdx new file mode 100644 index 00000000..ab16c0f8 --- /dev/null +++ b/public-endpoints/models/wan-2-6-i2v.mdx @@ -0,0 +1,206 @@ +--- +title: "WAN 2.6 I2V" +sidebarTitle: "WAN 2.6 I2V" +description: "Image-to-video generation with audio support and resolutions up to 1080p." 
+--- + +WAN 2.6 Image-to-Video transforms static images into dynamic videos with support for audio integration, multiple resolutions up to 1080p, and durations up to 15 seconds. It features optional prompt expansion and multi-shot composition modes. + + + Test WAN 2.6 I2V in the Runpod Hub playground. + + +| | | +|---|---| +| **Endpoint** | `https://api.runpod.ai/v2/wan-2-6-i2v/runsync` | +| **Pricing** | \$0.10/s (720p), \$0.15/s (1080p) | +| **Type** | Video generation | + +## Request + +All parameters are passed within the `input` object in the request body. + + + Text description of the desired video motion and content. + + + + URL of the source image to animate. + + + + URL of an audio file to include in the video. + + + + Elements to exclude from the generated video. + + + + Video resolution. Options: `1280*720`, `1920*1080`. + + + + Video duration in seconds. Options: `5`, `10`, `15`. + + + + Shot composition mode. Options: `single`, `multi`. + + + + Seed for reproducible results. Set to -1 for random. + + + + Automatically expand and enhance the prompt. + + + + Enable content safety checking. 
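Since pricing scales with both resolution and duration, you can estimate spend before submitting a job from the published per-second rates. A minimal sketch (the helper is illustrative; actual billing is reported in the response's `output.cost` field):

```python
# Per-second rates from the Cost calculation section: $0.10/s at 720p, $0.15/s at 1080p.
RATES = {"1280*720": 0.10, "1920*1080": 0.15}

def estimate_cost(size, duration):
    """Estimate the USD cost of a WAN 2.6 I2V job before submitting it."""
    if size not in RATES:
        raise ValueError(f"unsupported size: {size}")
    if duration not in (5, 10, 15):
        raise ValueError("duration must be 5, 10, or 15 seconds")
    return round(RATES[size] * duration, 2)
```

For example, a 5-second 720p job estimates to \$0.50 and a 15-second 1080p job to \$2.25, matching the cost table below.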
+ + + +```bash cURL +curl -X POST "https://api.runpod.ai/v2/wan-2-6-i2v/runsync" \ + -H "Authorization: Bearer $RUNPOD_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "input": { + "prompt": "A person walking through a sunny park, leaves gently swaying", + "image": "https://example.com/person-park.jpg", + "size": "1280*720", + "duration": 5, + "shot_type": "single", + "seed": -1, + "enable_prompt_expansion": false + } + }' +``` + +```python Python +import requests + +response = requests.post( + "https://api.runpod.ai/v2/wan-2-6-i2v/runsync", + headers={ + "Authorization": f"Bearer {RUNPOD_API_KEY}", + "Content-Type": "application/json", + }, + json={ + "input": { + "prompt": "A person walking through a sunny park, leaves gently swaying", + "image": "https://example.com/person-park.jpg", + "size": "1280*720", + "duration": 5, + "shot_type": "single", + "seed": -1, + "enable_prompt_expansion": False, + } + }, +) + +result = response.json() +print(result["output"]["video_url"]) +``` + +```javascript JavaScript +const response = await fetch( + "https://api.runpod.ai/v2/wan-2-6-i2v/runsync", + { + method: "POST", + headers: { + Authorization: `Bearer ${RUNPOD_API_KEY}`, + "Content-Type": "application/json", + }, + body: JSON.stringify({ + input: { + prompt: "A person walking through a sunny park, leaves gently swaying", + image: "https://example.com/person-park.jpg", + size: "1280*720", + duration: 5, + shot_type: "single", + seed: -1, + enable_prompt_expansion: false, + }, + }), + } +); + +const result = await response.json(); +console.log(result.output.video_url); +``` + + +## Response + + + Unique identifier for the request. + + + + Request status. Returns `COMPLETED` on success, `FAILED` on error. + + + + Time in milliseconds the request spent in queue before processing began. + + + + Time in milliseconds the model took to generate the video. + + + + The generation result containing the video URL and cost. + + + URL of the generated video. 
This URL expires after 7 days. + + + + Cost of the generation in USD. + + + + +```json 200 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "COMPLETED", + "delayTime": 28, + "executionTime": 95432, + "output": { + "video_url": "https://video.runpod.ai/abc123/output.mp4", + "cost": 0.50 + } +} +``` + +```json 400 +{ + "id": "sync-a1b2c3d4-e5f6-7890-abcd-ef1234567890-u1", + "status": "FAILED", + "error": "Invalid image URL" +} +``` + + + +Video URLs expire after 7 days. Download and store generated videos immediately if you need to keep them. + + +## Cost calculation + +WAN 2.6 I2V pricing varies by resolution: + +| Resolution | Rate | +|------------|------| +| 720p (1280x720) | \$0.10 per second | +| 1080p (1920x1080) | \$0.15 per second | + +Example costs: + +| Resolution | 5 seconds | 10 seconds | 15 seconds | +|------------|-----------|------------|------------| +| 720p | \$0.50 | \$1.00 | \$1.50 | +| 1080p | \$0.75 | \$1.50 | \$2.25 | diff --git a/public-endpoints/reference.mdx b/public-endpoints/reference.mdx index e7811df6..9b894903 100644 --- a/public-endpoints/reference.mdx +++ b/public-endpoints/reference.mdx @@ -32,6 +32,7 @@ Generate and edit images with text prompts or reference images. | [Z-Image Turbo](/public-endpoints/models/z-image-turbo) | Fast 6B parameter image generation. | \$0.005/image | | [Nano Banana Edit](/public-endpoints/models/nano-banana-edit) | Google's model for combining multiple images. | \$0.038/image | | [Nano Banana Pro Edit](/public-endpoints/models/nano-banana-pro-edit) | Advanced multi-image editing with resolution options. | \$0.14–\$0.24/image | +| [Nano Banana 2 Edit](/public-endpoints/models/nano-banana-2-edit) | Google's latest multi-image editing with resolution options. | \$0.0875–\$0.175/image | ## Video models @@ -54,6 +55,10 @@ Create videos from images or text prompts. 
Pricing varies by resolution and duration.
| [WAN 2.2 T2V](/public-endpoints/models/wan-2-2-t2v) | Open-source text-to-video at 720p. | \$0.30/5s |
| [WAN 2.1 I2V](/public-endpoints/models/wan-2-1-i2v) | Image-to-video at 720p. | \$0.30/5s |
| [WAN 2.1 T2V](/public-endpoints/models/wan-2-1-t2v) | Text-to-video at 720p. | \$0.30/5s |
+| [WAN 2.6 I2V](/public-endpoints/models/wan-2-6-i2v) | Image-to-video with audio support up to 1080p. | \$0.10/s (720p), \$0.15/s (1080p) |
+| [Pruna Video](/public-endpoints/models/p-video) | Premium AI video from text, images, and audio. | \$0.02/s (720p), \$0.04/s (1080p) |
+| [Vidu Q3 I2V](/public-endpoints/models/vidu-q3-i2v) | Animate reference images with text-driven motion. | \$0.15/second |
+| [Vidu Q3 T2V](/public-endpoints/models/vidu-q3-t2v) | High-quality video from text with audio support. | \$0.15/second |

## Text models

@@ -61,8 +66,6 @@ Generate text with large language models.

| Model | Description | Price |
|-------|-------------|-------|
-| [Cogito 671B v2.1](/public-endpoints/models/cogito-671b) | 671B MoE model with FP8 dynamic quantization. | \$0.50/1M tokens |
-| [GPT-OSS 120B](/public-endpoints/models/gpt-oss-120b) | OpenAI's open-weight 120B parameter model. | \$10.00/1M tokens |
| [IBM Granite 4.0](/public-endpoints/models/granite-4) | 32B parameter long-context instruct model. | \$10.00/1M tokens |
| [Qwen3 32B AWQ](/public-endpoints/models/qwen3-32b) | Advanced reasoning and multilingual support. OpenAI-compatible. | \$10.00/1M tokens |