alibaba / wan-2.7
text to video Alibaba WAN 2.7 Text-to-Video turns plain prompts into coherent, cinematic clips with crisp detail, stable motion, and strong instruction-following—great for ads, explainers, and social posts. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
150 credits Run →
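Each entry in this list is exposed through the same ready-to-use REST inference API. As a minimal sketch of submitting a text-to-video job to a model like WAN 2.7 (the endpoint URL, field names, and response shape below are assumptions for illustration, not the provider's documented schema):

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- check the provider's API
# reference for the real URL, field names, and auth scheme before using.
API_URL = "https://api.example.com/v1/alibaba/wan-2.7/text-to-video"

def build_request(prompt: str, duration_s: int = 5) -> dict:
    """Assemble the JSON body for a text-to-video job (field names assumed)."""
    return {"prompt": prompt, "duration": duration_s}

def submit_job(api_key: str, prompt: str) -> str:
    """POST the job and return its id; generation is asynchronous, so the
    finished clip is fetched later from a status/result endpoint."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

The same pattern applies to the other endpoints below; only the URL path and payload fields change per model.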
google / nano-banana-pro
edit Google Nano Banana Pro (Gemini 3.0 Pro Image) Edit enables image editing with 4K-capable output. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
43 credits Run →
google / nano-banana-pro
text to image Google's Nano Banana Pro (Gemini 3.0 Pro Image) is a cutting-edge text-to-image model enabling high-res 4K image generation optimized for phones. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
43 credits Run →
pixverse / pixverse-c1
text to video PixVerse C1 generates film-grade videos from text prompts with flexible duration (1-15s), multiple resolutions up to 1080p, and optional native audio generation. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
30 credits Run →
openai / gpt-image-2
text to image OpenAI's GPT Image 2 Text-to-Image generates high-quality images from natural-language prompts. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
18 credits Run →
wavespeed-ai / wan-2.2-i2v-lora-trainer
wan 2.2 i2v lora trainer Train custom Wan 2.2 I2V LoRA models 10x faster. Action training, motion training, video effect training. From concept to model in minutes, not hours. Upload a ZIP file containing videos to start!
1500 credits Run →
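The LoRA trainers in this list (this I2V trainer and the image trainers further down) all take a single ZIP upload as the dataset. A small helper for packing clips, assuming the trainer simply wants the media files at the archive root:

```python
import io
import pathlib
import zipfile

def pack_training_clips(clip_dir: str) -> bytes:
    """Zip every .mp4 under clip_dir into an in-memory archive; the
    trainer expects one ZIP of clips, per the listing above. The flat
    archive layout is an assumption -- check the trainer's docs."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for clip in sorted(pathlib.Path(clip_dir).glob("*.mp4")):
            zf.write(clip, arcname=clip.name)  # files at archive root
    return buf.getvalue()
```

For the image trainers, the same helper works with `*.png`/`*.jpg` globs instead of `*.mp4`.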
google / veo3.1
image to video Google Veo 3.1 is an Image-to-Video model that converts images into high-quality videos with native 1080p output for enhanced detail and creative flexibility. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
960 credits Run →
google / veo3.1
text to video Google Veo 3.1 converts text prompts into videos with synchronized audio at native 1080p for high-quality outputs. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
960 credits Run →
google / veo3.1
reference to video Google Veo 3.1 Reference-to-Video performs image-to-video generation that preserves a specific subject's appearance and identity from provided reference images, enabling consistent character or product motion across frames. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
960 credits Run →
google / veo3
text to video Google Veo 3 is Google's flagship text-to-video model with built-in audio, producing synchronized video and sound from text prompts. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
960 credits Run →
google / veo3
image to video Google Veo 3 is Google's flagship image-to-video model that creates audio-enabled videos from images. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
960 credits Run →
wavespeed-ai / wan-2.2-image-lora-trainer
wan 2.2 image lora trainer Train custom Wan 2.2 character/style LoRA models 10x faster. Style training, character training, object training. From concept to model in minutes, not hours. Upload a ZIP file containing images to start!
900 credits Run →
google / veo3.1
video extend Extend and continue Veo 3.1 videos with smooth motion, preserved style, and strong scene coherence. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
840 credits Run →
google / veo2
image to video Google Veo 2 creates high-quality image-to-video outputs with realistic motion and extensive camera controls for customizable styles. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
750 credits Run →
video-effects / love-story
love story Turn couple photos into an AI-generated love story video with romantic scenes, smooth transitions, and cinematic flair. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
720 credits Run →
google / veo2
image to video Google Veo 2 Image-to-Video creates high-quality videos with realistic motion, varied styles, and precise camera controls for cinematic results. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
661 credits Run →
kwaivgi / kling-v3.0-4k
image to video Kling 3.0 Pro delivers top-tier image-to-video generation with smooth motion, cinematic visuals, accurate prompt adherence, and native audio for ready-to-share clips.
630 credits Run →
kwaivgi / kling-video-o3-4k
image to video Kling Omni Video O3 Image-to-Video transforms static images into dynamic cinematic videos using MVL (Multi-modal Visual Language) technology. Maintains subject consistency while adding natural motion, physics simulation, and seamless scene dynamics. Supports audio generation. Ready-to-use REST API, best performance, no cold starts, affordable pricing.
630 credits Run →
kwaivgi / kling-video-o3-4k
reference to video Kling Omni Video O3 Reference-to-Video generates creative videos using character, prop, or scene references from multiple viewpoints. Extracts subject features and creates new video content while maintaining identity consistency across frames. Supports audio generation. Ready-to-use REST API, best performance, no cold starts, affordable pricing.
630 credits Run →
kwaivgi / kling-video-o3-4k
text to video Kling Omni Video O3 is Kuaishou's advanced unified multi-modal video model with MVL (Multi-modal Visual Language) technology. Text-to-Video mode generates cinematic videos from text prompts with subject consistency, natural physics simulation, and precise semantic understanding. Supports audio generation. Ready-to-use REST API, best performance, no cold starts, affordable pricing.
630 credits Run →
kwaivgi / kling-v3.0-4k
text to video Kling 3.0 Pro delivers top-tier text-to-video generation with smooth motion, cinematic visuals, accurate prompt adherence, and native audio for ready-to-share clips.
630 credits Run →
wavespeed-ai / wan-2.1-14b-lora-trainer
wan 2.1 14b lora trainer Train custom Wan 2.1 LoRA models 10x faster. Style training, character training, object training. From concept to model in minutes, not hours. Upload a ZIP file containing images to start!
450 credits Run →
kwaivgi / kling-v2.1-i2v-master
kling v2.1 i2v master Kling 2.1 Master is a premium image-to-video endpoint delivering fluid motion, cinematic visuals, and precise prompt-driven control. Ready-to-use REST API, best performance, no cold starts, affordable pricing.
390 credits Run →
kwaivgi / kling-v2.1-t2v-master
kling v2.1 t2v master Kling 2.1 Master creates cinematic 5-10s videos at 720p or 1080p from a single image or text prompt with improved motion fidelity and coherence. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
390 credits Run →
kwaivgi / kling-v2.0-i2v-master
kling v2.0 i2v master Kling 2.0 Master elevates image-to-video with improved prompts, richer character motion, better visuals, and a Multi-Elements Editor. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
390 credits Run →
kwaivgi / kling-v2.0-t2v-master
kling v2.0 t2v master Kling 2.0 Master is a Text-to-Video model featuring a Multi-Elements Editor, improved prompt understanding, and refined character motion. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
390 credits Run →
wavespeed-ai / flux-dev-lora-trainer-turbo
flux dev lora trainer turbo Flux-dev LoRA Trainer Turbo – accelerate LoRA training for FLUX with optimized pipelines, shorter epochs and rapid experiment cycles for production-ready style models.
375 credits Run →
wavespeed-ai / z-image-lora-trainer
z image lora trainer Z-Image-LoRA-Trainer – train custom image LoRA models from your own dataset, with zip uploads, auto-tuned defaults and fast iteration for brand, character or IP looks. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
375 credits Run →
wavespeed-ai / z-image
base lora trainer Z-Image Base LoRA Trainer – train custom image LoRA models from your own dataset, with zip uploads, auto-tuned defaults and fast iteration for brand, character or IP looks. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
375 credits Run →
google / veo3.1-fast
image to video Google Veo 3.1 Fast is an Image-to-Video model with native 1080p output for high-detail videos from images and fast performance. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
google / veo3.1-fast
text to video Google Veo 3.1 Fast creates text-to-video with native 1080p and synchronized audio, delivering high-quality videos for creators. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
google / veo3-fast
veo3 fast Google Veo 3 Fast creates text-to-video with synchronized audio, delivering faster, more cost-effective results than standard Veo 3; commercial use allowed and pricing starts at $0.25/second. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
google / veo3-fast
image to video Google Veo 3 Fast provides faster, more cost-effective Image-to-Video generation vs Veo 3, with commercial use allowed and $0.25/sec pricing. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
openai / sora-2
image to video pro OpenAI Sora 2 Image-to-Video Pro creates physics-aware, realistic videos with synchronized audio and greater steerability. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
openai / sora-2
text to video pro OpenAI Sora 2 Text-to-Video Pro creates high-fidelity videos with synchronized audio, realistic physics, and enhanced steerability. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
openai / sora-2-pro
text to video OpenAI Sora 2 Text-to-Video Pro creates high-fidelity videos with synchronized audio, realistic physics, and enhanced steerability. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
openai / sora-2-pro
image to video OpenAI Sora 2 Image-to-Video Pro creates physics-aware, realistic videos with synchronized audio and greater steerability. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
360 credits Run →
google / veo3.1-fast
video extend Extend Veo 3.1 videos in 7-second steps with the Fast endpoint: quick, coherent continuation that preserves style and motion, output as a single merged clip. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
315 credits Run →
wavespeed-ai / patina
material extract PATINA Material Extract turns any photograph or reference image into a complete, seamlessly tiling PBR material set (basecolor, normal, roughness, metalness, height), guided by a text prompt. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
306 credits Run →
wavespeed-ai / qwen-image-lora-trainer
qwen image lora trainer Train custom Qwen-Image LoRA models 10x faster. Style training, character training, object training. From concept to model in minutes, not hours. Upload a ZIP file containing images to start!
300 credits Run →
wavespeed-ai / qwen-image-2512-lora-trainer
qwen image 2512 lora trainer Train custom Qwen-Image-2512 LoRA models 10x faster. Style training, character training, object training. From concept to model in minutes, not hours. Upload a ZIP file containing images to start!
300 credits Run →
wavespeed-ai / flux-dev-lora-trainer
flux dev lora trainer Flux-dev LoRA Trainer – train custom FLUX styles from your own dataset, with zip uploads, auto-tuned defaults and fast iteration for brand, character or IP looks.
300 credits Run →
bytedance / seedance-2.0
video edit turbo Seedance 2.0 (Video-Edit Turbo) is the turbo tier for editing an input video from a natural-language prompt — faster, more affordable high-resolution output while preserving subject identity, composition, and motion.
285 credits Run →
runwayml / gen4-aleph
gen4 aleph RunwayML Gen4 Aleph is a Video-to-Video model for editing, transforming, and generating video at $0.18 per second. Ready-to-use REST inference API, best performance, no coldstarts, affordable pricing.
270 credits Run →
bytedance / seedance-2.0-fast
video edit turbo Seedance 2.0 Fast (Video-Edit Turbo) is the fastest, cheapest turbo tier for editing an input video from a natural-language prompt — high-resolution output with optimized cost and speed.
255 credits Run →
kwaivgi / kling-v3.0-pro
motion control Kling 3.0 Pro Motion Control delivers top-tier video-to-video generation with smooth motion, cinematic visuals, accurate prompt adherence, and native audio for ready-to-share clips. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
252 credits Run →
kwaivgi / kling-video-o3-pro
video edit Kling Omni Video O3 Video-Edit enables conversational video editing through natural language commands. Remove objects, change backgrounds, modify styles, adjust weather/lighting, and transform scenes with simple text instructions like 'remove pedestrians' or 'change daytime to dusk'. Ready-to-use REST API, best performance, no cold starts, affordable pricing.
252 credits Run →
kwaivgi / kling-video-o1
video edit Kling Omni Video O1 Video-Edit enables conversational video editing through natural language commands. Remove objects, change backgrounds, modify styles, adjust weather/lighting, and transform scenes with simple text instructions like 'remove pedestrians' or 'change daytime to dusk'. Ready-to-use REST API, best performance, no cold starts, affordable pricing.
252 credits Run →
Realistic lipsync with refined human emotion capabilities
251 credits Run →
wavespeed-ai / tiktok-video-generator
tiktok video generator WaveSpeed TikTok Video Generator creates viral-ready videos from text prompts and optional reference images with native audio, dynamic transitions, and scroll-stopping motion. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
wavespeed-ai / ugc-video-generator
ugc video generator WaveSpeed UGC Video Generator creates authentic, creator-style videos from text prompts and optional reference images with native audio, natural motion, and relatable aesthetics. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
wavespeed-ai / short-video-generator
short video generator WaveSpeed Short Video Generator creates professional short-form videos from text prompts and optional reference images with native audio, smooth motion, and versatile aspect ratios. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
wavespeed-ai / cinematic-video-generator
cinematic video generator WaveSpeed Cinematic Video Generator creates Hollywood-grade videos from text prompts and optional reference images with native audio, director-level camera control, and real-world physics. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
alibaba / wan-2.2
t2v plus 1080p Alibaba WAN 2.2 t2v-plus-1080p converts text prompts into high-quality 1080p videos for unlimited AI video creation. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
alibaba / wan-2.2
i2v plus 1080p Alibaba WAN 2.2 i2v-plus-1080p turns images into polished 1080p videos for content creation and prototyping. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
elevenlabs / dubbing
dubbing ElevenLabs Dubbing automatically translates and dubs video/audio content into different languages while preserving the original speakers' voices. Ready-to-use REST inference API, best performance, no cold starts, affordable pricing.
240 credits Run →
wavespeed-ai / meshy6
image to 3d Meshy 6 Preview converts 2D images into high-quality 3D models with accurate geometry reconstruction and superior texture quality.
240 credits Run →
wavespeed-ai / meshy6
text to 3d Meshy 6 generates high-quality 3D models from text descriptions with accurate geometry and superior texture quality.
240 credits Run →
bytedance / seedance-2.0
video edit Seedance 2.0 (Video-Edit) edits an input video from a natural-language prompt. The reference video drives subject identity, composition, and motion while the model rewrites lighting, style, weather, environment, or specific elements as instructed. Built on ByteDance Seed's unified multimodal architecture for cinematic, motion-stable output.
225 credits Run →
Showing first 60 of 916 — narrow your search to find more.