LTX-2
Lightricks
The predecessor to LTX 2.3: 19B parameters, native 4K with unified audio, and 30 cinematic camera moves. Still excellent and widely deployed.
Wan 2.2
Alibaba (Tongyi Lab)
MoE architecture with 27B total parameters but only 14B active per step. Trained on 65.6% more images and 83.2% more video than Wan 2.1. Outperforms leading closed-source models on Wan-Bench 2.0.
Pick LTX-2 if…
You want production video pipelines, camera-controlled generation, or depth/pose-driven workflows.
Pick Wan 2.2 if…
You want cinematic style control, speech-to-video, or consumer GPU deployment (TI2V-5B).
Strengths & Trade-offs
LTX-2
Strengths
- +19B params (14B video + 5B audio)
- +native 4K at 50fps
- +first open model with unified audio-video
- +30 cinematic camera moves
- +depth-aware generation
Trade-offs
- -Superseded by 2.3 on detail and audio quality
- -LoRAs not compatible with 2.3
- -texture drift every 8-10 frames
- -in-scene text issues
Best For
- →Production video pipelines
- →camera-controlled generation
- →depth/pose-driven workflows
- →budget 4K content
Wan 2.2
Strengths
- +First MoE in video diffusion
- +27B total but only 14B active per step
- +high-noise expert for layout + low-noise for detail
- ++65.6% more images and +83.2% more video training data vs 2.1
- +cinematic aesthetic control (lighting, composition, contrast, color tone)
Trade-offs
- -720p cap
- -MoE needs careful threshold tuning (SNR-based)
- -no native audio in base model (S2V is separate)
- -newer ecosystem than 2.1
Best For
- →Self-hosted production
- →cinematic style control
- →speech-to-video
- →consumer GPU deployment (TI2V-5B)
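Wan 2.2's MoE design activates only one of its two 14B experts per denoising step, switching from the high-noise expert (global layout) to the low-noise expert (fine detail) once the signal-to-noise ratio crosses a threshold. A minimal sketch of that routing idea, assuming a simple linear noise schedule; all function names and the threshold value here are illustrative assumptions, not Wan 2.2's actual code:

```python
# Illustrative sketch of SNR-based expert routing in a two-expert
# video-diffusion MoE. Names and threshold are assumptions.

def snr(t: float) -> float:
    """Signal-to-noise ratio under a simple linear schedule, t in (0, 1)."""
    alpha = 1.0 - t  # signal fraction at timestep t
    sigma = t        # noise fraction at timestep t
    return (alpha / sigma) ** 2

def pick_expert(t: float, snr_threshold: float = 1.0) -> str:
    # Early (noisy) steps have low SNR -> layout expert;
    # late (clean) steps have high SNR -> detail expert.
    return "high_noise_expert" if snr(t) < snr_threshold else "low_noise_expert"

print(pick_expert(0.9))  # high_noise_expert (early step, composition)
print(pick_expert(0.1))  # low_noise_expert (late step, texture)
```

Because only one expert runs at any step, the per-step compute matches a 14B dense model even though 27B parameters exist in total; the trade-off noted above is that the switchover threshold must be tuned carefully.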
Run these models on Floyo
Browser-based ComfyUI. No setup, no GPU required.
LTX 2 19B Fast Text to Video
Wan 2.2 Animate Preprocess (Kijai)
Wan 2.2 + Qwen V2V Restyle
Wan 2.2 T2V with UnifiedRew
Wan 2.2 Animate Character Replacement