
WorldWarp: Propagating 3D Geometry with Asynchronous Video Diffusion

7.20 · 2512.19678 · 2025-12-22

Authors

Hanyang Kong; Xingyi Yang; Xiaoxu Zheng; Xinchao Wang

Scores

Novelty: 7.7
Technical: 7.3
Transferability: 6.0
Momentum: 8.0
Evidence: 7.0
Breakthrough: 6.7

Rationale

WorldWarp introduces a novel approach to long-range, geometrically consistent video generation by combining 3D geometric anchoring with a 2D generative refiner, addressing the disconnect between 3D geometry and latent-space operations. The spatio-temporally varying noise schedule in the diffusion model is an innovative way to handle occlusions while maintaining consistency. The approach is technically significant for improving video generation fidelity, though its applicability across domains may be limited to video and similar spatio-temporal data. The paper aligns well with current research trends in video generation and diffusion models. The empirical evidence is solid, with indications of state-of-the-art fidelity, though the longer-term impact remains to be seen.
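
To make the spatio-temporally varying noise schedule concrete, here is a minimal sketch of how such a schedule might work, assuming it assigns per-pixel, per-frame noise levels driven by an occlusion/visibility mask so that geometry-anchored regions are largely preserved while unseen regions are regenerated. The function name, tensor shapes, and the simple interpolation and blending used here are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def spatiotemporal_noise_schedule(latents, occlusion_mask, t_min=0.1, t_max=1.0):
    """
    Illustrative sketch of a spatio-temporally varying noise schedule.

    latents:        (B, T, C, H, W) video latents warped from 3D geometry
    occlusion_mask: (B, T, 1, H, W) in [0, 1]; 1 = occluded/unseen region
    t_min, t_max:   noise levels for fully visible vs. fully occluded regions
    """
    # Per-location noise level: visible (mask=0) -> t_min, occluded (mask=1) -> t_max
    t_map = t_min + (t_max - t_min) * occlusion_mask  # (B, T, 1, H, W)

    noise = torch.randn_like(latents)
    # Variance-preserving-style blend as a stand-in for the true forward process:
    # geometry-anchored pixels keep most of their signal, occluded pixels are
    # dominated by noise and thus regenerated by the diffusion refiner.
    noisy_latents = torch.sqrt(1.0 - t_map) * latents + torch.sqrt(t_map) * noise
    return noisy_latents, t_map
```

Under these assumptions, the refiner would then denoise each region from its own effective timestep, which is one plausible way an asynchronous schedule could keep warped geometry consistent while inpainting disocclusions.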