Improve model card with metadata and paper links

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +38 -7
README.md CHANGED
@@ -1,7 +1,38 @@
- ---
- license: apache-2.0
-
- ---
-
- UniFuture
-
+ ---
+ license: apache-2.0
+ pipeline_tag: image-to-video
+ tags:
+ - autonomous-driving
+ - world-model
+ - computer-vision
+ - 4D
+ ---
+
+ # UniFuture: A 4D Driving World Model for Future Generation and Perception
+
+ UniFuture is a unified 4D Driving World Model designed to simulate the dynamic evolution of the 3D physical world. Unlike existing driving world models that focus solely on 2D pixel-level video generation or static perception, UniFuture bridges appearance and geometry to construct a holistic 4D representation.
+
+ - **Paper:** [UniFuture: A 4D Driving World Model for Future Generation and Perception](https://arxiv.org/abs/2503.13587)
+ - **Project Page:** [https://dk-liang.github.io/UniFuture/](https://dk-liang.github.io/UniFuture/)
+ - **Repository:** [https://github.com/dk-liang/unifuture](https://github.com/dk-liang/unifuture)
+
+ ## Introduction
+
+ UniFuture treats future RGB images and depth maps as coupled projections of the same 4D reality and models them jointly within a single framework. To achieve this, it introduces two key components:
+ - **Dual-Latent Sharing (DLS):** A scheme that maps the visual and geometric modalities into a shared spatio-temporal latent space, implicitly entangling texture with structure.
+ - **Multi-scale Latent Interaction (MLI):** A mechanism that enforces bidirectional consistency, whereby geometry constrains visual synthesis to prevent structural hallucinations while visual semantics refine geometric estimation.
+
+ During inference, UniFuture can forecast high-fidelity, geometrically consistent 4D scene sequences (image-depth pairs) from a single current frame.
+
+ ## Citation
+
+ If you find this work useful in your research, please consider citing:
+
+ ```bibtex
+ @inproceedings{liang2026UniFuture,
+   title={UniFuture: A 4D Driving World Model for Future Generation and Perception},
+   author={Liang, Dingkang and Zhang, Dingyuan and Zhou, Xin and Tu, Sifan and Feng, Tianrui and Li, Xiaofan and Zhang, Yumeng and Du, Mingyang and Tan, Xiao and Bai, Xiang},
+   booktitle={IEEE International Conference on Robotics and Automation},
+   year={2026}
+ }
+ ```
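
The Dual-Latent Sharing and Multi-scale Latent Interaction ideas described in the new model card can be illustrated with a toy NumPy sketch. Everything below is a hypothetical simplification for intuition only: the shapes, random projection matrices, the `0.5` mixing weights, and the function names `encode`/`interact` are all invented here, and the real model operates on diffusion video latents at multiple scales rather than single-scale linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): spatial size and shared latent channels.
H, W, D = 8, 8, 16

def encode(x, proj):
    """Project a per-pixel modality into the shared latent space (DLS)."""
    return x @ proj  # (H, W, C) @ (C, D) -> (H, W, D)

def interact(z_img, z_depth):
    """Latent interaction reduced to one scale and one bidirectional
    exchange: each latent is refined by a weighted copy of the other."""
    z_img_ref = z_img + 0.5 * z_depth    # geometry constrains appearance
    z_depth_ref = z_depth + 0.5 * z_img  # semantics refine geometry
    return z_img_ref, z_depth_ref

# A random "current frame": RGB (3 channels) and depth (1 channel).
rgb = rng.standard_normal((H, W, 3))
depth = rng.standard_normal((H, W, 1))

# Dual-Latent Sharing: both modalities land in the same (H, W, D) space,
# so they can be mixed elementwise despite different input channel counts.
z_img = encode(rgb, rng.standard_normal((3, D)))
z_depth = encode(depth, rng.standard_normal((1, D)))
assert z_img.shape == z_depth.shape == (H, W, D)

z_img, z_depth = interact(z_img, z_depth)

# Decode back to a coupled image-depth pair for one future step.
pred_rgb = z_img @ rng.standard_normal((D, 3))
pred_depth = z_depth @ rng.standard_normal((D, 1))
print(pred_rgb.shape, pred_depth.shape)  # (8, 8, 3) (8, 8, 1)
```

The point of the sketch is only the data flow: two modalities with different channel counts meet in one latent space, exchange information in both directions, and decode back to a paired prediction.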