Enhance OpenDWM dataset card: Add metadata, links, and usage example for CVD-STORM

#3
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +60 -5
README.md CHANGED
@@ -1,5 +1,60 @@
- ---
- license: apache-2.0
- ---
-
- Data packages used by the OpenDWM project.
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-to-video
+ - text-to-3d
+ language: en
+ tags:
+ - autonomous-driving
+ - world-model
+ ---
+
+ # OpenDWM Dataset
+
+ This repository contains data packages used by the [Open Driving World Models (OpenDWM)](https://github.com/SenseTime-FVG/OpenDWM) project, an open-source initiative focused on autonomous driving video generation.
+
+ The data packages support the development of generative models for world modeling, environment simulation, and future-state prediction in autonomous driving. They are used in research such as [CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving](https://huggingface.co/papers/2510.07944), which proposes a cross-view video diffusion model that leverages a spatial-temporal reconstruction VAE to generate long-term, multi-view videos with 4D reconstruction capabilities under various control inputs.
+
+ - **Paper**: [CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving](https://huggingface.co/papers/2510.07944)
+ - **Project Page**: https://sensetime-fvg.github.io/CVD-STORM/
+ - **Code**: https://github.com/SenseTime-FVG/OpenDWM
+
+ ## Sample Usage
+
+ To get started with OpenDWM and use these data packages for generation, follow the setup instructions below, then run the provided example for video generation.
+
+ ### Setup
+
+ Hardware requirements:
+
+ * Training and testing multi-view image generation, or short-video generation (<= 6 frames per iteration), requires 32 GB of GPU memory (e.g. V100)
+ * Training and testing multi-view long-video generation (6 ~ 40 frames per iteration) requires 80 GB of GPU memory (e.g. A100, H100)
+
+ Software requirements:
+
+ * git (>= 2.25)
+ * python (>= 3.9)
+
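The prerequisite checks above can be scripted. Here is a hedged sketch using only the standard library; the `git_version` helper is not part of OpenDWM, and it assumes `git --version` prints its usual `git version X.Y.Z` format:

```python
# Sketch: check the software prerequisites (git >= 2.25, python >= 3.9).
import shutil
import subprocess
import sys

def python_ok(minimum=(3, 9)):
    """True if the running interpreter meets the minimum (major, minor) version."""
    return sys.version_info[:2] >= minimum

def git_version():
    """Return git's (major, minor) version, or None if git is not on PATH."""
    if shutil.which("git") is None:
        return None
    out = subprocess.run(["git", "--version"], capture_output=True, text=True).stdout
    # Typical output: "git version 2.39.2"
    major, minor = out.split()[2].split(".")[:2]
    return (int(major), int(minor))

if not python_ok():
    print("Python >= 3.9 required, found", sys.version.split()[0])
gv = git_version()
if gv is None or gv < (2, 25):
    print("git >= 2.25 required, found", gv)
```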
+ Install [PyTorch](https://pytorch.org/) >= 2.5:
+
+ ```bash
+ python -m pip install torch==2.5.1 torchvision==0.20.1
+ ```
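After installing, it is worth confirming that the version requirement is met. A minimal sketch — the `version_at_least` helper is not part of OpenDWM, and it deliberately ignores local build suffixes such as `+cu121`:

```python
# Sketch: compare dotted version strings numerically, e.g. for torch.__version__.
def version_at_least(version, minimum):
    """True if `version` >= `minimum`, comparing (major, minor) numerically."""
    parse = lambda v: tuple(int(p) for p in v.split("+")[0].split(".")[:2])
    return parse(version) >= parse(minimum)

# After `import torch`, check: version_at_least(torch.__version__, "2.5")
print(version_at_least("2.5.1", "2.5"))        # True
print(version_at_least("2.4.0+cu121", "2.5"))  # False
```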
+
+ Clone the repository, then install the dependencies:
+
+ ```bash
+ git clone https://github.com/SenseTime-FVG/OpenDWM.git
+ cd OpenDWM
+ git submodule update --init --recursive
+ python -m pip install -r requirements.txt
+ ```
+
+ ### T2I, T2V generation with CTSD pipeline
+
+ Download the base model (for the VAE, text encoders, and scheduler config) and the driving generation model checkpoint, edit the [path](https://github.com/SenseTime-FVG/OpenDWM/blob/main/examples/ctsd_35_6views_image_generation.json#L102) and [prompts](https://github.com/SenseTime-FVG/OpenDWM/blob/main/examples/ctsd_35_6views_image_generation.json#L221) in the JSON config, then run:
+
+ ```bash
+ PYTHONPATH=src python examples/ctsd_generation_example.py -c examples/ctsd_35_6views_image_generation.json -o output/ctsd_35_6views_image_generation
+ ```
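Rather than editing the JSON by hand, the config can be patched from a script. A hedged sketch with a generic dotted-key helper — both the helper and the example key are illustrative, not part of OpenDWM; check the linked lines for the config's real structure:

```python
# Sketch: load a JSON config, override a few values, write a patched copy.
import json

def patch_config(src_path, dst_path, updates):
    """Apply {dotted.key: value} overrides to a JSON file."""
    with open(src_path) as f:
        cfg = json.load(f)
    for dotted_key, value in updates.items():
        node = cfg
        *parents, leaf = dotted_key.split(".")
        for key in parents:
            node = node[key]   # descend into nested dicts
        node[leaf] = value
    with open(dst_path, "w") as f:
        json.dump(cfg, f, indent=4)

# Example (hypothetical key -- adapt to the actual config layout):
# patch_config("examples/ctsd_35_6views_image_generation.json",
#              "my_config.json",
#              {"model.checkpoint_path": "/data/ckpt.pth"})
```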
+
+ For more advanced examples, including layout-conditioned T2V and LiDAR generation, see the [examples section of the OpenDWM GitHub repository](https://github.com/SenseTime-FVG/OpenDWM#examples).