Improve dataset card: Add paper, project page, code links, task categories, and sample usage

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +58 -5
README.md CHANGED
@@ -1,5 +1,58 @@
- ---
- license: apache-2.0
- ---
-
- Data packages used by the OpenDWM project.
+ ---
+ license: apache-2.0
+ task_categories:
+ - text-to-3d
+ - text-to-video
+ tags:
+ - autonomous-driving
+ - video-generation
+ - 4d-reconstruction
+ - world-model
+ - lidar
+ ---
+
+ # OpenDWM Data Packages
+
+ This repository contains data packages used by the [Open Driving World Models (OpenDWM)](https://github.com/SenseTime-FVG/OpenDWM) project. OpenDWM focuses on autonomous driving video generation: creating multi-view images or videos of driving scenes conditioned on text and road environment layout.
+
+ The paper [CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving](https://huggingface.co/papers/2510.07944), part of the OpenDWM effort, proposes a cross-view video diffusion model with a spatial-temporal reconstruction Variational Autoencoder (VAE) that generates long-term, multi-view videos with 4D reconstruction capabilities under various control inputs. These data packages support the training of such models, helping them encode 3D structure and temporal dynamics for comprehensive scene understanding.
+ ## Links
+ - **Paper:** [CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving](https://huggingface.co/papers/2510.07944)
+ - **Project Page:** [CVD-STORM](https://sensetime-fvg.github.io/CVD-STORM/)
+ - **Code:** [GitHub - SenseTime-FVG/OpenDWM](https://github.com/SenseTime-FVG/OpenDWM)
+
+ ## Dataset Description
+
+ The data packages support autonomous driving tasks explored within the OpenDWM framework, including high-fidelity video generation and 4D scene reconstruction. They are used to train and evaluate generative models that produce diverse, meaningful outputs such as depth estimates and LiDAR data.
+
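The packages in this repository can also be fetched programmatically with `huggingface_hub`. A minimal sketch (the filename shown is one of the layout resource packages linked in the usage steps below):

```python
from huggingface_hub import hf_hub_url, hf_hub_download

REPO_ID = "wzhgba/opendwm-data"
FILENAME = "nuscenes_scene-0627_package.zip"

# Direct download URL for the package in this dataset repository.
url = hf_hub_url(repo_id=REPO_ID, filename=FILENAME, repo_type="dataset")
print(url)

# Uncomment to download the zip into the local Hugging Face cache:
# local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME, repo_type="dataset")
```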
+ ## Sample Usage
+
+ The OpenDWM project provides examples for generating videos and LiDAR data conditioned on text and layout information, using models trained with these data packages.
+
+ ### Setup
+ First, clone the repository and install the required Python packages:
+ ```bash
+ git clone https://github.com/SenseTime-FVG/OpenDWM.git
+ cd OpenDWM
+ git submodule update --init --recursive
+ python -m pip install -r requirements.txt
+ ```
+
+ ### Layout-conditioned T2V generation with the CTSD pipeline
+
+ 1. Download the base model (for the VAE, text encoders, and scheduler config) and the driving generation model checkpoint, then edit the [path](examples/ctsd_35_6views_video_generation_with_layout.json#L156) in the JSON config.
+ 2. Download a layout resource package ([`nuscenes_scene-0627_package.zip`](https://huggingface.co/datasets/wzhgba/opendwm-data/resolve/main/nuscenes_scene-0627_package.zip?download=true) or [`carla_town04_package.zip`](https://huggingface.co/datasets/wzhgba/opendwm-data/resolve/main/carla_town04_package.zip?download=true)) and unzip it to `{RESOURCE_PATH}`. Then set the meta [path](examples/ctsd_35_6views_video_generation_with_layout.json#L162) in the JSON config to `{RESOURCE_PATH}/data.json`.
+ 3. Run this command to generate the video:
+ ```bash
+ PYTHONPATH=src python src/dwm/preview.py -c examples/ctsd_35_6views_video_generation_with_layout.json -o output/ctsd_35_6views_video_generation_with_layout
+ ```
+
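The path edits in steps 1 and 2 can be scripted. A hedged sketch: the nested structure below is illustrative only (the real keys live in `examples/ctsd_35_6views_video_generation_with_layout.json`), but the placeholder substitution works on any loaded config:

```python
import json

# Illustrative fragment -- NOT the real schema of
# examples/ctsd_35_6views_video_generation_with_layout.json.
config = {"validation_dataset": {"meta_path": "{RESOURCE_PATH}/data.json"}}

def replace_placeholder(node, old, new):
    """Recursively substitute a placeholder in every string value of a config."""
    if isinstance(node, dict):
        return {k: replace_placeholder(v, old, new) for k, v in node.items()}
    if isinstance(node, list):
        return [replace_placeholder(v, old, new) for v in node]
    if isinstance(node, str):
        return node.replace(old, new)
    return node

config = replace_placeholder(config, "{RESOURCE_PATH}", "/data/opendwm/resources")
print(json.dumps(config))
# -> {"validation_dataset": {"meta_path": "/data/opendwm/resources/data.json"}}
```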
+ ### Layout-conditioned LiDAR generation with the Diffusion pipeline
+
+ 1. Download the LiDAR VAE and LiDAR Diffusion generation model checkpoints.
+ 2. Prepare the dataset ([`nuscenes_scene-0627_lidar_package.zip`](https://huggingface.co/datasets/wzhgba/opendwm-data/resolve/main/nuscenes_scene-0627_lidar_package.zip?download=true)).
+ 3. In `examples/lidar_diffusion_temporal_preview.json`, set the values of `json_file`, `autoencoder_ckpt_path`, and `diffusion_model_ckpt_path` to the paths of your dataset and checkpoints.
+ 4. Run the following command to generate LiDAR data autoregressively from the reference frame:
+ ```bash
+ PYTHONPATH=src python3 -m torch.distributed.run --nnodes 1 --nproc-per-node 2 --node-rank 0 --master-addr 127.0.0.1 --master-port 29000 src/dwm/preview.py -c examples/lidar_diffusion_temporal_preview.json -o output/temporal_diffusion
+ ```
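Step 3's edits can likewise be scripted. The three key names come from the step above; their flat placement and the local paths here are assumptions for brevity (check the actual nesting in `examples/lidar_diffusion_temporal_preview.json`):

```python
import json

# Keys named in step 3, shown flat for brevity; the values below are
# hypothetical local paths to the unzipped dataset and checkpoints.
config = {
    "json_file": None,
    "autoencoder_ckpt_path": None,
    "diffusion_model_ckpt_path": None,
}
config["json_file"] = "/data/opendwm/nuscenes_scene-0627_lidar/data.json"
config["autoencoder_ckpt_path"] = "/checkpoints/lidar_vae.pth"
config["diffusion_model_ckpt_path"] = "/checkpoints/lidar_diffusion.pth"

print(json.dumps(config, indent=2))
```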