---
license: apache-2.0
task_categories:
  - text-to-video
  - text-to-3d
tags:
  - autonomous-driving
  - video-generation
  - 3d-reconstruction
  - diffusion-models
  - lidar
---

# OpenDWM Data Packages

This repository contains data packages used by the Open Driving World Models (OpenDWM) project. The OpenDWM initiative focuses on autonomous driving video generation, offering a high-quality, controllable tool for generating multi-view videos and 4D reconstructions.

The data packages support research presented in papers such as CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving. This work proposes CVD-STORM, a cross-view video diffusion model that leverages a spatial-temporal reconstruction Variational Autoencoder (VAE) to generate long-term, multi-view videos with 4D reconstruction capabilities under various control inputs. Such capabilities are central to world modeling for autonomous driving, where generative models simulate the environment and predict future states.

- Paper: CVD-STORM: Cross-View Video Diffusion with Spatial-Temporal Reconstruction Model for Autonomous Driving
- Project page: https://sensetime-fvg.github.io/CVD-STORM/
- Code: https://github.com/SenseTime-FVG/OpenDWM

## Data Packages

These data packages are essential resources for training and running examples within the OpenDWM project, particularly for layout-conditioned video and LiDAR generation tasks. Packages referenced in the OpenDWM GitHub repository and used in the examples below include `nuscenes_scene-0627_package.zip`, `nuscenes_scene-0627_lidar_package.zip`, and `carla_town04_package`.

## Sample Usage

This section provides instructions to set up the OpenDWM environment and run examples for generating driving videos and LiDAR data using the models and code from the OpenDWM GitHub repository.

### 1. Setup

First, ensure you have PyTorch (>= 2.5) installed. Then, clone the OpenDWM repository and install its dependencies:

```shell
python -m pip install torch==2.5.1 torchvision==0.20.1  # or a compatible PyTorch version
git clone https://github.com/SenseTime-FVG/OpenDWM.git
cd OpenDWM
git submodule update --init --recursive
python -m pip install -r requirements.txt
```
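Before running any examples, it can help to confirm the installed PyTorch satisfies the version requirement above. A minimal sanity-check sketch (the `meets_requirement` helper is illustrative, not part of OpenDWM):

```python
# Optional sanity check (not part of OpenDWM): verify that a PyTorch
# version string satisfies the >= 2.5 requirement.
def meets_requirement(version: str, minimum=(2, 5)) -> bool:
    """Compare a 'major.minor[.patch][+local]' version string to a minimum."""
    parts = tuple(int(p) for p in version.split("+")[0].split(".")[:2])
    return parts >= minimum

if __name__ == "__main__":
    try:
        import torch
        status = "OK" if meets_requirement(torch.__version__) else "too old"
        print(torch.__version__, status)
    except ImportError:
        print("PyTorch is not installed")
```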

### 2. T2I, T2V generation with CTSD pipeline

Download a base model (for the VAE, text encoders, and scheduler config) and a driving generation model checkpoint (e.g., from wzhgba/opendwm-models). Edit the model paths and prompts in your JSON config file (e.g., `examples/ctsd_35_6views_image_generation.json`), then run this command:

```shell
PYTHONPATH=src python examples/ctsd_generation_example.py -c examples/ctsd_35_6views_image_generation.json -o output/ctsd_35_6views_image_generation
```

### 3. Layout-conditioned T2V generation with CTSD pipeline

1. Download a base model and a driving generation model checkpoint, and edit their paths in your JSON config file.
2. Download a layout resource package (e.g., `nuscenes_scene-0627_package.zip` from this dataset, or `carla_town04_package`) and unzip it to a directory of your choice, `{RESOURCE_PATH}`. Then set the meta path in the JSON config to `{RESOURCE_PATH}/data.json`.
3. Run this command to generate the video:

```shell
PYTHONPATH=src python src/dwm/preview.py -c examples/ctsd_35_6views_video_generation_with_layout.json -o output/ctsd_35_6views_video_generation_with_layout
```
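The package-preparation step above can be sketched with the Python standard library. `prepare_layout_package` is an illustrative helper, not an OpenDWM API; it unzips the package and returns the meta path to put into the JSON config:

```python
import os
import zipfile

def prepare_layout_package(zip_path: str, resource_path: str) -> str:
    """Unzip a layout resource package into resource_path and return
    the meta path ({RESOURCE_PATH}/data.json) for the JSON config."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(resource_path)
    return os.path.join(resource_path, "data.json")
```

For example, `prepare_layout_package("nuscenes_scene-0627_package.zip", "/data/resources")` would yield `/data/resources/data.json` as the meta path.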

### 4. Layout-conditioned LiDAR generation with Diffusion pipeline

1. Download the LiDAR VAE and LiDAR Diffusion generation model checkpoints (e.g., from wzhgba/opendwm-models).
2. Prepare the dataset (e.g., `nuscenes_scene-0627_lidar_package.zip` from this dataset).
3. In `examples/lidar_diffusion_temporal_preview.json`, set the values of `json_file`, `autoencoder_ckpt_path`, and `diffusion_model_ckpt_path` to the paths of your dataset and checkpoints.
4. Run the following command to generate LiDAR data autoregressively from the reference frame:

```shell
PYTHONPATH=src python3 -m torch.distributed.run --nnodes 1 --nproc-per-node 2 --node-rank 0 --master-addr 127.0.0.1 --master-port 29000 src/dwm/preview.py -c examples/lidar_diffusion_temporal_preview.json -o output/temporal_diffusion
```
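The config edit in step 3 can be scripted with the standard library. The three key names come from the step above, but the helper is an illustrative sketch, not OpenDWM code, and it assumes the keys sit at the top level of the JSON file (check the actual config for nesting):

```python
import json

def set_lidar_paths(config_path, json_file, autoencoder_ckpt_path,
                    diffusion_model_ckpt_path):
    """Rewrite the dataset/checkpoint paths in the LiDAR preview config.
    Assumes top-level keys; adapt if the real config nests them."""
    with open(config_path) as f:
        config = json.load(f)
    config["json_file"] = json_file
    config["autoencoder_ckpt_path"] = autoencoder_ckpt_path
    config["diffusion_model_ckpt_path"] = diffusion_model_ckpt_path
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
```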

## Citation

If you find the OpenDWM project useful in your research or refer to the provided baseline results, please star :star: its repository and consider citing:

```bibtex
@misc{opendwm,
  Year = {2025},
  Note = {https://github.com/SenseTime-FVG/OpenDWM},
  Title = {OpenDWM: Open Driving World Models}
}
```