Dataset preview: two float64 columns, x (range -26.79 to 44.8) and y (range -28.45 to 38.3); rows truncated.

OmniRoam - InteriorGS Rendering Pipeline

A batch tool for rendering panoramic videos from 3D Gaussian Splatting (3DGS) datasets using Blender.

Prerequisites

System Requirements

  • Linux (tested on Ubuntu 20.04+)
  • GPU with CUDA support (recommended for faster rendering)
  • Python 3.11 (included with Blender 4.5.6)
  • Node.js >= 18.0.0
  • FFmpeg

Required Software

  1. Blender 4.5.6 (Linux x64)
  2. Gaussian Splatting Blender Addon
  3. splat-transform (npm package for PLY decompression)
  4. Python packages: tqdm, scipy, huggingface_hub

Installation

Step 1: Install Blender 4.5.6

wget https://www.blender.org/download/release/Blender4.5/blender-4.5.6-linux-x64.tar.xz

tar -xf blender-4.5.6-linux-x64.tar.xz

This will create a blender-4.5.6-linux-x64 directory containing Blender.

Step 2: Install Gaussian Splatting Blender Addon

git clone git@github.com:ReshotAI/gaussian-splatting-blender-addon.git

cd gaussian-splatting-blender-addon
./zip.sh

This creates blender-addon.zip in the addon directory.

Make sure the zip ends up at gaussian-splatting-blender-addon/blender-addon.zip under your project root. If you cloned the addon elsewhere, copy it in:

cd ..
mkdir -p your-project/gaussian-splatting-blender-addon
cp gaussian-splatting-blender-addon/blender-addon.zip your-project/gaussian-splatting-blender-addon/

(If you cloned the addon directly inside the project root, the zip is already in place and no copy is needed.)

Step 3: Install Python Dependencies

The script automatically installs tqdm and scipy into Blender's Python environment on first run; huggingface_hub must be available as well. To install all three manually:

./blender-4.5.6-linux-x64/4.5/python/bin/python3.11 -m pip install tqdm scipy huggingface_hub

Step 4: Install Node.js and splat-transform

The script will attempt to install Node.js via nvm if not present. To manually install:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 20
nvm use 20

npm install -g @playcanvas/splat-transform

Step 5: Project Structure

Ensure your project directory has this structure:

your-project/
β”œβ”€β”€ blender-4.5.6-linux-x64/
β”‚   └── blender
β”œβ”€β”€ gaussian-splatting-blender-addon/
β”‚   └── blender-addon.zip
β”œβ”€β”€ render.py
β”œβ”€β”€ run_simple.sh
β”œβ”€β”€ valid_datasets.txt
β”œβ”€β”€ datasets/              (will be created automatically)
└── output/                (will be created automatically)
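Before launching a render, the layout above can be sanity-checked with a short script. This is a sketch; the required-path list simply mirrors the tree above:

```python
from pathlib import Path

# Files the pipeline expects, per the project layout above.
REQUIRED = [
    "blender-4.5.6-linux-x64/blender",
    "gaussian-splatting-blender-addon/blender-addon.zip",
    "render.py",
    "run_simple.sh",
    "valid_datasets.txt",
]

def missing_paths(root="."):
    """Return required paths that do not exist under root."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).exists()]
```

Running missing_paths() from the project root returns an empty list when everything is in place.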

Dataset Download

The dataset files come from two separate Hugging Face repositories and require manual setup.

Dataset Sources

  1. 3D Gaussian Splatting (3DGS) Files:

    • Repository: spatialverse/InteriorGS (private, requires HF token)
    • File: 3dgs_compressed.ply (will auto-download and decompress)
  2. Camera Path Files:

    • Repository: Yuheng02/OmniRoam-InteriorGS-Path
    • File: path_3.zip (requires manual download and extraction)

Setup Instructions

Step 1: Configure Hugging Face Token

The InteriorGS dataset is private. You need to:

  1. Get your Hugging Face token from https://huggingface.co/settings/tokens
  2. Edit render.py and set your token:
    HF_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxxx"  # Your token here
    

Or provide it via command line:

./blender-4.5.6-linux-x64/blender --background --python render.py -- \
    --dataset-file valid_datasets.txt \
    --hf-token hf_xxxxxxxxxxxxxxxxxxxxx
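Programmatically, the same token can drive a huggingface_hub download. The per-dataset filename pattern below is an assumption; verify it against the actual repo layout:

```python
REPO_ID = "spatialverse/InteriorGS"  # private; a valid HF token is required

def ply_filename(dataset_id):
    # Assumed per-dataset layout; check the actual repo structure.
    return f"{dataset_id}/3dgs_compressed.ply"

def fetch_ply(dataset_id, token, local_dir="datasets"):
    """Download one compressed 3DGS PLY (sketch)."""
    # Imported lazily so ply_filename stays usable without the package.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(
        repo_id=REPO_ID,
        filename=ply_filename(dataset_id),
        repo_type="dataset",
        token=token,
        local_dir=local_dir,
    )
```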

Step 2: Download Camera Path Files

The camera trajectories must be downloaded manually:

  1. Download from Hugging Face:

    # Visit: https://huggingface.co/datasets/Yuheng02/OmniRoam-InteriorGS-Path
    # Download path_3.zip
    wget https://huggingface.co/datasets/Yuheng02/OmniRoam-InteriorGS-Path/resolve/main/path_3.zip
    
  2. Extract to datasets directory:

    unzip path_3.zip -d datasets/
    

    This will create datasets/<dataset_id>/path.json for each dataset.
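A quick way to confirm the extraction worked is to list the dataset IDs that now have a camera path (a small helper sketch):

```python
from pathlib import Path

def datasets_with_paths(base="datasets"):
    """Dataset IDs under base that contain a path.json after extraction."""
    return sorted(p.parent.name for p in Path(base).glob("*/path.json"))
```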

Usage

Run Rendering

Single Process Mode

Process all datasets sequentially:

./blender-4.5.6-linux-x64/blender --background --python render.py -- --dataset-file valid_datasets.txt

Multi-Process Parallel Mode

Process multiple splits in parallel using the shell script:

./run_simple.sh 1 2 3 4 5 200

This command:

  • Processes splits 1, 2, 3, 4, 5 (out of 200 total splits)
  • Uses 5 parallel processes
  • Each process handles its assigned subset of datasets
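The exact split-to-dataset mapping is not documented here; a plausible round-robin assignment (an assumption, since render.py may partition differently) looks like:

```python
def datasets_for_split(dataset_ids, split, total_split):
    """Assumed round-robin assignment of datasets to 1-based split numbers."""
    return [d for i, d in enumerate(dataset_ids) if i % total_split == split - 1]
```

For example, with 6 datasets and 3 total splits, split 1 would handle the 1st and 4th datasets.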

Manual Python Command

For more control, use the Python script directly:

./blender-4.5.6-linux-x64/blender --background --python render.py -- \
    --dataset-file valid_datasets.txt \
    --split 1 2 3 \
    --total-split 200

Command Line Options

--datasets ID1 ID2 ...        Dataset ID list (space separated)
--dataset-file FILE           Text file with dataset IDs (one per line)
--base-path PATH              Local dataset cache path (default: datasets)
--output-dir PATH             Output directory (default: output)
--target-frames N             Target video frame count (default: 800)
--hf-token TOKEN              Hugging Face token (for private repos)
--split N1 N2 ...             Split numbers to process
--total-split N               Total number of splits
--dry-run                     Validate datasets only, don't render
--help, -h                    Show help message

Output

For each dataset, the rendering pipeline produces:

output/<dataset_id>/
β”œβ”€β”€ pano_camera0/
β”‚   β”œβ”€β”€ frame_0001.png    (PNG image sequence)
β”‚   β”œβ”€β”€ frame_0002.png
β”‚   β”œβ”€β”€ ...
β”‚   └── frame_0800.png
β”œβ”€β”€ video.mp4              (Compiled video, 30fps)
└── transforms.json        (Camera positions and transformations)

Output Files

  1. PNG Sequence (pano_camera0/frame_*.png)

    • Format: PNG, RGB, 8-bit
    • Resolution: 1920x960 (configurable)
    • Compression: 15 (Blender's PNG compression setting, 0-100 scale)
    • Count: 800 frames (default)
  2. Video (video.mp4)

    • Codec: H.264 (libx264) with CRF 18
    • Fallback: libopenh264 or mpeg4
    • Frame rate: 30 fps
    • Pixel format: yuv420p
  3. Camera Data (transforms.json)

    • Camera-to-world transformation matrices (4x4)
    • Camera positions (world coordinates)
    • Camera rotations (Euler angles)
    • Format: OpenCV/Nerfstudio compatible
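The video encode settings above correspond roughly to the following ffmpeg invocation, sketched here as an argument builder; this is an equivalent command, not necessarily the script's exact call:

```python
def ffmpeg_args(dataset_dir, fps=30, crf=18):
    """Build an ffmpeg command matching the stated encode settings."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", f"{dataset_dir}/pano_camera0/frame_%04d.png",
        "-c:v", "libx264",
        "-crf", str(crf),
        "-pix_fmt", "yuv420p",
        f"{dataset_dir}/video.mp4",
    ]
```

For instance, subprocess.run(ffmpeg_args("output/0001_839920"), check=True) would re-encode an existing frame sequence into video.mp4.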

transforms.json Structure

{
  "coordinate_convention": "OpenCV: x_cam = R * X_world + t (Rcw/tcw)",
  "twc_convention": "Nerfstudio: X_world -> cam with inverse(Twc); stored transform_matrix = Twc (cam-to-world)",
  "dataset_id": "0001_839920",
  "num_images": 800,
  "per_image": {
    "pano_camera0/frame_0001.png": {
      "transform_matrix": [[...], [...], [...], [...]],
      "location": [x, y, z],
      "rotation": [rx, ry, rz],
      "frame": 1
    },
    ...
  }
}
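Given that structure, the per-frame camera poses can be read back with a few lines; the field names are taken from the example above:

```python
import json

def camera_locations(transforms_path):
    """Map frame image name -> [x, y, z] world location from transforms.json."""
    with open(transforms_path) as f:
        data = json.load(f)
    return {name: entry["location"] for name, entry in data["per_image"].items()}
```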

Configuration

Edit these variables in render.py for customization:

dataset_base_path = "datasets"          # Local dataset cache
output_base_dir = "output"              # Output directory
target_frames = 800                     # Video length
fps = 30                                # Frame rate
target_speed = 0.02                     # Camera movement speed (m/frame)

Render settings (resolution, samples, etc.) can be modified in setup_render_settings().
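With the defaults above, a quick back-of-the-envelope check of what one render covers:

```python
target_frames = 800   # default video length in frames
fps = 30              # default frame rate
target_speed = 0.02   # camera movement, metres per frame

path_length_m = target_frames * target_speed  # 16.0 m of camera travel
duration_s = target_frames / fps              # about 26.7 s of video
```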

Acknowledgments

License

The codebase is released under the Adobe Research License for noncommercial research purposes only.
