---
language:
  - en
tags:
  - 3d
  - multi-view
  - vision-language
  - scanrefer
pretty_name: MVRefer
---

# MVRefer Dataset

## 1. Dataset Overview

MVRefer is the first benchmark dataset specifically designed for the Multi-view 3D Referring Expression Segmentation (MV-3DRES) task. It is built to align 3D language grounding with the realistic sensing conditions of embodied agents, which typically perceive environments through only a few casually captured RGB views rather than idealized dense point clouds.

## 2. Dataset Construction & Features

The MVRefer benchmark is constructed based on ScanRefer and ScanNet sequences:

- **Sparse Multi-view Sampling:** For each language-object pair in ScanRefer, we sample N=8 RGB frames from the raw ScanNet video stream at uniform temporal intervals to approximate sparse, on-the-fly observations.
- **Visibility Validation:** To ensure each sample remains resolvable, we perform a visibility validation step. If none of the initial eight images contain the target, we replace one no-target frame with a target-visible frame. This guarantees at least one positive view while naturally preserving a high proportion of no-target views.
- **Difficulty Splits:** To evaluate robustness under varying signal sparsity, we define two difficulty splits based on the target's 2D pixel ratio:
  - **Hard Split:** The target occupies less than 5% of pixels in all its visible views.
  - **Easy Split:** At least one view contains at least 5% target pixels.
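The uniform temporal sampling step above can be sketched as follows. This is a minimal illustration, not the released pipeline: the helper name and the assumption that frame indices run from 0 over the full decoded sequence are ours.

```python
def sample_uniform_frames(num_frames_total: int, n_views: int = 8) -> list[int]:
    """Pick n_views frame indices at uniform temporal intervals over a
    video of num_frames_total frames (hypothetical helper, not the
    official MVRefer code)."""
    step = num_frames_total / n_views
    return [int(i * step) for i in range(n_views)]

# For a ScanNet sequence of 2712 frames this yields the evenly spaced
# indices seen in the annotation example: 0, 339, 678, ..., 2373.
print(sample_uniform_frames(2712))
```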

## 3. Data Format

The dataset annotations are provided in JSON format. The hierarchy is keyed by scene ID, then target object ID, then sampled frame index; each frame index maps to the target's 2D pixel ratio in that frame.

Here is an example of the data structure:

```json
{
  "scene0011_00": {
    "5": {
      "0": 0.0,
      "339": 0.0,
      "678": 0.0,
      "1017": 0.0,
      "1356": 0.0,
      "1695": 0.0,
      "2034": 0.014394258238955208,
      "2373": 0.0
    },
    "13": { ... }
  }
}
```
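As an illustration of how the annotations can be consumed, the sketch below parses the example entry and assigns a difficulty split from the per-frame pixel ratios, using the 5% threshold defined above. The function name is hypothetical and not part of the dataset release.

```python
import json

# The example annotation entry from above, inlined for a self-contained demo.
example = """
{
  "scene0011_00": {
    "5": {
      "0": 0.0, "339": 0.0, "678": 0.0, "1017": 0.0,
      "1356": 0.0, "1695": 0.0,
      "2034": 0.014394258238955208,
      "2373": 0.0
    }
  }
}
"""

def difficulty_split(pixel_ratios: dict, thresh: float = 0.05):
    """Classify one language-object sample: 'hard' if the target covers
    less than `thresh` of the pixels in every visible view, 'easy' if
    at least one view reaches the threshold (hypothetical helper)."""
    visible = [r for r in pixel_ratios.values() if r > 0.0]
    if not visible:
        return None  # target never visible; visibility validation excludes this case
    return "easy" if max(visible) >= thresh else "hard"

annotations = json.loads(example)
ratios = annotations["scene0011_00"]["5"]
print(difficulty_split(ratios))  # target peaks at ~1.4% of pixels -> 'hard'
```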

**Paper:** You can find more details in our paper on Hugging Face: https://huggingface.co/papers/2601.06874