---
configs:
  - config_name: default
    data_files:
      - split: test
        path:
          - Objects.csv
          - Segmentations.json
          - visual_patterns.csv
task_categories:
  - image-referring-segmentation
  - image-segmentation
---

# PixMMVP Benchmark

Project Page | Paper | GitHub

The PixMMVP dataset augments the MMVP benchmark with referring expressions and corresponding segmentation masks for the object of interest in each question of the original VQA task.

The goal of this benchmark is to evaluate the pixel-level visual grounding and visual question answering capabilities of recent pixel-level MLLMs (e.g., OMG-LLaVA, LLaVA-G, GLaMM, and LISA).
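A common way to score pixel-level grounding against ground-truth segmentation masks is mask intersection-over-union (IoU). The snippet below is a minimal sketch of that metric on binary masks; the `mask_iou` helper and the toy masks are illustrative assumptions, not the benchmark's official evaluation code (see the paper/GitHub for the actual protocol).

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

# Toy 4x4 example: the prediction covers 2 of the 2 ground-truth pixels
# plus 1 extra pixel, so IoU = 2 / 3.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1] = 1                    # 2-pixel ground-truth region
pred = gt.copy()
pred[2, 2] = 1                    # 1 false-positive pixel
print(round(mask_iou(pred, gt), 4))  # → 0.6667
```

Per-example IoUs are typically averaged (mIoU) across the benchmark to summarize grounding quality.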

## Acknowledgements

I acknowledge the use of the original MMVP dataset's images and questions/choices in building this dataset.

## Citation

Please cite the following work if you find the dataset useful:

```bibtex
@article{siam2025pixfoundation,
  title={PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models?},
  author={Siam, Mennatullah},
  journal={arXiv preprint arXiv:2502.04192},
  year={2025}
}
```