---
configs:
- config_name: default
  data_files:
  - split: test
    path:
    - Objects.csv
    - Segmentations.json
    - visual_patterns.csv
task_categories:
- image-referring-segmentation
- image-segmentation
---

# PixMMVP Benchmark

[Project Page](https://msiam.github.io/PixFoundationSeries/) | [Paper](https://huggingface.co/papers/2502.04192) | [GitHub](https://github.com/msiam/pixfoundation)

The PixMMVP dataset augments the [MMVP](https://huggingface.co/datasets/MMVP/MMVP) benchmark with referring expressions and corresponding segmentation masks for the object of interest in each question of the original VQA task.
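As a rough illustration of how the files listed in this card (`Objects.csv` with referring expressions, `Segmentations.json` with masks) might be loaded and joined, here is a minimal stdlib-only sketch. The column names and JSON layout below are assumptions for illustration, not the confirmed schema:

```python
import csv
import io
import json

def load_pixmmvp(objects_file, seg_text):
    """Load referring expressions (CSV) and segmentation masks (JSON).

    objects_file: file-like object for Objects.csv
    seg_text: JSON string from Segmentations.json
    Schema (image_id, referring_expression keys) is assumed, not confirmed.
    """
    objects = list(csv.DictReader(objects_file))
    segmentations = json.loads(seg_text)
    return objects, segmentations

# Inline sample standing in for the real files (hypothetical content):
sample_csv = io.StringIO(
    "image_id,referring_expression\n"
    "1,the red traffic light\n"
)
sample_json = '{"1": {"segmentation": []}}'

objects, segs = load_pixmmvp(sample_csv, sample_json)
# Each row pairs an image id with the expression referring to the queried object.
```

In practice one would open the actual CSV/JSON files from the dataset repository instead of the inline sample.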

The goal of this benchmark is to evaluate the pixel-level visual grounding and visual question answering capabilities of recent pixel-level MLLMs (e.g., OMG-Llava, Llava-G, GLAMM, and LISA).

# Acknowledgements
I acknowledge the use of the images and questions/choices from the original [MMVP](https://huggingface.co/datasets/MMVP/MMVP) benchmark in building this dataset.

# Citation
Please cite the following work if you find the dataset useful:
```bibtex
@article{siam2025pixfoundation,
  title={PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models?},
  author={Siam, Mennatullah},
  journal={arXiv preprint arXiv:2502.04192},
  year={2025}
}
```