Commit 0074018 (verified) · committed by nielsr (HF Staff) · 1 Parent(s): 4cbcd29

Add task categories, paper link, and project links

Hi! I'm Niels from the community science team at Hugging Face.

I've updated the dataset card to improve its documentation and discoverability:
- Added `task_categories`: `image-text-to-text` and `image-segmentation`.
- Included links to the official project page, the paper ([arXiv:2502.04192](https://huggingface.co/papers/2502.04192)), and the GitHub repository.
- Expanded the description to clarify how this benchmark augments the original MMVP dataset with referring expressions and segmentation masks.

This should help users better understand the context of the benchmark and find the associated resources!
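For a quick, machine-readable view of what this commit adds, the new card front matter can be mirrored as a plain Python dict. This is an illustrative sketch only — the card itself stores this as YAML, and `card_metadata` is just a hypothetical variable name:

```python
# Sketch of the updated dataset-card front matter from this commit,
# mirrored as a Python dict (the actual card stores this as YAML).
card_metadata = {
    "configs": [
        {
            "config_name": "default",
            "data_files": [
                {
                    "split": "test",
                    "path": [
                        "Objects.csv",
                        "Segmentations.json",
                        "visual_patterns.csv",
                    ],
                }
            ],
        }
    ],
    # Fields added by this commit:
    "task_categories": ["image-text-to-text", "image-segmentation"],
}

print(card_metadata["task_categories"])
```

The `task_categories` values are what make the dataset show up under the corresponding task filters on the Hub, which is the discoverability improvement described above.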

Files changed (1)
  1. README.md +17 -10
README.md CHANGED
@@ -3,26 +3,33 @@ configs:
 - config_name: default
   data_files:
   - split: test
-    path:
-    - "Objects.csv"
-    - "Segmentations.json"
-    - "visual_patterns.csv"
+    path:
+    - Objects.csv
+    - Segmentations.json
+    - visual_patterns.csv
+task_categories:
+- image-text-to-text
+- image-segmentation
 ---
 
 # PixMMVP Benchmark
 
-The dataset annotations augmenting MMVP with referring expressions and corresponding segmentation masks for the objects of interest in their respective questions within the original VQA task.
+[Project Page](https://msiam.github.io/PixFoundationSeries/) | [Paper](https://huggingface.co/papers/2502.04192) | [GitHub](https://github.com/msiam/pixfoundation)
+
+The PixMMVP dataset augments the [MMVP](https://huggingface.co/datasets/MMVP/MMVP) benchmark with referring expressions and corresponding segmentation masks for the objects of interest in their respective questions within the original VQA task.
+
+The goal of this benchmark is to evaluate the pixel-level visual grounding and visual question answering capabilities of recent pixel-level MLLMs (e.g., OMG-Llava, Llava-G, GLAMM, and LISA).
 
 # Acknowledgements
-I acknowledge the use of MMVP dataset's images and questions/choices part of building this dataset, the original [MMVP](https://huggingface.co/MMVP).
+I acknowledge the use of MMVP dataset's images and questions/choices part of building this dataset, the original [MMVP](https://huggingface.co/datasets/MMVP/MMVP).
 
-# References
-Please city my work if you find the dataset useful
-```
+# Citation
+Please cite the following work if you find the dataset useful:
+```bibtex
 @article{siam2025pixfoundation,
   title={PixFoundation: Are We Heading in the Right Direction with Pixel-level Vision Foundation Models?},
   author={Siam, Mennatullah},
   journal={arXiv preprint arXiv:2502.04192},
   year={2025}
 }
-```
+```