---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: test
    path: GroundingSuite-Eval.jsonl
task_categories:
- image-segmentation
---
# GSEval - A Comprehensive Grounding Evaluation Benchmark
GSEval is a meticulously curated evaluation benchmark of 3,800 images, designed to assess pixel-level and bounding-box-level grounding models. It measures how well a model can understand and localize objects or regions in images from natural language descriptions.
## Results
<div align="center">
<img src="./assets/gseval.png">
</div>
<div align="center">
<img src="./assets/gseval_box.png">
</div>
## Download GSEval
```bash
git lfs install
git clone https://huggingface.co/datasets/hustvl/GSEval
```
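Once cloned, the annotations live in `GroundingSuite-Eval.jsonl`, a JSON-Lines file (one JSON object per line, per the frontmatter config). A minimal sketch for reading it follows; the field names inside each record are not specified here, so the loader below stays schema-agnostic:

```python
import json

def load_jsonl(path):
    """Read a JSON-Lines file (one JSON object per line) into a list of dicts."""
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # tolerate blank lines
                records.append(json.loads(line))
    return records

# Example (path assumes the clone command above):
# samples = load_jsonl("GSEval/GroundingSuite-Eval.jsonl")
```

Alternatively, because the card defines a `default` config with a `test` split, the dataset should also load directly with the `datasets` library via `load_dataset("hustvl/GSEval")`.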
## Additional Resources
- **Paper:** [ArXiv](https://arxiv.org/abs/2503.10596)
- **GitHub Repository:** [GitHub - GroundingSuite](https://github.com/hustvl/GroundingSuite)