Add task category, GitHub link, and usage instructions (#1)
Opened by nielsr (HF Staff)

README.md (changed):
---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- image-text-to-text
tags:
- agent
- vision-language-models
- vlm
- multimodal
- sft
---

# VisGym Dataset

[**Project Page**](https://visgym.github.io/) | [**Paper**](https://huggingface.co/papers/2601.16973) | [**GitHub**](https://github.com/visgym/VIsGym)

**VisGym** consists of 17 diverse, long-horizon environments designed to systematically evaluate, diagnose, and train Vision-Language Models (VLMs) on visually interactive tasks. In these environments, agents must select actions conditioned on both their past actions and observation history, challenging their ability to handle complex, multimodal sequences.
## Dataset Summary

This dataset contains trajectories and interaction data generated from the VisGym suites, intended for training and benchmarking multimodal agents. The environments are designed to be:

* **Diverse:** Covering 17 distinct task categories.
* **Customizable:** Allowing for various configurations of task difficulty and visual settings.
* **Scalable:** Suitable for large-scale training of VLMs and Reinforcement Learning agents.

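As a hypothetical illustration of what one interaction step in such a trajectory could contain, here is a minimal Python sketch. The field names (`observation_image`, `action`, `step_index`) and the environment name are illustrative assumptions, not the dataset's actual schema:

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical sketch of a VisGym-style trajectory: the agent observes a
# rendered image at each step and chooses an action conditioned on the
# full action/observation history. Field names are assumptions.
@dataclass
class Step:
    observation_image: str  # e.g. a path under the downloaded assets/ folder
    action: str             # the action string chosen by the agent
    step_index: int

@dataclass
class Trajectory:
    environment: str        # one of the 17 environments (name illustrative)
    steps: list = field(default_factory=list)

traj = Trajectory(
    environment="maze",
    steps=[
        Step("assets/maze/000.png", "move_up", 0),
        Step("assets/maze/001.png", "move_left", 1),
    ],
)

# Trajectories like this serialize naturally to JSON for SFT pipelines.
serialized = json.dumps(asdict(traj))
restored = json.loads(serialized)
print(restored["environment"], len(restored["steps"]))
```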
## Usage

You can download the dataset assets and metadata using the `huggingface-cli`:

```bash
# Install the Hugging Face CLI
pip install -U "huggingface_hub[cli]"

# Download the dataset to a local directory.
# This downloads the 'assets/' and 'metadata/' folders into the target dir.
mkdir -p inference_dataset
huggingface-cli download VisGym/inference-dataset --repo-type dataset --local-dir ./inference_dataset
```
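After the download finishes, the files can be enumerated from Python. A minimal stdlib sketch, assuming only the `assets/` and `metadata/` folder layout mentioned above (the helper name `index_dataset` is hypothetical):

```python
from pathlib import Path

def index_dataset(root: str) -> dict:
    """Map each expected top-level folder to the list of files it contains."""
    index = {}
    for folder in ("assets", "metadata"):
        folder_path = Path(root) / folder
        # Empty list if the folder is missing (i.e. the download was not run yet).
        index[folder] = (
            sorted(p for p in folder_path.rglob("*") if p.is_file())
            if folder_path.is_dir()
            else []
        )
    return index

# Summarize what was downloaded into ./inference_dataset.
counts = {name: len(files) for name, files in index_dataset("./inference_dataset").items()}
print(counts)
```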
## Citation

If you use this dataset, please cite:

```bibtex
@article{wang2026visgym,
  title   = {VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents},
  author  = {Wang, Zirui and Zhang, Junyi and Ge, Jiaxin and Lian, Long and Fu, Letian and Dunlap, Lisa and Goldberg, Ken and Wang, Xudong and Stoica, Ion and Chan, David M. and Min, Sewon and Gonzalez, Joseph E.},
  journal = {arXiv preprint arXiv:2601.16973},
  year    = {2026},
  url     = {https://arxiv.org/abs/2601.16973}
}
```