---
license: apache-2.0
language:
  - en
tags:
  - agent
  - vision-language-models
  - reinforcement-learning
  - vlm
  - multimodal
  - sft
size_categories:
  - 1M<n<10M
---

# VisGym Dataset

This is the official dataset accompanying the VisGym project.

VisGym consists of 17 diverse, long-horizon environments designed to systematically evaluate, diagnose, and train Vision-Language Models (VLMs) on visually interactive tasks. In these environments, agents must select actions conditioned on both their past actions and observation history, challenging their ability to handle complex, multimodal sequences.
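To make that interaction pattern concrete, here is a minimal sketch of a history-conditioned rollout loop. VisGym's actual Python API is not documented in this README, so the `env` and `vlm_agent` interfaces below are hypothetical gym-style placeholders; only the control flow, where the agent conditions on the full observation/action history rather than the latest frame alone, is the point.

```python
def run_episode(env, vlm_agent, max_steps=100):
    """Roll out one episode, feeding the full multimodal history to the agent.

    `env` and `vlm_agent` are hypothetical interfaces, not VisGym's real API.
    """
    observation = env.reset()  # initial image observation
    history = []               # interleaved (observation, action) pairs

    for _ in range(max_steps):
        # The agent selects its next action conditioned on the entire
        # observation/action history, not just the current frame.
        action = vlm_agent.act(history, observation)
        history.append((observation, action))

        observation, reward, done, info = env.step(action)
        if done:
            break
    return history
```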

## Project Resources

- Paper: [VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents](https://arxiv.org/abs/2601.16973)

## Dataset Summary

This dataset contains trajectories and interaction data generated from the VisGym environment suite, intended for training and benchmarking multimodal agents (see the loading sketch after this list). The environments are designed to be:

- Diverse: covering 17 distinct task categories.
- Customizable: allowing various configurations of task difficulty and visual settings.
- Scalable: suitable for large-scale training of VLMs and reinforcement-learning agents.
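A minimal loading sketch using the `datasets` library, assuming the data is hosted on the Hugging Face Hub under a repo id like `zwcolin/visgym_data`; the exact repo id, configs, and split names are assumptions here, so check the dataset's Files tab before running:

```python
from datasets import load_dataset

# Hypothetical repo id and split name; adjust to match the actual dataset card.
dataset = load_dataset("zwcolin/visgym_data", split="train", streaming=True)

# Stream a few trajectory records without downloading the full dataset.
for i, record in enumerate(dataset):
    print(record.keys())  # inspect the trajectory schema
    if i >= 2:
        break
```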

## Citation

If you use this dataset, please cite:

```bibtex
@article{wang2026visgymdiversecustomizablescalable,
      title={VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents},
      author={Zirui Wang and Junyi Zhang and Jiaxin Ge and Long Lian and Letian Fu and Lisa Dunlap and Ken Goldberg and XuDong Wang and Ion Stoica and David M. Chan and Sewon Min and Joseph E. Gonzalez},
      year={2026},
      eprint={2601.16973},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2601.16973},
}
```