---
license: apache-2.0
library_name: transformers
pipeline_tag: image-text-to-text
---

# VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents

VisGym is a gymnasium of 17 visually interactive, long-horizon environments for evaluating, diagnosing, and training vision–language models (VLMs) in multi-step visual decision-making across symbolic puzzles, real-image understanding, navigation, and manipulation.

This repository contains the model checkpoints described in the paper [VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents](https://arxiv.org/abs/2601.16973).

## Description

Modern Vision-Language Models (VLMs) remain poorly characterized in multi-step visual interactions, particularly in how they integrate perception, memory, and action over long horizons. VisGym provides 17 environments for evaluating and training VLMs, offering flexible controls over difficulty, input representation, planning horizon, and feedback. The suite spans symbolic puzzles, real-image understanding, navigation, and manipulation.
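The checkpoints should load with the standard `transformers` auto classes implied by the `image-text-to-text` pipeline tag. The snippet below is a minimal sketch, not an official usage example: the Hub repo id is a placeholder, and the use of `AutoProcessor` / `AutoModelForImageTextToText` is an assumption based on the card's metadata.

```python
MODEL_ID = "visgym_model"  # placeholder; replace with the actual Hub repo id


def load_visgym(model_id: str = MODEL_ID):
    """Load the checkpoint with the generic image-text-to-text classes.

    Assumes the checkpoint is compatible with AutoProcessor and
    AutoModelForImageTextToText; adjust if the repo specifies otherwise.
    """
    from transformers import AutoModelForImageTextToText, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(model_id)
    return processor, model


if __name__ == "__main__":
    # Downloads the checkpoint from the Hub on first call.
    processor, model = load_visgym()
```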

## Citation

If you use this model, please cite:

```bibtex
@article{wang2026visgym,
  title        = {VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents},
  author       = {Wang, Zirui and Zhang, Junyi and Ge, Jiaxin and Lian, Long and Fu, Letian and Dunlap, Lisa and Goldberg, Ken and Wang, Xudong and Stoica, Ion and Chan, David M. and Min, Sewon and Gonzalez, Joseph E.},
  journal      = {arXiv preprint arXiv:2601.16973},
  year         = {2026},
  url          = {https://arxiv.org/abs/2601.16973}
}
```