Instructions for using VisGym/visgym_model with libraries, inference providers, notebooks, and local apps.
- Libraries
  - Transformers
How to use VisGym/visgym_model with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="VisGym/visgym_model")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("VisGym/visgym_model", dtype="auto")
```
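The snippets above come from the Hugging Face widget. As a minimal sketch of a full inference call: the image URL and prompt below are placeholders, and the chat-style message input assumes the checkpoint ships a chat template, as most image-text-to-text models do.

```python
# Minimal sketch of a single pipeline call; image URL and prompt are
# placeholders, and chat-style messages assume the model has a chat template.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="VisGym/visgym_model")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/observation.png"},
            {"type": "text", "text": "Describe the scene and suggest the next action."},
        ],
    }
]

out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"])  # full chat, ending with the assistant reply
```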
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use VisGym/visgym_model with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "VisGym/visgym_model"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VisGym/visgym_model",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
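Because the server exposes an OpenAI-compatible API, the same request can be made from Python with the `openai` client instead of curl. A minimal sketch mirroring the request above; the `api_key` value is a placeholder, since vLLM does not require a real key by default:

```python
# Same request as the curl example, via the openai client (pip install openai).
# api_key is a placeholder; vLLM does not check it by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="VisGym/visgym_model",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```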
  - SGLang
How to use VisGym/visgym_model with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "VisGym/visgym_model" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VisGym/visgym_model",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
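The same request can also be issued from Python with `requests`; a minimal sketch assuming the SGLang server from the previous step is listening on localhost:30000:

```python
# Same request as the curl example, via requests (pip install requests).
import requests

resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "VisGym/visgym_model",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```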
Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "VisGym/visgym_model" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VisGym/visgym_model",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
  - Docker Model Runner
How to use VisGym/visgym_model with Docker Model Runner:
```bash
docker model run hf.co/VisGym/visgym_model
```
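Docker Model Runner also exposes an OpenAI-compatible endpoint that can be called from Python. In this sketch, the port (12434) and the `/engines/v1` base path are assumptions based on Docker's documented defaults at the time of writing; verify both against the Docker Model Runner documentation for your version.

```python
# Sketch of calling Docker Model Runner's OpenAI-compatible API.
# ASSUMPTIONS (not from the model card): host TCP access is enabled and
# Docker exposes it on port 12434 under the /engines/v1 base path.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")

response = client.chat.completions.create(
    model="hf.co/VisGym/visgym_model",
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(response.choices[0].message.content)
```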
VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents
VisGym is a gymnasium of 17 visually interactive, long-horizon environments for evaluating, diagnosing, and training vision–language models (VLMs) in multi-step visual decision-making across symbolic puzzles, real-image understanding, navigation, and manipulation.
This repository contains model checkpoints described in the paper VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents.
- Project Page: https://visgym.github.io/
- Code: https://github.com/visgym/VIsGym
- Paper: https://arxiv.org/abs/2601.16973
Description
Modern Vision-Language Models (VLMs) remain poorly characterized in multi-step visual interactions, particularly in how they integrate perception, memory, and action over long horizons. VisGym provides 17 environments for evaluating and training VLMs, offering flexible controls over difficulty, input representation, planning horizon, and feedback. The suite spans symbolic puzzles, real-image understanding, navigation, and manipulation.
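To make "multi-step visual interactions" concrete, here is an illustrative loop against a stub environment. Every name below is a hypothetical stand-in, not the actual VisGym API; consult the GitHub repository for the real interface.

```python
# Illustrative stub only: a Gymnasium-style stand-in for a VisGym task,
# showing the loop a VLM agent runs (image observation in, action out,
# repeated over a long horizon). The real VisGym API may differ.
import numpy as np

class StubVisualEnv:
    """Hypothetical stand-in for a VisGym environment."""

    def __init__(self, horizon: int = 5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return self._render(), {}

    def step(self, action: str):
        self.t += 1
        reward = 1.0 if action == "noop" else 0.0  # toy reward signal
        terminated = self.t >= self.horizon
        return self._render(), reward, terminated, False, {}

    def _render(self) -> np.ndarray:
        # Observations are RGB images, as in VisGym's visual tasks.
        return np.zeros((64, 64, 3), dtype=np.uint8)

env = StubVisualEnv()
obs, info = env.reset()
terminated = False
while not terminated:
    # Here a VLM would map the image (plus interaction history) to an action.
    action = "noop"
    obs, reward, terminated, truncated, info = env.step(action)
```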
Citation
If you use this model, please cite:
```bibtex
@article{wang2026visgym,
  title   = {VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents},
  author  = {Wang, Zirui and Zhang, Junyi and Ge, Jiaxin and Lian, Long and Fu, Letian and Dunlap, Lisa and Goldberg, Ken and Wang, Xudong and Stoica, Ion and Chan, David M. and Min, Sewon and Gonzalez, Joseph E.},
  journal = {arXiv preprint arXiv:2601.16973},
  year    = {2026},
  url     = {https://arxiv.org/abs/2601.16973}
}
```