
# VisualToolBench Dataset

A benchmark dataset for evaluating vision-language models on tool-use tasks.

## Dataset Statistics

- **Total samples:** 1204
- **Single-turn:** 603
- **Multi-turn:** 601

## Schema

| Column | Type | Description |
|---|---|---|
| `id` | string | Unique task identifier |
| `turncase` | string | Either `"single-turn"` or `"multi-turn"` |
| `num_turns` | int | Number of conversation turns (1 for single-turn) |
| `prompt_category` | string | Task category (e.g., "medical", "scientific", "general") |
| `eval_focus` | string | Aspect being evaluated (e.g., "visual_reasoning", "tool_use") |
| `prompt` | string | The user prompt/question; for multi-turn tasks, turns are prefixed with `[Turn N]` (see the sketch after this table) |
| `golden_answer` | string | The reference (ground-truth) answer |
| `image` | Image | Primary image for the task (displayed in the HF viewer) |
| `images` | List[Image] | All images associated with the task |
| `num_images` | int | Total number of images |
| `tool_trajectory` | string | JSON string of the tool calls made (if applicable) |
| `rubrics` | string | JSON string of evaluation rubrics with weights and metadata |
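
As an illustration of the `prompt` convention, the sketch below splits a multi-turn prompt on its `[Turn N]` markers. `split_turns` is a hypothetical helper, and the exact marker spacing is an assumption based on the schema description above.

```python
import re

# Hedged sketch: split a multi-turn prompt on its "[Turn N]" markers.
# The exact marker format "[Turn 1]", "[Turn 2]", ... is assumed from the schema above.
def split_turns(prompt: str) -> list[str]:
    parts = re.split(r"\[Turn \d+\]\s*", prompt)
    return [p.strip() for p in parts if p.strip()]

# Illustrative input:
# split_turns("[Turn 1] Crop the chart. [Turn 2] What is the peak value?")
# -> ["Crop the chart.", "What is the peak value?"]
```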

## Rubrics Format

Each rubric entry contains:

- `description`: what the rubric evaluates
- `weight`: importance weight (1-5)
- `objective/subjective`: whether the evaluation is objective or subjective
- `explicit/implicit`: whether the answer is explicit or implicit in the image
- `category`: list of categories (e.g., "instruction following", "truthfulness")
- `critical`: whether this is a critical rubric (`"yes"`/`"no"`)
- `final_answer`: whether this relates to the final answer (`"yes"`/`"no"`)
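
For concreteness, a single parsed rubric entry might look like the sketch below. The field names mirror the list above, but the exact key strings and all field values here are illustrative assumptions, not taken from the dataset.

```python
# Hypothetical rubric entry as it might appear after json.loads(sample['rubrics']);
# key strings and values are illustrative assumptions only.
example_rubric = {
    "description": "Identifies the object highlighted in the cropped region",
    "weight": 4,
    "objective/subjective": "objective",
    "explicit/implicit": "explicit",
    "category": ["instruction following", "truthfulness"],
    "critical": "yes",
    "final_answer": "yes",
}
```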

## Usage

```python
import json

from datasets import load_dataset

# Load the dataset (replace the placeholder with the dataset's Hub repository id)
ds = load_dataset("path/to/dataset")

# Access a sample
sample = ds['train'][0]
print(sample['prompt'])
print(sample['image'])  # PIL Image

# Parse the rubrics, which are stored as a JSON string
rubrics = json.loads(sample['rubrics'])
for rubric_id, rubric in rubrics.items():
    print(f"{rubric['description']} (weight: {rubric['weight']})")
```

## Splits

- **train**: full dataset (1204 samples)

## Citation

```bibtex
@article{guo2025beyond,
  title={Beyond Seeing: Evaluating Multimodal {LLMs} on Tool-Enabled Image Perception, Transformation, and Reasoning},
  author={Guo, Xingang and Tyagi, Utkarsh and Gosai, Advait and Vergara, Paula and Park, Jayeon and Montoya, Ernesto Gabriel Hern{\'a}ndez and Zhang, Chen Bo Calvin and Hu, Bin and He, Yunzhong and Liu, Bing and others},
  journal={arXiv preprint arXiv:2510.12712},
  year={2025}
}
```