---
license: mit
---

# ORBIT: An Object Property Reasoning Benchmark for Visual Inference Tasks

Inspired by human object categorization, this repository introduces ORBIT, a comprehensive benchmark designed to evaluate the ability of Vision–Language Models (VLMs) to reason about abstract object properties. ORBIT spans four object property dimensions (physical, taxonomic, functional, relational), three levels of reasoning complexity (direct recognition, property inference, counterfactual reasoning), and three visual domains (photographic/real, animated, AI-generated).

📄 Paper | 📚 arXiv | 💻 Code

## Uses

This dataset is designed to test VLMs through count-based questions, enabling evaluation of how well they reason about abstract object properties across different visual contexts.

## Fields

  • image_id: the image identifier (120 images under each image_type).
  • image_type: the visual domain (photographic/real, animated, or AI-generated).
  • theme: label indicating the image's theme.
  • reasoning_complexity: the reasoning complexity level (direct recognition, property inference, or counterfactual).
  • question_id: the question identifier (one question per reasoning_complexity level).
  • question: the count-based task posed at that complexity level.
  • answer: the ground-truth answer to the question.
  • property_category: the property dimension (physical, taxonomic, functional, or relational).
  • objects: list of objects in the image that ground the answer.
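The schema above can be sketched as a typed record. The value sets below mirror the field descriptions in this README; the sample row itself is purely illustrative and not taken from the dataset:

```python
from dataclasses import dataclass, field

# Allowed values, as listed in the field descriptions above.
IMAGE_TYPES = {"photographic/real", "animated", "AI-generated"}
COMPLEXITY_LEVELS = {"direct recognition", "property inference", "counterfactual"}
PROPERTY_CATEGORIES = {"physical", "taxonomic", "functional", "relational"}

@dataclass
class OrbitRecord:
    image_id: int
    image_type: str
    theme: str
    reasoning_complexity: str
    question_id: int
    question: str
    answer: int
    property_category: str
    objects: list[str] = field(default_factory=list)

    def validate(self) -> None:
        """Check that categorical fields use the documented value sets."""
        assert self.image_type in IMAGE_TYPES
        assert self.reasoning_complexity in COMPLEXITY_LEVELS
        assert self.property_category in PROPERTY_CATEGORIES

# Illustrative example (not an actual dataset row):
record = OrbitRecord(
    image_id=1,
    image_type="photographic/real",
    theme="kitchen",
    reasoning_complexity="direct recognition",
    question_id=1,
    question="How many objects in the image are made of metal?",
    answer=2,
    property_category="physical",
    objects=["knife", "kettle"],
)
record.validate()
```

A record like this can be built directly from each row when iterating over the dataset, so malformed rows fail fast at load time.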