
Towards Open-World Referring Expression Comprehension: A Benchmark with Training-free Multi-task Consistency Checker

OpenRef features three key advancements: 1) Diverse visual scenarios; 2) Variable target counts; 3) Rich vocabulary types

πŸ“‚ Dataset Schema & Field Definitions

To support the open-world setting in generalized REC, each entry in OpenRef follows the structured format below:

| Field | Type | Description |
|---|---|---|
| `image` | string | The filename of the source image. |
| `relation_type` | string | The relationship category: `single`, `multi`, or `none` (for none-target cases). |
| `proposal` | list[dict] | A list of bounding boxes. Each dict contains normalized `[x, y, w, h]` coordinates and the original image size. |
| `positive` | string | The query referring expression describing the target(s). For single- and multi-target cases it refers to the present target(s); for none-target cases it describes an absent object to test model robustness. |
| `negative` | string | A distracting/hallucinated expression (used primarily in single- and multi-target tasks). |
| `bbox_num` | int | Total count of target bounding boxes. `0` indicates a none-target scenario. |
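A hypothetical entry following this schema might look like the following (all field values here are illustrative, not drawn from the actual dataset):

```json
{
  "image": "example_0001.jpg",
  "relation_type": "single",
  "proposal": [
    {"x": 12.5, "y": 30.0, "width": 20.0, "height": 15.0,
     "original_width": 640, "original_height": 480}
  ],
  "positive": "the yellow car parked on the left",
  "negative": "the blue bus near the crosswalk",
  "bbox_num": 1
}
```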

πŸ” Field Details

  • relation_type:
    • single: Exactly one target exists for the positive expression.
    • multi: Multiple instances (e.g., "yellow cars") are present; all relevant bboxes are in proposal.
    • none: The positive expression describes something not present in the image (used to test model hallucination).
  • proposal:
    • Coordinates x, y, width, height are typically normalized to [0, 100].
    • original_width/height are provided to allow re-scaling to raw image pixels.
  • bbox_num:
    • The number of bboxes matching the positive expression. If bbox_num is 0, proposal must be an empty list [].
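Since coordinates are normalized to [0, 100], they must be rescaled before drawing or evaluating boxes in pixel space. A minimal sketch of this conversion is shown below; the exact key names (`x`, `y`, `width`, `height`, `original_width`, `original_height`) are assumptions inferred from the field descriptions above and should be checked against the actual files.

```python
def denormalize_bbox(box: dict) -> tuple:
    """Convert a proposal dict with coordinates normalized to [0, 100]
    into pixel coordinates (x, y, w, h).

    Key names are assumptions based on the schema description, not a
    confirmed API of the dataset files.
    """
    img_w = box["original_width"]
    img_h = box["original_height"]
    # Each normalized value is a percentage of the corresponding image dimension.
    x = box["x"] / 100.0 * img_w
    y = box["y"] / 100.0 * img_h
    w = box["width"] / 100.0 * img_w
    h = box["height"] / 100.0 * img_h
    return (x, y, w, h)
```

For a none-target entry (`bbox_num == 0`), `proposal` is empty, so this function is simply never called for that image.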

