---
pretty_name: aRefCOCO
language:
  - en
license: cc-by-4.0
tags:
  - referring image segmentation
  - referential ambiguity
  - vision-language
  - multimodal
  - dataset
task_categories:
  - image-segmentation
size_categories:
  - 100K<n<1M
dataset_info:
  features:
    - name: entity_id
      dtype: int64
    - name: category_name
      dtype: string
    - name: bbox
      sequence: float64
    - name: descriptions
      sequence: string
    - name: split
      dtype: string
    - name: image
      dtype: image
    - name: mask
      dtype: image
  splits:
    - name: train
      num_examples: 110818
    - name: test
      num_examples: 7050
---

# Dataset Card for aRefCOCO


aRefCOCO (Ambiguous RefCOCO) is a dataset constructed for Referring Image Segmentation (RIS), focusing on the referential ambiguity that frequently arises in real-world applications. It introduces object-distracting expressions, which involve multiple entities with contextual cues, and category-implicit expressions, where the object class is not explicitly stated. Each entity is paired with an image, a target segmentation mask, multiple referring descriptions, and supporting metadata such as bounding boxes and category labels. In addition to the original benchmark used for evaluation, aRefCOCO now provides an extended train split to support model training and further research on referential ambiguity in referring segmentation and related tasks.

## Dataset Structure

Each sample contains the following fields:

- `entity_id`: Unique identifier for the entity
- `category_name`: Object category name
- `bbox`: Bounding box coordinates `[x, y, width, height]`
- `descriptions`: List of referring expressions
- `split`: Dataset split (train/test)
- `image`: PIL Image object
- `mask`: PIL Image object (segmentation mask)
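As a minimal sketch of how these fields fit together, the `[x, y, width, height]` bounding box can be converted to the corner format that PIL's `crop` expects and used to cut the annotated entity out of the image. The sample below is synthetic, built only to mirror the schema above; it is not drawn from the dataset.

```python
from PIL import Image

def bbox_to_xyxy(bbox):
    """Convert [x, y, width, height] to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Synthetic sample mirroring the schema above (not real data).
sample = {
    "entity_id": 0,
    "category_name": "person",
    "bbox": [10, 20, 30, 40],
    "descriptions": ["the person on the left"],
    "split": "train",
    "image": Image.new("RGB", (100, 100)),
    "mask": Image.new("L", (100, 100)),
}

# Crop the image to the annotated entity.
crop = sample["image"].crop(bbox_to_xyxy(sample["bbox"]))
print(crop.size)  # (30, 40)
```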

## How to Use the aRefCOCO Dataset

The examples below show how to load the dataset and access its fields.

### Basic Usage

```python
from datasets import load_from_disk

# Load the dataset
train_ds = load_from_disk("/path/to/hf_datasets/train")
test_ds = load_from_disk("/path/to/hf_datasets/test")

# Print the total number of samples
print(f"Total number of train samples: {len(train_ds)}")
print(f"Total number of test samples: {len(test_ds)}")

# Get the first sample
sample = train_ds[0]
```

### Example: Basic Data Access

```python
# Retrieve sample data
sample = train_ds[0]

# Access the three core elements
image = sample['image']
mask = sample['mask']
descriptions = sample['descriptions']

# Print sample information
print(f"Entity ID: {sample['entity_id']}")
print(f"Category: {sample['category_name']}")
print(f"BBox: {sample['bbox']}")
print(f"Descriptions: {sample['descriptions']}")
```
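Since the mask is stored as a PIL image, a common first step for RIS evaluation is binarizing it and comparing it against a model prediction with intersection-over-union. The sketch below is illustrative only: `mask_iou` and the toy arrays are not part of the dataset tooling, and it assumes foreground pixels are nonzero in the stored masks.

```python
import numpy as np

def mask_iou(pred, gt):
    """IoU between two binary masks given as PIL images or arrays."""
    pred = np.asarray(pred) > 0
    gt = np.asarray(gt) > 0
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

# Toy masks standing in for sample['mask'] and a model prediction.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[:2, :] = 255      # ground-truth foreground: top 2 rows (8 px)
pred = np.zeros((4, 4), dtype=np.uint8)
pred[:3, :] = 255    # predicted foreground: top 3 rows (12 px)

print(mask_iou(pred, gt))  # 8 / 12 ≈ 0.667
```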

### Alternative: PyTorch Dataset & Raw Images

This Hugging Face repository contains the dataset in Parquet/Arrow format for easy loading.

For alternative formats and implementations, please visit the GitHub repository, which includes:

- Custom PyTorch Dataset class (`refdataset/refdataset.py`)
- Source images and masks in original quality
- JSONL metadata files
- Additional example scripts
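If you work from the JSONL metadata files instead of the Parquet/Arrow version, each line is one JSON record. A minimal parsing sketch follows; the field names are assumed from this card's schema and may differ from the actual files in the GitHub repository.

```python
import io
import json

# Stand-in for an open metadata file; field names are assumed
# from this card's schema, not taken from the repo's files.
jsonl = io.StringIO(
    '{"entity_id": 0, "category_name": "person", "descriptions": ["the man"]}\n'
    '{"entity_id": 1, "category_name": "dog", "descriptions": ["the small dog"]}\n'
)

# One JSON object per non-empty line.
records = [json.loads(line) for line in jsonl if line.strip()]
print(len(records))  # 2
```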

## Citations

If you find our work helpful for your research, please consider citing:

```bibtex
@article{mao2025safire,
  title   = {SaFiRe: Saccade-Fixation Reiteration with Mamba for Referring Image Segmentation},
  author  = {Zhenjie Mao and Yuhuan Yang and Chaofan Ma and Dongsheng Jiang and Jiangchao Yao and Ya Zhang and Yanfeng Wang},
  journal = {Advances in Neural Information Processing Systems (NeurIPS)},
  year    = {2025}
}
```

We also recommend other highly related works:

```bibtex
@article{yang2024remamber,
  title   = {ReMamber: Referring Image Segmentation with Mamba Twister},
  author  = {Yuhuan Yang and Chaofan Ma and Jiangchao Yao and Zhun Zhong and Ya Zhang and Yanfeng Wang},
  year    = {2024},
  journal = {European Conference on Computer Vision (ECCV)}
}
```