---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: question_id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: image_source
      dtype: string
    - name: image
      dtype: image
    - name: category
      dtype: string
  splits:
    - name: test
      num_bytes: 206451775
      num_examples: 1265
  download_size: 142098662
  dataset_size: 206451775
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

lmms-lab_POPE-problematic

🔥🔥 Update: It seems other researchers have already identified this issue: https://arxiv.org/abs/2504.15707v1. They have also published corrected annotations at https://github.com/YanNeu/RePOPE/tree/main/annotations. See https://huggingface.co/datasets/SushantGautam/RePOPE for the corrected version.

This dataset contains potentially incorrect annotations from the POPE dataset that were automatically flagged during experimentation with vision-language models.

The goal of this dataset is community review and verification. The examples included here are suspected to have incorrect answers, but they are not guaranteed to be wrong. Contributors and researchers are encouraged to inspect them and determine whether the original annotation is indeed incorrect.


Motivation

While working with the original dataset:

https://huggingface.co/datasets/lmms-lab/POPE

I noticed that, on visual inspection, some examples appear to have incorrect ground-truth answers.

To investigate this further, I ran a filtering pipeline to detect potential annotation mismatches between the image and the labeled answer. The filtered samples are collected in this dataset for manual inspection and discussion.

The intention is not to replace the original dataset, but to highlight examples that may require re-evaluation or correction.


How the samples were filtered

The suspected problematic samples were identified using:

  • Model: Qwen2-VL
  • Method: the model was prompted with the image and the POPE question. If the model's prediction strongly disagreed with the labeled answer, the sample was flagged as potentially problematic.

Important notes:

  • This filtering was performed as part of a separate experiment on POPE.
  • Qwen2-VL is not treated as a ground truth verifier.
  • Some flagged samples may still be correctly labeled.

Future verification with stronger models or human review may help determine the true correctness.


Intended Use

This dataset is meant for:

  • Manual dataset auditing
  • Community review
  • Benchmark quality analysis
  • Studying annotation errors in vision-language datasets

Possible workflows include:

  • Human verification of each flagged sample
  • Cross-model agreement analysis
  • Dataset cleaning experiments
  • Robustness evaluation of VLM hallucination benchmarks
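As one illustration, the cross-model agreement workflow could treat a sample as suspicious when most of a panel of models contradicts the original label. A minimal sketch (the function and threshold are illustrative assumptions, not part of this dataset's pipeline):

```python
from collections import Counter

def majority_disagrees(predictions: list[str], label: str, threshold: float = 0.5) -> bool:
    """Return True if more than `threshold` of the model predictions contradict the label.

    `predictions` holds one normalized yes/no answer per model.
    """
    if not predictions:
        return False
    votes = Counter(p.lower() for p in predictions)
    disagreeing = sum(n for answer, n in votes.items() if answer != label.lower())
    return disagreeing / len(predictions) > threshold
```

Samples flagged by several independent models are stronger candidates for genuine annotation errors than those flagged by Qwen2-VL alone.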

Dataset Structure

The dataset contains a subset of samples from the original POPE dataset that were flagged as suspicious.

Each example includes fields similar to the original dataset:

  • id
  • question_id
  • question
  • answer
  • image_source
  • image
  • category

These correspond directly to entries in the original dataset.
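For reviewers working programmatically, records can be checked against the schema above with the 🤗 `datasets` library. The repository id below is assumed from this card's title and may need adjusting:

```python
# Field names taken from the dataset_info schema in this card.
EXPECTED_FIELDS = {"id", "question_id", "question", "answer",
                   "image_source", "image", "category"}

def validate_record(record: dict) -> bool:
    """Check that a record exposes exactly the fields listed in the schema."""
    return set(record.keys()) == EXPECTED_FIELDS

# Usage (requires network access; repo id is an assumption):
# from datasets import load_dataset
# ds = load_dataset("SushantGautam/lmms-lab_POPE-problematic", split="test")
# assert all(validate_record(ex) for ex in ds.select(range(5)))
```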


Important Disclaimer

⚠️ These samples are only suspected to be problematic.

They were filtered automatically and may include:

  • genuine annotation errors
  • model mistakes
  • ambiguous images
  • borderline cases

Human verification is required before any conclusions are drawn.