---
license: apache-2.0
task_categories:
- image-to-text
- visual-question-answering
language:
- en
tags:
- context-violating
- visual-language
pretty_name: ContextualBench
---
<h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
Challenging and Enhancing the Reasoning Capacity of Multimodal LLMs in Context-violating Images
</h1>
<p align='center' style="text-align:center;font-size:1.25em;">
<a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Hongxi&nbsp;Li</a>,&nbsp;
<a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Yuyang&nbsp;Chen</a>,&nbsp;
<a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Yayun&nbsp;Qi</a>,&nbsp;
<a href="https://wuxinxiao.github.io/" target="_blank" style="text-decoration: none;">Xinxiao&nbsp;Wu</a>,&nbsp;<br/>
&nbsp;Beijing Institute of Technology<br/>
<em>arXiv 2024</em><br/>
<a href="https://wuxinxiao.github.io/" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website (Coming soon)</a> |
<a href="https://github.com/Tough-Stone/Counter-Context-Reasoning" title="Code" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
<a href="https://wuxinxiao.github.io/" title="arXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv (Coming soon)</a> |
<a href="https://huggingface.co/spaces/ToughStone/ContextualBench_Leaderboard" title="Leaderboard" target="_blank" rel="nofollow" style="text-decoration: none;">🏆 Leaderboard</a>
</p>
<!-- # Dataset Card for ContextualBench
- [Dataset Description](#dataset-description)
- [Dataset Construction](#dataset-construction)
- [Image Generation](#data-fields)
- [Task Design](#task-design)
- [Image Annotation](#image-annotation)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information) -->
<p align='center'>
<img src="description.png" alt="dataset description" align='center' width="850" height="200">
</p>
# Dataset Description
<b>ContextualBench</b> consists of 6 (categories) × 12 (instances per category) = 72 context instances, each containing 7 context-consistent images and 7 context-violating images. We design 4 visual reasoning tasks and collect human annotations.
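As a quick sanity check, the composition above works out as follows (a minimal sketch; the per-instance image counts come directly from the description):

```python
# Dataset composition as described above.
categories = 6
instances_per_category = 12
context_instances = categories * instances_per_category  # 6 x 12 = 72

consistent_per_instance = 7
violating_per_instance = 7
total_images = context_instances * (consistent_per_instance + violating_per_instance)

print(context_instances, total_images)  # 72 1008
```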
# Dataset Construction
## Image Generation
We collect context instances from external databases and formulate them as key-value pairs (shown in the <i>‘Constraint’</i> field of the <b>Dataset Viewer</b>), then edit a subset of these pairs to create context-violating variants. We generate images using text-to-image models.
Each image is generated iteratively until the following three conditions are met:
1. The image conforms to the given constraints.
2. There are no visual illusions in the image.
3. The image is free of ambiguity and potential offensiveness.
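The iterative generation loop above can be sketched as follows. All helper names (`generate_image`, `satisfies_constraints`, `has_illusion`, `is_problematic`) are hypothetical placeholders for illustration, not the authors' actual implementation:

```python
# Sketch of the iterative image-generation loop described above.
# Every helper passed in is a hypothetical placeholder, not the authors' code.

def generate_until_valid(prompt, constraints, generate_image,
                         satisfies_constraints, has_illusion, is_problematic,
                         max_tries=10):
    """Regenerate until the image meets all three acceptance conditions."""
    for _ in range(max_tries):
        image = generate_image(prompt)                  # text-to-image model call
        if (satisfies_constraints(image, constraints)   # 1. conforms to key-value constraints
                and not has_illusion(image)             # 2. no visual illusions
                and not is_problematic(image)):         # 3. unambiguous and inoffensive
            return image
    return None  # no acceptable image within the retry budget
```

In practice the three checks were performed by the authors' verification process; the sketch only captures the retry-until-accepted control flow.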
## Task Design
We design 3 visual reasoning tasks for all images in <b>ContextualBench</b>: visual question answering, image captioning, and image identification. An additional image explanation task is designed for context-violating images.
## Image Annotation
For image captioning, image identification, and image explanation, we collect 5 annotations from different human annotators. For visual question answering, we generate question-answer pairs from the image captions following the Q2 pipeline.
# Licensing Information
1. **Purpose:** The dataset was primarily designed for use as a test set.
2. **Commercial Use:** The dataset may be used commercially as a test set; using it as a training set is prohibited.
3. **Rights on Images:** All rights to the images within the dataset are retained by the <b>ContextualBench</b> authors.
# Citation Information
```bibtex
@article{hongxi2024challenging,
  title={Challenging and Enhancing the Reasoning Capacity of Multimodal LLMs in Context-violating Images},
  author={},
  journal={},
  year={2024}
}
```