---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- image-text-to-text
dataset_info:
  features:
  - name: qid
    dtype: string
  - name: ground_truth_solution
    dtype: string
  - name: ground_truth_diagram_description
    dtype: string
  - name: test_script
    dtype: string
  - name: function_signature
    dtype: string
  - name: diagram
    dtype: image
  - name: capability_aspects
    struct:
    - name: Common Sense
      sequence: string
    - name: Data Structures
      sequence: string
    - name: Dynamic Patterns
      sequence: string
    - name: Geometric Objects
      sequence: string
    - name: Mathematical Operations
      sequence: string
    - name: Spatial Transformations
      sequence: string
    - name: Topological Relations
      sequence: string
  - name: task_type
    dtype: string
  splits:
  - name: test
    num_bytes: 32915902
    num_examples: 253
  download_size: 32012630
  dataset_size: 32915902
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
tags:
- code
---

## HumanEval-V: Benchmarking High-Level Visual Reasoning with Complex Diagrams in Coding Tasks
<p align="left">
    <a href="https://huggingface.co/papers/2410.12381">πŸ“„ Paper </a> β€’
    <a href="https://humaneval-v.github.io">🏠 Home Page</a> β€’
    <a href="https://github.com/HumanEval-V/HumanEval-V-Benchmark">πŸ’» GitHub Repository </a> β€’
    <a href="https://humaneval-v.github.io/#leaderboard">πŸ† Leaderboard</a> β€’
    <a href="https://huggingface.co/spaces/HumanEval-V/HumanEval-V-Benchmark-Viewer">πŸ€— Dataset Viewer</a> 
</p>

**HumanEval-V** is a novel benchmark designed to evaluate the diagram understanding and reasoning capabilities of Large Multimodal Models (LMMs) in programming contexts. Unlike existing benchmarks, HumanEval-V focuses on coding tasks that require sophisticated visual reasoning over complex diagrams, pushing the boundaries of LMMs' ability to comprehend and process visual information. The dataset includes **253 human-annotated Python coding tasks**, each featuring a critical, self-explanatory diagram with minimal textual clues. These tasks require LMMs to generate Python code based on the visual context and predefined function signatures.


<div style="text-align: center;">
<img src="task_example.png" alt="An example HumanEval-V task" width="650"/>
</div>

## Key Features
- **Complex diagram understanding** that is indispensable for solving coding tasks.
- **Real-world problem contexts** with diverse diagram types and spatial reasoning challenges.
- **Code generation tasks**, moving beyond multiple-choice or short-answer questions to evaluate deeper visual and logical reasoning capabilities.
- **Two-stage evaluation pipeline** that separates diagram description generation and code implementation for more accurate visual reasoning assessment.
- **Handcrafted test cases** for rigorous execution-based evaluation through the **pass@k** metric.
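For reference, **pass@k** is the standard unbiased estimator from execution-based code benchmarks: the probability that at least one of k samples, drawn from n generations of which c pass the tests, is correct. A generic, stdlib-only sketch (not code shipped with this benchmark):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total samples generated per task
    c: number of those samples that pass the test script
    k: number of samples considered
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations, 3 correct -> pass@1 is 3/10
print(pass_at_k(10, 3, 1))
```

Averaging this value over all 253 tasks yields the benchmark-level pass@k score.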


<div style="text-align: center;">
<img src="task_type_and_capability_aspects.png" alt="" width="1000"/>
</div>


## Dataset Structure
Each task in the dataset consists of the following fields:

- **qid**: A unique identifier for each coding task (e.g., _q1_, with mutated versions like _q1-2_, _q1-3_).
- **diagram**: A single diagram that provides the essential visual context required to solve the task.
- **function_signature**: Includes necessary imports and the function signature that the LMMs must complete.
- **test_script**: The test cases used to validate the correctness of the generated code.
- **ground_truth_solution**: The human-annotated code solutions for the task.
- **ground_truth_diagram_description**: Human-annotated descriptions of the diagram.
- **task_type**: The type of the task, which falls into one of six categories, as shown in **Figure 2**.
- **capability_aspects**: The capabilities required to understand the diagram in the task, which include seven dimensions and their sub-aspects, as shown in **Figure 3**.

## Usage
You can easily load the dataset using the Hugging Face `datasets` library.

```python
from datasets import load_dataset
humaneval_v = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")
```
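Each entry is a dict keyed by the fields listed above. A sketch of assembling a code-generation prompt from one entry, shown with a stand-in record since the snippet should run without downloading the dataset (real entries also carry the diagram as a PIL image under `diagram`):

```python
# Stand-in record mirroring the dataset schema; field values are hypothetical.
task = {
    "qid": "q1",
    "function_signature": "def solve(grid: list[list[int]]) -> int:",
}

# The diagram itself would be passed to the LMM as an image input alongside
# this textual prompt.
prompt = (
    "Complete the following function based on the provided diagram.\n\n"
    f"{task['function_signature']}\n"
)
print(prompt)
```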

## Citation
```bibtex
@article{zhang2024humanevalv,
  title={HumanEval-V: Benchmarking High-Level Visual Reasoning with Complex Diagrams in Coding Tasks}, 
  author={Zhang, Fengji and Wu, Linquan and Bai, Huiyu and Lin, Guancheng and Li, Xiao and Yu, Xiao and Wang, Yue and Chen, Bei and Keung, Jacky},
  journal={arXiv preprint arXiv:2410.12381},
  year={2024},
}
```