---
dataset_info:
  config_name: tikz
  features:
  - name: difficulty_ast
    sequence: float32
  - name: id
    dtype: string
  - name: code
    dtype: string
  - name: commented_code
    dtype: string
  - name: instruction
    dtype: string
  - name: result_description
    dtype: string
  - name: difficulty
    dtype: string
  - name: modification_type
    dtype: string
  - name: type
    dtype: string
  - name: patch
    sequence: string
  - name: template_solution_code
    sequence: string
  - name: code_solution
    sequence: string
  - name: image_solution
    sequence: image
  - name: image_input
    dtype: image
  splits:
  - name: benchmark
    num_bytes: 7779734.0
    num_examples: 100
  - name: test
    num_bytes: 27150.0
    num_examples: 2
  download_size: 86778966
  dataset_size: 7806884.0
configs:
- config_name: tikz
  data_files:
  - split: benchmark
    path: tikz/benchmark-*
  - split: test
    path: tikz/test-*
---

# vTikZ: A Benchmark for Visually-Grounded Code Editing

[📄 Paper](https://arxiv.org/abs/2505.04670) • [💻 Code](https://github.com/IV2C/VTikZ) • [🌐 Website](https://iv2c.github.io/VTikZ/)
## Dataset Summary

**vTikZ** is the first benchmark explicitly designed to evaluate Large Language Models (LLMs) on code editing tasks with visual intent. It focuses on scenarios where natural language instructions are used to modify code that generates diagrams or figures, such as TikZ. The dataset targets the core challenges of this task: localizing relevant code (feature location), generating valid code variants, and ensuring visual consistency with user intent.
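Examples can be loaded with the 🤗 `datasets` library. In the sketch below, the Hub repository id `"IV2C/vTikZ"` is an assumption (replace it with the actual id of this dataset); the `tikz` config name and the `benchmark`/`test` splits come from the card metadata above.

```python
def load_vtikz(split="benchmark"):
    """Load the vTikZ `tikz` config; `split` is "benchmark" (100 examples) or "test" (2)."""
    # Deferred import so this sketch only needs `datasets` when actually called.
    from datasets import load_dataset  # pip install datasets

    # NOTE: "IV2C/vTikZ" is a hypothetical repo id -- substitute the real one.
    return load_dataset("IV2C/vTikZ", "tikz", split=split)
```

Each returned row exposes the fields listed under Features, e.g. `row["instruction"]`, `row["code"]`, and `row["image_input"]`.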
## Features

### Human-Annotated

* **Original Code**: The base diagram-generating code written by humans.
* **Parameterized Solution Code**: Templates capturing acceptable variations for a given customization.
* **Instruction**: Natural language directive describing the intended visual change.
* **Perceived Difficulty**: Subjective difficulty score assigned to each customization task.
* **Result Description**: Human-written description of the desired visual outcome.
* **Modification Type**: Type of code change—`add`, `remove`, or `update`.
* **Type**: Category of the original diagram (`scientific` or `animal`).

### Automatically Computed

* **Perfect Variants**: Set of code solutions generated from the parameterized template.
* **Patch**: Context-free unidiff patch between the original and perfect variant(s).
* **Image Input**: Rendered image from the original code.
* **Image Solutions**: Rendered images of all perfect variants.
* **AST Difficulty**: Tree edit distance (TED) between original and variant code [Zhang & Shasha, 1989].
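The stored patches are standard unidiff text. A minimal sketch of how such a patch relates an original snippet to a perfect variant, using Python's `difflib` (the TikZ strings below are illustrative, not dataset entries):

```python
import difflib

# Toy example: an "update" customization changing a draw color.
original = r"\draw[red] (0,0) circle (1);"
variant = r"\draw[blue] (0,0) circle (1);"

# Produce a unified diff between the two one-line programs.
patch = list(difflib.unified_diff(
    original.splitlines(keepends=True),
    variant.splitlines(keepends=True),
    fromfile="code",
    tofile="code_solution",
))
print("".join(patch))
```

Note that `difflib` emits diffs with context lines by default; the dataset's patches are context-free, so they match any code location where the removed lines occur.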
|
## Citation

If you use this dataset, please cite:

```
@inproceedings{reux_llmvisualcutomization_2025,
  author    = {Reux, Charly and Acher, Mathieu and Khelladi, Djamel Eddine and Barais, Olivier and Quinton, Cl{\'e}ment},
  title     = {LLM Code Customization with Visual Results: A Benchmark on TikZ},
  booktitle = {Proceedings of {The} 29th {International} {Conference} on {Evaluation} and {Assessment} in {Software} {Engineering} ({EASE} 2025)},
  year      = {2025},
}
```